Deep learning

Editorial Comment: Usefulness of a Deep-Learning Model for Pediatric Abdominal Organ Segmentation

Wed, 2024-05-15 06:00

AJR Am J Roentgenol. 2024 May 15. doi: 10.2214/AJR.24.31408. Online ahead of print.

NO ABSTRACT

PMID:38748729 | DOI:10.2214/AJR.24.31408

Categories: Literature Watch

Specific emitter identification based on multiple sequence feature learning

Wed, 2024-05-15 06:00

PLoS One. 2024 May 15;19(5):e0299664. doi: 10.1371/journal.pone.0299664. eCollection 2024.

ABSTRACT

Specific emitter identification (SEI) is widely used in electronic countermeasures, spectrum control, wireless network security, and other civil and military fields. Traditional SEI algorithms rely on a priori knowledge and generalize poorly, while existing deep-learning-based SEI algorithms suffer from poor feature selection and feature extraction networks that are not tailored to the task. To address these problems, an SEI algorithm based on multi-sequence feature learning is proposed. First, multiple sequence features are extracted from the emitted signal of the communication radiation source and combined into a multi-sequence feature set. Second, a multi-sequence fusion convolutional network is constructed to fuse and deeply extract these features, and a neural-network classifier completes the classification of individual communication radiation sources. The selected sequence features contain richer and more essential radio frequency fingerprint (RFF) information, while the purpose-built multi-sequence feature fusion network extracts this information effectively. The results show that the algorithm significantly improves SEI performance compared with the benchmark algorithm, with a recognition-rate gain of about 17%.
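
The idea of turning one emitted signal into several sequence features can be hedged into a few lines: the sketch below derives instantaneous-amplitude and instantaneous-phase sequences from complex baseband samples and stacks them into a channels-by-time array. The paper's actual feature set and fusion network are richer, so treat this as a hypothetical simplification, not the authors' pipeline:

```python
import cmath

def sequence_features(iq_samples):
    """Extract two per-sample sequence features from complex baseband
    samples: instantaneous amplitude and instantaneous phase.  These are
    stand-ins for the paper's richer RFF feature set."""
    amplitude = [abs(z) for z in iq_samples]
    phase = [cmath.phase(z) for z in iq_samples]
    return amplitude, phase

def stack_features(iq_samples):
    """Combine the individual sequences into a multi-sequence feature
    'image' (channels x time) that a fusion CNN could consume."""
    amp, ph = sequence_features(iq_samples)
    return [amp, ph]
```

A fusion network would then treat the stacked sequences as a two-channel 1D input.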

PMID:38748654 | DOI:10.1371/journal.pone.0299664

Categories: Literature Watch

An Interpretable Adaptive Multiscale Attention Deep Neural Network for Tabular Data

Wed, 2024-05-15 06:00

IEEE Trans Neural Netw Learn Syst. 2024 May 15;PP. doi: 10.1109/TNNLS.2024.3392355. Online ahead of print.

ABSTRACT

Deep learning (DL) has been demonstrated to be a valuable tool for analyzing signals such as sounds and images, thanks to its capabilities of automatically extracting relevant patterns as well as its end-to-end training properties. When applied to tabular structured data, however, DL has exhibited performance limitations compared with shallow learning techniques. This work presents a novel technique for tabular data called the adaptive multiscale attention deep neural network architecture (also named excited attention). By exploiting parallel multilevel feature weighting, the adaptive multiscale attention can successfully learn the feature attention and thus achieve high F1-scores on seven different classification tasks (on small, medium, large, and very large datasets) and low mean absolute errors on four regression tasks of different sizes. In addition, adaptive multiscale attention provides four levels of explainability (i.e., comprehension of its learning process and therefore of its outcomes): 1) it calculates attention weights to determine which layers are most important for given classes; 2) it shows each feature's attention across all instances; 3) it exposes the learned feature attention for each class to explore feature attention and behavior for specific classes; and 4) it finds nonlinear correlations between co-behaving features to reduce dataset dimensionality and improve interpretability. These interpretability levels, in turn, allow for employing adaptive multiscale attention as a useful tool for feature ranking and feature selection.
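
The core feature-weighting idea can be sketched as a softmax over per-feature attention scores: the weights both scale the features and are directly readable for interpretability. The scores here are placeholders for the output of a learned attention sub-network, so this is an illustrative simplification, not the paper's architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weight(features, scores):
    """Weight each tabular feature by an attention score.  The weights
    sum to 1, so they can be ranked to explain which features matter."""
    w = softmax(scores)
    weighted = [f * wi for f, wi in zip(features, w)]
    return weighted, w
```

Reading off `w` per class or per instance is what enables the feature-ranking use described above.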

PMID:38748522 | DOI:10.1109/TNNLS.2024.3392355

Categories: Literature Watch

An effective role-oriented binary Walrus Grey Wolf approach for feature selection in early-stage chronic kidney disease detection

Wed, 2024-05-15 06:00

Int Urol Nephrol. 2024 May 15. doi: 10.1007/s11255-024-04067-9. Online ahead of print.

ABSTRACT

In clinical decision-making for chronic disorders such as chronic kidney disease, high variability often leads to uncertainty and negative outcomes. Deep learning techniques have been developed as useful tools for reducing this risk and improving clinical decision-making. Moreover, traditional techniques for chronic kidney disease recognition frequently compromise accuracy because they rely on limited sets of biological attributes. Therefore, in the proposed work, a combination of a deep radial bias network and the puma optimization algorithm is suggested for precise chronic kidney disease classification. Initially, the accessed data undergo preprocessing using a Spectral Z-score Bag Boost K-Means SMOTE transformation, which includes robust scaling, data cleaning, balancing, encoding, handling of missing values, min-max scaling, and z-standardization. Feature selection is then conducted using the hybrid Role-oriented Binary Walrus Grey Wolf Algorithm to choose discriminative features that improve classification accuracy. Next, an Auto Encoder with Patch-Based Principal Component Analysis is employed for dimensionality reduction to minimize processing time. Finally, the proposed classification method utilizes the deep radial bias network and the puma optimization search algorithm for effective chronic kidney disease classification. The introduced scheme is tested on two datasets, the risk factor prediction of chronic kidney disease dataset and the chronic kidney disease dataset, on which it achieves accuracies of 99.02% and 99.15%, respectively. Experiments demonstrate that the proposed model identifies chronic kidney disease more accurately than existing approaches.
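
Two of the preprocessing steps named above, min-max scaling and z-standardization, can be sketched as follows. This is a generic illustration of the standard transforms, not the authors' pipeline:

```python
import statistics

def min_max_scale(xs):
    """Rescale values linearly into [0, 1]."""
    lo, hi = min(xs), max(xs)
    if hi == lo:
        return [0.0 for _ in xs]
    return [(x - lo) / (hi - lo) for x in xs]

def z_standardize(xs):
    """Shift to zero mean and scale to unit (population) standard deviation."""
    mu = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    if sd == 0:
        return [0.0 for _ in xs]
    return [(x - mu) / sd for x in xs]
```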

PMID:38748365 | DOI:10.1007/s11255-024-04067-9

Categories: Literature Watch

Deep-learning enhanced high-quality imaging in metalens-integrated camera

Wed, 2024-05-15 06:00

Opt Lett. 2024 May 15;49(10):2853-2856. doi: 10.1364/OL.521393.

ABSTRACT

Because of their ultra-light, ultra-thin, and flexible design, metalenses exhibit significant potential for the development of highly integrated cameras. However, the performance of metalens-integrated cameras is constrained by their fixed architectures. Here we propose a high-quality imaging method based on deep learning to overcome this constraint. We employed a multi-scale convolutional neural network (MSCNN) trained on an extensive set of paired high-quality and low-quality images obtained from a convolutional imaging model. With our method, imaging resolution, contrast, and distortion are all improved, yielding a noticeable gain in overall image quality, with SSIM above 0.9 and a PSNR improvement of over 3 dB. Our approach enables cameras to combine the advantages of high integration with enhanced imaging performance, revealing tremendous potential for a future groundbreaking imaging technology.
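
The PSNR metric used to quantify the reported >3 dB gain follows a standard definition, independent of the paper's implementation:

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images given as flat lists of pixel values.  Higher is better;
    identical images give infinity."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A 3 dB improvement corresponds to roughly halving the mean squared error.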

PMID:38748176 | DOI:10.1364/OL.521393

Categories: Literature Watch

Deep learning-based inverse design of multi-functional metasurface absorbers

Wed, 2024-05-15 06:00

Opt Lett. 2024 May 15;49(10):2733-2736. doi: 10.1364/OL.518786.

ABSTRACT

A novel approach that integrates a simulated annealing (SA) algorithm with deep learning (DL) acceleration is presented for the rapid and accurate development of terahertz perfect absorbers through forward prediction and backward design. The forward neural network (FNN) effectively deduces the absorption spectrum from the metasurface geometry, yielding an 80,000-fold increase in computational speed compared with a full-wave solver. Furthermore, the absorber's structure can be precisely and promptly derived from the desired response, and the incorporation of the SA algorithm significantly enhances design efficiency. We successfully designed low-frequency, high-frequency, and broadband absorbers spanning the 4 to 16 THz range with an error margin below 0.02 and a remarkably short design time of only 10 min. Additionally, the proposed model in this Letter introduces a novel, to our knowledge, method for metasurface design at terahertz frequencies, such as the design of metamaterials across optical, thermal, and mechanical domains.
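
A generic simulated-annealing loop of the kind described, with the cost function standing in for the fast FNN surrogate that scores a candidate geometry, might look like the sketch below. The neighbor move and cooling schedule are illustrative assumptions, not the paper's settings:

```python
import math
import random

def anneal(cost, init, neighbor, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Minimize `cost` by simulated annealing.  `cost` plays the role of
    the DL surrogate mapping a design to its spectral error."""
    rng = random.Random(seed)
    x, fx = init, cost(init)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability so the search can escape local minima.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

Because the surrogate evaluates in microseconds rather than minutes, such a loop can afford thousands of candidate evaluations, which is the source of the design-time speedup described above.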

PMID:38748148 | DOI:10.1364/OL.518786

Categories: Literature Watch

NoiseNet, a fully automatic noise assessment tool that can identify non-diagnostic CCTA examinations

Wed, 2024-05-15 06:00

Int J Cardiovasc Imaging. 2024 May 15. doi: 10.1007/s10554-024-03130-x. Online ahead of print.

ABSTRACT

Image noise and vascular attenuation are important factors affecting the image quality and diagnostic accuracy of coronary computed tomography angiography (CCTA). The aim of this study was to develop an algorithm that automatically performs noise and attenuation measurements in CCTA and to evaluate the ability of the algorithm to identify non-diagnostic examinations. The algorithm, "NoiseNet", was trained and tested on 244 CCTA studies from the Swedish CArdioPulmonary BioImage Study. The model is a 3D U-Net that automatically segments the aortic root and measures attenuation (Hounsfield units, HU), noise (standard deviation of HU, HUsd) and signal-to-noise ratio (SNR, HU/HUsd) in the aortic lumen, close to the left coronary ostium. NoiseNet was then applied to 529 CCTA studies previously categorized into three subgroups: fully diagnostic, diagnostic with excluded parts, and non-diagnostic. There was excellent correlation between NoiseNet and manual measurements of noise (r = 0.948; p < 0.001) and SNR (r = 0.948; p < 0.001). There was a significant difference in noise levels between the image quality subgroups: fully diagnostic 33.1 (29.8-37.9); diagnostic with excluded parts 36.1 (31.5-40.3); and non-diagnostic 42.1 (35.2-47.7; p < 0.001). Corresponding values for SNR were 16.1 (14.0-18.0); 14.0 (12.4-16.2); and 11.1 (9.6-14.0; p < 0.001). ROC analysis for prediction of a non-diagnostic study showed an AUC for noise of 0.73 (CI 0.64-0.83) and for SNR of 0.80 (CI 0.71-0.89). In conclusion, NoiseNet can perform noise and SNR measurements with high accuracy. Noise and SNR impact image quality, and automatic measurements may be used to identify CCTA studies with low image quality.
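
The three aortic-lumen measurements described, attenuation, noise, and SNR, follow directly from the definitions in the abstract; here is a minimal sketch for one region of interest (the segmentation step itself is the deep-learning part and is not shown):

```python
import statistics

def roi_metrics(hu_values):
    """Attenuation (mean HU), noise (standard deviation of HU), and
    SNR (mean / sd) for a list of HU values inside a segmented ROI."""
    attenuation = statistics.mean(hu_values)
    noise = statistics.stdev(hu_values)  # sample standard deviation
    snr = attenuation / noise if noise else float("inf")
    return attenuation, noise, snr
```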

PMID:38748056 | DOI:10.1007/s10554-024-03130-x

Categories: Literature Watch

Shape completion in the dark: completing vertebrae morphology from 3D ultrasound

Wed, 2024-05-15 06:00

Int J Comput Assist Radiol Surg. 2024 May 15. doi: 10.1007/s11548-024-03126-x. Online ahead of print.

ABSTRACT

PURPOSE: Ultrasound (US) imaging, while advantageous for its radiation-free nature, is challenging to interpret due to only partially visible organs and a lack of complete 3D information. While performing US-based diagnosis or investigation, medical professionals therefore create a mental map of the 3D anatomy. In this work, we aim to replicate this process and enhance the visual representation of anatomical structures.

METHODS: We introduce a point cloud-based probabilistic deep learning (DL) method to complete occluded anatomical structures through 3D shape completion and choose US-based spine examinations as our application. To enable training, we generate synthetic 3D representations of partially occluded spinal views by mimicking US physics and accounting for inherent artifacts.

RESULTS: The proposed model performs consistently on synthetic and patient data, with mean and median differences of 2.02 and 0.03 in Chamfer Distance (CD), respectively. Our ablation study demonstrates the importance of US physics-based data generation, reflected in the large mean and median difference of 11.8 CD and 9.55 CD, respectively. Additionally, we demonstrate that anatomical landmarks, such as the spinous process (with reconstruction CD of 4.73) and the facet joints (mean distance to ground truth (GT) of 4.96 mm), are preserved in the 3D completion.
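
The Chamfer Distance reported in these results can be sketched as below. Note that CD conventions differ in the literature (squared vs. unsquared point distances, summed vs. averaged directions), and the abstract does not state which variant is used, so this is one common formulation:

```python
def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two 3D point clouds given as
    lists of (x, y, z) tuples: the mean squared distance from each point
    to its nearest neighbour in the other cloud, summed over both
    directions."""
    def sq(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    def one_way(src, dst):
        return sum(min(sq(p, q) for q in dst) for p in src) / len(src)

    return one_way(a, b) + one_way(b, a)
```

This brute-force version is O(|a|·|b|); practical implementations use spatial indexing, but the quantity computed is the same.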

CONCLUSION: Our work establishes the feasibility of 3D shape completion for lumbar vertebrae, ensuring the preservation of level-wise characteristics and successful generalization from synthetic to real data. The incorporation of US physics contributes to more accurate patient data completions. Notably, our method preserves essential anatomical landmarks and reconstructs crucial injection sites at their correct locations.

PMID:38748052 | DOI:10.1007/s11548-024-03126-x

Categories: Literature Watch

Revealing the reconstruction mechanism of AgPd nanoalloys under fluorination based on a multiscale deep learning potential

Wed, 2024-05-15 06:00

J Chem Phys. 2024 May 7;160(17):174313. doi: 10.1063/5.0205616.

ABSTRACT

The design of heterogeneous catalysts generally involves optimizing the reactivity descriptor of adsorption energy, which is inevitably governed by the structure of surface-active sites. A prerequisite for understanding the structure-property relationship is the precise identification of real surface-active site structures, rather than reliance on conceived structures derived from bulk alloy properties. However, this remains a formidable challenge due to the dynamic nature of nanoalloys during catalytic reactions and the lack of accurate and efficient interatomic potentials for simulations. Herein, a generalizable deep-learning potential for the Ag-Pd-F system is developed from a dataset encompassing bulk, surface, nanocluster, amorphous, and point-defect configurations with diverse compositions, achieving a comprehensive description of interatomic interactions and precise prediction of adsorption energies, surface energies, formation energies, and diffusion energy barriers. This potential is utilized to investigate the structural evolution of AgPd nanoalloys during fluorination. The structural evolutions involve the inward diffusion of F, the outward diffusion of Ag in Ag@Pd nanoalloys, the formation of surface AgFx species in mixed and Janus AgPd nanoalloys, and the shape deformation from cuboctahedron to sphere in Ag and Pd@Ag nanoalloys. Moreover, the effects of atomic diffusion and of dislocation formation and migration on the reconstruction pathway of nanoalloys are highlighted. It is demonstrated that stress relaxation upon F adsorption serves as the intrinsic driving factor governing the surface reconstruction of AgPd nanoalloys.

PMID:38748027 | DOI:10.1063/5.0205616

Categories: Literature Watch

Deep learning path-like collective variable for enhanced sampling molecular dynamics

Wed, 2024-05-15 06:00

J Chem Phys. 2024 May 7;160(17):174109. doi: 10.1063/5.0202156.

ABSTRACT

Several enhanced sampling techniques rely on the definition of collective variables to effectively explore free energy landscapes. The existing variables that describe the progression along a reactive pathway offer an elegant solution but face a number of limitations. In this paper, we address these challenges by introducing a new path-like collective variable called the "deep-locally non-linear-embedding" (DeepLNE), which is inspired by principles of the locally linear embedding technique and is trained on a reactive trajectory. The variable mimics the ideal reaction coordinate by automatically generating a non-linear combination of features through a differentiable generalized autoencoder that combines a neural network with a continuous k-nearest neighbor selection. Among the key advantages of this method is its capability to automatically choose the metric for searching neighbors and to learn the path from state A to state B without the need to handpick landmarks a priori. We demonstrate the effectiveness of DeepLNE by showing that the progression along the path variable closely approximates the ideal reaction coordinate in toy models, such as the Müller-Brown potential and alanine dipeptide. Then, we use it in molecular dynamics simulations of an RNA tetraloop, where we highlight its capability to accelerate transitions and estimate the free energy of folding.

PMID:38748013 | DOI:10.1063/5.0202156

Categories: Literature Watch

Unveiling interatomic distances influencing the reaction coordinates in alanine dipeptide isomerization: An explainable deep learning approach

Wed, 2024-05-15 06:00

J Chem Phys. 2024 May 7;160(17):174110. doi: 10.1063/5.0203346.

ABSTRACT

The present work shows that the free energy landscape associated with alanine dipeptide isomerization can be effectively represented by specific interatomic distances without explicit reference to dihedral angles. Conventionally, the two stable states of alanine dipeptide in vacuum, i.e., C7eq (β-sheet structure) and C7ax (left-handed α-helix structure), have been characterized primarily using the main chain dihedral angles, φ (C-N-Cα-C) and ψ (N-Cα-C-N). However, our recent deep learning combined with the "Explainable AI" (XAI) framework has shown that the transition state can be adequately captured by a free energy landscape using φ and θ (O-C-N-Cα) [Kikutsuji et al., J. Chem. Phys. 156, 154108 (2022)]. To extend these insights to other collective variables, a more detailed characterization of the transition state is required. In this work, we employ interatomic distances and bond angles as input variables for deep learning rather than the conventional and more elaborate dihedral angles. Our approach utilizes deep learning to investigate whether changes in the main chain dihedral angle can be expressed in terms of interatomic distances and bond angles. Furthermore, by incorporating XAI into our predictive analysis, we quantified the importance of each input variable and succeeded in clarifying the specific interatomic distance that affects the transition state. The results indicate that constructing a free energy landscape based on the identified interatomic distance can clearly distinguish between the two stable states and provide a comprehensive explanation for the energy barrier crossing.
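
Since the work relates dihedral angles such as φ and ψ to interatomic distances, a self-contained routine for the dihedral defined by four backbone atoms may be a useful reference. This is the textbook atan2-based construction, not the authors' code:

```python
import math

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle in degrees defined by four 3D points,
    e.g. the backbone atoms of phi (C-N-Ca-C) or psi (N-Ca-C-N)."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    b0, b1, b2 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b0, b1), cross(b1, b2)      # normals of the two planes
    nb1 = math.sqrt(dot(b1, b1))
    m1 = cross(n1, tuple(x / nb1 for x in b1))  # in-plane reference axis
    return math.degrees(math.atan2(dot(m1, n2), dot(n1, n2)))
```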

PMID:38748008 | DOI:10.1063/5.0203346

Categories: Literature Watch

Enhancing hydrological modeling with transformers: a case study for 24-h streamflow prediction

Wed, 2024-05-15 06:00

Water Sci Technol. 2024 May;89(9):2326-2341. doi: 10.2166/wst.2024.110. Epub 2024 Apr 4.

ABSTRACT

In this paper, we address the critical task of 24-h streamflow forecasting using advanced deep-learning models, with a primary focus on the transformer architecture which has seen limited application in this specific task. We compare the performance of five different models, including persistence, long short-term memory (LSTM), Seq2Seq, GRU, and transformer, across four distinct regions. The evaluation is based on three performance metrics: Nash-Sutcliffe Efficiency (NSE), Pearson's r, and normalized root mean square error (NRMSE). Additionally, we investigate the impact of two data extension methods: zero-padding and persistence, on the model's predictive capabilities. Our findings highlight the transformer's superiority in capturing complex temporal dependencies and patterns in the streamflow data, outperforming all other models in terms of both accuracy and reliability. Specifically, the transformer model demonstrated a substantial improvement in NSE scores by up to 20% compared to other models. The study's insights emphasize the significance of leveraging advanced deep learning techniques, such as the transformer, in hydrological modeling and streamflow forecasting for effective water resource management and flood prediction.
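
The Nash-Sutcliffe Efficiency used here as a headline metric has a standard definition: one minus the ratio of the model's squared error to the variance of the observations around their mean. A minimal sketch:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency.  1.0 is a perfect forecast; 0.0 means
    the model is no better than always predicting the observed mean;
    negative values are worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var
```

This framing explains why the persistence baseline in the study is a meaningful competitor: it must be beaten in SSE terms, not merely matched.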

PMID:38747952 | DOI:10.2166/wst.2024.110

Categories: Literature Watch

Identifying pests in precision agriculture using low-cost image data acquisition

Wed, 2024-05-15 06:00

Braz J Biol. 2024 May 13;84:e281671. doi: 10.1590/1519-6984.281671. eCollection 2024.

ABSTRACT

Unmanned aerial vehicles (UAVs), often called drones, have gained progressive prevalence thanks to their swift operational ability and their extensive applicability in diverse real-world situations. Of late, UAV usage in precision agriculture has attracted much interest from the scientific community, and this study examines how drones can aid precision farming. Big data technology, with its ability to analyze enormous amounts of data, is one of the crucial information and communication technologies (ICT) applied in precision agriculture to extract critical information, to help agricultural practitioners understand the most feasible farming practices, and to support better decision-making. This work analyzes communication protocols and their application to the challenge of commanding a drone fleet that protects crops from parasite infestations. Deep learning has shown great potential for computer-vision tasks and data-intensive applications, and this potential extends to agriculture. This research employs several schemes to assess the efficacy of models, including the Visual Geometry Group network (VGG-16), a convolutional neural network (CNN), and a fully convolutional network (FCN), in plant disease detection. Artificial immune system (AIS) methods can be used to adapt deep neural networks to the immediate situation. Simulated outcomes demonstrate that the proposed method provides superior performance over various other technologically advanced methods.

PMID:38747863 | DOI:10.1590/1519-6984.281671

Categories: Literature Watch

Residual network improves the prediction accuracy of genomic selection

Wed, 2024-05-15 06:00

Anim Genet. 2024 May 15. doi: 10.1111/age.13445. Online ahead of print.

ABSTRACT

Genetic improvement of complex traits in animal and plant breeding depends on the efficient and accurate estimation of breeding values. Deep learning methods have been shown to be not superior to traditional genomic selection (GS) methods, partially due to the degradation problem (i.e., as model depth increases, the deeper model's performance deteriorates). Since the deep learning method residual network (ResNet) is designed to solve gradient degradation, we examined its performance and the factors related to its prediction accuracy in GS. Here we compared the prediction accuracy of conventional genomic best linear unbiased prediction, Bayesian methods (BayesA, BayesB, BayesC, and Bayesian Lasso), and two deep learning methods, convolutional neural network and ResNet, on three datasets (wheat, simulated, and real pig data). ResNet outperformed the other methods in both Pearson's correlation coefficient (PCC) and mean squared error (MSE) on the wheat and simulated data. For the pig backfat depth trait, ResNet still had the lowest MSE, whereas Bayesian Lasso had the highest PCC. We further clustered the pig data into four groups and, on one separated group, ResNet had the highest prediction accuracy (both PCC and MSE). Transfer learning was adopted and was capable of enhancing the performance of both the convolutional neural network and ResNet. Taken together, our findings indicate that ResNet could improve GS prediction accuracy, affected potentially by factors such as the genetic architecture of complex traits, data volume, and heterogeneity.
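
The two accuracy measures compared throughout, Pearson's correlation coefficient and mean squared error, can be sketched directly from their definitions:

```python
import math

def pcc(xs, ys):
    """Pearson's correlation coefficient between predicted and observed
    values, e.g. estimated vs. true breeding values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mse(xs, ys):
    """Mean squared error between two equally long sequences."""
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

Note the two metrics can disagree, as in the backfat-depth result above: PCC rewards getting the ranking right, while MSE also penalizes scale and offset errors.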

PMID:38746973 | DOI:10.1111/age.13445

Categories: Literature Watch

MAMILNet: advancing precision oncology with multi-scale attentional multi-instance learning for whole slide image analysis

Wed, 2024-05-15 06:00

Front Oncol. 2024 Apr 30;14:1275769. doi: 10.3389/fonc.2024.1275769. eCollection 2024.

ABSTRACT

BACKGROUND: Whole Slide Image (WSI) analysis, driven by deep learning algorithms, has the potential to revolutionize tumor detection, classification, and treatment response prediction. However, challenges persist, such as limited model generalizability across various cancer types, the labor-intensive nature of patch-level annotation, and the necessity of integrating multi-magnification information to attain a comprehensive understanding of pathological patterns.

METHODS: In response to these challenges, we introduce MAMILNet, an innovative multi-scale attentional multi-instance learning framework for WSI analysis. The incorporation of attention mechanisms into MAMILNet contributes to its exceptional generalizability across diverse cancer types and prediction tasks. This model considers whole slides as "bags" and individual patches as "instances." By adopting this approach, MAMILNet effectively eliminates the requirement for intricate patch-level labeling, significantly reducing the manual workload for pathologists. To enhance prediction accuracy, the model employs a multi-scale "consultation" strategy, facilitating the aggregation of test outcomes from various magnifications.
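
The bag/instance formulation can be sketched as attention-weighted pooling of patch embeddings into one slide embedding, which is what removes the need for patch-level labels. The scores below stand in for a learned attention network, so this is an illustrative simplification of the mechanism, not MAMILNet itself:

```python
import math

def mil_attention_pool(instance_embeddings, attention_scores):
    """Attention-based multi-instance pooling: combine patch ('instance')
    embeddings into a single slide ('bag') embedding via softmax weights,
    so only a bag-level label is needed during training."""
    m = max(attention_scores)
    exps = [math.exp(s - m) for s in attention_scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(instance_embeddings[0])
    bag = [sum(w * emb[d] for w, emb in zip(weights, instance_embeddings))
           for d in range(dim)]
    return bag, weights
```

The multi-scale "consultation" strategy described above would run such pooling at each magnification and aggregate the resulting predictions.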

RESULTS: Our assessment of MAMILNet encompasses 1171 cases spanning a wide range of cancer types, showcasing its effectiveness in predicting complex tasks. Remarkably, MAMILNet achieved impressive results in distinct domains: for breast cancer tumor detection, the Area Under the Curve (AUC) was 0.8872, with an Accuracy of 0.8760. In the realm of lung cancer typing diagnosis, it achieved an AUC of 0.9551 and an Accuracy of 0.9095. Furthermore, in predicting drug therapy responses for ovarian cancer, MAMILNet achieved an AUC of 0.7358 and an Accuracy of 0.7341.

CONCLUSION: The outcomes of this study underscore the potential of MAMILNet in driving the advancement of precision medicine and individualized treatment planning within the field of oncology. By effectively addressing challenges related to model generalization, annotation workload, and multi-magnification integration, MAMILNet shows promise in enhancing healthcare outcomes for cancer patients. The framework's success in accurately detecting breast tumors, diagnosing lung cancer types, and predicting ovarian cancer therapy responses highlights its significant contribution to the field and paves the way for improved patient care.

PMID:38746682 | PMC:PMC11092915 | DOI:10.3389/fonc.2024.1275769

Categories: Literature Watch

A hybrid deep learning scheme for MRI-based preliminary multiclassification diagnosis of primary brain tumors

Wed, 2024-05-15 06:00

Front Oncol. 2024 Apr 30;14:1363756. doi: 10.3389/fonc.2024.1363756. eCollection 2024.

ABSTRACT

OBJECTIVES: The diagnosis and treatment of brain tumors have greatly benefited from extensive research in traditional radiomics, leading to improved efficiency for clinicians. With the rapid development of cutting-edge technologies, especially deep learning, further improvements in accuracy and automation are expected. In this study, we explored a hybrid deep learning scheme that integrates several advanced techniques to achieve reliable diagnosis of primary brain tumors with enhanced classification performance and interpretability.

METHODS: This study retrospectively included 230 patients with primary brain tumors, including 97 meningiomas, 66 gliomas, and 67 pituitary tumors, from the First Affiliated Hospital of Yangtze University. The effectiveness of the proposed scheme was validated on the included data and a commonly used dataset. Based on super-resolution reconstruction and dynamic learning rate annealing strategies, we compared the classification results of several deep learning models. The multi-classification performance was further improved by combining feature transfer and machine learning. Classification performance metrics included accuracy (ACC), area under the curve (AUC), sensitivity (SEN), and specificity (SPE).

RESULTS: In the deep learning tests conducted on the two datasets, the DenseNet121 model achieved the highest classification performance, with accuracies over five tests of 0.989 ± 0.006 and 0.967 ± 0.013, and AUCs of 0.999 ± 0.001 and 0.994 ± 0.005, respectively. In the hybrid deep learning tests, LightGBM, a promising classifier, achieved accuracies of 0.989 and 0.984, improving on the original deep learning scheme's 0.987 and 0.965. Sensitivities for both datasets were 0.985, specificities were 0.988 and 0.984, respectively, and relatively desirable receiver operating characteristic (ROC) curves were obtained. In addition, model visualization studies further verified the reliability and interpretability of the results.

CONCLUSIONS: These results illustrated that deep learning models combining several advanced technologies can reliably improve the performance, automation, and interpretability of primary brain tumor diagnosis, which is crucial for further brain tumor diagnostic research and individualized treatment.

PMID:38746679 | PMC:PMC11091367 | DOI:10.3389/fonc.2024.1363756

Categories: Literature Watch

Three dimensional convolutional neural network-based automated detection of midline shift in traumatic brain injury cases from head computed tomography scans

Wed, 2024-05-15 06:00

J Neurosci Rural Pract. 2024 Apr-Jun;15(2):293-299. doi: 10.25259/JNRP_490_2023. Epub 2024 Feb 29.

ABSTRACT

OBJECTIVES: Midline shift (MLS) is a critical indicator of the severity of brain trauma and is even suggestive of changes in intracranial pressure. At present, radiologists have to manually measure the MLS using laborious techniques. Automatic detection of MLS using artificial intelligence can be a cutting-edge solution for emergency health-care personnel to help in prompt diagnosis and treatment. In this study, we sought to determine the accuracy and the prognostic value of our screening tool that automatically detects MLS on computed tomography (CT) images in patients with traumatic brain injuries (TBIs).

MATERIALS AND METHODS: The study enrolled TBI cases who presented at the Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi. Institutional ethics committee permission was obtained before starting the study. Data collection was carried out over nine months, from January 2020 to September 2020, and included head CT scans, patient demographics, clinical details, and radiologists' reports. The radiologists' reports were considered the "gold standard" for evaluating the MLS. A deep learning-based three-dimensional (3D) convolutional neural network (CNN) model was developed using 176 head CT scans.

RESULTS: The developed 3D CNN model was trained using 156 scans and tested on 20 head CTs to determine the accuracy and sensitivity of the model. The screening tool correctly detected 7/10 MLS cases and 4/10 non-MLS cases. The model showed an accuracy of 55%, with high specificity (70%) and moderate sensitivity (40%).
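
Accuracy, sensitivity, and specificity all follow from the four confusion-matrix counts of a binary screening test; a generic sketch of the standard definitions (the counts in the usage below are hypothetical, not the study's):

```python
def screening_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary confusion-matrix counts."""
    total = tp + fn + tn + fp
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

For example, 8 true positives, 2 false negatives, 9 true negatives, and 1 false positive give accuracy 0.85, sensitivity 0.8, and specificity 0.9.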

CONCLUSION: An automated solution for screening the MLS can prove useful for neurosurgeons. The results are strong evidence that 3D CNN can assist clinicians in screening MLS cases in an emergency setting.

PMID:38746523 | PMC:PMC11090596 | DOI:10.25259/JNRP_490_2023

Categories: Literature Watch

MyoVision-US: an Artificial Intelligence-Powered Software for Automated Analysis of Skeletal Muscle Ultrasonography

Wed, 2024-05-15 06:00

medRxiv [Preprint]. 2024 Apr 30:2024.04.26.24306153. doi: 10.1101/2024.04.26.24306153.

ABSTRACT

INTRODUCTION/AIMS: Muscle ultrasound has high utility in clinical practice and research; however, the main challenges are the training and time required for manual analysis to achieve objective quantification of morphometry. This study aimed to develop and validate a software tool powered by artificial intelligence (AI), measuring its consistency with, and its ability to predict, expert manual analysis when quantifying lower limb muscle ultrasound images across healthy, acutely ill, and chronically ill subjects.

METHODS: Quadriceps complex (QC [rectus femoris and vastus intermedius]) and tibialis anterior (TA) muscle ultrasound images of healthy, intensive care unit, and/or lung cancer subjects were captured with portable devices. Automated analyses of muscle morphometry were performed using a custom-built deep-learning model (MyoVision-US), while manual analyses were performed by experts. Consistency between manual and automated analyses was determined using intraclass correlation coefficients (ICC), while the predictability of MyoVision-US was calculated using adjusted linear regression (adj. R²).

RESULTS: Manual analysis took approximately 24 hours to analyze all 180 images, whereas MyoVision-US took 247 seconds, a time saving of roughly 99.8%. Consistency between the manual and automated analyses by ICC was good to excellent for all QC (ICC: 0.85-0.99) and TA (ICC: 0.93-0.99) measurements, even for critically ill (ICC: 0.91-0.98) and lung cancer (ICC: 0.85-0.99) images. The predictability of MyoVision-US was moderate to strong for QC (adj. R2: 0.56-0.94) and TA parameters (adj. R2: 0.81-0.97).
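The abstract does not state which ICC form was used; as an illustration only, a two-way, consistency-type, single-measures ICC(3,1), a common choice when comparing a fixed pair of raters such as manual vs. automated analysis, can be computed from paired measurements as below (plain Python; the example data are hypothetical):

```python
def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measures.
    ratings: one row per subject, one column per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# A constant offset between raters still yields perfect consistency
print(icc_3_1([[1.0, 1.5], [2.0, 2.5], [3.0, 3.5]]))  # 1.0
```

Note that the consistency form deliberately ignores a systematic offset between raters; an absolute-agreement form (ICC(2,1)) would penalize it.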

DISCUSSION: The application of AI to automate lower limb muscle ultrasound analysis showed excellent consistency with, and strong predictability of, human analysis. Future work needs to explore AI-powered models for the evaluation of other skeletal muscle groups.

PMID:38746458 | PMC:PMC11092729 | DOI:10.1101/2024.04.26.24306153

Categories: Literature Watch

Training Robust T1-Weighted Magnetic Resonance Imaging Liver Segmentation Models Using Ensembles of Datasets with Different Contrast Protocols and Liver Disease Etiologies

Wed, 2024-05-15 06:00

Res Sq [Preprint]. 2024 Apr 30:rs.3.rs-4259791. doi: 10.21203/rs.3.rs-4259791/v1.

ABSTRACT

Image segmentation of the liver is an important step in several treatments for liver cancer. However, manual segmentation at a large scale is not practical, leading to increasing reliance on deep learning models to automatically segment the liver. This manuscript develops a deep learning model to segment the liver on T1w MR images. We sought to determine the best architecture by training, validating, and testing three different deep learning architectures using a total of 819 T1w MR images gathered from six different datasets, both publicly available and internal. Our experiments compared each architecture's testing performance when trained on data from the same dataset via 5-fold cross-validation to its testing performance when trained on all other datasets. Models trained using nnUNet achieved mean Dice-Sørensen similarity coefficients > 90% when tested on each of the six datasets individually. The performance of these models suggests that an nnUNet liver segmentation model trained on a large and diverse collection of T1w MR images would be robust to potential changes in contrast protocol and disease etiology.
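The Dice-Sørensen similarity coefficient used as the evaluation metric is simply twice the overlap of the predicted and reference masks divided by their combined size. A minimal sketch on flattened binary masks (the toy data are hypothetical):

```python
def dice(pred, truth):
    """Dice-Sorensen similarity between two flattened binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0  # both empty -> perfect

pred  = [0, 1, 1, 1, 0, 0]   # 3 predicted liver voxels
truth = [0, 1, 1, 0, 1, 0]   # 3 reference liver voxels, 2 overlapping
print(dice(pred, truth))  # 2*2/(3+3) = 0.666...
```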

PMID:38746406 | PMC:PMC11092841 | DOI:10.21203/rs.3.rs-4259791/v1

Categories: Literature Watch

An Anthropomorphic Diagnosis System of Pulmonary Nodules using Weak Annotation-Based Deep Learning

Wed, 2024-05-15 06:00

medRxiv [Preprint]. 2024 May 5:2024.05.03.24306828. doi: 10.1101/2024.05.03.24306828.

ABSTRACT

PURPOSE: To develop an anthropomorphic diagnosis system for pulmonary nodules (PNs) based on deep learning (DL) that is trained on weakly annotated data and has performance comparable to full-annotation-based diagnosis systems.

METHODS: The proposed system uses deep learning (DL) models to classify PNs (benign vs. malignant) with weak annotations, which eliminates the need for time-consuming and labor-intensive manual annotations of PNs. Moreover, the PN classification networks, augmented with handcrafted shape features acquired through the ball-scale transform technique, demonstrate capability to differentiate PNs with diverse labels, including pure ground-glass opacities, part-solid nodules, and solid nodules.

RESULTS: The experiments were conducted on two lung CT datasets: (1) the public LIDC-IDRI dataset with 1,018 subjects and (2) an in-house dataset with 2,740 subjects. Through 5-fold cross-validation on the two datasets, the system achieved the following results: (1) an area under the curve (AUC) of 0.938 for PN localization and an AUC of 0.912 for PN differential diagnosis on the LIDC-IDRI dataset of 814 testing cases, and (2) an AUC of 0.943 for PN localization and an AUC of 0.815 for PN differential diagnosis on the in-house dataset of 822 testing cases. These results demonstrate performance comparable to full-annotation-based diagnosis systems.
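The AUC values reported above are equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive (here, malignant) case receives a higher score than a randomly chosen negative one. A minimal sketch (plain Python, hypothetical scores):

```python
def auc(scores, labels):
    """AUC as P(score_pos > score_neg), ties counted as 0.5 (Mann-Whitney)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 4 hypothetical malignancy scores: 3 of 4 positive-negative pairs ranked correctly
print(auc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0]))  # 0.75
```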

CONCLUSIONS: Our system can efficiently localize and differentially diagnose PNs even in resource-limited environments, with good robustness across different grade and morphology sub-groups despite variations in nodule size, shape, and texture, indicating its potential for future clinical translation.

SUMMARY: An anthropomorphic diagnosis system for pulmonary nodules (PNs) based on deep learning and weak annotation was found to achieve performance comparable to diagnosis systems based on fully annotated datasets, significantly reducing the time and cost associated with annotation.

KEY POINTS: A fully automatic system for the diagnosis of PNs in CT scans using a suitable deep learning model and weak annotations was developed, achieving performance comparable to full-annotation-based deep learning models (AUC = 0.938 for PN localization, AUC = 0.912 for PN differential diagnosis) while reducing expert annotation time by around 30%-80%. The integration of handcrafted features acquired from human experts (natural intelligence) into the deep learning networks, and the fusion of the classification results of multi-scale networks, can efficiently improve PN classification performance across different nodule diameters and sub-groups.

PMID:38746400 | PMC:PMC11092690 | DOI:10.1101/2024.05.03.24306828

Categories: Literature Watch
