Deep learning
Increased methane emissions from oil and gas following the Soviet Union's collapse
Proc Natl Acad Sci U S A. 2024 Mar 19;121(12):e2314600121. doi: 10.1073/pnas.2314600121. Epub 2024 Mar 12.
ABSTRACT
Global atmospheric methane concentrations rose by 10 to 15 ppb/y in the 1980s before abruptly slowing to 2 to 8 ppb/y in the early 1990s. This period is known as the "methane slowdown" and has been attributed in part to the collapse of the Soviet Union (USSR) in December 1991, which may have decreased methane emissions from oil and gas operations. Here, we develop a methane plume detection system based on probabilistic deep learning and human-labeled training data. We use this method to detect methane plumes in Landsat 5 satellite observations over Turkmenistan from 1986 to 2011. We focus on Turkmenistan because economic data suggest it could account for half of the decline in oil and gas emissions from the former USSR. We find an increase in both the frequency of methane plume detections and the magnitude of methane emissions following the collapse of the USSR. We estimate a national loss rate from oil and gas infrastructure in Turkmenistan of more than 10% at times, which suggests the socioeconomic turmoil led to a lack of oversight and widespread infrastructure failure in the oil and gas sector. Our finding of increased oil and gas methane emissions from Turkmenistan following the USSR's collapse casts doubt on this long-standing hypothesis for the methane slowdown, raising the question: what drove the 1992 methane slowdown?
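The national loss rate quoted above is, at its core, total inferred emissions divided by total production. A minimal sketch of that arithmetic, with purely illustrative numbers (not the paper's figures):

```python
# Hypothetical illustration of a national loss-rate estimate:
# loss rate = methane emitted / methane produced.
# The quantities below are made up for illustration only.

def loss_rate(emitted_kt: float, produced_kt: float) -> float:
    """Fraction of produced gas lost to the atmosphere."""
    return emitted_kt / produced_kt

# e.g. 4,000 kt emitted against 35,000 kt of gas brought to market
rate = loss_rate(4000.0, 35000.0)
print(f"loss rate: {rate:.1%}")
```

A rate above 10%, as in this toy case, would correspond to the "more than 10% at times" finding reported in the abstract.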
PMID:38470920 | DOI:10.1073/pnas.2314600121
Multi-modal deep learning methods for classification of chest diseases using different medical imaging and cough sounds
PLoS One. 2024 Mar 12;19(3):e0296352. doi: 10.1371/journal.pone.0296352. eCollection 2024.
ABSTRACT
Chest disease refers to a wide range of conditions affecting the lungs, such as COVID-19, lung cancer (LC), lung consolidation (COL), and many more. When diagnosing chest disorders, medical professionals may be thrown off by overlapping symptoms such as fever, cough, and sore throat. Researchers and medical professionals use chest X-rays (CXR), cough sounds, and computed tomography (CT) scans to diagnose chest disorders. The present study aims to classify nine chest conditions, including COVID-19, LC, COL, atelectasis (ATE), tuberculosis (TB), pneumothorax (PNEUTH), edema (EDE), and pneumonia (PNEU). We propose four novel convolutional neural network (CNN) models that learn distinct image-level representations for the nine chest disease classes by extracting features from images. The proposed CNNs employ several approaches, including max-pooling layers, batch normalization layers (BANL), dropout, rank-based average pooling (RBAP), and multiple-way data generation (MWDG). The scalogram method is used to transform cough sounds into a visual representation. Before training, the SMOTE approach is used to balance the CXR and CT scans as well as the cough sound images (CSI) across the nine chest disorders. The CXR, CT scan, and CSI data used for training and evaluating the proposed model come from 24 publicly available benchmark chest illness datasets. The classification performance of the proposed model is compared with that of seven baseline models, namely VGG-19, ResNet-101, ResNet-50, DenseNet-121, EfficientNetB0, DenseNet-201, and Inception-V3, in addition to state-of-the-art (SOTA) classifiers. The effectiveness of the proposed model is further demonstrated by ablation experiments.
The proposed model achieved an accuracy of 99.01%, outperforming both the baseline models and the SOTA classifiers. The proposed approach can therefore offer significant support to radiologists and other medical professionals.
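The SMOTE step mentioned above oversamples minority classes by interpolating between a minority sample and one of its nearest minority-class neighbours. A toy, framework-free sketch of that single interpolation step (not the authors' implementation):

```python
import math
import random

def smote_sample(minority, k=3, rng=random.Random(0)):
    """Generate one synthetic minority-class point by interpolating
    between a random minority sample and one of its k nearest
    minority-class neighbours (the classic SMOTE step)."""
    base = rng.choice(minority)
    # distances from the base point to every other minority point
    dists = sorted(
        (math.dist(base, other), other)
        for other in minority if other is not base
    )
    neighbour = rng.choice(dists[:k])[1]
    gap = rng.random()  # interpolation factor in [0, 1)
    return tuple(b + gap * (n - b) for b, n in zip(base, neighbour))

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
synthetic = smote_sample(minority)
print(synthetic)  # lies on a segment between two existing minority points
```

In practice this would be applied to the flattened image feature vectors of the under-represented disease classes until the class counts are balanced.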
PMID:38470893 | DOI:10.1371/journal.pone.0296352
Development of a deep learning-based surveillance system for forest fire detection and monitoring using UAV
PLoS One. 2024 Mar 12;19(3):e0299058. doi: 10.1371/journal.pone.0299058. eCollection 2024.
ABSTRACT
This study presents a surveillance system developed for the early detection of forest fires. Deep learning is utilized for aerial detection of fires using images obtained from a camera mounted on a custom-designed four-rotor Unmanned Aerial Vehicle (UAV). The object detection performance of YOLOv8 and YOLOv5 was examined for identifying forest fires, and a CNN-RCNN network was constructed to classify images as containing fire or not. This classification approach was also compared with YOLOv8 classification. An onboard NVIDIA Jetson Nano, an embedded artificial intelligence computer, serves as the hardware for real-time forest fire detection. A ground station interface was also developed to receive and display fire-related data, providing access to fire images and coordinate information for targeted intervention in case of a fire. The UAV autonomously monitored the designated area and captured images continuously. Embedded deep learning algorithms on the Nano board enable the UAV to detect forest fires within its operational area. The detection methods produced the following results: 96% accuracy for YOLOv8 classification, 89% accuracy for YOLOv8n object detection, 96% accuracy for CNN-RCNN classification, and 89% accuracy for YOLOv5n object detection.
PMID:38470887 | DOI:10.1371/journal.pone.0299058
Time series classification of multi-channel nerve cuff recordings using deep learning
PLoS One. 2024 Mar 12;19(3):e0299271. doi: 10.1371/journal.pone.0299271. eCollection 2024.
ABSTRACT
Neurostimulation and neural recording are crucial to develop neuroprostheses that can restore function to individuals living with disabilities. While neurostimulation has been successfully translated into clinical use for several applications, it remains challenging to robustly collect and interpret neural recordings, especially for chronic applications. Nerve cuff electrodes offer a viable option for recording nerve signals, with long-term implantation success. However, nerve cuff electrodes' signals have low signal-to-noise ratios, resulting in reduced selectivity between neural pathways. The objective of this study was to determine whether deep learning techniques, specifically networks tailored for time series applications, can increase the recording selectivity achievable using multi-contact nerve cuff electrodes. We compared several neural network architectures, the impact and trade-off of window length on classification performance, and the benefit of data augmentation. Evaluation was carried out using a previously collected dataset of 56-channel nerve cuff recordings from the sciatic nerve of Long-Evans rats, which included afferent signals evoked using three types of mechanical stimuli. Through this study, the best model achieved an accuracy of 0.936 ± 0.084 and an F1-score of 0.917 ± 0.103, using 50 ms windows of data and an augmented training set. These results demonstrate the effectiveness of applying CNNs designed for time-series data to peripheral nerve recordings, and provide insights into the relationship between window duration and classification performance in this application.
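The 50 ms windowing described above amounts to slicing each multi-channel recording into fixed-duration segments before classification. A minimal sketch (the sampling rate and the non-overlapping step are illustrative assumptions, not taken from the paper):

```python
def window_recording(signal, fs_hz=30000, win_ms=50, step_ms=50):
    """Split a multi-channel recording (a list of per-sample channel
    vectors) into fixed-duration windows for classification."""
    win = int(fs_hz * win_ms / 1000)   # samples per window
    step = int(fs_hz * step_ms / 1000)
    return [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, step)]

# toy "recording": 56 channels, 0.2 s at 1 kHz for brevity
fs = 1000
recording = [[0.0] * 56 for _ in range(int(0.2 * fs))]
windows = window_recording(recording, fs_hz=fs)
print(len(windows), len(windows[0]))  # 4 windows of 50 samples each
```

Each window would then be fed to the time-series network as a (samples × channels) input; longer windows trade latency for context, which is the trade-off the study quantifies.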
PMID:38470880 | DOI:10.1371/journal.pone.0299271
Public perceptions of synthetic cooling agents in electronic cigarettes on Twitter
PLoS One. 2024 Mar 12;19(3):e0292412. doi: 10.1371/journal.pone.0292412. eCollection 2024.
ABSTRACT
Amid a potential menthol ban, electronic cigarette (e-cigarette) companies are incorporating synthetic cooling agents like WS-3 and WS-23 to replicate menthol/mint sensations. This study examines public views on synthetic cooling agents in e-cigarettes via Twitter data. From May 2021 to March 2023, we used the Twitter Streaming Application Programming Interface (API) to collect tweets related to synthetic cooling agents with keywords such as 'WS-23,' 'ice,' and 'frozen.' The RoBERTa (Robustly Optimized BERT Pretraining Approach) deep learning model, which can be optimized for contextual language understanding, was used to classify attitudes expressed in tweets about synthetic cooling agents and to identify e-cigarette users. The BERTopic deep learning model (a topic modeling technique that leverages Bidirectional Encoder Representations from Transformers), which specializes in extracting and clustering topics from large text collections, identified the major topics of positive and negative tweets. Two-proportion Z-tests were used to compare the proportions of positive and negative attitudes between e-cigarette users (vapers) and non-users (non-vapers). Of 6,940,065 e-cigarette-related tweets, 5,788 were non-commercial tweets related to synthetic cooling agents. A longitudinal trend analysis showed a clear upward trend in discussions. Vapers posted most of the tweets (73.05%, 4,228/5,788). Nearly half (47.87%, 2,771/5,788) held a positive attitude toward synthetic cooling agents, significantly more than held a negative attitude (19.92%, 1,153/5,788; P < 0.0001). The likelihood of vapers expressing positive attitudes (60.17%, 2,544/4,228) was significantly higher (P < 0.0001) than that of non-vapers (14.55%, 227/1,560). Conversely, negative attitudes were significantly (P < 0.0001) more common among non-vapers (30%, 468/1,560) than among vapers (16.2%, 685/4,228).
Prevalent topics from positive tweets included "enjoyment of specific vape flavors," "preference for lush ice vapes," and "liking of minty/icy feelings." Major topics from negative tweets included "disliking certain vape flavors" and "dislike of others vaping around them." On Twitter, vapers are more likely to have a positive attitude toward synthetic cooling agents than non-vapers. Our study provides important insights into how the public perceives synthetic cooling agents in e-cigarettes. These insights are crucial for shaping future U.S. Food and Drug Administration (FDA) regulations aimed at safeguarding public health.
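The two-proportion Z-test used above follows the standard pooled-proportion formula. A stdlib-only sketch, applied to the reported vaper vs. non-vaper positive-attitude counts:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion Z-test with a pooled proportion; returns the
    Z statistic and a two-sided p-value via the normal CDF."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# positive attitudes: vapers 2,544/4,228 vs non-vapers 227/1,560
z, p = two_prop_z(2544, 4228, 227, 1560)
print(f"z = {z:.1f}, p < 0.0001: {p < 0.0001}")
```

With these counts the Z statistic is far beyond any conventional critical value, consistent with the P < 0.0001 reported in the abstract.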
PMID:38470869 | DOI:10.1371/journal.pone.0292412
SCAC: A Semi-Supervised Learning Approach for Cervical Abnormal Cell Detection
IEEE J Biomed Health Inform. 2024 Mar 12;PP. doi: 10.1109/JBHI.2024.3375889. Online ahead of print.
ABSTRACT
Cervical abnormal cell detection plays a crucial role in the early screening of cervical cancer. In recent years, some deep learning-based methods have been proposed. However, these methods rely heavily on large amounts of annotated images, which are time-consuming and labor-intensive to acquire, thus limiting detection performance. In this paper, we present a novel Semi-supervised Cervical Abnormal Cell detector (SCAC), which effectively utilizes the abundant unlabeled data. We utilize Transformer as the backbone of SCAC to capture long-range dependencies and mimic the diagnostic process of pathologists. In addition, in SCAC, we design a Unified Strong and Weak Augment strategy (USWA) that unifies two data augmentation pipelines, implementing consistency regularization in semi-supervised learning and enhancing the diversity of the training data. We also develop a Global Attention Feature Pyramid Network (GAFPN), which utilizes the attention mechanism to better extract multi-scale features from cervical cytology images. Notably, we have created an unlabeled cervical cytology image dataset, which can be leveraged by semi-supervised learning to enhance detection accuracy. To the best of our knowledge, this is the first publicly available large unlabeled cervical cytology image dataset. By combining this dataset with two publicly available annotated datasets, we demonstrate that SCAC outperforms other existing methods, achieving state-of-the-art performance. Additionally, comprehensive ablation studies are conducted to validate the effectiveness of USWA and GAFPN. These promising results highlight the capability of SCAC to achieve high diagnostic accuracy and extensive clinical applications. The code and dataset are publicly available at https://github.com/Lewisonez/cc_detection.
PMID:38470598 | DOI:10.1109/JBHI.2024.3375889
MM-Net: A MixFormer-Based Multi-Scale Network for Anatomical and Functional Image Fusion
IEEE Trans Image Process. 2024 Mar 12;PP. doi: 10.1109/TIP.2024.3374072. Online ahead of print.
ABSTRACT
Anatomical and functional image fusion is an important technique in a variety of medical and biological applications. Recently, deep learning (DL)-based methods have become a mainstream direction in the field of multi-modal image fusion. However, existing DL-based fusion approaches have difficulty in effectively capturing local features and global contextual information simultaneously. In addition, the scale diversity of features, which is a crucial issue in image fusion, often lacks adequate attention in most existing works. In this paper, to address the above problems, we propose a MixFormer-based multi-scale network, termed MM-Net, for anatomical and functional image fusion. In our method, an improved MixFormer-based backbone is introduced to sufficiently extract both local features and global contextual information at multiple scales from the source images. The features from different source images are fused at multiple scales based on a multi-source spatial attention-based cross-modality feature fusion (CMFF) module. The scale diversity of the fused features is further enriched by a series of multi-scale feature interaction (MSFI) modules and feature aggregation upsample (FAU) modules. Moreover, a loss function consisting of both spatial domain and frequency domain components is devised to train the proposed fusion model. Experimental results demonstrate that our method outperforms several state-of-the-art fusion methods in both qualitative and quantitative comparisons, and the proposed fusion model exhibits good generalization capability. The source code of our fusion method will be available at https://github.com/yuliu316316.
PMID:38470587 | DOI:10.1109/TIP.2024.3374072
Detection of urinary tract stones on submillisievert abdominopelvic CT imaging with deep-learning image reconstruction algorithm (DLIR)
Abdom Radiol (NY). 2024 Mar 12. doi: 10.1007/s00261-024-04223-w. Online ahead of print.
ABSTRACT
PURPOSE: Urolithiasis is a chronic condition that leads to repeated CT scans throughout the patient's life. The goal was to assess the diagnostic performance and image quality of submillisievert abdominopelvic computed tomography (CT) using deep learning-based image reconstruction (DLIR) in urolithiasis.
METHODS: 57 patients with suspected urolithiasis underwent both non-contrast low-dose (LD) and ultra-low-dose (ULD) abdominopelvic CT. Raw image data from ULD CT were reconstructed using hybrid iterative reconstruction (ASIR-V 70%) and high-strength-level DLIR (DLIR-H). The performance of ULD CT for the detection of urinary stones was assessed by two readers and compared with LD CT with ASIR-V 70% as the reference standard. Image quality was assessed subjectively and objectively.
RESULTS: 266 stones were detected in 38 patients. The mean effective dose was 0.59 mSv for ULD CT and 1.96 mSv for LD CT. Sensitivity and specificity were 89% and 94%, respectively, for ULD CT with DLIR-H. There was almost perfect intra-observer concordance between ULD CT with DLIR-H and LD CT with ASIR-V 70% (ICC = 0.90 for both readers). Image noise was significantly lower and signal-to-noise ratio significantly higher with DLIR-H than with ASIR-V 70%. Subjective image quality was also significantly better with ULD CT with DLIR-H.
CONCLUSION: ULD CT with deep learning image reconstruction maintains good diagnostic performance in urolithiasis, with better image quality than hybrid iterative reconstruction and a significant radiation dose reduction.
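The radiation saving follows directly from the two mean effective doses reported in the results. A quick check:

```python
# Mean effective doses reported in the abstract (mSv)
uld_dose, ld_dose = 0.59, 1.96

# Relative dose reduction of ULD CT versus LD CT
reduction = 1 - uld_dose / ld_dose
print(f"dose reduction: {reduction:.1%}")  # ≈ 69.9%
```

That is, the ULD protocol delivers roughly 30% of the LD dose, a ~70% reduction per scan, which matters for a chronic condition that prompts repeated CT over a patient's lifetime.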
PMID:38470506 | DOI:10.1007/s00261-024-04223-w
Development of a Social Risk Score in the Electronic Health Record to Identify Social Needs Among Underserved Populations: Retrospective Study
JMIR Form Res. 2024 Mar 12;8:e54732. doi: 10.2196/54732.
ABSTRACT
BACKGROUND: Patients with unmet social needs and social determinants of health (SDOH) challenges continue to face a disproportionate risk of increased prevalence of disease, health care use, higher health care costs, and worse outcomes. Some existing predictive models have used the available data on social needs and SDOH challenges to predict health-related social needs or the need for various social service referrals. Despite these one-off efforts, the work to date suggests that many technical and organizational challenges must be surmounted before SDOH-integrated solutions can be implemented on an ongoing, wide-scale basis within most US-based health care organizations.
OBJECTIVE: We aimed to retrieve available information in the electronic health record (EHR) relevant to the identification of persons with social needs and to develop a social risk score for use within clinical practice to better identify patients at risk of having future social needs.
METHODS: We conducted a retrospective study using EHR data (2016-2021) and data from the US Census American Community Survey. We developed a prospective model using current year-1 risk factors to predict future year-2 outcomes within four 2-year cohorts. Predictors of interest included demographics, previous health care use, comorbidity, previously identified social needs, and neighborhood characteristics as reflected by the area deprivation index. The outcome variable was a binary indicator reflecting the likelihood of the presence of a patient with social needs. We applied a generalized estimating equation approach, adjusting for patient-level risk factors, the possible effect of geographically clustered data, and the effect of multiple visits for each patient.
RESULTS: The study population of 1,852,228 patients included middle-aged (mean age range 53.76-55.95 years), White (range 324,279/510,770, 63.49% to 290,688/488,666, 64.79%), and female (range 314,741/510,770, 61.62% to 278,488/448,666, 62.07%) patients from neighborhoods with high socioeconomic status (mean area deprivation index percentile range 28.76-30.31). Between 8.28% (37,137/448,666) and 11.55% (52,037/450,426) of patients across the study cohorts had at least 1 social need documented in their EHR, with safety issues and economic challenges (ie, financial resource strain, employment, and food insecurity) being the most common documented social needs (87,152/1,852,228, 4.71% and 58,242/1,852,228, 3.14% of overall patients, respectively). The model had an area under the curve of 0.702 (95% CI 0.699-0.705) in predicting prospective social needs in the overall study population. Previous social needs (odds ratio 3.285, 95% CI 3.237-3.335) and emergency department visits (odds ratio 1.659, 95% CI 1.634-1.684) were the strongest predictors of future social needs.
CONCLUSIONS: Our model provides an opportunity to make use of available EHR data to help identify patients with high social needs. Our proposed social risk score could help identify the subset of patients who would most benefit from further social needs screening and data collection to avoid potentially more burdensome primary data collection on all patients in a target population of interest.
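The reported AUC of 0.702 has a concrete interpretation: the probability that a randomly chosen patient with future social needs receives a higher risk score than a randomly chosen patient without. The rank-based (Mann-Whitney) computation behind it can be sketched framework-free, on toy scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# toy risk scores for patients with / without documented social needs
with_needs = [0.9, 0.8, 0.55, 0.4]
without_needs = [0.7, 0.5, 0.3, 0.2]
print(auc(with_needs, without_needs))  # 0.8125
```

An AUC of 0.702 therefore means the social risk score orders such pairs correctly about 70% of the time, which supports its proposed use for triaging who receives further social needs screening.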
PMID:38470477 | DOI:10.2196/54732
Improving Quality of ICD-10 (International Statistical Classification of Diseases, Tenth Revision) Coding Using AI: Protocol for a Crossover Randomized Controlled Trial
JMIR Res Protoc. 2024 Mar 12;13:e54593. doi: 10.2196/54593.
ABSTRACT
BACKGROUND: Computer-assisted clinical coding (CAC) tools are designed to help clinical coders assign standardized codes, such as the ICD-10 (International Statistical Classification of Diseases, Tenth Revision), to clinical texts, such as discharge summaries. Maintaining the integrity of these standardized codes is important both for the functioning of health systems and for ensuring that data used for secondary purposes are of high quality. Clinical coding is an error-prone, cumbersome task, and the complexity of modern classification systems such as the ICD-11 (International Classification of Diseases, Eleventh Revision) presents significant barriers to implementation. To date, there have been only a few user studies, so our understanding of the role CAC systems can play in reducing the burden of coding and improving overall coding quality remains limited.
OBJECTIVE: The objective of the user study is to generate both qualitative and quantitative data for measuring the usefulness of a CAC system, Easy-ICD, that was developed for recommending ICD-10 codes. Specifically, our goal is to assess whether our tool can reduce the burden on clinical coders and also improve coding quality.
METHODS: The user study is based on a crossover randomized controlled trial study design, where we measure the performance of clinical coders when they use our CAC tool versus when they do not. Performance is measured by the time it takes them to assign codes to both simple and complex clinical texts as well as the coding quality, that is, the accuracy of code assignment.
RESULTS: We expect the study to provide a measurement of the effectiveness of the CAC system compared to manual coding processes, in terms of both time use and coding quality. Positive outcomes would imply that CAC tools hold the potential to reduce the burden on health care staff, with major implications for the adoption of artificial intelligence-based CAC innovations to improve coding practice. Results are expected to be published in summer 2024.
CONCLUSIONS: The planned user study promises a greater understanding of the impact CAC systems might have on clinical coding in real-life settings, especially with regard to coding time and quality. Further, the study may add new insights on how to meaningfully exploit current clinical text mining capabilities, with a view to reducing the burden on clinical coders, thus lowering the barriers and paving a more sustainable path to the adoption of modern coding systems, such as the new ICD-11.
TRIAL REGISTRATION: clinicaltrials.gov NCT06286865; https://clinicaltrials.gov/study/NCT06286865.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/54593.
PMID:38470476 | DOI:10.2196/54593
Integrating data distribution prior via Langevin dynamics for end-to-end MR reconstruction
Magn Reson Med. 2024 Mar 12. doi: 10.1002/mrm.30065. Online ahead of print.
ABSTRACT
PURPOSE: To develop a novel deep learning-based method inheriting the advantages of data distribution prior and end-to-end training for accelerating MRI.
METHODS: Langevin dynamics is used to formulate image reconstruction with data distribution before facilitate image reconstruction. The data distribution prior is learned implicitly through the end-to-end adversarial training to mitigate the hyper-parameter selection and shorten the testing time compared to traditional probabilistic reconstruction. By seamlessly integrating the deep equilibrium model, the iteration of Langevin dynamics culminates in convergence to a fix-point, ensuring the stability of the learned distribution.
RESULTS: The feasibility of the proposed method is evaluated on the brain and knee datasets. Retrospective results with uniform and random masks show that the proposed method demonstrates superior performance both quantitatively and qualitatively than the state-of-the-art.
CONCLUSION: The proposed method, incorporating Langevin dynamics with end-to-end adversarial training, facilitates efficient and robust MRI reconstruction. Empirical evaluations on brain and knee datasets demonstrate the superior performance of the proposed method in terms of artifact removal and detail preservation.
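The Langevin iteration at the heart of this method updates the estimate by a step along the gradient of the log data density plus injected Gaussian noise. On a one-dimensional Gaussian "prior" it reduces to the following toy sketch (the target here is illustrative, not the paper's learned distribution):

```python
import math
import random

def langevin_samples(grad_log_p, x0=0.0, step=0.01, n_steps=5000, seed=0):
    """Unadjusted Langevin dynamics:
    x_{k+1} = x_k + step * grad log p(x_k) + sqrt(2 * step) * noise."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x = x + step * grad_log_p(x) + math.sqrt(2 * step) * rng.gauss(0, 1)
        samples.append(x)
    return samples

# toy target: N(3, 1), so grad log p(x) = -(x - 3)
samples = langevin_samples(lambda x: -(x - 3.0))
mean = sum(samples[1000:]) / len(samples[1000:])
print(f"empirical mean after burn-in: {mean:.2f} (target 3.0)")
```

In the paper's setting the scalar x becomes the image, the analytic gradient is replaced by a network learned adversarially, and the deep equilibrium formulation drives the iteration to a fixed point rather than a stochastic sample path.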
PMID:38469985 | DOI:10.1002/mrm.30065
TensorFit: A torch-based tool for ultrafast metabolite fitting of large MRSI data sets
Magn Reson Med. 2024 Mar 12. doi: 10.1002/mrm.30084. Online ahead of print.
ABSTRACT
PURPOSE: To introduce a tool (TensorFit) for ultrafast and robust metabolite fitting of MRSI data based on Torch's auto-differentiation and optimization framework.
METHODS: TensorFit was implemented in Python based on Torch's auto-differentiation to fit individual metabolites in MRS spectra. The underlying time domain and/or frequency domain fitting model is based on a linear combination of metabolite spectroscopic response. The computational time efficiency and accuracy of TensorFit were tested on simulated and in vivo MRS data and compared against TDFDFit and QUEST.
RESULTS: TensorFit demonstrates a significant improvement in computation speed, achieving a 165-fold acceleration compared with TDFDFit and a 115-fold acceleration compared with QUEST. TensorFit showed smaller percentage errors on simulated data than TDFDFit and QUEST. When tested on in vivo data, it performed similarly to TDFDFit, with a 2% better fit in terms of mean squared error while obtaining a 169-fold speedup.
CONCLUSION: TensorFit enables fast and robust metabolite fitting in large MRSI data sets compared with conventional metabolite fitting methods. This tool could boost the clinical applicability of large 3D MRSI by enabling the fitting of large MRSI data sets within computation times acceptable in a clinical environment.
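The linear-combination model described in the methods fits each spectrum as a weighted sum of metabolite basis responses. The least-squares core of such a fit, for two basis signals, can be written without any framework (TensorFit itself solves this with Torch auto-differentiation and optimization; this is only a hand-rolled illustration of the model):

```python
def fit_two_basis(y, b1, b2):
    """Least-squares amplitudes (a1, a2) for y ≈ a1*b1 + a2*b2,
    solved via the 2x2 normal equations."""
    s11 = sum(u * u for u in b1)
    s22 = sum(u * u for u in b2)
    s12 = sum(u * v for u, v in zip(b1, b2))
    r1 = sum(u * v for u, v in zip(b1, y))
    r2 = sum(u * v for u, v in zip(b2, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det)

# toy "metabolite" responses and a noiseless mixture with amplitudes 2 and 0.5
b1 = [1.0, 0.0, 1.0, 0.0]
b2 = [0.0, 1.0, 1.0, 1.0]
y = [2 * u + 0.5 * v for u, v in zip(b1, b2)]
print(fit_two_basis(y, b1, b2))  # recovers (2.0, 0.5)
```

Real MRSI fitting adds nonlinear parameters (frequency shifts, damping, phase), which is where gradient-based auto-differentiation over thousands of voxels pays off.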
PMID:38469890 | DOI:10.1002/mrm.30084
Natalizumab reduces loss of gray matter and thalamic volume in patients with relapsing-remitting multiple sclerosis: A post hoc analysis from the randomized, placebo-controlled AFFIRM trial
Mult Scler. 2024 Mar 12:13524585241235055. doi: 10.1177/13524585241235055. Online ahead of print.
ABSTRACT
BACKGROUND: Loss of brain gray matter fractional volume predicts multiple sclerosis (MS) progression and is associated with worsening physical and cognitive symptoms. Within deep gray matter, thalamic damage is evident in early stages of MS and correlates with physical and cognitive impairment. Natalizumab is a highly effective treatment that reduces disease progression and the number of inflammatory lesions in patients with relapsing-remitting MS (RRMS).
OBJECTIVE: To evaluate the effect of natalizumab on gray matter and thalamic atrophy.
METHODS: A combination of deep learning-based image segmentation and data augmentation was applied to MRI data from the AFFIRM trial.
RESULTS: This post hoc analysis identified a reduction of 64.3% (p = 0.0044) and 64.3% (p = 0.0030) in mean percentage gray matter volume loss from baseline at treatment years 1 and 2, respectively, in patients treated with natalizumab versus placebo. The reduction in thalamic fraction volume loss from baseline with natalizumab versus placebo was 57.0% at year 2 (p < 0.0001) and 41.2% at year 1 (p = 0.0147). Similar findings resulted from analyses of absolute gray matter and thalamic fraction volume loss.
CONCLUSION: These analyses represent the first placebo-controlled evidence supporting a role for natalizumab treatment in mitigating gray matter and thalamic fraction atrophy among patients with RRMS.
CLINICALTRIALS.GOV IDENTIFIER: NCT00027300; URL: https://clinicaltrials.gov/ct2/show/NCT00027300.
PMID:38469809 | DOI:10.1177/13524585241235055
Deep learning for precise diagnosis and subtype triage of drug-resistant tuberculosis on chest computed tomography
MedComm (2020). 2024 Mar 10;5(3):e487. doi: 10.1002/mco2.487. eCollection 2024 Mar.
ABSTRACT
Deep learning, which transforms input data into target predictions through intricate network structures, has inspired novel exploration of automated diagnosis based on medical images. The distinct morphological characteristics of chest abnormalities between drug-resistant tuberculosis (DR-TB) and drug-sensitive tuberculosis (DS-TB) on chest computed tomography (CT) are of potential value in differential diagnosis, which is challenging in the clinic. Hence, based on 1176 chest CT volumes from an equal number of patients with tuberculosis (TB), we present DeepTB, a deep learning-based system for TB drug resistance identification and subtype classification, which can automatically diagnose DR-TB and classify crucial subtypes, including rifampicin-resistant, multidrug-resistant, and extensively drug-resistant tuberculosis. Moreover, chest lesions were manually annotated to give the model robust power to assist radiologists in image interpretation, and a Circos plot revealed the relationship between chest abnormalities and specific types of DR-TB. DeepTB achieved an area under the curve (AUC) of up to 0.930 for thoracic abnormality detection and 0.943 for DR-TB diagnosis. Notably, the system demonstrated instructive value in DR-TB subtype classification, with AUCs ranging from 0.880 to 0.928. Class activation maps were generated to express a human-understandable visual concept. Together, these results show prominent performance, suggesting DeepTB could be impactful in clinical decision-making for DR-TB.
PMID:38469547 | PMC:PMC10925488 | DOI:10.1002/mco2.487
Micro-CT and deep learning: Modern techniques and applications in insect morphology and neuroscience
Front Insect Sci. 2023 Jan 23;3:1016277. doi: 10.3389/finsc.2023.1016277. eCollection 2023.
ABSTRACT
Advances in modern imaging and computer technologies have led to a steady rise in the use of micro-computed tomography (µCT) in many biological areas. In zoological research, this fast and non-destructive method for producing high-resolution, two- and three-dimensional images is increasingly being used for the functional analysis of the external and internal anatomy of animals. µCT is hereby no longer limited to the analysis of specific biological tissues in a medical or preclinical context but can be combined with a variety of contrast agents to study form and function of all kinds of tissues and species, from mammals and reptiles to fish and microscopic invertebrates. Concurrently, advances in the field of artificial intelligence, especially in deep learning, have revolutionised computer vision and facilitated the automatic, fast and ever more accurate analysis of two- and three-dimensional image datasets. Here, I want to give a brief overview of both micro-computed tomography and deep learning and present their recent applications, especially within the field of insect science. Furthermore, the combination of both approaches to investigate neural tissues and the resulting potential for the analysis of insect sensory systems, from receptor structures via neuronal pathways to the brain, are discussed.
PMID:38469492 | PMC:PMC10926430 | DOI:10.3389/finsc.2023.1016277
Optimised deep k-nearest neighbour's based diabetic retinopathy diagnosis (ODeep-NN) using retinal images
Health Inf Sci Syst. 2024 Mar 9;12(1):23. doi: 10.1007/s13755-024-00282-x. eCollection 2024 Dec.
ABSTRACT
Diabetes mellitus is regarded as one of the prime health issues of the present day and can often lead to diabetic retinopathy, a complication of the disease that affects the eyes and causes loss of vision. To detect the condition precisely, clinicians must recognise the presence of lesions in colour fundus images, an arduous and time-consuming task. To deal with this problem, much work has gone into developing deep learning-based computer-aided diagnosis systems that assist clinicians in making accurate diagnoses from medical images. However, the basic operations involved in deep learning models lead to the extraction of a bulky set of features, in turn requiring a long training period to predict the existence of the disease. For effective execution of these models, feature selection becomes an important task that aids in selecting the most appropriate features, with the aim of increasing classification accuracy. This research presents an optimised deep k-nearest-neighbours-based pipeline model that amalgamates the feature extraction capability of deep learning models with nature-inspired metaheuristic algorithms, using the k-nearest-neighbour algorithm for classification. The proposed model attains accuracies of 97.67% and 98.05% on the two datasets considered, outperforming the ResNet50 and AlexNet deep learning models. Additionally, the experimental results include an analysis of five different nature-inspired metaheuristic algorithms considered for feature selection, on the basis of various evaluation parameters.
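The final k-nearest-neighbour classification step on selected deep features can be sketched in plain Python. The feature vectors and labels below are toy values; the paper pairs this step with deep feature extraction and metaheuristic feature selection:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority-vote k-NN: train is a list of (feature_vector, label)."""
    neighbours = sorted(train, key=lambda fv_y: math.dist(fv_y[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# toy deep-feature vectors labelled DR (diabetic retinopathy) / healthy
train = [((0.9, 0.8), "DR"), ((0.8, 0.9), "DR"),
         ((0.1, 0.2), "healthy"), ((0.2, 0.1), "healthy")]
print(knn_predict(train, (0.85, 0.85)))  # → DR
```

Because k-NN has no training phase of its own, shrinking the feature set via metaheuristic selection directly reduces both its memory footprint and its per-query distance cost.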
PMID:38469456 | PMC:PMC10924814 | DOI:10.1007/s13755-024-00282-x
Artificial intelligence-based framework to identify the abnormalities in the COVID-19 disease and other common respiratory diseases from digital stethoscope data using deep CNN
Health Inf Sci Syst. 2024 Mar 9;12(1):22. doi: 10.1007/s13755-024-00283-w. eCollection 2024 Dec.
ABSTRACT
The use of lung sounds to diagnose lung diseases from respiratory sound features has increased significantly in the past few years. Digital stethoscope data have been examined extensively by medical researchers and technical scientists to diagnose the symptoms of respiratory diseases. Artificial intelligence-based approaches are applied in real-world settings to distinguish respiratory disease signs in human pulmonary auscultation sounds. A deep CNN model is implemented with combined multi-feature channels (modified MFCC, log mel, and soft mel) to obtain the sound parameters from lung-based digital stethoscope data. The model is analysed with and without max-pooling operations using multi-feature channels on respiratory digital stethoscope data. In addition, recently acquired COVID-19 sound data and enriched data are included to improve model performance, combined with L2 regularization to reduce the risk of overfitting caused by the scarcity of respiratory sound data. The suggested DCNN with max-pooling on the enriched dataset demonstrates cutting-edge performance using a multi-feature-channel spectrogram. The network was developed with different convolutional filter sizes (1×12, 1×24, 1×36, 1×48, and 1×60) to test the proposed architecture. According to the experimental findings, the suggested DCNN architecture with a max-pooling function identifies respiratory disease symptoms better than the DCNN without max-pooling. To demonstrate the model's effectiveness in categorization, it is trained and tested on several modalities of respiratory sound data.
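One of the feature channels named above, the log-mel spectrogram, can be computed directly with numpy. The sketch below is illustrative only (the paper's exact parameters, window, and mel variant are not given): it frames the signal with a Hann window, takes the magnitude spectrum, and pools it through a triangular mel filterbank before log compression.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(signal, sr=8000, n_fft=256, hop=128, n_mels=20):
    """Frame the signal, take the magnitude spectrum, and pool it into
    log-compressed mel bands (one feature channel for the CNN input)."""
    # Frame with a Hann window
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frames.append(signal[start:start + n_fft] * np.hanning(n_fft))
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))  # (frames, n_fft//2+1)
    # Triangular mel filterbank: band centres equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
    return np.log(spec @ fb.T + 1e-8)  # (frames, n_mels)
```

Stacking this channel with MFCC-style variants along a channel axis yields the multi-feature input the DCNN consumes.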
PMID:38469455 | PMC:PMC10924857 | DOI:10.1007/s13755-024-00283-w
A cascaded nested network for 3T brain MR image segmentation guided by 7T labeling
Pattern Recognit. 2022 Apr;124:108420. doi: 10.1016/j.patcog.2021.108420. Epub 2021 Nov 6.
ABSTRACT
Accurate segmentation of the brain into gray matter, white matter, and cerebrospinal fluid using magnetic resonance (MR) imaging is critical for visualization and quantification of brain anatomy. Compared to 3T MR images, 7T MR images exhibit higher tissue contrast, which contributes to accurate tissue delineation for training segmentation models. In this paper, we propose a cascaded nested network (CaNes-Net) for segmentation of 3T brain MR images, trained on tissue labels delineated from the corresponding 7T images. We first train a nested network (Nes-Net) for a rough segmentation. The second Nes-Net uses tissue-specific geodesic distance maps as contextual information to refine the segmentation. This process is iterated to build CaNes-Net as a cascade of Nes-Net modules that gradually refine the segmentation. To alleviate the misalignment between 3T images and the corresponding 7T MR images, we incorporate a correlation coefficient map that allows well-aligned voxels to play a more important role in supervising the training process. We compared CaNes-Net with the SPM and FSL tools, as well as four deep learning models, on 18 adult subjects and the ADNI dataset. Our results indicate that CaNes-Net reduces segmentation errors caused by the misalignment and improves segmentation accuracy substantially over the competing methods.
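The correlation-coefficient weighting idea can be illustrated with a toy 2D version. This is a hedged sketch, not the paper's implementation: a local Pearson correlation between the 3T image and the aligned 7T image is computed per pixel, and the per-pixel cross-entropy is down-weighted where the correlation (and hence the alignment) is poor.

```python
import numpy as np

def local_correlation(a, b, win=3):
    """Pearson correlation between two images inside a sliding win x win
    patch, evaluated at every interior pixel (naive loop-based version)."""
    h, w = a.shape
    r = np.zeros((h, w))
    k = win // 2
    for i in range(k, h - k):
        for j in range(k, w - k):
            pa = a[i-k:i+k+1, j-k:j+k+1].ravel()
            pb = b[i-k:i+k+1, j-k:j+k+1].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.sqrt((pa ** 2).sum() * (pb ** 2).sum())
            r[i, j] = (pa @ pb) / denom if denom > 0 else 0.0
    return r

def weighted_cross_entropy(p_true, corr, eps=1e-8):
    """Pixel-wise cross-entropy on the probability assigned to the true
    class, weighted so that well-aligned pixels (high 3T-7T correlation)
    dominate the training signal."""
    w = np.clip(corr, 0.0, 1.0)          # ignore anti-correlated pixels
    ce = -np.log(np.clip(p_true, eps, 1.0))
    return float((w * ce).sum() / (w.sum() + eps))
```

In the actual network this weighting would be applied to the softmax output of each Nes-Net stage during training; the sketch only shows the loss-side mechanics.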
PMID:38469076 | PMC:PMC10927017 | DOI:10.1016/j.patcog.2021.108420
COVID-19 detection in lung CT slices using Brownian-butterfly-algorithm optimized lightweight deep features
Heliyon. 2024 Mar 2;10(5):e27509. doi: 10.1016/j.heliyon.2024.e27509. eCollection 2024 Mar 15.
ABSTRACT
Several deep-learning assisted disease assessment schemes (DAS) have been proposed to enhance the accurate detection of COVID-19, a critical medical emergency, through the analysis of clinical data. Lung imaging, particularly from CT scans, plays a pivotal role in identifying and assessing the severity of COVID-19 infections. Existing automated methods leveraging deep learning contribute significantly to reducing the diagnostic burden associated with this process. This research aims to develop a simple DAS for COVID-19 detection by applying pre-trained lightweight deep learning methods (LDMs) to lung CT slices. The use of LDMs yields a less complex yet highly accurate detection system. The key stages of the developed DAS include image collection and initial processing using Shannon's thresholding, deep-feature mining supported by LDMs, feature optimization utilizing the Brownian Butterfly Algorithm (BBA), and binary classification through three-fold cross-validation. The performance evaluation of the proposed scheme involves assessing individual, fused, and ensemble features. The investigation reveals that the developed DAS achieves a detection accuracy of 93.80% with individual features, 96% with fused features, and an impressive 99.10% with ensemble features. These outcomes affirm the effectiveness of the proposed scheme in significantly enhancing COVID-19 detection accuracy on the chosen lung CT database.
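The preprocessing step named above, Shannon's thresholding, is commonly formulated as picking the grey-level split that maximises the summed Shannon entropy of the foreground and background histograms (a Kapur-style criterion). The abstract does not give the exact variant used, so the following is an assumed, minimal formulation:

```python
import numpy as np

def shannon_threshold(img, bins=256):
    """Entropy-based (Kapur-style) thresholding: choose the grey level
    that maximises the summed Shannon entropy of the two histogram parts."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1        # normalised part-histograms
        h = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) \
            - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return edges[best_t]
```

Applied to a CT slice, the returned threshold separates lung tissue from background before the slice is handed to the lightweight feature extractors.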
PMID:38468955 | PMC:PMC10926136 | DOI:10.1016/j.heliyon.2024.e27509
Uncovering the subtype-specific disease module and the development of drug response prediction models for glioma
Heliyon. 2024 Mar 1;10(5):e27190. doi: 10.1016/j.heliyon.2024.e27190. eCollection 2024 Mar 15.
ABSTRACT
The poor prognosis of glioma patients has drawn attention to the need for effective therapeutic approaches for precision therapy. Here, we deployed algorithms relying on network medicine and artificial intelligence to design a framework for subtype-specific target identification and drug response prediction in glioma. We identified the driver mutations that were differentially expressed in each subtype of lower-grade glioma and glioblastoma multiforme and were linked to cancer-specific processes. Differentially expressed driver mutations were also subjected to subtype-specific disease module identification. Drugs from the DrugBank database were retrieved to target these disease modules. However, the efficacy of anticancer drugs depends on the molecular profile of the cancer and varies among cancer patients due to intratumor heterogeneity. Hence, we developed a deep-learning-based drug response prediction framework using experimental drug screening data. Models were developed for 30 drugs that can target the disease modules, with drug response measured by IC50 as the response variable and gene expression and mutation data as the predictor variables. The model construction consists of three steps: feature selection, data integration, and classification. We observed consistent performance of the models on the training, test, and validation datasets. Drug responses were predicted for cell lines derived from distinct subtypes of gliomas. We found that glioma subtypes respond differently to the same drug, highlighting the importance of subtype-specific drug response prediction. Therefore, the development of personalized therapy by integrating network medicine and a deep learning-based approach can lead to cancer-specific treatment and improved patient care.
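The three-step model construction (feature selection, data integration, classification) can be sketched with simple stand-ins. This is an illustrative toy, not the paper's deep learning model: variance-based gene selection replaces the paper's feature selection, the two omics layers are integrated by concatenation, and a small logistic-regression classifier replaces the deep network.

```python
import numpy as np

def select_top_variance(X, k):
    """Feature selection: keep the k genes with the highest variance."""
    idx = np.argsort(X.var(axis=0))[::-1][:k]
    return np.sort(idx)

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal logistic-regression classifier trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                       # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict_response(expr, mut, y, k=10):
    """Three-step pipeline: feature selection on expression, integration by
    concatenating binary mutation calls, then classification of responders."""
    idx = select_top_variance(expr, k)
    X = np.hstack([expr[:, idx], mut])  # data integration
    w, b = train_logistic(X, y)
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return (p > 0.5).astype(int)
```

Here `y` would be a binarised IC50 label (sensitive vs. resistant) per cell line; the function names and the variance-based selector are illustrative choices, not taken from the paper.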
PMID:38468932 | PMC:PMC10926146 | DOI:10.1016/j.heliyon.2024.e27190