Deep learning
A dataset for fine-grained seed recognition
Sci Data. 2024 Apr 6;11(1):344. doi: 10.1038/s41597-024-03176-5.
ABSTRACT
The study of plant seeds has long been a focus of agricultural and forestry research, and seed identification is an indispensable part of it. With the continuing application of artificial intelligence in agriculture, seed identification through computer vision can effectively advance smart agriculture and forestry. Data are the foundation of computer vision, yet suitable datasets are scarce in the agricultural field. In this paper, a seed dataset named LZUPSD is established. A device based on mobile phones and macro lenses was built to acquire the images, and the dataset contains 4496 images of 88 different seeds. The dataset can serve both as training data for deep learning models in computer vision and as important data support for agricultural and forestry research. As an important resource in this field, it plays a positive role in modernizing agriculture and forestry.
PMID:38582756 | DOI:10.1038/s41597-024-03176-5
The potential role of artificial intelligence in sustainability of nuclear medicine
Radiography (Lond). 2024 Apr 5:S1078-8174(24)00067-1. doi: 10.1016/j.radi.2024.03.005. Online ahead of print.
ABSTRACT
BACKGROUND: Strategies targeted at the five pillars of sustainability (social, human, economic, ecological and environmental) can be used to improve sustainability of clinical or research practices in nuclear medicine.
KEY FINDINGS: While the core principle of sustainability is ensuring that depletion does not exceed regeneration, this manuscript considers the balance of benefits and detriments of artificial intelligence (AI) technologies across the five pillars of sustainability. Specifically, innovations such as AI, generative AI and digital twins could enhance sustainability. While AI has the potential to address social asymmetry and inequity, driving the social and human pillars of sustainability, it could also widen the equity gap. AI augmentation and generative AI present economic and environmental sustainability opportunities. Deep digital twins offer clinical and research benefits across the economic, ecological and environmental sustainability pillars.
CONCLUSION: AI, digital twins and generative AI offer potential benefits to sustainability in nuclear medicine. Despite the benefits, caution is advised because these technologies confront a number of challenges that could potentially threaten sustainability.
IMPLICATIONS FOR PRACTICE: AI presents opportunities for improving sustainability of nuclear medicine practice although caution is recommended to avoid unintentional undermining of sustainability across the five pillars.
PMID:38582701 | DOI:10.1016/j.radi.2024.03.005
Deep Learning during burn prehospital care: An evolving perspective
Burns. 2024 Mar 13:S0305-4179(24)00087-1. doi: 10.1016/j.burns.2024.03.015. Online ahead of print.
NO ABSTRACT
PMID:38582694 | DOI:10.1016/j.burns.2024.03.015
Influence of Deep Learning Based Image Reconstruction on Quantitative Results of Coronary Artery Calcium Scoring
Acad Radiol. 2024 Apr 5:S1076-6332(24)00161-2. doi: 10.1016/j.acra.2024.03.020. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: To assess the impact of deep learning-based imaging reconstruction (DLIR) on quantitative results of coronary artery calcium scoring (CACS) and to evaluate the potential of DLIR for radiation dose reduction in CACS.
METHODS: For a retrospective cohort of 100 consecutive patients (mean age 62 ±10 years, 40% female), CACS scans were reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASiR-V in 30%, 60% and 90% strength) and DLIR in low, medium and high strength. CACS was quantified semi-automatically and compared between image reconstructions. In a phantom study, a cardiac calcification insert was scanned inside an anthropomorphic thorax phantom at standard dose, 50% dose and 25% dose. FBP reconstructions at standard dose served as the reference standard.
RESULTS: In the patient study, DLIR led to a mean underestimation of Agatston score by 3.5, 6.4 and 11.6 points at low, medium and high strength, respectively. This underestimation of Agatston score was less pronounced for DLIR than for ASiR-V. In the phantom study, quantitative CACS results increased with reduced radiation dose and decreased with increasing strength of DLIR. Medium strength DLIR reconstruction at 50% dose reduction and high strength DLIR reconstruction at 75% dose reduction resulted in quantitative CACS results that were comparable to FBP reconstructions at standard dose.
CONCLUSION: Compared to FBP as the historical reference standard, DLIR leads to an underestimation of CACS but this underestimation is more moderate than with ASiR-V. DLIR can offset the increase in image noise and calcium score at reduced dose and may thus allow for substantial radiation dose reductions in CACS studies.
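In standard form, the Agatston score discussed above is computed per calcified lesion as the lesion area multiplied by a density weight derived from the lesion's peak attenuation, summed over all lesions; pixels below 130 HU are ignored. A minimal sketch of that convention (the 0.25 mm² pixel area and the lesion pixel lists are illustrative assumptions, not values from the study):

```python
def density_weight(peak_hu):
    # Standard Agatston density factor, keyed to peak attenuation in HU
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_score(lesions, pixel_area_mm2=0.25):
    # lesions: one list of pixel HU values per calcified lesion (hypothetical input)
    score = 0.0
    for pixels in lesions:
        calcified = [hu for hu in pixels if hu >= 130]  # 130 HU threshold
        if not calcified:
            continue
        area = len(calcified) * pixel_area_mm2          # area in mm^2
        score += area * density_weight(max(calcified))  # area x density weight
    return score
```

Because the density weight is keyed to peak attenuation, added image noise can push pixels over the 130 HU threshold or raise the peak, which is why the score tends to rise at reduced dose unless the reconstruction suppresses noise.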
PMID:38582685 | DOI:10.1016/j.acra.2024.03.020
A novel stochastic resonance based deep residual network for fault diagnosis of rolling bearing system
ISA Trans. 2024 Mar 25:S0019-0578(24)00128-9. doi: 10.1016/j.isatra.2024.03.020. Online ahead of print.
ABSTRACT
Rolling bearings are among the most vital components in mechanical equipment, and monitoring and diagnosing their condition is essential to ensure safe operation. In actual production, the collected fault signals typically contain noise and cannot be identified accurately. In this paper, stochastic resonance (SR) is introduced into a spiking neural network (SNN) as a feature enhancement method for fault signals with varying noise intensities, combining deep learning with SR to enhance classification accuracy. The output signal-to-noise ratio (SNR) is enhanced by the SR effect when the noise-affected fault signal is input into the neurons. The method is validated through experiments on the CWRU dataset, achieving a classification accuracy of 99.9%. In high-noise environments, with SNR equal to -8 dB, the proposed stochastic resonance based deep residual networks (SRDNs) achieve over 92% accuracy, exhibiting better robustness and adaptability.
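The stochastic resonance effect itself can be illustrated with a toy threshold "neuron": a subthreshold periodic signal is never detected without noise, while moderate noise pushes its peaks over the firing threshold. This is a hedged illustration of the SR principle only, not the paper's SNN architecture; every parameter below is made up for the demonstration:

```python
import math, random

def detections(noise_std, threshold=1.0, amp=0.6, samples=2000, seed=0):
    # The periodic input is subthreshold (amp < threshold), so with zero noise
    # the unit never fires; with moderate noise, peaks of the signal plus noise
    # cross the threshold preferentially near signal maxima.
    rng = random.Random(seed)
    hits = 0
    for t in range(samples):
        signal = amp * math.sin(2 * math.pi * t / 50)
        if signal + rng.gauss(0, noise_std) >= threshold and signal > 0.5 * amp:
            hits += 1  # firing near a signal peak: a signal-correlated detection
    return hits
```

Counting only firings near signal peaks crudely mimics an output SNR: with no noise there are zero detections, while moderate noise yields many, which is the counterintuitive benefit the abstract exploits.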
PMID:38582635 | DOI:10.1016/j.isatra.2024.03.020
DockingGA: enhancing targeted molecule generation using transformer neural network and genetic algorithm with docking simulation
Brief Funct Genomics. 2024 Apr 6:elae011. doi: 10.1093/bfgp/elae011. Online ahead of print.
ABSTRACT
Generative molecular models generate novel molecules with desired properties by searching chemical space. Traditional combinatorial optimization methods, such as genetic algorithms, have demonstrated superior performance in various molecular optimization tasks. However, these methods do not use docking simulation to inform the design process, depend heavily on the quality and quantity of available data, and require additional structural optimization before candidates can become drugs. To address these limitations, we propose a novel model named DockingGA that combines Transformer neural networks and genetic algorithms to generate molecules with better binding affinity for specific targets. To generate high-quality molecules, we chose Self-referencing Chemical Structure Strings to represent the molecules and optimized their binding affinity to different targets. Compared to other baseline models, DockingGA proves to be the optimal model in all docking results for the top 1, 10 and 100 molecules, while maintaining 100% novelty. Furthermore, the distribution of physicochemical properties demonstrates the ability of DockingGA to generate molecules with favorable and appropriate properties. This innovation creates new opportunities for the application of generative models in practical drug discovery.
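The genetic-algorithm side of such a pipeline can be sketched generically: tournament selection, one-point crossover, and point mutation over molecular strings, with a fitness function standing in for the docking score. This is a minimal generic GA under stated assumptions, not DockingGA itself; the alphabet, string length, and toy fitness are all illustrative:

```python
import random

def genetic_search(fitness, alphabet, length, pop_size=30, generations=60, seed=1):
    # Generic GA loop: size-2 tournament selection, one-point crossover,
    # and point mutation. In a docking-guided setup, `fitness` would be
    # a (negated) docking score rather than this toy function.
    rng = random.Random(seed)
    pop = ["".join(rng.choice(alphabet) for _ in range(length)) for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            a, b = rng.sample(pop, 2)  # size-2 tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:               # point mutation
                i = rng.randrange(length)
                child = child[:i] + rng.choice(alphabet) + child[i + 1:]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: count "C" characters in a fake molecular string
best = genetic_search(lambda s: s.count("C"), "CNOS", 8)
```

The design point is that the fitness function is a black box: swapping the toy string-counting objective for a docking-simulation score changes nothing in the loop, which is what lets docking inform the search.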
PMID:38582610 | DOI:10.1093/bfgp/elae011
Automated detection of small bowel lesions based on capsule endoscopy using deep learning algorithm
Clin Res Hepatol Gastroenterol. 2024 Apr 4:102334. doi: 10.1016/j.clinre.2024.102334. Online ahead of print.
ABSTRACT
BACKGROUND: In order to overcome the challenges of lesion detection in capsule endoscopy (CE), we improved the YOLOv5-based deep learning algorithm and established the CE-YOLOv5 algorithm to identify small bowel lesions captured by CE.
METHODS: A total of 124,678 typical abnormal images from 1,452 patients were used to train the CE-YOLOv5 model. Then, 298 patients with suspected small bowel lesions detected by CE were prospectively enrolled in the testing phase of the study. Small bowel images and videos from these 298 patients were interpreted by experts, by non-experts and by CE-YOLOv5.
RESULTS: Based on CE videos, the sensitivity of CE-YOLOv5 in diagnosing vascular lesions, ulcerated/erosive lesions, protruding lesions, parasites, diverticula, active bleeding and villous lesions was 91.9%, 92.2%, 91.4%, 93.1%, 93.3%, 95.1% and 100%, respectively. Furthermore, CE-YOLOv5 achieved specificity and accuracy of more than 90% for all lesions. Compared with experts, CE-YOLOv5 showed comparable overall sensitivity, specificity and accuracy (all P > 0.05). Compared with non-experts, CE-YOLOv5 showed significantly higher overall sensitivity (P < 0.0001) and overall accuracy (P < 0.0001), and moderately higher overall specificity (P = 0.0351). Furthermore, the AI reading time (5.62 ± 2.81 minutes) was significantly shorter than that of the other two groups (both P < 0.0001).
CONCLUSIONS: CE-YOLOv5 diagnosed small bowel lesions in CE videos with high sensitivity, specificity and accuracy, providing a reliable approach for automated lesion detection in real-world clinical practice.
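The sensitivity, specificity and accuracy figures reported above follow the standard confusion-count definitions, which can be sketched as:

```python
def binary_metrics(tp, fp, tn, fn):
    # Per-lesion-class metrics from confusion counts:
    # sensitivity (recall) = TP / (TP + FN)
    # specificity          = TN / (TN + FP)
    # accuracy             = (TP + TN) / all
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For example, 90 true positives with 10 missed lesions gives 90% sensitivity regardless of how many negatives were scored, which is why sensitivity and specificity are reported separately per lesion type.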
PMID:38582328 | DOI:10.1016/j.clinre.2024.102334
Leveraging Artificial Intelligence to Optimize the Care of Peripheral Artery Disease Patients
Ann Vasc Surg. 2024 Apr 4:S0890-5096(24)00143-2. doi: 10.1016/j.avsg.2023.11.057. Online ahead of print.
ABSTRACT
Peripheral artery disease is a major atherosclerotic disease associated with poor outcomes such as limb loss, cardiovascular morbidity, and death. Artificial intelligence has seen increasing integration into medicine, and its various applications can optimize the care of peripheral artery disease patients in diagnosis, outcome prediction, and imaging interpretation. In this review, we introduce artificial intelligence approaches such as natural language processing, supervised machine learning, and deep learning, and we analyze the current literature in which these algorithms have been applied to peripheral artery disease.
PMID:38582202 | DOI:10.1016/j.avsg.2023.11.057
A deep learning approach for Direct Immunofluorescence pattern recognition of Autoimmune Bullous Diseases
Br J Dermatol. 2024 Apr 6:ljae142. doi: 10.1093/bjd/ljae142. Online ahead of print.
ABSTRACT
BACKGROUND: Artificial intelligence (AI) is reshaping healthcare, using machine and deep learning to enhance disease management. Dermatology has seen improved diagnostics, particularly in skin cancer detection, through the integration of AI. However, the potential of AI in automating immunofluorescence imaging for autoimmune bullous skin diseases remains untapped. While direct immunofluorescence (DIF) supports diagnosis, its manual interpretation can hinder efficiency. The use of deep learning to automatically classify DIF patterns, including the Intercellular Pattern (ICP) and the Linear Pattern (LP), holds promise for improving the diagnosis of autoimmune bullous skin diseases.
OBJECTIVES: The objectives of this study are to develop AI algorithms for automated classification of autoimmune bullous skin disease DIF patterns, such as ICP and LP. This aims to enhance diagnostic accuracy, streamline disease management, and improve patient outcomes through deep learning-driven immunofluorescence interpretation.
METHODS: We collected immunofluorescence images from skin biopsies of patients with suspected autoimmune bullous disease (AIBD) between January 2022 and January 2024. Skin tissue was obtained via 5-mm punch biopsy and prepared for direct immunofluorescence. Experienced dermatologists classified the images into three classes: ICP, LP, and negative. To evaluate our deep learning approach, we divided the images into training (436) and test (93) sets. We employed transfer learning with pre-trained deep neural networks and conducted 5-fold cross-validation to assess model performance. The class imbalance in our dataset was addressed using weighted loss and data augmentation strategies. The models were trained for 50 epochs using PyTorch, with an input image size of 224x224 for both the CNNs and the Swin Transformer.
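The paper does not state its exact weighting scheme; a common choice for a weighted loss on an imbalanced dataset is inverse-frequency class weights, w_c = N / (K * n_c), sketched below with hypothetical class counts:

```python
def class_weights(counts):
    # Inverse-frequency weights: w_c = N / (K * n_c), so rarer classes
    # contribute proportionally more to a weighted (e.g. cross-entropy) loss.
    total, k = sum(counts.values()), len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

# Hypothetical counts for the three classes ICP / LP / negative
weights = class_weights({"ICP": 100, "LP": 50, "negative": 250})
```

In a PyTorch training loop these per-class values would typically be passed as the `weight` tensor of the cross-entropy loss, so minority classes such as a rare DIF pattern are not drowned out by the majority class.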
RESULTS: Our study compared six CNNs and the Swin Transformer for AIBD image classification, with the Swin Transformer achieving the highest average validation accuracy of 98.5%. On a separate test set, the best model attained an accuracy of 94.6%, with 95.3% sensitivity and 97.5% specificity across the AIBD classes. Visualization with Grad-CAM highlighted the model's reliance on characteristic patterns for accurate classification.
CONCLUSIONS: The study highlighted the accuracy of deep neural networks in identifying DIF features. This approach aids automated analysis and reporting, offering reproducibility, speed, improved data handling, and cost-efficiency. Integrating deep learning into skin immunofluorescence promises precise diagnostics and streamlined reporting in this branch of dermatology.
PMID:38581445 | DOI:10.1093/bjd/ljae142
A new paradigm for applying deep learning to protein-ligand interaction prediction
Brief Bioinform. 2024 Mar 27;25(3):bbae145. doi: 10.1093/bib/bbae145.
ABSTRACT
Protein-ligand interaction prediction presents a significant challenge in drug design. Numerous machine learning and deep learning (DL) models have been developed to accurately identify docking poses of ligands and active compounds against specific targets. However, current models often suffer from inadequate accuracy or lack practical physical significance in their scoring systems. In this research paper, we introduce IGModel, a novel approach that uses the geometric information of protein-ligand complexes as input to predict both the root mean square deviation (RMSD) of docking poses and the binding strength (pKd, the negative base-10 logarithm of the dissociation constant) within the same prediction framework. This ensures that the output scores carry intuitive meaning. We extensively evaluate the performance of IGModel on various docking power test sets, including the CASF-2016 benchmark, PDBbind-CrossDocked-Core and the DISCO set, consistently achieving state-of-the-art accuracies. Furthermore, we assess IGModel's generalizability and robustness by evaluating it on unbiased test sets and on sets containing target structures generated by AlphaFold2. The exceptional performance of IGModel on these sets demonstrates its efficacy. Additionally, we visualize the latent space of protein-ligand interactions encoded by IGModel and conduct an interpretability analysis, providing valuable insights. This study presents a novel framework for DL-based prediction of protein-ligand interactions, contributing to the advancement of the field. IGModel is available at the GitHub repository https://github.com/zchwang/IGModel.
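The two quantities being predicted can be made concrete: the RMSD between a docking pose and a reference pose over matched atoms, and pKd as the negative base-10 logarithm of the dissociation constant. A minimal sketch (the coordinate lists are illustrative, not data from the paper):

```python
import math

def rmsd(pose_a, pose_b):
    # Root mean square deviation between two ligand poses given as
    # matched lists of (x, y, z) atom coordinates (same atom ordering).
    assert len(pose_a) == len(pose_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(pose_a, pose_b))
    return math.sqrt(sq / len(pose_a))

def pkd(kd_molar):
    # pKd = -log10(Kd); e.g. a 1 nM binder has pKd = 9.
    return -math.log10(kd_molar)
```

The appeal of predicting these two quantities directly, as the abstract argues, is that both have fixed physical units (angstroms; log-molar), so the model's scores are interpretable without post-hoc calibration.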
PMID:38581420 | DOI:10.1093/bib/bbae145
From tradition to innovation: conventional and deep learning frameworks in genome annotation
Brief Bioinform. 2024 Mar 27;25(3):bbae138. doi: 10.1093/bib/bbae138.
ABSTRACT
Following the milestone success of the Human Genome Project, the 'Encyclopedia of DNA Elements (ENCODE)' initiative was launched in 2003 to unearth information about the numerous functional elements within the genome. This endeavor coincided with the emergence of numerous novel technologies and the provision of vast amounts of whole-genome sequences and high-throughput data such as ChIP-Seq and RNA-Seq. Extracting biologically meaningful information from this massive dataset has become a critical aspect of many recent studies, particularly in annotating and predicting the functions of unknown genes. The core idea behind genome annotation is to identify genes and various functional elements within the genome sequence and infer their biological functions. Traditional wet-lab experimental methods still rely on extensive effort for functional verification. Early bioinformatics algorithms and software primarily employed shallow learning techniques, so their ability to characterize data and learn features was limited. With the widespread adoption of RNA-Seq technology, scientists in the biological community began to harness the potential of machine learning and deep learning approaches for gene structure prediction and functional annotation. In this context, we reviewed both conventional methods and contemporary deep learning frameworks, and highlighted novel perspectives on the challenges arising during annotation, underscoring the dynamic nature of this evolving scientific landscape.
PMID:38581418 | DOI:10.1093/bib/bbae138
DeepFGRN: inference of gene regulatory network with regulation type based on directed graph embedding
Brief Bioinform. 2024 Mar 27;25(3):bbae143. doi: 10.1093/bib/bbae143.
ABSTRACT
The inference of gene regulatory networks (GRNs) from gene expression profiles has been a key issue in systems biology, prompting many researchers to develop diverse computational methods. However, most of these methods do not reconstruct directed GRNs with regulatory types because of the lack of benchmark datasets or defects in the computational methods. Here, we collect benchmark datasets and propose a deep learning-based model, DeepFGRN, for reconstructing fine gene regulatory networks (FGRNs) with both regulation types and directions. In addition, the GRNs of real species are always large graphs with direction and high sparsity, which impede the advancement of GRN inference. Therefore, DeepFGRN builds a node bidirectional representation module to capture the directed graph embedding representation of the GRN. Specifically, the source and target generators are designed to learn the low-dimensional dense embedding of the source and target neighbors of a gene, respectively. An adversarial learning strategy is applied to iteratively learn the real neighbors of each gene. In addition, because the expression profiles of genes with regulatory associations are correlative, a correlation analysis module is designed. Specifically, this module not only fully extracts gene expression features, but also captures the correlation between regulators and target genes. Experimental results show that DeepFGRN has a competitive capability for both GRN and FGRN inference. Potential biomarkers and therapeutic drugs for breast cancer, liver cancer, lung cancer and coronavirus disease 2019 are identified based on the candidate FGRNs, providing a possible opportunity to advance our knowledge of disease treatments.
PMID:38581416 | DOI:10.1093/bib/bbae143
Assessing the Influence of B-US, CDFI, SE, and Patient Age on Predicting Molecular Subtypes in Breast Lesions Using Deep Learning Algorithms
J Ultrasound Med. 2024 Apr 6. doi: 10.1002/jum.16460. Online ahead of print.
ABSTRACT
OBJECTIVES: Our study aims to investigate the impact of B-mode ultrasound (B-US) imaging, color Doppler flow imaging (CDFI), strain elastography (SE), and patient age on the prediction of molecular subtypes in breast lesions.
METHODS: A total of 2272 multimodal ultrasound images were collected from 198 patients. The ResNet-18 network was employed to predict four molecular subtypes from the B-US, CDFI, and SE images of patients of different ages. All images were split into training and testing datasets at a ratio of 80%:20%. Predictive performance on the testing dataset was evaluated with five metrics: mean accuracy, precision, recall, F1-score, and the confusion matrix.
RESULTS: Based on B-US imaging alone, the test mean accuracy was 74.50%, precision 74.84%, recall 72.48%, and F1-score 0.73. Combining B-US imaging with CDFI increased these to 85.41%, 85.03%, 85.05%, and 0.84, respectively. Integrating B-US imaging with SE changed them to 75.64%, 74.69%, 73.86%, and 0.74, respectively. Using images from patients under 40 years old, the results were 90.48%, 90.88%, 88.47%, and 0.89; for patients 40 years old and above, they were 81.96%, 83.12%, 80.5%, and 0.81, respectively.
CONCLUSION: Multimodal ultrasound imaging can be used to accurately predict the molecular subtypes of breast lesions. In addition to B-US imaging, CDFI, rather than SE, contributes further to improved predictive performance. Predictive performance is notably better for patients under 40 years old than for those 40 and above.
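For a four-class subtype problem like this, precision, recall and F1 are derived from the confusion matrix; one common convention is macro-averaging the per-class values (the paper does not specify its exact averaging, so this is an assumption):

```python
def macro_metrics(cm):
    # cm[i][j]: number of class-i samples predicted as class j.
    k = len(cm)
    per_class = []
    for c in range(k):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(k)) - tp  # predicted c, truly other
        fn = sum(cm[c]) - tp                       # truly c, predicted other
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        per_class.append((p, r, f1))
    # macro average: unweighted mean over classes
    return tuple(sum(m[i] for m in per_class) / k for i in range(3))
```

Macro-averaging weights each subtype equally regardless of its prevalence, which matters when some molecular subtypes are much rarer than others.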
PMID:38581195 | DOI:10.1002/jum.16460
AI Applications to Breast MRI: Today and Tomorrow
J Magn Reson Imaging. 2024 Apr 5. doi: 10.1002/jmri.29358. Online ahead of print.
ABSTRACT
In breast imaging, there is an unrelenting increase in the demand for breast imaging services, partly explained by continuously expanding imaging indications in breast diagnosis and treatment. As the human workforce providing these services is not growing at the same rate, the implementation of artificial intelligence (AI) in breast imaging has gained significant momentum to maximize workflow efficiency and increase productivity while concurrently improving diagnostic accuracy and patient outcomes. Thus far, the implementation of AI in breast imaging is at the most advanced stage with mammography and digital breast tomosynthesis techniques, followed by ultrasound, whereas the implementation of AI in breast magnetic resonance imaging (MRI) is not moving along as rapidly due to the complexity of MRI examinations and fewer available datasets. Nevertheless, there is persisting interest in AI-enhanced breast MRI applications, even as the use and indications of breast MRI continue to expand. This review presents an overview of the basic concepts of AI imaging analysis and subsequently reviews the use cases for AI-enhanced MRI interpretation, that is, breast MRI triaging and lesion detection, lesion classification, prediction of treatment response, risk assessment, and image quality. Finally, it provides an outlook on the barriers and facilitators for the adoption of AI in breast MRI. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 6.
PMID:38581127 | DOI:10.1002/jmri.29358
Artificial intelligence in lung cancer screening: Detection, classification, prediction, and prognosis
Cancer Med. 2024 Apr;13(7):e7140. doi: 10.1002/cam4.7140.
ABSTRACT
BACKGROUND: The exceptional capabilities of artificial intelligence (AI) in extracting image information and processing complex models have led to its recognition across various medical fields. With the continuous evolution of AI technologies based on deep learning, particularly the advent of convolutional neural networks (CNNs), AI presents an expanded horizon of applications in lung cancer screening, including lung segmentation, nodule detection, false-positive reduction, nodule classification, and prognosis.
METHODOLOGY: This review initially analyzes the current status of AI technologies. It then explores the applications of AI in lung cancer screening, including lung segmentation, nodule detection, and classification, and assesses the potential of AI in enhancing the sensitivity of nodule detection and reducing false-positive rates. Finally, it addresses the challenges and future directions of AI in lung cancer screening.
RESULTS: AI holds substantial prospects in lung cancer screening. It demonstrates significant potential in improving nodule detection sensitivity, reducing false-positive rates, and classifying nodules, while also showing value in predicting nodule growth and pathological/genetic typing.
CONCLUSIONS: AI offers a promising supportive approach to lung cancer screening, presenting considerable potential in enhancing nodule detection sensitivity, reducing false-positive rates, and classifying nodules. However, the universality and interpretability of AI results need further enhancement. Future research should focus on the large-scale validation of new deep learning-based algorithms and multi-center studies to improve the efficacy of AI in lung cancer screening.
PMID:38581113 | DOI:10.1002/cam4.7140
Advancing microplastic surveillance through photoacoustic imaging and deep learning techniques
J Hazard Mater. 2024 Apr 2;470:134188. doi: 10.1016/j.jhazmat.2024.134188. Online ahead of print.
ABSTRACT
Microplastic contamination presents a significant global environmental threat, yet scientific understanding of its morphological distribution within ecosystems remains limited. This study introduces a pioneering method for comprehensive microplastic assessment and environmental monitoring, integrating photoacoustic imaging and advanced deep learning techniques. Rigorous curation of diverse microplastic datasets enhances model training, yielding a high-resolution imaging dataset focused on shape-based discrimination. The introduction of the Vector-Quantized Variational Auto Encoder (VQVAE2) deep learning model signifies a substantial advancement, demonstrating exceptional proficiency in image dimensionality reduction and clustering. Furthermore, the utilization of Vector Quantization Microplastic Photoacoustic imaging (VQMPA) with a proxy task before decoding enhances feature extraction, enabling simultaneous microplastic analysis and discrimination. Despite inherent limitations, this study lays a robust foundation for future research, suggesting avenues for enhancing microplastic identification precision through expanded sample sizes and complementary methodologies like spectroscopy. In conclusion, this innovative approach not only advances microplastic monitoring but also provides valuable insights for future environmental investigations, highlighting the potential of photoacoustic imaging and deep learning in bolstering sustainable environmental monitoring efforts.
PMID:38579587 | DOI:10.1016/j.jhazmat.2024.134188
An aberration-free line scan confocal Raman imager and type classification and distribution detection of microplastics
J Hazard Mater. 2024 Apr 1;470:134191. doi: 10.1016/j.jhazmat.2024.134191. Online ahead of print.
ABSTRACT
An aberration-free line scanning confocal Raman imager (named AFLSCRI) is developed to achieve rapid Raman imaging. As an application example, various types and sizes of microplastics (MPs) are identified through Raman imaging combined with a machine learning algorithm. The system performs excellently, with a spatial resolution of 2 µm and a spectral resolution of 4 cm-1; compared to traditional point-scanning Raman imaging systems, the detection speed is improved by two orders of magnitude. The pervasive nature of MPs results in their infiltration into the food chain, raising concerns for human health due to the potential for chemical leaching and the introduction of persistent organic pollutants. We conducted a series of experiments on MPs of various types and sizes. The system achieves a classification accuracy of 98% for seven different types of plastics, and Raman imaging and species identification were achieved for MPs as small as 1 µm in diameter. We also identified toxic and harmful substances remaining in plastics, such as dioctyl phthalate (DOP) residues. This demonstrates strong performance in microplastic species identification, size recognition and identification of hazardous substance contamination in microplastics.
PMID:38579584 | DOI:10.1016/j.jhazmat.2024.134191
Medical image segmentation network based on multi-scale frequency domain filter
Neural Netw. 2024 Mar 28;175:106280. doi: 10.1016/j.neunet.2024.106280. Online ahead of print.
ABSTRACT
With the development of deep learning, medical image segmentation in computer-aided diagnosis has become a research hotspot. Recently, UNet and its variants have become the most powerful medical image segmentation methods. However, these methods suffer from (1) insufficient receptive field and insufficient depth; (2) computational nonlinearity and redundancy of channel features; and (3) neglect of the interrelationships among feature channels. These problems lead to poor segmentation performance and weak generalization ability. Therefore, we first propose an effective replacement for the UNet base block, the Double residual depthwise atrous convolution (DRDAC) block, to remedy the deficiencies in receptive field and depth. Secondly, a new linear module, the Multi-scale frequency domain filter (MFDF), is designed to capture global information from the frequency domain; high-order multi-scale relationships are extracted by combining depthwise atrous separable convolution with the frequency domain filter. Finally, a channel attention mechanism, Axial selection channel attention (ASCA), is redesigned to enhance the network's ability to model the interrelationships among feature channels. Building on these modules, we design a novel frequency-domain medical image segmentation baseline, FDFUNet. Extensive experiments on five publicly available medical image datasets demonstrate that the proposed method has stronger segmentation performance and generalization ability than other state-of-the-art baseline methods.
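The core idea of a frequency-domain filter module can be shown in one dimension: transform the signal to the Fourier domain, mask out selected frequency bins, and transform back, so every output sample depends on every input sample (global context) at linear masking cost. This naive DFT sketch is an analogue of that idea only, not the MFDF module itself:

```python
import cmath

def lowpass(signal, keep):
    # Naive DFT -> zero all bins above `keep` cycles -> inverse DFT.
    # Masking in the Fourier domain is a global, linear operation:
    # the analogue of a learned frequency-domain filter in 1-D.
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]
    # keep the lowest `keep` frequencies and their conjugate mirrors
    spec = [s if min(k, n - k) <= keep else 0 for k, s in enumerate(spec)]
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]
```

A learned module would replace the hard zero/one mask with trainable per-bin weights (and use an FFT rather than this O(n^2) DFT), but the global-receptive-field property is the same.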
PMID:38579574 | DOI:10.1016/j.neunet.2024.106280
Brain-machine interface based on deep learning to control asynchronously a lower-limb robotic exoskeleton: a case-of-study
J Neuroeng Rehabil. 2024 Apr 5;21(1):48. doi: 10.1186/s12984-024-01342-9.
ABSTRACT
BACKGROUND: This research focused on the development of a motor imagery (MI) based brain-machine interface (BMI) using deep learning algorithms to control a lower-limb robotic exoskeleton. The study aimed to overcome the limitations of traditional BMI approaches by leveraging the advantages of deep learning, such as automated feature extraction and transfer learning. The experimental protocol to evaluate the BMI was designed as asynchronous, allowing subjects to perform mental tasks at their own will.
METHODS: A total of five healthy able-bodied subjects were enrolled in this study to participate in a series of experimental sessions. The brain signals from two of these sessions were used to develop a generic deep learning model through transfer learning. Subsequently, this model was fine-tuned during the remaining sessions and subjected to evaluation. Three distinct deep learning approaches were compared: one that did not undergo fine-tuning, another that fine-tuned all layers of the model, and a third one that fine-tuned only the last three layers. The evaluation phase involved the exclusive closed-loop control of the exoskeleton device by the participants' neural activity using the second deep learning approach for the decoding.
RESULTS: The three deep learning approaches were assessed in comparison to an approach based on spatial features that was trained for each subject and experimental session, demonstrating their superior performance. Interestingly, the deep learning approach without fine-tuning achieved comparable performance to the features-based approach, indicating that a generic model trained on data from different individuals and previous sessions can yield similar efficacy. Among the three deep learning approaches compared, fine-tuning all layer weights demonstrated the highest performance.
CONCLUSION: This research represents an initial stride toward future calibration-free methods. Despite the efforts to diminish calibration time by leveraging data from other subjects, complete elimination proved unattainable. The study's discoveries hold notable significance for advancing calibration-free approaches, offering the promise of minimizing the need for training trials. Furthermore, the experimental evaluation protocol employed in this study aimed to replicate real-life scenarios, granting participants a higher degree of autonomy in decision-making regarding actions such as walking or stopping gait.
PMID:38581031 | DOI:10.1186/s12984-024-01342-9
DPI_CDF: druggable protein identifier using cascade deep forest
BMC Bioinformatics. 2024 Apr 5;25(1):145. doi: 10.1186/s12859-024-05744-3.
ABSTRACT
BACKGROUND: Drug targets in living organisms play pivotal roles in the discovery of potential drugs. Conventional wet-lab characterization of drug targets is accurate but generally expensive, slow, and resource-intensive. Computational methods are therefore highly desirable as an alternative to expedite the large-scale identification of druggable proteins (DPs); however, the performance of existing in silico predictors is still not satisfactory.
METHODS: In this study, we developed a novel deep learning-based model, DPI_CDF, for predicting DPs based on protein sequence only. DPI_CDF utilizes evolutionary (histograms of oriented gradients over the position-specific scoring matrix), physicochemical (component protein sequence representation), and compositional (normalized qualitative characteristic) properties of the protein sequence to generate features. A hierarchical cascade deep forest model then fuses these three encoding schemes to build the proposed model DPI_CDF.
RESULTS: The empirical outcomes of 10-fold cross-validation demonstrate that the proposed model achieved 99.13% accuracy and a Matthews correlation coefficient (MCC) of 0.982 on the training dataset. The generalization power of the trained model was further examined on an independent dataset, achieving a maximum accuracy of 95.01% and an MCC of 0.900. Compared to current state-of-the-art methods, DPI_CDF improves accuracy by 4.27% and 4.31% on the training and testing datasets, respectively. We believe DPI_CDF will support the research community in identifying druggable proteins and accelerate the drug discovery process.
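The MCC values reported above follow the standard binary definition from confusion counts; for reference:

```python
import math

def mcc(tp, fp, tn, fn):
    # Matthews correlation coefficient:
    # (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    # Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect).
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike plain accuracy, MCC stays near zero for a classifier that simply predicts the majority class on an imbalanced druggable/non-druggable dataset, which is why it is reported alongside accuracy here.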
AVAILABILITY: The benchmark datasets and source codes are available in GitHub: http://github.com/Muhammad-Arif-NUST/DPI_CDF .
PMID:38580921 | DOI:10.1186/s12859-024-05744-3