Deep learning

Signature-based intrusion detection using machine learning and deep learning approaches empowered with fuzzy clustering

Sat, 2025-01-11 06:00

Sci Rep. 2025 Jan 11;15(1):1726. doi: 10.1038/s41598-025-85866-7.

ABSTRACT

Network security is crucial in today's digital world, given the multiple ongoing threats to sensitive data and vital infrastructure. The aim of this study is to improve network security by combining intrusion detection methods from machine learning (ML) and deep learning (DL). Attackers attempt to breach security systems by accessing networks and obtaining sensitive information. Intrusion detection systems (IDSs) are a significant aspect of cybersecurity: they monitor and analyze network activity with the intention of identifying and reporting dangerous behavior, helping to prevent attacks. Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Random Forest (RF), Decision Tree (DT), Long Short-Term Memory (LSTM), and Artificial Neural Network (ANN) models are incorporated into the study. These models are subjected to various tests to establish which best identifies and prevents network violations. Based on the obtained results, all of the tested models are capable of classifying network traffic data and thus recognizing the difference between normal and intrusive behavior; SVM, KNN, RF, and DT showed effective results. The deep learning models, LSTM and ANN, rapidly find long-term and complex patterns in network data and are extremely effective when dealing with complex intrusions, as they are characterized by high precision, accuracy, and recall. Based on our study, SVM and Random Forest are promising solutions for real-world IDS applications because of their versatility and explainability; for companies seeking IDS solutions that are both reliable and interpretable, these models are strong candidates. Additionally, LSTM and ANN, with their ability to capture sequential dependencies, are suitable for situations involving nuanced, evolving threats.
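
As a rough illustration of the classical branch of this comparison, the sketch below trains a Random Forest with scikit-learn (SVM, KNN, and DT drop in the same way) and reports the precision/recall/F1 metrics the abstract emphasizes. The feature matrix `X` and binary normal/intrusion labels `y` are synthetic placeholders for preprocessed network-traffic records, which the abstract does not specify.

```python
# Minimal sketch of a classical IDS classifier; X and y are hypothetical
# stand-ins for preprocessed network-flow features and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))        # placeholder flow features
y = rng.integers(0, 2, size=1000)      # placeholder labels: 0 = normal, 1 = intrusion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_te, clf.predict(X_te), average="binary")
print(f"precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```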

PMID:39799225 | DOI:10.1038/s41598-025-85866-7

Categories: Literature Watch

Importance of neural network complexity for the automatic segmentation of individual thigh muscles in MRI images from patients with neuromuscular diseases

Sat, 2025-01-11 06:00

MAGMA. 2025 Jan 11. doi: 10.1007/s10334-024-01221-3. Online ahead of print.

ABSTRACT

OBJECTIVE: Segmentation of individual thigh muscles in MRI images is essential for monitoring neuromuscular diseases and quantifying relevant biomarkers such as fat fraction (FF). Deep learning approaches such as U-Net have demonstrated effectiveness in this field. However, the impact of reducing neural network complexity on FF quantification in individual muscles remains unexplored.

MATERIAL AND METHODS: U-Net architectures with different complexities have been compared for the quantification of the fat fraction in each muscle group selected in the central part of the thigh region. The corresponding performance has been assessed in terms of Dice score (DSC) and FF quantification error. The database contained 1450 thigh images from 59 patients and 14 healthy subjects (age: 47 ± 17 years, sex: 36F, 37M). Ten individual muscles were segmented in each image. The performance of each model was compared to nnU-Net, a complex architecture with 4.35 × 10⁷ parameters, 12.8 Gigabytes of peak memory usage and 167 h of training time.

RESULTS: As expected, nnU-Net achieved the highest DSC (94.77 ± 0.13%). A simpler U-Net (5.81 × 10⁵ parameters, 2.37 Gigabytes, 14 h of training time) achieved a lower DSC but still above 90%. Surprisingly, both models achieved comparable FF estimates.

DISCUSSION: The poor correlation between observed DSC and FF indicates that less complex architectures, which reduce GPU memory utilization and training time, can still accurately quantify FF.
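
The two quantities compared in this study are straightforward to compute; a minimal sketch follows. The Dice score measures mask overlap, and the fat fraction is averaged inside a mask from Dixon-style fat/water images. Array shapes and the standard definition FF = fat / (fat + water) are assumptions, since the abstract does not spell out the acquisition.

```python
# Sketch of the two evaluation quantities: Dice score between predicted and
# reference masks, and mean fat fraction inside a muscle mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def fat_fraction(fat: np.ndarray, water: np.ndarray, mask: np.ndarray) -> float:
    """Mean voxel-wise FF = fat / (fat + water) inside a boolean mask."""
    ff = fat / np.maximum(fat + water, 1e-8)
    return float(ff[mask].mean())
```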

PMID:39798067 | DOI:10.1007/s10334-024-01221-3

Categories: Literature Watch

Deep learning multi-classification of middle ear diseases using synthetic tympanic images

Sat, 2025-01-11 06:00

Acta Otolaryngol. 2025 Jan 10:1-6. doi: 10.1080/00016489.2024.2448829. Online ahead of print.

ABSTRACT

BACKGROUND: Recent advances in artificial intelligence have facilitated the automatic diagnosis of middle ear diseases using endoscopic tympanic membrane imaging.

AIM: We aimed to develop an automated diagnostic system for middle ear diseases by applying deep learning techniques to tympanic membrane images obtained during routine clinical practice.

MATERIAL AND METHODS: To augment the training dataset, we explored the use of generative adversarial networks (GANs) to produce high-quality synthetic tympanic images that were subsequently added to the training data. Between 2016 and 2021, we collected 472 endoscopic images representing four tympanic membrane conditions: normal, acute otitis media, otitis media with effusion, and chronic suppurative otitis media. These images were utilized for machine learning based on the InceptionV3 model, which was pretrained on ImageNet. Additionally, 200 synthetic images generated using StyleGAN3 and considered appropriate for each disease category were incorporated for retraining.

RESULTS: The inclusion of synthetic images alongside real endoscopic images did not significantly improve the diagnostic accuracy compared to training solely with real images. However, when trained solely on synthetic images, the model achieved a diagnostic accuracy of approximately 70%.

CONCLUSIONS AND SIGNIFICANCE: Synthetic images generated by GANs have potential utility in the development of machine-learning models for medical diagnosis.
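
A minimal sketch of the transfer-learning setup the methods describe: ImageNet-pretrained InceptionV3 with a new four-class head for the tympanic-membrane conditions. Keras/TensorFlow is an assumption (the abstract does not name a framework), as is freezing the backbone for an initial training stage.

```python
# Sketch: InceptionV3 pretrained on ImageNet, fine-tuned for 4 classes
# (normal, AOM, OME, CSOM); framework and freezing strategy are assumptions.
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False                     # freeze backbone initially
outputs = tf.keras.layers.Dense(4, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # real + synthetic images
```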

PMID:39797517 | DOI:10.1080/00016489.2024.2448829

Categories: Literature Watch

Integrating Model-Informed Drug Development With AI: A Synergistic Approach to Accelerating Pharmaceutical Innovation

Sat, 2025-01-11 06:00

Clin Transl Sci. 2025 Jan;18(1):e70124. doi: 10.1111/cts.70124.

ABSTRACT

The pharmaceutical industry constantly strives to improve drug development processes to reduce costs, increase efficiencies, and enhance therapeutic outcomes for patients. Model-Informed Drug Development (MIDD) uses mathematical models to simulate intricate processes involved in drug absorption, distribution, metabolism, and excretion, as well as pharmacokinetics and pharmacodynamics. Artificial intelligence (AI), encompassing techniques such as machine learning, deep learning, and generative AI, offers powerful tools and algorithms to efficiently identify meaningful patterns, correlations, and drug-target interactions from big data, enabling more accurate predictions and novel hypothesis generation. The union of MIDD with AI enables pharmaceutical researchers to optimize drug candidate selection, dosage regimens, and treatment strategies through virtual trials to help derisk drug candidates. However, several challenges, including the availability of relevant, labeled, high-quality datasets, data privacy concerns, model interpretability, and algorithmic bias, must be carefully managed. Standardization of model architectures, data formats, and validation processes is imperative to ensure reliable and reproducible results. Moreover, regulatory agencies have recognized the need to adapt their guidelines to evaluate recommendations from AI-enhanced MIDD methods. In conclusion, integrating model-informed drug development with AI offers a transformative paradigm for pharmaceutical innovation. By uniting the predictive power of computational models with the data-driven insights of AI, this synergy has the potential to accelerate drug discovery, optimize treatment strategies, and usher in a new era of personalized medicine, benefiting patients, researchers, and the pharmaceutical industry as a whole.

PMID:39797502 | DOI:10.1111/cts.70124

Categories: Literature Watch

Self-Driving Microscopes: AI Meets Super-Resolution Microscopy

Sat, 2025-01-11 06:00

Small Methods. 2025 Jan 10:e2401757. doi: 10.1002/smtd.202401757. Online ahead of print.

ABSTRACT

The integration of Machine Learning (ML) with super-resolution microscopy represents a transformative advancement in biomedical research. Recent advances in ML, particularly deep learning (DL), have significantly enhanced image processing tasks, such as denoising and reconstruction. This review explores the growing potential of automation in super-resolution microscopy, focusing on how DL can enable autonomous imaging tasks. Overcoming the challenges of automation, particularly in adapting to dynamic biological processes and minimizing manual intervention, is crucial for the future of microscopy. Whilst still in its infancy, automation in super-resolution can revolutionize drug discovery and disease phenotyping, leading to breakthroughs similar to those recognized in this year's Nobel Prizes for Physics and Chemistry.

PMID:39797467 | DOI:10.1002/smtd.202401757

Categories: Literature Watch

Semi-Automatic Refinement of Myocardial Segmentations for Better LVNC Detection

Sat, 2025-01-11 06:00

J Clin Med. 2025 Jan 6;14(1):271. doi: 10.3390/jcm14010271.

ABSTRACT

Background: Accurate segmentation of the left ventricular myocardium in cardiac MRI is essential for developing reliable deep learning models to diagnose left ventricular non-compaction cardiomyopathy (LVNC). This work focuses on improving the segmentation database used to train these models, enhancing the quality of myocardial segmentation for more precise model training. Methods: We present a semi-automatic framework that refines segmentations through three fundamental approaches: (1) combining neural network outputs with expert-driven corrections, (2) implementing a blob-selection method to correct segmentation errors and neural network hallucinations, and (3) employing a cross-validation process using the baseline U-Net model. Results: Applied to datasets from three hospitals, these methods demonstrate improved segmentation accuracy, with the blob-selection technique boosting the Dice coefficient for the Trabecular Zone by up to 0.06 in certain populations. Conclusions: Our approach enhances the dataset's quality, providing a more robust foundation for future LVNC diagnostic models.
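
The blob-selection step in approach (2) can be sketched with connected-component analysis: keep only the largest blob of a binary segmentation to discard spurious regions and network hallucinations. The largest-blob rule is an assumed common variant; the paper's exact selection criterion may differ.

```python
# Sketch of blob selection: keep the largest connected component of a
# binary myocardium mask (assumed selection rule, for illustration).
import numpy as np
from scipy import ndimage

def keep_largest_blob(mask: np.ndarray) -> np.ndarray:
    labels, n = ndimage.label(mask)                  # connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)          # largest blob only
```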

PMID:39797353 | DOI:10.3390/jcm14010271

Categories: Literature Watch

Bird Species Detection Net: Bird Species Detection Based on the Extraction of Local Details and Global Information Using a Dual-Feature Mixer

Sat, 2025-01-11 06:00

Sensors (Basel). 2025 Jan 6;25(1):291. doi: 10.3390/s25010291.

ABSTRACT

Bird species detection is critical for applications such as the analysis of bird population dynamics and species diversity. However, this task remains challenging due to local structural similarities and class imbalances among bird species. Currently, most deep learning algorithms focus on designing local feature extraction modules while ignoring the importance of global information, yet this global information is essential for accurate bird species detection. To address this limitation, we propose BSD-Net, a bird species detection network. BSD-Net efficiently learns local and global information in pixels to accurately detect bird species. BSD-Net consists of two main components: a dual-branch feature mixer (DBFM) and a prediction balancing module (PBM). The dual-branch feature mixer extracts features from dichotomous feature segments using global attention and deep convolution, expanding the network's receptive field and achieving a strong inductive bias, allowing the network to distinguish between similar local details. The prediction balancing module balances differences in the feature space based on the pixel values of each category, thereby resolving category imbalances and improving the network's detection accuracy. Experimental results on two public benchmarks and a self-constructed Poyang Lake Bird dataset demonstrate that BSD-Net outperforms existing methods, achieving 45.71% and 80.00% mAP50 on the CUB-200-2011 and Poyang Lake Bird datasets, respectively, and 66.03% AP on FBD-SV-2024, providing more accurate location and species information for bird detection tasks in video surveillance.
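
The dual-branch idea can be sketched as a module that splits channels, runs one half through depthwise convolution (local detail) and the other through self-attention (global context), then concatenates. Dimensions and the even channel split are assumptions for illustration, not BSD-Net's published design.

```python
# Sketch of a dual-branch feature mixer: local depthwise-conv branch plus
# global self-attention branch over split channels (assumed layout).
import torch
import torch.nn as nn

class DualBranchMixer(nn.Module):
    def __init__(self, c: int, heads: int = 4):
        super().__init__()
        self.local = nn.Conv2d(c // 2, c // 2, 3, padding=1, groups=c // 2)
        self.attn = nn.MultiheadAttention(c // 2, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        a, b = x.chunk(2, dim=1)
        a = self.local(a)                       # local-detail branch
        B, C, H, W = b.shape
        t = b.flatten(2).transpose(1, 2)        # (B, HW, C) token sequence
        t, _ = self.attn(t, t, t)               # global-context branch
        b = t.transpose(1, 2).reshape(B, C, H, W)
        return torch.cat([a, b], dim=1)

y = DualBranchMixer(64)(torch.randn(1, 64, 32, 32))  # toy usage
```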

PMID:39797082 | DOI:10.3390/s25010291

Categories: Literature Watch

Munsell Soil Colour Prediction from the Soil and Soil Colour Book Using Patching Method and Deep Learning Techniques

Sat, 2025-01-11 06:00

Sensors (Basel). 2025 Jan 6;25(1):287. doi: 10.3390/s25010287.

ABSTRACT

Soil colour is a key indicator of soil health and the associated properties. In agriculture, soil colour provides farmers and advisers with a visual guide for interpreting soil functions and performance. Munsell colour charts have been used to determine soil colour for many years, but the process is fallible, as it depends on the user's perception. As smartphones are widely used and come with high-quality cameras, a popular model was used to capture the images for this study. This study aims to predict Munsell soil colour (MSC) from the Munsell soil colour book (MSCB) by applying deep learning techniques to mobile-captured images. The MSCB contains 14 pages and 443 colour chips, so the number of classes for chip-by-chip prediction is very high, and the captured images are inadequate to train and validate deep learning methods; a patch-based mechanism was therefore proposed to enrich the dataset. The course of action is to determine the prediction accuracy of MSC at both page level and chip level by evaluating multiple deep learning methods combined with the patch-based mechanism. The analysis also identifies the best deep learning technique for MSC prediction. Without patching, the accuracy is below 40% for chip-level prediction and below 65% for page-level prediction, whereas with patching it is around 95% for both, which is significant. Lastly, this study provides insights into applying the proposed techniques to real-world soil and achieves high accuracy with a limited number of soil samples, indicating the method's potential scalability and effectiveness with larger datasets.
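
The patch-based enrichment step can be sketched as slicing each mobile-captured chip photo into overlapping patches, so one image yields many training samples of the same colour class. The patch size and stride below are assumptions, not the paper's settings.

```python
# Sketch of patch-based dataset enrichment: one image -> many patches,
# all inheriting the image's Munsell colour label (size/stride assumed).
import numpy as np

def extract_patches(img: np.ndarray, size: int = 64, stride: int = 32):
    h, w = img.shape[:2]
    patches = [img[y:y + size, x:x + size]
               for y in range(0, h - size + 1, stride)
               for x in range(0, w - size + 1, stride)]
    return np.stack(patches)   # (n_patches, size, size, channels)
```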

PMID:39797078 | DOI:10.3390/s25010287

Categories: Literature Watch

CTHNet: A CNN-Transformer Hybrid Network for Landslide Identification in Loess Plateau Regions Using High-Resolution Remote Sensing Images

Sat, 2025-01-11 06:00

Sensors (Basel). 2025 Jan 6;25(1):273. doi: 10.3390/s25010273.

ABSTRACT

The Loess Plateau in northwest China features fragmented terrain and is prone to landslides. However, the complex environment of the Loess Plateau, combined with the inherent limitations of convolutional neural networks (CNNs), often results in false positives and missed detection for deep learning models based on CNNs when identifying landslides from high-resolution remote sensing images. To deal with this challenge, our research introduced a CNN-transformer hybrid network. Specifically, we first constructed a database consisting of 1500 loess landslides and non-landslide samples. Subsequently, we proposed a neural network architecture that employs a CNN-transformer hybrid as an encoder, with the ability to extract high-dimensional, local-scale features using CNNs and global-scale features using a multi-scale lightweight transformer module, thereby enabling the automatic identification of landslides. The results demonstrate that this model can effectively detect loess landslides in such complex environments. Compared to approaches based on CNNs or transformers, such as U-Net, HCNet and TransUNet, our proposed model achieved greater accuracy, with an improvement of at least 3.81% in the F1-score. This study contributes to the automatic and intelligent identification of landslide locations and ranges on the Loess Plateau, which has significant practicality in terms of landslide investigation, risk assessment, disaster management, and related fields.

PMID:39797065 | DOI:10.3390/s25010273

Categories: Literature Watch

A Comparison Study of Person Identification Using IR Array Sensors and LiDAR

Sat, 2025-01-11 06:00

Sensors (Basel). 2025 Jan 6;25(1):271. doi: 10.3390/s25010271.

ABSTRACT

Person identification is a critical task in applications such as security and surveillance, requiring reliable systems that perform robustly under diverse conditions. This study evaluates the Vision Transformer (ViT) and ResNet34 models across three modalities (RGB, thermal, and depth) using datasets collected with infrared array sensors and LiDAR sensors in controlled scenarios at varying resolutions (16 × 12 to 640 × 480) to explore their effectiveness in person identification. Preprocessing techniques, including YOLO-based cropping, were employed to improve subject isolation. Results show similar identification performance across the three modalities, in particular at high resolution (i.e., 640 × 480), with RGB image classification reaching 100.0%, depth images reaching 99.54%, and thermal images reaching 97.93%. However, upon deeper investigation, thermal images show more robustness and generalizability by maintaining focus on subject-specific features even at low resolutions. In contrast, RGB data performs well at high resolutions but exhibits reliance on background features as resolution decreases. Depth data shows significant degradation at lower resolutions, suffering from scattered attention and artifacts. These findings highlight the importance of modality selection, with thermal imaging emerging as the most reliable. Future work will explore multi-modal integration, advanced preprocessing, and hybrid architectures to enhance model adaptability and address current limitations. This study highlights the potential of thermal imaging and the need for modality-specific strategies in designing robust person identification systems.
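
The YOLO-based cropping step can be sketched with the ultralytics package and a generic pretrained detector; the study does not state which YOLO variant or weights it used, so both are assumptions here.

```python
# Sketch of YOLO-based subject cropping as preprocessing; the detector
# weights and input file are hypothetical placeholders.
from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")                      # assumed generic detector
img = cv2.imread("frame.png")                   # hypothetical input frame
res = model(img, classes=[0])[0]                # COCO class 0 = person
if len(res.boxes):
    x1, y1, x2, y2 = map(int, res.boxes.xyxy[0].tolist())
    crop = img[y1:y2, x1:x2]                    # isolated subject for the classifier
```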

PMID:39797062 | DOI:10.3390/s25010271

Categories: Literature Watch

Attention Score-Based Multi-Vision Transformer Technique for Plant Disease Classification

Sat, 2025-01-11 06:00

Sensors (Basel). 2025 Jan 6;25(1):270. doi: 10.3390/s25010270.

ABSTRACT

This study proposes an advanced plant disease classification framework leveraging the Attention Score-Based Multi-Vision Transformer (Multi-ViT) model. The framework introduces a novel attention mechanism to dynamically prioritize relevant features from multiple leaf images, overcoming the limitations of single-leaf-based diagnoses. Building on the Vision Transformer (ViT) architecture, the Multi-ViT model aggregates diverse feature representations by combining outputs from multiple ViTs, each capturing unique visual patterns. This approach allows for a holistic analysis of spatially distributed symptoms, crucial for accurately diagnosing diseases in trees. Extensive experiments conducted on apple, grape, and tomato leaf disease datasets demonstrate the model's superior performance, achieving over 99% accuracy and significantly improving F1 scores compared to traditional methods such as ResNet, VGG, and MobileNet. These findings underscore the effectiveness of the proposed model for precise and reliable plant disease classification.
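
The aggregation idea can be sketched as scoring each leaf image's ViT embedding with a learned attention weight and pooling the weighted embeddings into one plant-level representation before classification. The linear scoring head below is an assumption; the paper's attention mechanism may be more elaborate.

```python
# Sketch of attention-score pooling over per-leaf ViT embeddings
# (scoring head and dimensions are illustrative assumptions).
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim: int, n_classes: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)          # attention score per leaf
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feats):                   # feats: (B, n_leaves, dim)
        w = torch.softmax(self.score(feats), dim=1)
        pooled = (w * feats).sum(dim=1)         # weighted leaf aggregation
        return self.head(pooled)

logits = AttentionPool(768, 4)(torch.randn(2, 5, 768))  # toy usage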

PMID:39797061 | DOI:10.3390/s25010270

Categories: Literature Watch

The role of chromatin state in intron retention: A case study in leveraging large scale deep learning models

Fri, 2025-01-10 06:00

PLoS Comput Biol. 2025 Jan 10;21(1):e1012755. doi: 10.1371/journal.pcbi.1012755. Online ahead of print.

ABSTRACT

Complex deep learning models trained on very large datasets have become key enabling tools for current research in natural language processing and computer vision. By providing pre-trained models that can be fine-tuned for specific applications, they enable researchers to create accurate models with minimal effort and computational resources. Large scale genomics deep learning models come in two flavors: the first are large language models of DNA sequences trained in a self-supervised fashion, similar to the corresponding natural language models; the second are supervised learning models that leverage large scale genomics datasets from ENCODE and other sources. We argue that these models are the equivalent of foundation models in natural language processing in their utility, as they encode within them chromatin state in its different aspects, providing useful representations that allow quick deployment of accurate models of gene regulation. We demonstrate this premise by leveraging the recently created Sei model to develop simple, interpretable models of intron retention, and demonstrate their advantage over models based on the DNA language model DNABERT-2. Our work also demonstrates the impact of chromatin state on the regulation of intron retention. Using representations learned by Sei, our model is able to discover the involvement of transcription factors and chromatin marks in regulating intron retention, providing better accuracy than a recently published custom model developed for this purpose.
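
The modeling recipe the abstract describes (a simple, interpretable model on top of foundation-model representations) can be sketched as logistic regression on precomputed Sei features. The feature and label files are hypothetical; Sei outputs would be generated separately with the published Sei pipeline.

```python
# Sketch: interpretable classifier on precomputed Sei chromatin-state
# features for intron retention (file names are hypothetical placeholders).
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.load("sei_features.npy")      # (n_introns, n_chromatin_features), assumed
y = np.load("retained_labels.npy")   # 1 = retained intron, 0 = spliced, assumed
clf = LogisticRegression(max_iter=1000, penalty="l2").fit(X, y)
top = np.argsort(np.abs(clf.coef_[0]))[::-1][:10]
print("most informative chromatin features:", top)   # e.g. TF/mark indices
```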

PMID:39792954 | DOI:10.1371/journal.pcbi.1012755

Categories: Literature Watch

Cardiac MR image reconstruction using cascaded hybrid dual domain deep learning framework

Fri, 2025-01-10 06:00

PLoS One. 2025 Jan 10;20(1):e0313226. doi: 10.1371/journal.pone.0313226. eCollection 2025.

ABSTRACT

Recovering diagnostic-quality cardiac MR images from highly under-sampled data is a current research focus, particularly in addressing cardiac and respiratory motion. Techniques such as Compressed Sensing (CS) and Parallel Imaging (pMRI) have been proposed to accelerate MRI data acquisition and improve image quality. However, these methods have limitations in high spatial-resolution applications, often resulting in blurring or residual artifacts. Recently, deep learning-based techniques have gained attention for their accuracy and efficiency in image reconstruction. Deep learning-based MR image reconstruction methods are divided into two categories: (a) single-domain methods (image-domain learning and k-space-domain learning) and (b) cross/dual-domain methods. Single-domain methods, which typically use U-Net in either the image or k-space domain, fail to fully exploit the correlation between these domains. This paper introduces a dual-domain deep learning approach that incorporates multi-coil data consistency (MCDC) layers for reconstructing cardiac MR images from 1-D Variable Density (VD) random under-sampled data. The proposed hybrid dual-domain deep learning models integrate data from both domains to improve image quality, reduce artifacts, and enhance the overall robustness and accuracy of the reconstruction process. Experimental results demonstrate that the proposed methods outperform conventional deep learning and CS techniques, as evidenced by higher Structural Similarity Index (SSIM), lower Root Mean Square Error (RMSE), and higher Peak Signal-to-Noise Ratio (PSNR).
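
The data-consistency idea can be sketched in its simplest single-coil form: re-impose the acquired k-space samples on the network's current reconstruction at the sampled locations. The paper's MCDC layer generalizes this across coils; the shapes and the hard-replacement rule below are assumptions.

```python
# Sketch of a single-coil data-consistency step: keep measured k-space
# samples, keep the network's estimate elsewhere (hard DC, assumed variant).
import torch

def data_consistency(recon_img, kspace_meas, mask):
    """recon_img: complex image estimate; kspace_meas/mask: acquired data."""
    k = torch.fft.fft2(recon_img)
    k = torch.where(mask.bool(), kspace_meas, k)   # re-impose measured samples
    return torch.fft.ifft2(k)                       # back to image domain
```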

PMID:39792851 | DOI:10.1371/journal.pone.0313226

Categories: Literature Watch

Precise Sizing and Collision Detection of Functional Nanoparticles by Deep Learning Empowered Plasmonic Microscopy

Fri, 2025-01-10 06:00

Adv Sci (Weinh). 2025 Jan 10:e2407432. doi: 10.1002/advs.202407432. Online ahead of print.

ABSTRACT

Single nanoparticle analysis is crucial for various applications in biology, materials, and energy. However, precisely profiling and monitoring weakly scattering nanoparticles remains challenging. Here, it is demonstrated that deep learning-empowered plasmonic microscopy (Deep-SM) enables precise sizing and collision detection of functional chemical and biological nanoparticles. Image sequences are recorded by state-of-the-art plasmonic microscopy during single-nanoparticle collisions onto the sensor surface. Deep-SM enhances signal detection and suppresses noise by leveraging spatio-temporal correlations of the unique signal and noise characteristics in plasmonic microscopy image sequences. Deep-SM provides significant scattering-signal enhancement and noise reduction in dynamic imaging of biological nanoparticles as small as 10 nm, as well as in the collision detection of metallic-nanoparticle electrochemistry and quantum coupling with plasmonic microscopy. The high sensitivity and simplicity make this approach promising for routine use in nanoparticle analysis across diverse scientific fields.

PMID:39792780 | DOI:10.1002/advs.202407432

Categories: Literature Watch

A hybrid dual-branch model with recurrence plots and transposed transformer for stock trend prediction

Fri, 2025-01-10 06:00

Chaos. 2025 Jan 1;35(1):013125. doi: 10.1063/5.0233275.

ABSTRACT

Stock trend prediction is a significant challenge due to the inherent uncertainty and complexity of stock market time series. In this study, we introduce an innovative dual-branch network model designed to effectively address this challenge. The first branch constructs recurrence plots (RPs) to capture the nonlinear relationships between time points from historical closing price sequences and computes the corresponding recurrence quantification analysis measures. The second branch integrates transposed transformers to identify subtle interconnections within the multivariate time series derived from stocks. Features extracted from both branches are concatenated and fed into a fully connected layer for binary classification, determining whether the stock price will rise or fall the next day. Our experimental results based on historical data from seven randomly selected stocks demonstrate that our proposed dual-branch model achieves superior accuracy (ACC) and F1-score compared to traditional machine learning and deep learning approaches. These findings underscore the efficacy of combining RPs with deep learning models to enhance stock trend prediction, offering considerable potential for refining decision-making in financial markets and investment strategies.
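
The RP construction in the first branch can be sketched by thresholding the pairwise distance matrix of a price sequence. This simplified version skips the time-delay embedding that RP analyses often apply, and the quantile-based threshold is an assumed convention rather than the paper's setting.

```python
# Sketch of a recurrence plot from a 1-D closing-price series
# (no delay embedding; threshold rule assumed).
import numpy as np

def recurrence_plot(x: np.ndarray, eps_quantile: float = 0.1) -> np.ndarray:
    d = np.abs(x[:, None] - x[None, :])        # pairwise distances
    eps = np.quantile(d, eps_quantile)         # assumed threshold rule
    return (d <= eps).astype(np.uint8)         # binary RP matrix

rp = recurrence_plot(np.log(np.random.rand(100) + 1.0))  # placeholder prices
```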

PMID:39792696 | DOI:10.1063/5.0233275

Categories: Literature Watch

Artificial Intelligence for Predicting HER2 Status of Gastric Cancer Based on Whole-Slide Histopathology Images: A Retrospective Multicenter Study

Fri, 2025-01-10 06:00

Adv Sci (Weinh). 2025 Jan 10:e2408451. doi: 10.1002/advs.202408451. Online ahead of print.

ABSTRACT

Human epidermal growth factor receptor 2 (HER2)-positive gastric cancer (GC) shows a robust response to combination therapy based on HER2-targeted therapy. The application of these therapies is highly dependent on the evaluation of tumor HER2 status. However, there are many risks and challenges in HER2 assessment in GC. Therefore, an economically viable and readily available instrument is requisite for distinguishing HER2 status among patients diagnosed with GC. This study developed a novel deep learning model, HER2Net, which predicts HER2 status by quantitatively calculating the proportion of HER2 high-expression regions. HER2Net is trained on an internal training set derived from 531 hematoxylin & eosin (H&E) whole-slide images (WSIs) of 520 patients. Subsequently, the performance of HER2Net is validated on an internal test set of 115 H&E WSIs of 111 patients and an external multi-center test set of 102 H&E WSIs of 101 patients. HER2Net achieves an accuracy of 0.9043 on the internal test set, and an accuracy of 0.8922 on the external test set from multiple institutes. This finding indicates that HER2Net can potentially offer a novel methodology for the identification of HER2-positive GC.
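
The slide-level rule described above (quantify the proportion of HER2 high-expression regions) can be sketched as tile-level probabilities aggregated by a proportion threshold. Both thresholds below are hypothetical placeholders, not HER2Net's published parameters.

```python
# Sketch of proportion-based slide scoring over tile predictions
# (tile classifier outputs and cutoffs are hypothetical).
import numpy as np

def slide_her2_status(tile_probs: np.ndarray, tile_thresh: float = 0.5,
                      cutoff: float = 0.5) -> bool:
    high = tile_probs >= tile_thresh           # tiles called high-expression
    return float(high.mean()) >= cutoff        # proportion rule for the slide
```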

PMID:39792693 | DOI:10.1002/advs.202408451

Categories: Literature Watch

Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data

Fri, 2025-01-10 06:00

J Magn Reson Imaging. 2025 Jan 10. doi: 10.1002/jmri.29686. Online ahead of print.

ABSTRACT

BACKGROUND: Deep learning-based segmentation of brain metastases relies on large amounts of data fully annotated by domain experts. Semi-supervised learning offers potentially efficient methods to improve model performance without an excessive annotation burden.

PURPOSE: This work tests the viability of semi-supervision for brain metastases segmentation.

STUDY TYPE: Retrospective.

SUBJECTS: There were 156, 65, 324, and 200 labeled scans from four institutions and 519 unlabeled scans from a single institution. All subjects included in the study had been diagnosed with brain metastases.

FIELD STRENGTH/SEQUENCES: 1.5 T and 3 T, 2D and 3D T1-weighted pre- and post-contrast, and fluid-attenuated inversion recovery (FLAIR).

ASSESSMENT: Three semi-supervision methods (mean teacher, cross-pseudo supervision, and interpolation consistency training) were adapted with the U-Net architecture. The three semi-supervised methods were compared to their respective supervised baselines on the full and half-sized training sets.

STATISTICAL TESTS: Evaluation was performed on a multinational test set from four different institutions using 5-fold cross-validation. Method performance was evaluated by the following: the number of false-positive predictions, the number of true positive predictions, the 95th Hausdorff distance, and the Dice similarity coefficient (DSC). Significance was tested using a paired samples t test for a single fold, and across all folds within a given cohort.

RESULTS: Semi-supervision outperformed the supervised baseline for all sites, with the best-performing semi-supervised method achieving average DSC improvements of 6.3% ± 1.6%, 8.2% ± 3.8%, 8.6% ± 2.6%, and 15.4% ± 1.4% when trained on half the dataset, and 3.6% ± 0.7%, 2.0% ± 1.5%, 1.8% ± 5.7%, and 4.7% ± 1.7% when trained on the full dataset, compared to the supervised baseline on the four test cohorts. In addition, in three of four datasets, the semi-supervised training produced equal or better results than the supervised models trained on twice the labeled data.

DATA CONCLUSION: Semi-supervised learning allows for improved segmentation performance over the supervised baseline, and the improvement was particularly notable for independent external test sets when trained on small amounts of labeled data.

PLAIN LANGUAGE SUMMARY: Artificial intelligence requires extensive datasets with large amounts of annotated data from medical experts, which can be difficult to acquire due to the large workload. To compensate for this, it is possible to utilize large amounts of un-annotated clinical data in addition to annotated data. However, this approach has not been widely tested for the most common intracranial brain tumor, brain metastases. This study shows that the approach allows for data-efficient deep learning models across multiple institutions with different clinical protocols and scanners.

LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY: Stage 2.
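
Of the three semi-supervision schemes compared, mean teacher is the simplest to sketch: the teacher is an exponential moving average (EMA) of the student, and unlabeled scans contribute a consistency loss between the two networks. The toy model, EMA decay, and loss weighting below are illustrative assumptions, not the study's configuration.

```python
# Sketch of mean-teacher semi-supervision: EMA teacher + consistency loss
# on unlabeled data (toy stand-in for a U-Net; hyperparameters assumed).
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1 - decay)

student = torch.nn.Conv2d(1, 2, 3, padding=1)       # stand-in for a U-Net
teacher = copy.deepcopy(student)
x_unlab = torch.randn(4, 1, 64, 64)                 # unlabeled batch
consistency = F.mse_loss(student(x_unlab), teacher(x_unlab).detach())
# total_loss = supervised_seg_loss + lambda_u * consistency; then ema_update(...)
```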

PMID:39792624 | DOI:10.1002/jmri.29686

Categories: Literature Watch

Visualizing Preosteoarthritis: Updates on UTE-Based Compositional MRI and Deep Learning Algorithms

Fri, 2025-01-10 06:00

J Magn Reson Imaging. 2025 Jan 10. doi: 10.1002/jmri.29710. Online ahead of print.

ABSTRACT

Osteoarthritis (OA) is heterogeneous and involves structural changes in the whole joint, such as cartilage, meniscus/labrum, ligaments, and tendons, mainly with short T2 relaxation times. Detecting OA before the onset of irreversible changes is crucial for early proactive management and for limiting the growing disease burden. Recent advanced quantitative imaging techniques and deep learning (DL) algorithms in musculoskeletal imaging have shown great potential for visualizing "pre-OA." In this review, we first focus on ultrashort echo time-based magnetic resonance imaging (MRI) techniques for direct visualization as well as quantitative morphological and compositional assessment of both short- and long-T2 musculoskeletal tissues, and second explore how DL is revolutionizing MRI analysis (eg, automatic tissue segmentation and extraction of quantitative image biomarkers) and the classification, prediction, and management of OA. PLAIN LANGUAGE SUMMARY: Detecting osteoarthritis (OA) before the onset of irreversible changes is crucial for early proactive management. OA is heterogeneous and involves structural changes in the whole joint, such as cartilage, meniscus/labrum, ligaments, and tendons, mainly with short T2 relaxation times. Ultrashort echo time-based magnetic resonance imaging (MRI), in particular, enables direct visualization and quantitative compositional assessment of short-T2 tissues. Deep learning is revolutionizing MRI analysis (eg, automatic tissue segmentation and extraction of quantitative image biomarkers) and the detection, classification, and prediction of disease. Together, they have made further advances toward the identification of imaging biomarkers/features for pre-OA. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 2.

PMID:39792443 | DOI:10.1002/jmri.29710

Categories: Literature Watch

deep-AMPpred: A Deep Learning Method for Identifying Antimicrobial Peptides and Their Functional Activities

Fri, 2025-01-10 06:00

J Chem Inf Model. 2025 Jan 10. doi: 10.1021/acs.jcim.4c01913. Online ahead of print.

ABSTRACT

Antimicrobial peptides (AMPs) are small peptides that play an important role in disease defense. As the problem of pathogen resistance caused by the misuse of antibiotics intensifies, the identification of AMPs as alternatives to antibiotics has become a hot topic. Accurately identifying AMPs using computational methods has been a key issue in the field of bioinformatics in recent years. Although there are many machine learning-based AMP identification tools, most of them do not focus on, or only focus on, a few functional activities. Predicting the multiple activities of antimicrobial peptides can help discover candidate peptides with broad-spectrum antimicrobial ability. We propose deep-AMPpred, a two-stage AMP predictor in which the first stage distinguishes AMPs from other peptides and the second stage solves the multilabel problem of 13 common functional activities of AMPs. deep-AMPpred combines the ESM-2 model to encode AMP features and integrates CNN, BiLSTM, and CBAM models to discover AMPs and their functional activities. The ESM-2 model captures the global contextual features of the peptide sequence, while CNN, BiLSTM, and CBAM combine local feature extraction, long- and short-term dependency modeling, and attention mechanisms to improve the performance of deep-AMPpred in AMP and function prediction. Experimental results demonstrate that deep-AMPpred performs well in accurately identifying AMPs and predicting their functional activities. This confirms the effectiveness of using the ESM-2 model to capture meaningful peptide sequence features and integrating multiple deep learning models for AMP identification and activity prediction.
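
The ESM-2 encoding stage can be sketched with the fair-esm package: embed a peptide and mean-pool the per-residue representations into a single vector, which would then feed the CNN/BiLSTM/CBAM heads. The small 35M-parameter checkpoint is an assumption (the paper may use a larger one); the example peptide (magainin 2) is illustrative.

```python
# Sketch of ESM-2 peptide embedding via fair-esm; checkpoint choice and
# mean-pooling are assumptions for illustration.
import torch
import esm

model, alphabet = esm.pretrained.esm2_t12_35M_UR50D()
model.eval()
batch_converter = alphabet.get_batch_converter()
_, _, tokens = batch_converter([("pep", "GIGKFLHSAKKFGKAFVGEIMNS")])
with torch.no_grad():
    reps = model(tokens, repr_layers=[12])["representations"][12]
emb = reps[0, 1:-1].mean(dim=0)     # drop BOS/EOS, mean-pool residues (480-d)
print(emb.shape)
```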

PMID:39792442 | DOI:10.1021/acs.jcim.4c01913

Categories: Literature Watch

Addendum to: The effectiveness of deep learning model in differentiating benign and malignant pulmonary nodules on spiral CT

Fri, 2025-01-10 06:00

Technol Health Care. 2025;33(1):695. doi: 10.3233/THC-249001.

NO ABSTRACT

PMID:39792355 | DOI:10.3233/THC-249001

Categories: Literature Watch
