Deep learning
Chemistry of Street Art: Neural Network for the Spectral Analysis of Berlin Wall Colors
J Am Chem Soc. 2024 Dec 11. doi: 10.1021/jacs.4c12611. Online ahead of print.
ABSTRACT
This research starts with the analysis of fragments of Berlin Wall street art to characterize the painting materials. The spectroscopic results provide a general description of the painting's execution technique and, more importantly, open the way to a new application of Raman spectroscopy to the quantitative analysis of acrylic colors. The study highlights the correlation between peak intensity and compound percentage and explores the application of deep learning for quantifying pigment mixtures in commercial acrylic products from Raman spectra acquired with hand-held equipment (BRAVO by Bruker). The study reveals the ability of a convolutional neural network (CNN) to analyze the spectra and predict the ratio between the coloring compounds. The reference materials for calibration and training were obtained by diluting commercial acrylic colors (Schmincke brand paints) in the way commonly practiced by street artists. For the first time, Raman investigation provides valuable insights into calibrations for determining dye dilution in mixtures of commercial products, offering a new opportunity for analytical quantification with hand-held Raman spectrometers and contributing to a comprehensive understanding of artists' techniques and materials in street art.
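As a sketch of the quantification model class involved, the following minimal 1D CNN regresses a pigment mixing ratio from a Raman spectrum; the architecture, spectrum length, and synthetic calibration data are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: a 1D CNN that regresses a pigment mixing ratio (0..1)
# from a Raman spectrum. Spectrum length, layer sizes, and the random
# stand-in data are assumptions, not the published model.
import torch
import torch.nn as nn

class RamanRatioCNN(nn.Module):
    def __init__(self, n_points=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_points // 16), 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),   # ratio constrained to [0, 1]
        )

    def forward(self, x):                     # x: (batch, 1, n_points)
        return self.head(self.features(x)).squeeze(-1)

model = RamanRatioCNN()
spectra = torch.randn(8, 1, 1024)             # stand-in calibration spectra
ratios = torch.rand(8)                        # known dilution ratios
loss = nn.MSELoss()(model(spectra), ratios)
loss.backward()                               # one illustrative training step
```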
PMID:39660736 | DOI:10.1021/jacs.4c12611
Stress testing deep learning models for prostate cancer detection on biopsies and surgical specimens
J Pathol. 2024 Dec 11. doi: 10.1002/path.6373. Online ahead of print.
ABSTRACT
The presence, location, and extent of prostate cancer are assessed by pathologists using H&E-stained tissue slides. Machine learning approaches can accomplish these tasks for both biopsies and radical prostatectomies. Deep learning approaches using convolutional neural networks (CNNs) have been shown to identify cancer in pathologic slides, with some securing regulatory approval for clinical use. However, differences in sample processing can subtly alter morphology between sample types, making it unclear whether deep learning algorithms will work consistently on both types of slide images. Our goal was to investigate whether morphological differences between sample types affect the performance of biopsy-trained cancer detection CNN models when applied to radical prostatectomies, and vice versa, using multiple cohorts (N = 1,000). Radical prostatectomies (N = 100) and biopsies (N = 50) were acquired from The University of Pennsylvania to train (80%) and validate (20%) a DenseNet CNN for biopsies (MB), radical prostatectomies (MR), and a combined dataset (MB+R). On a tile level, MB and MR achieved F1 scores greater than 0.88 when applied to their own sample type but less than 0.65 when applied across sample types. On a whole-slide level, models achieved significantly better performance on their own sample type compared to the alternative model (p < 0.05) for all metrics. This was confirmed by external validation using digitized biopsy slide images from a clinical trial [NRG Radiation Therapy Oncology Group (RTOG); NRG/RTOG 0521, N = 750] via both qualitative and quantitative analyses (p < 0.05). A comprehensive review of model outputs revealed morphologically driven decision making that adversely affected model performance. MB appeared to be challenged by open gland structures, whereas MR appeared to be challenged by closed gland structures, indicating potential morphological variation between the training sets. These findings suggest that differences in morphology and heterogeneity necessitate tailored, sample-specific (i.e., biopsy and surgical) machine learning models. © 2024 The Author(s). The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
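To make the cross-sample-type gap concrete, here is a small self-contained sketch: a classifier fitted on synthetic "biopsy" tile features is scored on held-out tiles of its own type and on "prostatectomy" tiles with a shifted feature distribution. The features, labels, and logistic model are toy stand-ins, not the study's DenseNet pipeline, but the F1 gap mirrors the reported pattern.

```python
# Toy demonstration of a domain-shift F1 gap between sample types.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def make_tiles(shift, n=600):
    """Synthetic stand-ins for CNN tile embeddings of one sample type."""
    X = rng.normal(shift, 1.0, size=(n, 16))
    y = (X[:, 0] > shift).astype(int)        # toy cancer/benign label
    return X, y

Xb, yb = make_tiles(0.0)                     # "biopsy" tiles
Xr, yr = make_tiles(2.0)                     # "prostatectomy" tiles, shifted

mb = LogisticRegression().fit(Xb[:400], yb[:400])   # biopsy-trained model
print("own type F1:  ", f1_score(yb[400:], mb.predict(Xb[400:])))  # high
print("cross type F1:", f1_score(yr, mb.predict(Xr)))              # degraded
```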
PMID:39660731 | DOI:10.1002/path.6373
Examples of implementations and the future of AI in medical diagnostics
Przegl Epidemiol. 2024 Dec 10;78(3):303-317. doi: 10.32394/pe/195240. Epub 2024 Oct 25.
ABSTRACT
AI is revolutionizing medical diagnostics around the world, driving innovation in a variety of contexts, from leading US hospitals to facilities in developing countries. Below we present examples of AI implementations in medical diagnostics from different regions, taking into account the effectiveness and results of these solutions and forecasts for the development of this technology. Regarding the future of artificial intelligence in medical diagnostics, the article considers potential innovations such as the development of deep learning algorithms and integration with 5G technologies and the Internet. Attention is paid to the possibilities of further personalizing healthcare and to the challenges of adapting legal regulations and data management. The article also indicates directions for future research that may contribute to the further development of AI in medical diagnostics and to improving the quality of healthcare, not only in Poland but around the world.
PMID:39660712 | DOI:10.32394/pe/195240
Predicting gene expression from histone marks using chromatin deep learning models depends on histone mark function, regulatory distance and cellular states
Nucleic Acids Res. 2024 Dec 11:gkae1212. doi: 10.1093/nar/gkae1212. Online ahead of print.
ABSTRACT
To understand the complex relationship between histone mark activity and gene expression, recent advances have used in silico predictions based on large-scale machine learning models. However, these approaches have omitted key contributing factors like cell state, histone mark function, and distal effects, which impact the relationship and limit their findings. Moreover, downstream use of these models for new biological insight is lacking. Here, we present the most comprehensive study of this relationship to date, investigating seven histone marks in eleven cell types across a diverse range of cell states. We used convolutional and attention-based models to predict transcription from histone mark activity at promoters and distal regulatory elements. Our work shows that histone mark function, genomic distance, and cellular states collectively influence a histone mark's relationship with transcription. We found that no individual histone mark is consistently the strongest predictor of gene expression across all genomic and cellular contexts. This highlights the need to consider all three factors when determining the effect of histone mark activity on transcriptional state. Furthermore, we conducted in silico histone mark perturbation assays, uncovering functional and disease-related loci and demonstrating frameworks for using chromatin deep learning models to generate new biological insight.
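The in silico perturbation assays can be pictured with a minimal sketch: zero out one histone mark's track at a locus and record how a trained model's predicted expression shifts. The tensor layout, the seven-mark/200-bin sizes, and the stand-in mean-pooling model below are assumptions for illustration only.

```python
# Sketch of an in silico histone mark perturbation under a trained model.
import torch

def perturb_mark(model, tracks, mark_idx):
    # tracks: (1, n_marks, n_bins) binned histone mark signal at a locus
    baseline = model(tracks)
    knocked = tracks.clone()
    knocked[:, mark_idx, :] = 0.0             # remove one mark entirely
    return (model(knocked) - baseline).item() # predicted expression shift

# Stand-in model for demonstration: mean pooling over marks and bins.
model = lambda t: t.mean(dim=(1, 2))
tracks = torch.rand(1, 7, 200)                # 7 marks, 200 bins (assumed)
print(perturb_mark(model, tracks, mark_idx=0))
```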
PMID:39660643 | DOI:10.1093/nar/gkae1212
Intelligent optoelectrowetting digital microfluidic system for real-time selective parallel manipulation of biological droplet arrays
Lab Chip. 2024 Dec 11. doi: 10.1039/d4lc00804a. Online ahead of print.
ABSTRACT
Optoelectrowetting technology generates virtual electrodes to manipulate droplets by projecting optical patterns onto a photoconductive layer. This approach avoids the complex physical circuitry of dielectric wetting chips and overcomes their inability to reconfigure electrodes. However, current implementations rely on operators to manually position the droplets, draw optical patterns, and preset droplet movement paths. They lack real-time feedback on droplet state and the ability to control droplets independently, which can lead to droplet miscontrol and contamination. This paper combines optoelectrowetting with deep learning algorithms, integrating software with a photoelectric detection platform to develop an intelligent optoelectrowetting control system. First, a target detection algorithm identifies droplet characteristics in real time and automatically generates virtual electrodes to control movement. Simultaneously, a tracking algorithm outputs trajectories and ID information for efficient tracking of droplet arrays. The results show that the system can automatically control the movement and fusion of multiple droplets in parallel and realize the automatic arrangement and storage of disordered droplet arrays without any additional electrodes or sensing devices. Additionally, through the system's automated control, cell suspensions can be precisely cultured in a specified medium according to experimental requirements, with growth trends consistent with those observed in well plates, significantly enhancing experimental flexibility and accuracy. In this paper, we propose an intelligent method for the automated manipulation of discrete droplets. This method could play a crucial role in advancing the applications of digital microfluidic technology in biomedicine and other fields.
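For a flavor of the tracking step, the sketch below keeps droplet IDs stable across frames by greedily matching each detected centroid to the nearest centroid from the previous frame. The paper's system pairs a learned detector with a tracking algorithm; this pure-Python greedy matcher and its distance threshold are illustrative assumptions.

```python
# Greedy nearest-centroid ID assignment across two frames (toy tracker).
import numpy as np

def assign_ids(prev, curr, max_dist=20.0):
    """prev: {id: (x, y)}; curr: list of (x, y); returns {id: (x, y)}."""
    out, next_id = {}, max(prev, default=0) + 1
    free = dict(prev)                          # previous centroids not yet matched
    for c in curr:
        if free:
            i = min(free, key=lambda k: np.hypot(*np.subtract(free[k], c)))
            if np.hypot(*np.subtract(free[i], c)) <= max_dist:
                out[i] = c                     # keep the existing droplet ID
                del free[i]
                continue
        out[next_id] = c                       # unmatched detection: new ID
        next_id += 1
    return out

tracks = assign_ids({1: (10, 10), 2: (50, 50)}, [(12, 11), (80, 80)])
print(tracks)   # {1: (12, 11), 3: (80, 80)} -> droplet 2 lost, droplet 3 new
```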
PMID:39660615 | DOI:10.1039/d4lc00804a
Decoding Depth of Meditation: Electroencephalography Insights From Expert Vipassana Practitioners
Biol Psychiatry Glob Open Sci. 2024 Oct 16;5(1):100402. doi: 10.1016/j.bpsgos.2024.100402. eCollection 2025 Jan.
ABSTRACT
BACKGROUND: Meditation practices have demonstrated numerous psychological and physiological benefits, but capturing the neural correlates of varying meditative depths remains challenging. In this study, we aimed to decode self-reported time-varying meditative depth in expert practitioners using electroencephalography (EEG).
METHODS: Expert Vipassana meditators (n = 34) participated in 2 separate sessions. Participants reported their meditative depth on a personally defined 1 to 5 scale using both traditional probing and a novel spontaneous emergence method. EEG activity and effective connectivity in theta, alpha, and gamma bands were used to predict meditative depth using machine/deep learning, including a novel method that fused source activity and connectivity information.
RESULTS: We achieved significant accuracy in decoding self-reported meditative depth across unseen sessions. The spontaneous emergence method yielded improved decoding performance compared with traditional probing and correlated more strongly with postsession outcome measures. Best performance was achieved by a novel machine learning method that fused spatial, spectral, and connectivity information. Conventional EEG channel-level methods and preselected default mode network regions fell short in capturing the complex neural dynamics associated with varying meditation depths.
CONCLUSIONS: This study demonstrates the feasibility of decoding personally defined meditative depth using EEG. The findings highlight the complex, multivariate nature of neural activity during meditation and introduce spontaneous emergence as an ecologically valid and less obtrusive experiential sampling method. These results have implications for advancing neurofeedback techniques and enhancing our understanding of meditative practices.
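As an illustration of the fused decoding setup described in the Methods, the sketch below concatenates spectral (band power) and connectivity features per trial and decodes the 1 to 5 depth rating across a session split. The feature dimensions, random data, and random forest decoder are placeholders, not the study's fusion model.

```python
# Toy cross-session decoding of meditative depth from fused EEG features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_trials = 200
power = rng.normal(size=(n_trials, 3 * 32))   # theta/alpha/gamma x 32 sources
conn = rng.normal(size=(n_trials, 64))        # flattened connectivity features
X = np.hstack([power, conn])                  # fused feature vector per trial
y = rng.integers(1, 6, size=n_trials)         # self-reported depth, 1-5
session = np.repeat([0, 1], n_trials // 2)    # train on session 0, test on 1

clf = RandomForestClassifier(random_state=0)
clf.fit(X[session == 0], y[session == 0])
print(accuracy_score(y[session == 1], clf.predict(X[session == 1])))
```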
PMID:39660274 | PMC:PMC11629179 | DOI:10.1016/j.bpsgos.2024.100402
IndoHerb: Indonesia medicinal plants recognition using transfer learning and deep learning
Heliyon. 2024 Nov 22;10(23):e40606. doi: 10.1016/j.heliyon.2024.e40606. eCollection 2024 Dec 15.
ABSTRACT
The rich diversity of herbal plants in Indonesia holds immense potential as an alternative resource for traditional healing and ethnobotanical practices. However, dwindling recognition of herbal plants due to modernization poses a significant challenge to preserving this valuable heritage. Accurate identification of these plants is crucial for the continuity of traditional practices and the utilization of their nutritional benefits. Nevertheless, manual identification of herbal plants remains a time-consuming task, demanding expert knowledge and meticulous examination of plant characteristics. In response, computer vision emerges as a promising solution for efficient identification of herbal plants. This research addresses the classification of Indonesian herbal plants through transfer learning with Convolutional Neural Networks (CNNs). To support our study, we curated an extensive dataset of herbal plant images from Indonesia with careful manual selection. We then conducted rigorous data preprocessing and classification using transfer learning with five distinct models: ResNet, DenseNet, VGG, ConvNeXt, and Swin Transformer. Our analysis revealed that ConvNeXt achieved the highest accuracy, at 92.5%. For comparison, a model trained from scratch reached an accuracy of 53.9%. The experimental setup featured the following hyperparameters: the ExponentialLR scheduler with a gamma value of 0.9, a learning rate of 0.001, the Cross-Entropy Loss function, the Adam optimizer, and a training epoch count of 50. These outcomes offer valuable insights and practical implications for the automated identification of Indonesian medicinal plants, contributing not only to the preservation of ethnobotanical knowledge but also to the enhancement of agricultural practices through the cultivation of these valuable resources. The Indonesia Medicinal Plant Dataset utilized in this research is openly accessible at: https://github.com/Salmanim20/indomedicinalplant.
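The reported hyperparameters map directly onto a standard PyTorch training loop. The sketch below wires them up around a torchvision ConvNeXt backbone; the class count and the stand-in data loader are placeholders rather than the released IndoHerb pipeline.

```python
# Transfer-learning setup mirroring the reported hyperparameters:
# Adam, lr 0.001, Cross-Entropy Loss, ExponentialLR (gamma 0.9), 50 epochs.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 100                                    # placeholder class count
model = models.convnext_tiny(weights="DEFAULT")    # ImageNet-pretrained backbone
model.classifier[2] = nn.Linear(model.classifier[2].in_features, n_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

# Stand-in for the real DataLoader over the herbal plant images.
train_loader = [(torch.randn(4, 3, 224, 224), torch.randint(0, n_classes, (4,)))]

for epoch in range(50):                            # reported epoch count
    for images, labels in train_loader:
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()
    scheduler.step()                               # exponential lr decay per epoch
```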
PMID:39660181 | PMC:PMC11629298 | DOI:10.1016/j.heliyon.2024.e40606
Artificial intelligence in respiratory care: perspectives on critical opportunities and challenges
Breathe (Sheff). 2024 Dec 10;20(3):230189. doi: 10.1183/20734735.0189-2023. eCollection 2024 Oct.
ABSTRACT
Artificial intelligence (AI) is transforming respiratory healthcare through a wide range of deep learning and generative tools, and is increasingly integrated into both patients' lives and routine respiratory care. The implications of AI in respiratory care are vast and multifaceted, presenting both promises and uncertainties from the perspectives of clinicians, patients and society. Clinicians contemplate whether AI will streamline or complicate their daily tasks, while patients weigh the potential benefits of personalised self-management support against risks such as data privacy concerns and misinformation. The impact of AI on the clinician-patient relationship remains a pivotal consideration, with the potential to either enhance collaborative care or create depersonalised interactions. Societally, there is an imperative to leverage AI in respiratory care to bridge healthcare disparities, while safeguarding against the widening of inequalities. Strategic efforts to promote transparency and prioritise inclusivity and ease of understanding in algorithm co-design will be crucial in shaping future AI to maximise benefits and minimise risks for all stakeholders.
PMID:39660082 | PMC:PMC11629173 | DOI:10.1183/20734735.0189-2023
Improving genome-scale metabolic models of incomplete genomes with deep learning
iScience. 2024 Nov 7;27(12):111349. doi: 10.1016/j.isci.2024.111349. eCollection 2024 Dec 20.
ABSTRACT
Deciphering microbial metabolism is essential for understanding ecosystem functions. Genome-scale metabolic models (GSMMs) predict metabolic traits from genomic data, but constructing GSMMs for uncultured bacteria is challenging because metagenome-assembled genomes are incomplete, leaving many gaps. We introduce deep neural network guided imputation of reactomes (DNNGIOR), which uses AI to improve gap-filling by learning from the presence and absence of metabolic reactions across diverse bacterial genomes. Key factors for prediction accuracy are (1) the frequency of a reaction across all bacteria and (2) the phylogenetic distance of the query to the training genomes. DNNGIOR predictions achieve an average F1 score of 0.85 for reactions present in over 30% of training genomes. DNNGIOR-guided gap-filling was 14 times more accurate for draft reconstructions and 2-9 times more accurate for curated models than unweighted gap-filling.
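The core idea, predicting a missing reaction from the rest of a genome's reactome, can be sketched as a simple supervised imputation task. The random presence/absence matrix and small MLP below are toy assumptions; DNNGIOR's value comes from training on real pan-bacterial co-occurrence structure.

```python
# Toy reactome imputation: predict one reaction's presence from the rest.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
R = (rng.random((300, 40)) < 0.4).astype(int)   # genomes x reactions (random toy
target = 5                                       # matrix, so scores are ~chance;
X = np.delete(R, target, axis=1)                 # real genomes carry learnable
y = R[:, target]                                 # co-occurrence structure)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X[:250], y[:250])
print("held-out F1:", f1_score(y[250:], clf.predict(X[250:])))
```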
PMID:39660058 | PMC:PMC11629236 | DOI:10.1016/j.isci.2024.111349
Artificial intelligence algorithms for real-time detection of colorectal polyps during colonoscopy: a review
Am J Cancer Res. 2024 Nov 15;14(11):5456-5470. doi: 10.62347/BZIZ6358. eCollection 2024.
ABSTRACT
Colorectal cancer (CRC) is one of the most common cancers worldwide. Early detection and removal of colorectal polyps during colonoscopy are crucial for preventing such cancers. With the development of artificial intelligence (AI) technology, it has become possible to detect and localize colorectal polyps in real time during colonoscopy using computer-aided diagnosis (CAD). This provides endoscopists with a reliable reference and leads to more accurate diagnosis and treatment. This paper reviews AI-based algorithms for real-time detection of colorectal polyps, with a particular focus on deep learning algorithms designed to optimize both efficiency and accuracy. The challenges and prospects of AI-based colorectal polyp detection are also discussed.
PMID:39659923 | PMC:PMC11626263 | DOI:10.62347/BZIZ6358
Clinical application of machine-based deep learning in patients with radiologically presumed adult-type diffuse glioma grades 2 or 3
Neurooncol Adv. 2024 Nov 10;6(1):vdae192. doi: 10.1093/noajnl/vdae192. eCollection 2024 Jan-Dec.
ABSTRACT
BACKGROUND: Radiologically presumed diffuse lower-grade gliomas (dLGGs) are typically non-enhancing or minimally enhancing tumors that appear hyperintense on T2-weighted images. The aim of this study was to test the clinical usefulness of deep learning (DL) for IDH mutation prediction in patients with radiologically presumed dLGG.
METHODS: Three hundred and fourteen patients were retrospectively recruited from 6 neurosurgical departments in Sweden, Norway, France, Austria, and the United States. Collected data included patients' age, sex, tumor molecular characteristics (IDH and 1p/19q), and routine preoperative radiological images. A clinical model was built using multivariable logistic regression with age and tumor location as variables. DL models were built using MRI data only, with 4 DL architectures used in glioma research. In the final validation test, the clinical model and the best DL model were scored on an external validation cohort of 155 patients from the Erasmus Glioma Dataset.
RESULTS: The mean age in the recruited and external cohorts was 45.0 (SD 14.3) and 44.3 years (SD 14.6), respectively. The cohorts were rather similar, except for sex distribution (53.5% vs 64.5% males, P = .03) and IDH status (30.9% vs 12.9% IDH wild-type, P < .01). The area under the curve for the prediction of IDH mutations in the external validation cohort was 0.86, 0.82, and 0.87 for the clinical model, the DL model, and a model combining both models' probabilities, respectively.
CONCLUSIONS: In their current state, when these complex models were applied to our clinical scenario, they did not seem to provide a net gain compared to our baseline clinical model.
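The combination reported in the results can be illustrated by averaging the two models' predicted probabilities; whether the authors averaged or used another combination rule is not stated here, so the averaging and the synthetic probabilities below are assumptions.

```python
# Sketch: combine clinical and DL IDH-mutation probabilities and compare AUCs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 155)                       # IDH status (external cohort size)
p_clin = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 155), 0, 1)   # synthetic clinical model
p_dl = np.clip(y * 0.5 + rng.normal(0.25, 0.25, 155), 0, 1)   # synthetic DL model
p_comb = (p_clin + p_dl) / 2                      # simple probability averaging

for name, p in [("clinical", p_clin), ("DL", p_dl), ("combined", p_comb)]:
    print(name, round(roc_auc_score(y, p), 2))
```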
PMID:39659833 | PMC:PMC11631182 | DOI:10.1093/noajnl/vdae192
Deep learning-based postoperative glioblastoma segmentation and extent of resection evaluation: Development, external validation, and model comparison
Neurooncol Adv. 2024 Nov 16;6(1):vdae199. doi: 10.1093/noajnl/vdae199. eCollection 2024 Jan-Dec.
ABSTRACT
BACKGROUND: The pursuit of automated methods to assess the extent of resection (EOR) in glioblastomas is challenging, requiring precise measurement of residual tumor volume. Many algorithms focus on preoperative scans, making them unsuitable for postoperative studies. Our objective was to develop a deep learning-based model for postoperative segmentation using magnetic resonance imaging (MRI). We also compared our model's performance with other available algorithms.
METHODS: To develop the segmentation model, a training cohort from 3 research institutions and 3 public databases was used. Multiparametric MRI scans with ground truth labels for contrast-enhancing tumor (ET), edema, and surgical cavity served as training data. The models were trained using the MONAI and nnU-Net frameworks. Comparisons were made with currently available segmentation models using an external cohort from a research institution and a public database. Additionally, the model's ability to classify EOR was evaluated using the RANO-Resect classification system. To further validate our best-trained model, an additional independent cohort was used.
RESULTS: The study included 586 scans: 395 for model training, 52 for model comparison, and 139 for independent validation. The nnU-Net framework produced the best model, with median Dice scores of 0.81 for ET, 0.77 for edema, and 0.81 for surgical cavities. Our best-trained model classified patients into maximal and submaximal resection categories with 96% accuracy in the model comparison dataset and 84% in the independent validation cohort.
CONCLUSIONS: Our nnU-Net-based model outperformed other algorithms in both segmentation and EOR classification tasks, providing a freely accessible tool with promising clinical applicability.
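Downstream of segmentation, the EOR call reduces to a volume calculation, sketched below. The 95% cutoff separating maximal from submaximal resection is an illustrative assumption rather than the RANO-Resect definition verbatim.

```python
# Sketch: extent of resection from pre- and postoperative tumor volumes.
def classify_eor(preop_volume_ml, residual_volume_ml, cutoff=0.95):
    """EOR = resected fraction of the preoperative tumor volume.

    The 95% threshold for 'maximal' is an assumed illustrative cutoff.
    """
    eor = 1.0 - residual_volume_ml / preop_volume_ml
    return ("maximal" if eor >= cutoff else "submaximal"), round(eor, 3)

print(classify_eor(30.0, 0.5))   # ('maximal', 0.983)
print(classify_eor(30.0, 4.0))   # ('submaximal', 0.867)
```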
PMID:39659831 | PMC:PMC11631186 | DOI:10.1093/noajnl/vdae199
A multimodal travel route recommendation system leveraging visual Transformers and self-attention mechanisms
Front Neurorobot. 2024 Nov 26;18:1439195. doi: 10.3389/fnbot.2024.1439195. eCollection 2024.
ABSTRACT
INTRODUCTION: With the rapid development of the tourism industry, the demand for accurate and personalized travel route recommendations has significantly increased. However, traditional methods often fail to effectively integrate visual and sequential information, leading to recommendations that are both less accurate and less personalized.
METHODS: This paper introduces SelfAM-Vtrans, a novel algorithm that leverages multimodal data, combining visual Transformers, LSTMs, and self-attention mechanisms, to enhance the accuracy and personalization of travel route recommendations. SelfAM-Vtrans integrates visual and sequential information by employing a visual Transformer to extract features from travel images, capturing the spatial relationships within them. Concurrently, a Long Short-Term Memory (LSTM) network encodes sequential data to capture the temporal dependencies within travel sequences. To merge these two modalities effectively, a self-attention mechanism fuses the visual features and sequential encodings, thoroughly accounting for their interdependencies. Based on this fused representation, a classification or regression model is trained on real travel datasets to recommend optimal travel routes.
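A minimal sketch of this fusion design, with all dimensions and the single attention layer as assumptions rather than the SelfAM-Vtrans implementation, might look like this:

```python
# ViT image tokens + LSTM sequence encodings fused via self-attention.
import torch
import torch.nn as nn

class FusionRecommender(nn.Module):
    def __init__(self, d=256, n_routes=10):
        super().__init__()
        self.img_proj = nn.Linear(768, d)        # project ViT token features to d
        self.seq_enc = nn.LSTM(64, d, batch_first=True)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, n_routes)

    def forward(self, img_tokens, seq):
        v = self.img_proj(img_tokens)            # (B, T_img, d)
        s, _ = self.seq_enc(seq)                 # (B, T_seq, d)
        fused = torch.cat([v, s], dim=1)         # joint token sequence
        fused, _ = self.attn(fused, fused, fused)  # self-attention fusion
        return self.head(fused.mean(dim=1))      # route scores

model = FusionRecommender()
logits = model(torch.randn(2, 16, 768), torch.randn(2, 5, 64))
print(logits.shape)   # torch.Size([2, 10])
```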
RESULTS AND DISCUSSION: The algorithm was rigorously evaluated through experiments conducted on real-world travel datasets, and its performance was benchmarked against other route recommendation methods. The results demonstrate that SelfAM-Vtrans significantly outperforms traditional approaches in terms of both recommendation accuracy and personalization. By comprehensively incorporating both visual and sequential data, this method offers travelers more tailored and precise route suggestions, thereby enriching the overall travel experience.
PMID:39659756 | PMC:PMC11628496 | DOI:10.3389/fnbot.2024.1439195
Spontaneous breaking of symmetry in overlapping cell instance segmentation using diffusion models
Biol Methods Protoc. 2024 Nov 9;9(1):bpae084. doi: 10.1093/biomethods/bpae084. eCollection 2024.
ABSTRACT
Instance segmentation is the task of assigning unique identifiers to individual objects in images. Solving this task requires breaking the inherent symmetry whereby semantically similar objects must result in distinct outputs. Deep learning algorithms bypass this symmetry breaking by training specialized predictors or by utilizing intermediate label representations. However, many of these approaches break down when faced with overlapping labels, which are ubiquitous in biomedical imaging, for instance when segmenting cell layers. Here, we discuss the reason for this failure and offer a novel approach to instance segmentation based on diffusion models that breaks this symmetry spontaneously. Our method outputs pixel-level instance segmentations matching the performance of models such as Cellpose on the Cellpose fluorescent cell dataset, while also permitting overlapping labels.
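The failure mode with overlapping labels is easy to demonstrate: a single integer label map cannot store two instances that share pixels, as in this tiny numpy example.

```python
# Why overlapping instances break per-pixel integer label maps.
import numpy as np

a = np.zeros((5, 5), bool); a[1:4, 1:4] = True   # instance 1
b = np.zeros((5, 5), bool); b[2:5, 2:5] = True   # instance 2, overlaps 1

label_map = np.zeros((5, 5), int)
label_map[a] = 1
label_map[b] = 2                                  # overwrites the shared pixels
overlap = a & b
print(np.unique(label_map[overlap]))              # [2] -> instance 1 lost here

# Emitting one mask per instance (as a per-instance sampling approach can)
# lets both a and b keep the shared pixels.
```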
PMID:39659670 | PMC:PMC11631529 | DOI:10.1093/biomethods/bpae084
Deep learning and transfer learning for brain tumor detection and classification
Biol Methods Protoc. 2024 Nov 19;9(1):bpae080. doi: 10.1093/biomethods/bpae080. eCollection 2024.
ABSTRACT
Convolutional neural networks (CNNs) are powerful tools that can be trained on image classification tasks and share many structural and functional similarities with biological visual systems and mechanisms of learning. In addition to serving as models of biological systems, CNNs offer the convenient feature of transfer learning, whereby a network trained on one task may be repurposed for training on another, potentially unrelated, task. In this retrospective study of public domain MRI data, we investigate the ability of neural network models to be trained on brain cancer imaging data while introducing a unique camouflage animal detection transfer learning step as a means of enhancing the networks' tumor detection ability. Training on glioma and normal brain MRI data (post-contrast T1-weighted and T2-weighted), we demonstrate the potential of this training strategy to improve neural network classification accuracy. Qualitative metrics such as feature space and DeepDreamImage analysis of the internal states of trained models were also employed, showing improved generalization by the models following camouflage animal transfer learning. Image saliency maps further this investigation by letting us visualize the image regions most important to a network during learning. These methods demonstrate that the networks not only 'look' at the tumor itself when classifying, but also at the tumor's impact on the surrounding tissue in terms of compression and midline shift. The results suggest the networks read brain tumor MRIs in a manner comparable to trained radiologists, while also exhibiting high sensitivity to the subtle structural changes that a tumor produces.
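A gradient-based saliency map of the kind described can be computed in a few lines: backpropagate the top class score to the input and take the per-pixel gradient magnitude. The ResNet stand-in and random input below are assumptions, not the study's trained models.

```python
# Input-gradient saliency: which pixels most affect the top class score?
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()     # stand-in classifier
img = torch.rand(1, 3, 224, 224, requires_grad=True)

score = model(img)[0].max()                      # top class logit
score.backward()                                 # d(score)/d(pixel)
saliency = img.grad.abs().max(dim=1)[0]          # (1, 224, 224) saliency map
print(saliency.shape, float(saliency.max()))
```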
PMID:39659666 | PMC:PMC11631523 | DOI:10.1093/biomethods/bpae080
Adaptive Multicore Dual-Path Fusion Multimodel Extraction of Heterogeneous Features for FAIMS Spectral Analysis
Rapid Commun Mass Spectrom. 2025 Mar;39(5):e9967. doi: 10.1002/rcm.9967.
ABSTRACT
As the application scenarios and detection needs of high-field asymmetric waveform ion mobility spectrometry (FAIMS) grow, deep learning-assisted spectral analysis has become an important way to improve analytical performance and work efficiency. However, a single model generalizes poorly across task types, and a model trained on one batch of spectral data struggles to achieve good results on a substantially different task. To address this problem, this study proposes an adaptive multicore dual-path fusion model for multimodel extraction of heterogeneous features, designed for FAIMS small-sample data analysis. Multimodel feature extraction provides multinetwork complementarity, an adaptive feature fusion module adjusts the size and dimension of heterogeneous features for fusion, and multicore dual-path fusion captures and integrates information across scales and levels. The model performs strongly on complex mixture multiclassification tasks: accuracy, precision, recall, F1-score, and micro-AUC reach 98.11%, 98.66%, 98.33%, 98.30%, and 98.98%, respectively. On a generalization test with untrained xylene isomer data, the corresponding metrics were 96.42%, 96.66%, 96.96%, 96.65%, and 97.60%. The model thus delivers excellent results on preexisting data and generalizes well to untrained data.
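The adaptive fusion idea, projecting heterogeneous backbone features to a shared size and combining them with learned weights, can be sketched as follows; the dimensions and softmax gating are assumptions, not the paper's module.

```python
# Toy adaptive fusion of heterogeneous feature vectors from several backbones.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, in_dims, d=128):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(k, d) for k in in_dims)
        self.gate = nn.Parameter(torch.zeros(len(in_dims)))   # learned branch weights

    def forward(self, feats):                     # list of (B, in_dims[i]) tensors
        z = torch.stack([p(f) for p, f in zip(self.proj, feats)])  # (M, B, d)
        w = torch.softmax(self.gate, dim=0)       # adaptive per-branch weighting
        return (w[:, None, None] * z).sum(dim=0)  # fused (B, d)

fuse = AdaptiveFusion([512, 256, 64])
out = fuse([torch.randn(4, 512), torch.randn(4, 256), torch.randn(4, 64)])
print(out.shape)   # torch.Size([4, 128])
```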
PMID:39658821 | DOI:10.1002/rcm.9967
FlavorMiner: a machine learning platform for extracting molecular flavor profiles from structural data
J Cheminform. 2024 Dec 10;16(1):140. doi: 10.1186/s13321-024-00935-9.
ABSTRACT
Flavor is the main factor driving consumers' acceptance of food products. However, tracking the biochemistry of flavor is a formidable challenge due to the complexity of food composition. Current methodologies for linking individual molecules to flavor in foods and beverages are expensive and time-consuming. Predictive models based on machine learning (ML) are emerging as an alternative to speed up this process. Nonetheless, the optimal approach to predicting the flavor features of molecules remains elusive. In this work we present FlavorMiner, an ML-based multilabel flavor predictor. FlavorMiner seamlessly integrates different combinations of algorithms and mathematical representations, augmented with class balance strategies to address the inherent class imbalance of the input dataset. Notably, Random Forest and K-Nearest Neighbors combined with Extended Connectivity Fingerprints and RDKit molecular descriptors consistently outperform other combinations in most cases. Resampling strategies surpass weight balance methods in mitigating bias associated with class imbalance. FlavorMiner exhibits remarkable accuracy, with an average ROC AUC score of 0.88. The algorithm was used to analyze cocoa metabolomics data, unveiling its potential to help extract valuable insights from intricate food metabolomics data. FlavorMiner can be used for flavor mining in any food product, drawing from a diverse training dataset that spans over 934 distinct food products. Scientific contribution: FlavorMiner is an advanced machine learning (ML)-based tool designed to predict molecular flavor features with high accuracy and efficiency, addressing the complexity of food metabolomics. By leveraging robust algorithmic combinations paired with mathematical representations, FlavorMiner achieves high predictive performance. Applied to cocoa metabolomics, FlavorMiner demonstrated its capacity to extract meaningful insights, showcasing its versatility for flavor analysis across diverse food products. This study underscores the transformative potential of ML in accelerating flavor biochemistry research, offering a scalable solution for the food and beverage industry.
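The best-performing combination reported above can be reproduced in outline with RDKit and scikit-learn: Morgan (Extended Connectivity) fingerprints feeding a multilabel Random Forest. The molecules and flavor labels below are toy placeholders, not the FlavorMiner training set.

```python
# ECFP (Morgan) fingerprints + multilabel Random Forest, in outline.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "CC(=O)OCC", "c1ccccc1O", "CC(C)CC=O"]   # toy molecules
labels = np.array([[1, 0], [1, 1], [0, 1], [1, 0]])        # e.g. [sweet, bitter]

def ecfp(smi, n_bits=2048):
    mol = Chem.MolFromSmiles(smi)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits))

X = np.vstack([ecfp(s) for s in smiles])
clf = RandomForestClassifier(random_state=0).fit(X, labels)  # multilabel RF
print(clf.predict([ecfp("CCCO")]))                           # predicted flavor tags
```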
PMID:39658805 | DOI:10.1186/s13321-024-00935-9
Evaluation of the mandibular canal and the third mandibular molar relationship by CBCT with a deep learning approach
Oral Radiol. 2024 Dec 11. doi: 10.1007/s11282-024-00793-z. Online ahead of print.
ABSTRACT
OBJECTIVE: The mandibular canal (MC) houses the inferior alveolar nerve. Extraction of the mandibular third molar (MM3) is a common dental surgery, often complicated by nerve damage. CBCT is the most effective imaging method for assessing the relationship between the MM3 and the MC. With advances in artificial intelligence, deep learning has shown promising results in dentistry. The aim of this study is to evaluate the MC-MM3 relationship on CBCT using a deep learning technique, and to automatically segment the impacted mandibular third molar, the mandibular canal, and the mental and mandibular foramina.
METHODS: This retrospective study analyzed CBCT data from 300 patients. Segmentation was used for labeling, dividing the data into training (n = 270) and test (n = 30) sets. The nnU-NetV2 architecture was employed to develop an optimal deep learning model. The model's success was validated using the test set, with metrics including accuracy, sensitivity, precision, Dice score, Jaccard index, and AUC.
RESULTS: For the MM3 annotated on CBCT, accuracy was 0.99, sensitivity 0.90, precision 0.85, Dice score 0.85, Jaccard index 0.78, and AUC 0.95. For the MC, accuracy was 0.99, sensitivity 0.75, precision 0.78, Dice score 0.76, Jaccard index 0.62, and AUC 0.88. For the mental foramen, accuracy was 0.99, sensitivity 0.64, precision 0.66, Dice score 0.64, Jaccard index 0.57, and AUC 0.82. For the mandibular foramen, accuracy was 0.99, sensitivity 0.79, precision 0.68, Dice score 0.71, and AUC 0.90. In evaluating the MM3-MC relationship, the model showed 80% agreement with observer assessments.
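All of the reported per-structure metrics derive from voxel-level confusion counts between predicted and ground-truth masks, as in this small sketch with random stand-in masks.

```python
# Segmentation metrics from voxel-level confusion counts (toy masks).
import numpy as np

rng = np.random.default_rng(0)
gt = rng.random((32, 32, 32)) < 0.1              # ground-truth mask
pred = gt ^ (rng.random(gt.shape) < 0.02)        # prediction with ~2% voxel errors

tp = np.sum(pred & gt); fp = np.sum(pred & ~gt)
fn = np.sum(~pred & gt); tn = np.sum(~pred & ~gt)
print("accuracy:   ", (tp + tn) / gt.size)
print("sensitivity:", tp / (tp + fn))
print("precision:  ", tp / (tp + fp))
print("Dice:       ", 2 * tp / (2 * tp + fp + fn))
print("Jaccard:    ", tp / (tp + fp + fn))
```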
CONCLUSION: The nnU-NetV2 deep learning architecture reliably identifies the MC-MM3 relationship in CBCT images, aiding in diagnosis, surgical planning, and complication prediction.
PMID:39658743 | DOI:10.1007/s11282-024-00793-z
Evaluating deep learning and radiologist performance in volumetric prostate cancer analysis with biparametric MRI and histopathologically mapped slides
Abdom Radiol (NY). 2024 Dec 11. doi: 10.1007/s00261-024-04734-6. Online ahead of print.
NO ABSTRACT
PMID:39658736 | DOI:10.1007/s00261-024-04734-6
Artificial intelligence-guided design of lipid nanoparticles for pulmonary gene therapy
Nat Biotechnol. 2024 Dec 10. doi: 10.1038/s41587-024-02490-y. Online ahead of print.
ABSTRACT
Ionizable lipids are a key component of lipid nanoparticles, the leading nonviral messenger RNA delivery technology. Here, to advance the identification of ionizable lipids beyond current methods, which rely on experimental screening and/or rational design, we introduce lipid optimization using neural networks, a deep-learning strategy for ionizable lipid design. We created a dataset of >9,000 lipid nanoparticle activity measurements and used it to train a directed message-passing neural network for prediction of nucleic acid delivery with diverse lipid structures. Lipid optimization using neural networks predicted RNA delivery in vitro and in vivo and extrapolated to structures divergent from the training set. We evaluated 1.6 million lipids in silico and identified two structures, FO-32 and FO-35, with local mRNA delivery to the mouse muscle and nasal mucosa. FO-32 matched the state of the art for nebulized mRNA delivery to the mouse lung, and both FO-32 and FO-35 efficiently delivered mRNA to ferret lungs. Overall, this work shows the utility of deep learning for improving nanoparticle delivery.
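The directed message-passing idea behind the network can be sketched in a few lines: each directed bond carries a hidden state updated from the bonds flowing into its source atom, excluding its own reverse. The toy graph, dimensions, and single shared linear layer below are simplifications; the real model adds chemical featurization, a readout, and training on the lipid nanoparticle activity data.

```python
# Highly simplified directed message passing on a toy molecular graph.
import torch
import torch.nn as nn

bonds = [(0, 1), (1, 0), (1, 2), (2, 1)]        # 3-atom chain, both directions
d = 16
h = {b: torch.randn(d) for b in bonds}          # hidden state per directed bond
W = nn.Linear(d, d)                             # shared message transform

for _ in range(3):                              # message-passing iterations
    new_h = {}
    for (u, v) in bonds:
        # Messages come from bonds (w, u) into u, excluding the reverse bond (v, u).
        incoming = [h[(w, x)] for (w, x) in bonds if x == u and w != v]
        m = sum(incoming, torch.zeros(d))
        new_h[(u, v)] = torch.relu(W(m))
    h = new_h

atom0 = sum(h[(w, x)] for (w, x) in bonds if x == 0)   # simple atom-level readout
print(atom0.shape)   # torch.Size([16])
```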
PMID:39658727 | DOI:10.1038/s41587-024-02490-y