Deep learning

Application of deep ensemble learning for palm disease detection in smart agriculture

Wed, 2024-09-25 06:00

Heliyon. 2024 Aug 29;10(17):e37141. doi: 10.1016/j.heliyon.2024.e37141. eCollection 2024 Sep 15.

ABSTRACT

Agriculture has notably become one of the fields experiencing intensive digital transformation. Leveraging state-of-the-art techniques in this domain has provided numerous advantages for agricultural activities. Deep learning (DL) algorithms have proven beneficial in addressing various agricultural challenges. This study presents a comprehensive investigation into applying DL models for palm disease detection and classification in the context of smart agriculture. The research aims to address the limitations observed in previous studies and improve the robustness and generalizability of the results. To achieve this, a two-stage optimization methodology is employed. First, transfer learning and fine-tuning techniques are applied using various pre-trained deep neural network models. The experiments show promising results, with all models achieving high accuracy rates during training and validation. Furthermore, their performance on unseen test data is also assessed to ensure practical applicability. The top-performing models are MobileNetV2 (92.48 %), ResNet (92.42 %), ResNetRS50 (92.30 %), and DenseNet121 (92.01 %). Second, a deep ensemble learning approach is applied to enhance the models' generalization capability further. The best-performing models with different criteria are combined using the ensemble technique, resulting in remarkable improvements in disease detection tasks. DELM1 emerges as the most successful ensemble model, achieving an ROC AUC Score of 99 %. This study demonstrates the effectiveness of deep ensemble learning models in palm disease detection and classification for smart agriculture applications. The findings contribute to advancing disease detection systems and emphasize the potential of ensemble learning. The study provides valuable insights for future research, guiding the application of DL techniques to address critical agricultural challenges and improve crop health monitoring systems. 
Another contribution is combining various plant diseases and insect pest classes using diverse datasets. A comprehensive classification system is achieved by considering different disease classes and stages within the white scale category, improving the model's robustness.
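The ensemble step described above can be illustrated with a minimal soft-voting sketch. The probabilities below are hypothetical, and the paper does not specify DELM1's combination rule, so plain averaging of class probabilities is assumed:

```python
import numpy as np

# Hypothetical class-probability outputs of three fine-tuned base models
# for four test images over three disease classes (rows sum to 1).
p_mobilenet = np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.3, 0.3, 0.4],
                        [0.6, 0.3, 0.1]])
p_resnet    = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.7, 0.1],
                        [0.2, 0.2, 0.6],
                        [0.5, 0.4, 0.1]])
p_densenet  = np.array([[0.8, 0.1, 0.1],
                        [0.1, 0.7, 0.2],
                        [0.1, 0.3, 0.6],
                        [0.7, 0.2, 0.1]])

# Soft voting: average the predicted probabilities, then take the argmax.
p_ensemble = np.mean([p_mobilenet, p_resnet, p_densenet], axis=0)
labels = p_ensemble.argmax(axis=1)
print(labels)  # -> [0 1 2 0]
```

Averaging tends to cancel uncorrelated errors of the base models, which is one way ensembles improve generalization over any single member.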

PMID:39319161 | PMC:PMC11419929 | DOI:10.1016/j.heliyon.2024.e37141

Categories: Literature Watch

Incorporating patient-specific information for the development of rectal tumor auto-segmentation models for online adaptive magnetic resonance Image-guided radiotherapy

Wed, 2024-09-25 06:00

Phys Imaging Radiat Oncol. 2024 Sep 16;32:100648. doi: 10.1016/j.phro.2024.100648. eCollection 2024 Oct.

ABSTRACT

BACKGROUND AND PURPOSE: In online adaptive magnetic resonance image (MRI)-guided radiotherapy (MRIgRT), manual contouring of rectal tumors on daily images is labor-intensive and time-consuming. Automation of this task is complex due to substantial variation in tumor shape and location between patients. The aim of this work was to investigate different approaches of propagating patient-specific prior information to the online adaptive treatment fractions to improve deep-learning based auto-segmentation of rectal tumors.

MATERIALS AND METHODS: 243 T2-weighted MRI scans of 49 rectal cancer patients treated on the 1.5T MR-Linear accelerator (MR-Linac) were utilized to train models to segment rectal tumors. As a benchmark, an MRI_only auto-segmentation model was trained. Three approaches to including a patient-specific prior were studied: 1. including the segmentations of fraction 1 as an extra input channel for the auto-segmentation of subsequent fractions (MRI+prior), 2. fine-tuning of the MRI_only model to fraction 1 (PSF_1), and 3. fine-tuning of the MRI_only model on all earlier fractions (PSF_cumulative). Auto-segmentations were compared to the manual segmentation using geometric similarity metrics. Clinical impact was assessed by evaluating post-treatment target coverage.

RESULTS: All patient-specific methods outperformed the MRI_only segmentation approach. Median 95th percentile Hausdorff distances (95HD) were 22.0 (range: 6.1-76.6) mm for MRI_only segmentation, 9.9 (range: 2.5-38.2) mm for MRI+prior segmentation, 6.4 (range: 2.4-17.8) mm for PSF_1, and 4.8 (range: 1.7-26.9) mm for PSF_cumulative. PSF_cumulative was found to be superior to PSF_1 from fraction 4 onward (p = 0.014).
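The 95th percentile Hausdorff distance reported above can be computed for point-set contours roughly as follows. This is a generic sketch on synthetic 2D contours, not the authors' implementation:

```python
import numpy as np

def hd95(a, b):
    """Symmetric 95th percentile Hausdorff distance between two point sets.

    a, b: (N, 2) and (M, 2) arrays of contour coordinates (e.g. in mm).
    """
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances: for each point, distance to the nearest point of the other set.
    d_ab = d.min(axis=1)   # a -> b
    d_ba = d.min(axis=0)   # b -> a
    # 95th percentile of all directed nearest-neighbour distances.
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Two synthetic circular contours with slightly different radii.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ref = np.c_[10 * np.cos(t), 10 * np.sin(t)]   # radius 10 mm
seg = np.c_[12 * np.cos(t), 12 * np.sin(t)]   # radius 12 mm
print(round(hd95(ref, seg), 2))  # -> 2.0
```

Unlike the maximum Hausdorff distance, the 95th percentile variant discards the worst 5% of boundary deviations, making it robust to single outlier points on a contour.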

CONCLUSION: Patient-specific fine-tuning of automatically segmented rectal tumors, using images and segmentations from all previous fractions, yields superior quality compared to other auto-segmentation approaches.

PMID:39319094 | PMC:PMC11421252 | DOI:10.1016/j.phro.2024.100648

Categories: Literature Watch

Improving remote sensing scene classification using dung Beetle optimization with enhanced deep learning approach

Wed, 2024-09-25 06:00

Heliyon. 2024 Aug 30;10(18):e37154. doi: 10.1016/j.heliyon.2024.e37154. eCollection 2024 Sep 30.

ABSTRACT

Remote sensing (RS) scene classification has received significant attention because of its extensive use by the RS community. Scene classification in satellite images has widespread uses in remote surveillance, environmental observation, remote scene analysis, urban planning, and earth observation. Because of the immense benefits of the land scene classification task, various approaches have been presented recently for automatically classifying land scenes from remote sensing images (RSIs). Several approaches based on convolutional neural networks (CNNs) have been presented for classifying challenging RS scenes; however, they can only partially capture the context of RSIs due to problematic texture, cluttered context, the tiny size of objects, and considerable differences in object scale. This article designs a Remote Sensing Scene Classification using Dung Beetle Optimization with Enhanced Deep Learning (RSSC-DBOEDL) approach. The purpose of the RSSC-DBOEDL technique is to categorize the different varieties of scenes present in RSIs. In the presented RSSC-DBOEDL technique, an enhanced MobileNet model is first deployed as a feature extractor. The DBO method is then applied for hyperparameter tuning of the enhanced MobileNet model. Finally, the RSSC-DBOEDL technique uses a multi-head attention-based long short-term memory (MHA-LSTM) model to classify the scenes in the RSI. The RSSC-DBOEDL approach was evaluated on benchmark RSI datasets, achieving accuracies of 98.75 % and 95.07 % on the UC Merced and EuroSAT datasets, respectively, outperforming existing methods across distinct measures.

PMID:39318799 | PMC:PMC11420495 | DOI:10.1016/j.heliyon.2024.e37154

Categories: Literature Watch

Development and validation of a deep learning algorithm for the prediction of serum creatinine in critically ill patients

Wed, 2024-09-25 06:00

JAMIA Open. 2024 Sep 19;7(3):ooae097. doi: 10.1093/jamiaopen/ooae097. eCollection 2024 Oct.

ABSTRACT

OBJECTIVES: Serum creatinine (SCr) is the primary biomarker for assessing kidney function; however, it may lag behind true kidney function, especially in instances of acute kidney injury (AKI). The objective of the work is to develop Nephrocast, a deep-learning model to predict next-day SCr in adult patients treated in the intensive care unit (ICU).

MATERIALS AND METHODS: Nephrocast was trained and validated, temporally and prospectively, using electronic health record data of adult patients admitted to the ICU in the University of California San Diego Health (UCSDH) between January 1, 2016 and June 22, 2024. The model features consisted of demographics, comorbidities, vital signs and laboratory measurements, and medications. Model performance was evaluated by mean absolute error (MAE) and root-mean-square error (RMSE) and compared against the prediction day's SCr as a reference.

RESULTS: A total of 28 191 encounters met the eligibility criteria, corresponding to 105 718 patient-days. The median (interquartile range [IQR]) MAE and RMSE in the internal test set were 0.09 (0.085-0.09) mg/dL and 0.15 (0.146-0.152) mg/dL, respectively. In the prospective validation, the MAE and RMSE were 0.09 mg/dL and 0.14 mg/dL, respectively. The model's performance was superior to the reference SCr.
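The reference comparator above is the prediction day's SCr, i.e. a persistence forecast: tomorrow's value is assumed to equal today's. A minimal sketch of how MAE and RMSE are compared against that baseline, using illustrative synthetic numbers rather than study data:

```python
import numpy as np

# Synthetic SCr values (mg/dL): today's measurement, tomorrow's true value,
# and a hypothetical model prediction for tomorrow.
scr_today = np.array([0.9, 1.2, 2.5, 1.0, 3.1])
scr_next  = np.array([1.0, 1.1, 3.0, 1.0, 2.8])
scr_pred  = np.array([0.95, 1.15, 2.85, 1.0, 2.9])

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

# Persistence baseline: predict tomorrow's SCr with today's measurement.
print(f"model    MAE={mae(scr_next, scr_pred):.3f} RMSE={rmse(scr_next, scr_pred):.3f}")
print(f"baseline MAE={mae(scr_next, scr_today):.3f} RMSE={rmse(scr_next, scr_today):.3f}")
```

Beating persistence is the relevant bar here: since SCr changes slowly in most patients, a model that merely echoes today's value can look deceptively accurate.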

DISCUSSION AND CONCLUSION: Our model demonstrated good performance in predicting next-day SCr by leveraging clinical data routinely collected in the ICU. The model could aid clinicians in identifying patients at high risk of AKI, predicting AKI trajectory, and informing the dosing of renally eliminated drugs.

PMID:39318762 | PMC:PMC11421473 | DOI:10.1093/jamiaopen/ooae097

Categories: Literature Watch

Computed tomography-based radial endobronchial ultrasound image simulation of peripheral pulmonary lesions using deep learning

Wed, 2024-09-25 06:00

Endosc Ultrasound. 2024 Jul-Aug;13(4):239-247. doi: 10.1097/eus.0000000000000079. Epub 2024 Aug 20.

ABSTRACT

BACKGROUND AND OBJECTIVES: Radial endobronchial ultrasound (R-EBUS) plays an important role during transbronchial sampling of peripheral pulmonary lesions (PPLs). However, existing navigational bronchoscopy systems provide no guidance for R-EBUS. To guide intraoperative R-EBUS probe manipulation, we aimed to simulate R-EBUS images of PPLs from preoperative computed tomography (CT) data using deep learning.

MATERIALS AND METHODS: Preoperative CT and intraoperative ultrasound data of PPLs in 250 patients who underwent R-EBUS-guided transbronchial lung biopsy were retrospectively collected. Two-dimensional CT sections perpendicular to the biopsy path were transformed into ultrasonic reflection and transmission images using an ultrasound propagation model to obtain the initial simulated R-EBUS images. A cycle generative adversarial network was trained to improve the realism of initial simulated images. Objective and subjective indicators were used to evaluate the similarity between real and simulated images.

RESULTS: Wasserstein distances showed that utilizing the cycle generative adversarial network significantly improved the similarity between real and simulated R-EBUS images. There was no statistically significant difference in the long axis, short axis, and area between real and simulated lesions (all P > 0.05). Based on the experts' evaluation, a median similarity score of ≥4 on a 5-point scale was obtained for lesion size, shape, margin, internal echoes, and overall similarity.
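The Wasserstein distances above quantify how close the simulated image distribution is to the real one. The paper does not specify the exact computation; for one-dimensional samples of equal size (e.g. pixel-intensity distributions), the 1-Wasserstein distance reduces to the mean absolute difference of the sorted samples, sketched here on synthetic intensities:

```python
import numpy as np

def wasserstein_1d(a, b):
    """1-Wasserstein distance between two equal-sized 1D samples.

    For empirical distributions with the same number of samples, W1 equals
    the mean absolute difference of the sorted samples.
    """
    a, b = np.sort(a), np.sort(b)
    return np.mean(np.abs(a - b))

rng = np.random.default_rng(0)
real  = rng.normal(loc=100, scale=15, size=10000)   # "real" pixel intensities
close = rng.normal(loc=102, scale=15, size=10000)   # similar distribution
far   = rng.normal(loc=140, scale=30, size=10000)   # dissimilar distribution

print(wasserstein_1d(real, close) < wasserstein_1d(real, far))  # -> True
```

A smaller distance after GAN refinement, as reported above, indicates the refined simulations are statistically closer to real R-EBUS images.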

CONCLUSIONS: Simulated R-EBUS images of PPLs generated by our method can closely mimic the corresponding real images, demonstrating the potential of our method to provide guidance for intraoperative R-EBUS probe manipulation.

PMID:39318751 | PMC:PMC11419460 | DOI:10.1097/eus.0000000000000079

Categories: Literature Watch

Reasoning cartographic knowledge in deep learning-based map generalization with explainable AI

Wed, 2024-09-25 06:00

Int J Geogr Inf Sci. 2024 Jun 20;38(10):2061-2082. doi: 10.1080/13658816.2024.2369535. eCollection 2024.

ABSTRACT

Cartographic map generalization involves complex rules, and full automation has still not been achieved despite many efforts over the past few decades. Pioneering studies show that some map generalization tasks can be partially automated by deep neural networks (DNNs). However, DNNs have been used as black-box models in previous studies. We argue that integrating explainable AI (XAI) into a DL-based map generalization process can give more insight for developing and refining DNNs by revealing what cartographic knowledge is actually learned. Following an XAI framework for an empirical case study, visual analytics and quantitative experiments were applied to explain the importance of input features for the predictions of a pre-trained ResU-Net model. This case study finds that the XAI-based visualization results can easily be interpreted by human experts. With the proposed XAI workflow, we further find that the DNN pays more attention to building boundaries than to the interior parts of buildings. We thus suggest that boundary intersection over union is a better evaluation metric than the commonly used intersection over union for assessing raster-based map generalization results. Overall, this study shows the necessity and feasibility of integrating XAI into future DL-based map generalization development frameworks.
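The boundary-IoU metric suggested above can be sketched on binary building masks: restrict both masks to their boundary pixels before computing intersection over union. A generic sketch (not the authors' implementation) showing that a one-pixel misalignment is penalized far more heavily by boundary IoU than by plain IoU:

```python
import numpy as np

def boundary(mask):
    """Boundary pixels of a binary mask: foreground pixels with at least one
    background 4-neighbour (the image border counts as background)."""
    padded = np.pad(mask, 1, constant_values=0)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

def iou(a, b):
    return (a & b).sum() / max((a | b).sum(), 1)

# A ground-truth square building and a prediction shifted by one pixel.
gt = np.zeros((12, 12), dtype=bool)
pred = np.zeros((12, 12), dtype=bool)
gt[2:10, 2:10] = True
pred[3:11, 3:11] = True

print(f"IoU          = {iou(gt, pred):.3f}")                      # 0.620
print(f"boundary IoU = {iou(boundary(gt), boundary(pred)):.3f}")  # 0.037
```

Because generalization quality lives almost entirely in where the building outlines fall, a metric computed only on the boundary band tracks the DNN's actual focus better than area overlap.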

PMID:39318700 | PMC:PMC11418907 | DOI:10.1080/13658816.2024.2369535

Categories: Literature Watch

Clinical and genetic associations of asymmetric apical and septal left ventricular hypertrophy

Wed, 2024-09-25 06:00

Eur Heart J Digit Health. 2024 Aug 9;5(5):591-600. doi: 10.1093/ehjdh/ztae060. eCollection 2024 Sep.

ABSTRACT

AIMS: Increased left ventricular mass has been associated with adverse cardiovascular outcomes, including incident cardiomyopathy and atrial fibrillation. Such associations have been studied in relation to total left ventricular hypertrophy, while the regional distribution of myocardial hypertrophy is extremely variable. The clinical and genetic associations of such variability require further study.

METHODS AND RESULTS: Here, we use deep learning-derived phenotypes of disproportionate patterns of hypertrophy, namely, apical and septal hypertrophy, to study genome-wide and clinical associations in addition to and independent from total left ventricular mass within 35 268 UK Biobank participants. Using polygenic risk score and Cox regression, we quantified the relationship between incident cardiovascular outcomes and genetically determined phenotypes in the UK Biobank. Adjusting for total left ventricular mass, apical hypertrophy is associated with elevated risk for cardiomyopathy and atrial fibrillation. Cardiomyopathy risk was increased for subjects with increased apical or septal mass, even in the absence of global hypertrophy. We identified 17 genome-wide associations for left ventricular mass, 3 unique associations with increased apical mass, and 3 additional unique associations with increased septal mass. An elevated polygenic risk score for apical mass corresponded with an increased risk of cardiomyopathy and implantable cardioverter-defibrillator implantation.
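A polygenic risk score like the one used above is, at its core, a dosage-weighted sum of per-variant effect sizes. A minimal sketch with hypothetical weights and genotypes (not the study's variants):

```python
import numpy as np

# Hypothetical effect sizes (e.g. log hazard ratios) for 3 variants
# associated with apical mass, and allele dosages (0/1/2) for 4 individuals.
weights = np.array([0.12, -0.05, 0.30])
dosages = np.array([[0, 1, 2],
                    [2, 0, 0],
                    [1, 1, 1],
                    [0, 2, 0]])

# The polygenic risk score is the dosage-weighted sum of effect sizes.
prs = dosages @ weights
print(prs)  # -> [ 0.55  0.24  0.37 -0.1 ]
```

In a workflow like the one described, such scores would then enter a Cox regression as predictors of incident cardiomyopathy or defibrillator implantation.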

CONCLUSION: Apical and septal mass may be driven by genes distinct from total left ventricular mass, suggesting unique genetic profiles for patterns of hypertrophy. Focal hypertrophy confers independent and additive risk to incident cardiovascular disease. Our findings emphasize the significance of characterizing distinct subtypes of left ventricular hypertrophy. Further studies are needed in multi-ethnic cohorts.

PMID:39318696 | PMC:PMC11417484 | DOI:10.1093/ehjdh/ztae060

Categories: Literature Watch

Evaluating Performance of Different RNA Secondary Structure Prediction Programs Using Self-cleaving Ribozymes

Tue, 2024-09-24 06:00

Genomics Proteomics Bioinformatics. 2024 Sep 13;22(3):qzae043. doi: 10.1093/gpbjnl/qzae043.

ABSTRACT

Accurate identification of the correct, biologically relevant RNA structures is critical to understanding various aspects of RNA biology, since proper folding represents the key to the functionality of all types of RNA molecules and plays pivotal roles in many essential biological processes. Thus, a plethora of approaches have been developed to predict, identify, or solve RNA structures based on various computational, molecular, genetic, chemical, or physicochemical strategies. Purely computational approaches hold distinct advantages over all other strategies in terms of ease of implementation, time, speed, cost, and throughput, but they strongly underperform in terms of accuracy, which significantly limits their broader application. Nonetheless, the advantages of these methods led to a steady development of multiple in silico RNA secondary structure prediction approaches, including recent deep learning-based programs. Here, we compared the accuracy of predictions of biologically relevant secondary structures of dozens of self-cleaving ribozyme sequences using seven in silico RNA folding prediction tools on tasks of varying complexity. We found that while many programs performed well in relatively simple tasks, their performance varied significantly in more complex RNA folding problems. However, in general, a modern deep learning method outperformed the other programs in predicting RNA secondary structures in the complex tasks, at least for the specific class of sequences tested, suggesting that it may represent the future of RNA structure prediction algorithms.

PMID:39317944 | DOI:10.1093/gpbjnl/qzae043

Categories: Literature Watch

Neural network-assisted humanisation of COVID-19 hamster transcriptomic data reveals matching severity states in human disease

Tue, 2024-09-24 06:00

EBioMedicine. 2024 Aug 31:105312. doi: 10.1016/j.ebiom.2024.105312. Online ahead of print.

ABSTRACT

BACKGROUND: Translating findings from animal models to human disease is essential for dissecting disease mechanisms, developing and testing precise therapeutic strategies. The coronavirus disease 2019 (COVID-19) pandemic has highlighted this need, particularly for models showing disease severity-dependent immune responses.

METHODS: Single-cell transcriptomics (scRNAseq) is well poised to reveal similarities and differences between species at the molecular and cellular level with unprecedented resolution. However, computational methods enabling detailed matching are still scarce. Here, we provide a structured scRNAseq-based approach that we applied to scRNAseq from blood leukocytes originating from humans and hamsters affected with moderate or severe COVID-19.

FINDINGS: Integration of data from patients with COVID-19 with two hamster models that develop moderate (Syrian hamster, Mesocricetus auratus) or severe (Roborovski hamster, Phodopus roborovskii) disease revealed that most cellular states are shared across species. A neural network-based analysis using variational autoencoders quantified the overall transcriptomic similarity across species and severity levels, showing highest similarity between neutrophils of Roborovski hamsters and patients with severe COVID-19, while Syrian hamsters better matched patients with moderate disease, particularly in classical monocytes. We further used transcriptome-wide differential expression analysis to identify which disease stages and cell types display strongest transcriptional changes.

INTERPRETATION: Consistently, hamsters' response to COVID-19 was most similar to humans in monocytes and neutrophils. Disease-linked pathways found in all species specifically related to interferon response or inhibition of viral replication. Analysis of candidate genes and signatures supported the results. Our structured neural network-supported workflow could be applied to other diseases, allowing better identification of suitable animal models with similar pathomechanisms across species.

FUNDING: This work was supported by German Federal Ministry of Education and Research, (BMBF) grant IDs: 01ZX1304B, 01ZX1604B, 01ZX1906A, 01ZX1906B, 01KI2124, 01IS18026B and German Research Foundation (DFG) grant IDs: 14933180, 431232613.

PMID:39317638 | DOI:10.1016/j.ebiom.2024.105312

Categories: Literature Watch

Deep Learning for Distinguishing Mucinous Breast Carcinoma From Fibroadenoma on Ultrasound

Tue, 2024-09-24 06:00

Clin Breast Cancer. 2024 Sep 4:S1526-8209(24)00237-4. doi: 10.1016/j.clbc.2024.09.001. Online ahead of print.

ABSTRACT

PURPOSE: Mucinous breast carcinoma (MBC) tends to be misdiagnosed as fibroadenoma (FA) due to its benign imaging characteristics. We aimed to develop a deep learning (DL) model to differentiate MBC from FA based on ultrasound (US) images. The model could assist radiologists in diagnosing MBC.

METHODS: In this retrospective study, 884 eligible patients (700 FA patients and 184 MBC patients) with 2257 US images were enrolled. The images were randomly divided into a training set (n = 1805 images) and a test set (n = 452 images) in a ratio of 8:2. First, we used the training set to establish the DL, DL+age-cutoff, and DL+age-tree models. Then, we compared the diagnostic performance of the three models to identify the optimal one. Finally, we evaluated the diagnostic performance of radiologists (4 junior and 4 senior) with and without the assistance of the optimal model on the test set.

RESULTS: The DL+age-tree model yielded a higher area under the receiver operating characteristic curve (AUC) than the DL model and the DL+age-cutoff model (0.945 vs. 0.835, P < .001; 0.945 vs. 0.931, P < .001, respectively). With the assistance of the DL+age-tree model, the AUCs of both junior and senior radiologists improved significantly (0.746 to 0.818, P = .010, and 0.827 to 0.860, P = .005, respectively).

CONCLUSIONS: The DL+age-tree model based on US images and age showed excellent performance in differentiating MBC from FA. Moreover, it can effectively improve the performance of radiologists with different levels of experience, which may help reduce the misdiagnosis of MBC.

PMID:39317636 | DOI:10.1016/j.clbc.2024.09.001

Categories: Literature Watch

SELFNet: Denoising Shear Wave Elastography Using Spatial-temporal Fourier Feature Networks

Tue, 2024-09-24 06:00

Ultrasound Med Biol. 2024 Sep 23:S0301-5629(24)00301-6. doi: 10.1016/j.ultrasmedbio.2024.08.004. Online ahead of print.

ABSTRACT

OBJECTIVE: Ultrasound-based shear wave elastography offers estimation of tissue stiffness through analysis of the propagation of a shear wave induced by a stimulus. Displacement or velocity fields during the process can contain noise as a result of the limited number of acquisitions. With advances in physics-informed deep learning, neural networks can approximate a physics field by minimizing the residuals of governing physics equations.

METHODS: In this research, we introduce a shear wave elastography Fourier feature network (SELFNet) using spatial-temporal random Fourier features within a physics-informed neural network framework to estimate and denoise particle displacement signals. The network uses a sparse mapping to increase robustness and incorporates the governing equations for regularization while simultaneously learning the mapping of the shear modulus. The method was evaluated in datasets from tissue-mimicking phantom of lesions and ex vivo tissue.
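The spatial-temporal random Fourier feature mapping at the core of SELFNet projects low-dimensional coordinates through random frequencies before the network sees them, which helps coordinate networks represent high-frequency wave content. A minimal sketch of such a mapping, with the feature count and frequency scale σ chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def fourier_features(coords, n_features=64, sigma=10.0, rng=rng):
    """Map low-dimensional coordinates to random Fourier features.

    coords: (N, d) array, e.g. columns (x, z, t) of a shear wave field.
    Returns (N, 2 * n_features): [cos(2*pi*coords @ B), sin(2*pi*coords @ B)].
    """
    d = coords.shape[1]
    B = rng.normal(0.0, sigma, size=(d, n_features))  # random frequency matrix
    proj = 2.0 * np.pi * coords @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

# Spatial-temporal sample points (x, z, t) on a coarse grid.
pts = np.stack(np.meshgrid(np.linspace(0, 1, 4),
                           np.linspace(0, 1, 4),
                           np.linspace(0, 1e-2, 4),
                           indexing="ij"), axis=-1).reshape(-1, 3)
feats = fourier_features(pts)
print(feats.shape)  # -> (64, 128)
```

In a physics-informed setup, these features would feed a small fully connected network whose output is penalized both for data mismatch and for residuals of the governing wave equation.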

RESULTS: The findings indicate that SELFNet is capable of smoothing out the noise in phantom lesions of different stiffness and sizes, outperforming a reference Gaussian filtering method by 17% in relative ℓ2 error and 45% in reconstruction root-mean-square error. Furthermore, the ablation study suggested that SELFNet can prevent over-fitting through its Fourier feature mapping module. An ex vivo study confirmed its applicability to different types of tissue.

CONCLUSION: The implementation of SELFNet shows promise for shear wave elastography with limited acquisitions. In this context, subject to successful translation, it has the potential to be extended to clinical applications, such as the diagnosis of cancer or liver disease.

PMID:39317627 | DOI:10.1016/j.ultrasmedbio.2024.08.004

Categories: Literature Watch

Deep learning enables accurate brain tissue microstructure analysis based on clinically feasible diffusion magnetic resonance imaging

Tue, 2024-09-24 06:00

Neuroimage. 2024 Sep 22:120858. doi: 10.1016/j.neuroimage.2024.120858. Online ahead of print.

ABSTRACT

Diffusion magnetic resonance imaging (dMRI) allows non-invasive assessment of brain tissue microstructure. Current model-based tissue microstructure reconstruction techniques require a large number of diffusion gradients, which is not clinically feasible due to imaging time constraints, and this has limited the use of tissue microstructure information in clinical settings. Recently, approaches based on deep learning (DL) have achieved promising tissue microstructure reconstruction results using clinically feasible dMRI. However, it remains unclear whether the subtle tissue changes associated with disease or age are properly preserved with DL approaches and whether DL reconstruction results can benefit clinical applications. Here, we provide the first evidence that DL approaches to tissue microstructure reconstruction yield reliable brain tissue microstructure analysis based on clinically feasible dMRI scans. Specifically, we reconstructed tissue microstructure from four different brain dMRI datasets with only 12 diffusion gradients, a clinically feasible protocol, and the neurite orientation dispersion and density imaging (NODDI) and spherical mean technique (SMT) models were considered. With these results we show that disease-related and age-dependent alterations of brain tissue were accurately identified. These findings demonstrate that DL tissue microstructure reconstruction can accurately quantify microstructural alterations in the brain based on clinically feasible dMRI.

PMID:39317273 | DOI:10.1016/j.neuroimage.2024.120858

Categories: Literature Watch

Joint segmentation of tumors in 3D PET-CT images with a network fusing multi-view and multi-modal information

Tue, 2024-09-24 06:00

Phys Med Biol. 2024 Sep 24. doi: 10.1088/1361-6560/ad7f1b. Online ahead of print.

ABSTRACT

Joint segmentation of tumors in PET-CT images is crucial for precise treatment planning. However, current segmentation methods often use addition or concatenation to fuse PET and CT images, which potentially overlooks the nuanced interplay between these modalities. Additionally, these methods often neglect multi-view information that is helpful for more accurately locating and segmenting the target structure. This study aims to address these disadvantages and develop a deep learning-based algorithm for joint segmentation of tumors in PET-CT images.

APPROACH: To address these limitations, we propose the Multi-view Information Enhancement and Multi-modal Feature Fusion Network (MIEMFF-Net) for joint tumor segmentation in three-dimensional PET-CT images. Our model incorporates a dynamic multi-modal fusion strategy to effectively exploit the metabolic and anatomical information from PET and CT images, and a multi-view information enhancement strategy to effectively recover the information lost during upsampling. A Multi-scale Spatial Perception Block is proposed to effectively extract information from different views and reduce redundant interference in the multi-view feature extraction process.

MAIN RESULTS: The proposed MIEMFF-Net achieved a Dice score of 83.93%, a precision of 81.49%, a sensitivity of 87.89%, and an IoU of 69.27% on the STS dataset, and a Dice score of 76.83%, a precision of 86.21%, a sensitivity of 80.73%, and an IoU of 65.15% on the AutoPET dataset.

SIGNIFICANCE: Experimental results demonstrate that MIEMFF-Net outperforms existing state-of-the-art (SOTA) models, which implies potential applications of the proposed method in clinical practice.

PMID:39317235 | DOI:10.1088/1361-6560/ad7f1b

Categories: Literature Watch

Proton dose calculation with LSTM networks in presence of a magnetic field

Tue, 2024-09-24 06:00

Phys Med Biol. 2024 Sep 24. doi: 10.1088/1361-6560/ad7f1e. Online ahead of print.

ABSTRACT

OBJECTIVE: To present a long short-term memory (LSTM) network-based dose calculation method for magnetic resonance (MR)-guided proton therapy.

APPROACH: 35 planning computed tomography (CT) images of prostate cancer patients were collected for Monte Carlo (MC) dose calculation under a perpendicular 1.5 T magnetic field. Proton pencil beams (PB) at three energies (150, 175, and 200 MeV) were simulated (7560 PBs at each energy). A 3D relative stopping power (RSP) cuboid covering the extent of the PB dose was extracted and given as input to the LSTM model, yielding a 3D predicted PB dose. Three single-energy (SE) LSTM models were trained separately on the corresponding 150/175/200 MeV datasets, and a multi-energy (ME) LSTM model with an energy embedding layer was trained on either the combined dataset with three energies or a continuous-energy (CE) dataset with 1 MeV steps ranging from 125 to 200 MeV. For each model, training and validation involved 25 patients, and 10 patients were used for testing. Two single-field uniform-dose prostate treatment plans were optimized and recalculated with MC and the CE model.

RESULTS: Test results of all PBs from the three SE models showed a mean gamma passing rate (2%/2mm, 10% dose cutoff) above 99.9% with an average center-of-mass (COM) discrepancy below 0.4 mm between predicted and simulated trajectories. The ME model showed a mean gamma passing rate exceeding 99.8% and a COM discrepancy of less than 0.5 mm at the three energies. Treatment plan recalculation by the CE model yielded gamma passing rates of 99.6% and 97.9%. The inference time of the models was 9-10 ms per PB.
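The gamma passing rate quoted above combines a dose-difference criterion with a distance-to-agreement criterion. The study used 2%/2 mm with a 10% dose cutoff in 3D; the sketch below reduces the same idea to one dimension with synthetic dose profiles:

```python
import numpy as np

def gamma_pass_rate(x, d_ref, d_eval, dose_tol=0.02, dist_tol=2.0, cutoff=0.10):
    """Simplified 1D global gamma analysis.

    x: positions in mm; d_ref, d_eval: reference and evaluated dose profiles.
    dose_tol: dose criterion as a fraction of the reference maximum (2%).
    dist_tol: distance-to-agreement criterion in mm (2 mm).
    cutoff: ignore reference points below this fraction of the maximum dose.
    """
    d_max = d_ref.max()
    gammas = []
    for xi, di in zip(x, d_ref):
        if di < cutoff * d_max:
            continue  # low-dose points are excluded from the analysis
        # Gamma for one reference point: minimum over all evaluated points.
        g2 = ((x - xi) / dist_tol) ** 2 + ((d_eval - di) / (dose_tol * d_max)) ** 2
        gammas.append(np.sqrt(g2.min()))
    gammas = np.array(gammas)
    return np.mean(gammas <= 1.0)

# Synthetic Gaussian "pencil beam" profile and a 1% perturbed version of it.
x = np.linspace(-20, 20, 401)                 # 0.1 mm spacing
d_ref = np.exp(-x**2 / (2 * 5.0**2))
d_eval = 1.01 * d_ref
print(gamma_pass_rate(x, d_ref, d_eval))  # -> 1.0
```

A point passes (gamma ≤ 1) if the evaluated dose agrees within 2% of the maximum dose somewhere within 2 mm, so a uniform 1% perturbation passes everywhere, while gross errors near the peak would fail.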

SIGNIFICANCE: LSTM models for proton dose calculation in a magnetic field were developed and showed promising accuracy and efficiency for prostate cancer patients.

PMID:39317232 | DOI:10.1088/1361-6560/ad7f1e

Categories: Literature Watch

Randomized controlled trial of artificial intelligence diagnostic system in clinical practice to detect esophageal squamous cell carcinoma

Tue, 2024-09-24 06:00

Endoscopy. 2024 Sep 24. doi: 10.1055/a-2421-3194. Online ahead of print.

ABSTRACT

Background Artificial intelligence (AI) has made remarkable progress in image recognition using deep learning systems and has been used to detect esophageal squamous cell carcinoma (ESCC). However, all previous reports were not investigated in clinical settings, but in a retrospective design. Therefore, we conducted this trial to determine how AI can help endoscopists detect ESCC in clinical settings. Methods This was a prospective, single-center, exploratory, and randomized controlled trial. High-risk patients with ESCC undergoing screening or surveillance esophagogastroduodenoscopy were enrolled and randomly assigned to either the AI or control group. In the AI group, the endoscopists watched both the AI monitor detecting ESCC with annotation and the normal monitor simultaneously, whereas in the control group, the endoscopists watched only the normal monitor. In both groups, the endoscopists observed the esophagus using white-light imaging (WLI), followed by narrow-band imaging (NBI) and iodine staining. The primary endpoint was the enhanced detection rate of ESCC by non-experts using AI. The detection rate was defined as the ratio of WLI/NBI-detected ESCCs to all ESCCs detected by iodine staining. Results A total of 320 patients were included in this analysis. The detection rate of ESCC in non-experts was 47% in the AI group and 45% in the control group (p=0.93), with no significant difference, was similar to that in experts (87% vs. 57%, p=0.20) and all endoscopists (57% vs. 50%, p=0.70). Conclusions This study could not demonstrate an improvement in the esophageal cancer detection rate using the AI diagnostic support system for ESCC.

PMID:39317205 | DOI:10.1055/a-2421-3194

Categories: Literature Watch

Deep evidential learning for radiotherapy dose prediction

Tue, 2024-09-24 06:00

Comput Biol Med. 2024 Sep 23;182:109172. doi: 10.1016/j.compbiomed.2024.109172. Online ahead of print.

ABSTRACT

BACKGROUND: As we navigate towards integrating deep learning methods in the real clinic, a safety concern lies in whether and how the model can express its own uncertainty when making predictions. In this work, we present a novel application of an uncertainty-quantification framework called Deep Evidential Learning in the domain of radiotherapy dose prediction.

METHOD: Using medical images of the Open Knowledge-Based Planning Challenge dataset, we found that this model can be effectively harnessed to yield uncertainty estimates that inherited correlations with prediction errors upon completion of network training. This was achieved only after reformulating the original loss function for a stable implementation.

RESULTS: We found that (i) epistemic uncertainty was highly correlated with prediction errors, with association indices comparable to or stronger than those for Monte-Carlo Dropout and Deep Ensemble methods; (ii) the median error varied much more linearly with the uncertainty threshold for epistemic uncertainty in Deep Evidential Learning than in these other two conventional frameworks, indicating a more uniformly calibrated sensitivity to model errors; and (iii) relative to epistemic uncertainty, aleatoric uncertainty showed a more pronounced shift in its distribution in response to Gaussian noise added to CT intensity, consistent with its interpretation as reflecting data noise.

CONCLUSION: Collectively, our results suggest that Deep Evidential Learning is a promising approach that can endow deep-learning models in radiotherapy dose prediction with statistical robustness. We have also demonstrated how this framework leads to uncertainty heatmaps that correlate strongly with model errors, and how it can be used to equip the predicted Dose-Volume-Histograms with confidence intervals.
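The epistemic/aleatoric split described above follows the standard deep-evidential-regression setup, in which the network predicts the four parameters of a Normal-Inverse-Gamma distribution per voxel. A minimal sketch of how those parameters map to the two uncertainty types; the parameter values are made up for illustration, and the paper's reformulated loss is not reproduced here:

```python
import numpy as np

def nig_uncertainties(gamma, nu, alpha, beta):
    """Given per-voxel Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta),
    return the dose prediction plus aleatoric and epistemic variances,
    using the standard deep-evidential-regression formulas."""
    aleatoric = beta / (alpha - 1.0)           # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))    # Var[mu]: model uncertainty
    return gamma, aleatoric, epistemic

# Illustrative parameter maps for a tiny 2x2 dose grid (Gy):
gamma = np.array([[60.0, 55.0], [50.0, 45.0]])
nu    = np.full((2, 2), 4.0)
alpha = np.full((2, 2), 3.0)
beta  = np.full((2, 2), 2.0)

pred, alea, epi = nig_uncertainties(gamma, nu, alpha, beta)
print(alea[0, 0], epi[0, 0])  # prints "1.0 0.25"
```

Thresholding the epistemic map produced this way is what yields the uncertainty heatmaps and confidence-bounded Dose-Volume-Histograms mentioned in the conclusion.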

PMID:39317056 | DOI:10.1016/j.compbiomed.2024.109172

Categories: Literature Watch

A flexible 2.5D medical image segmentation approach with in-slice and cross-slice attention

Tue, 2024-09-24 06:00

Comput Biol Med. 2024 Sep 23;182:109173. doi: 10.1016/j.compbiomed.2024.109173. Online ahead of print.

ABSTRACT

Deep learning has become the de facto method for medical image segmentation, with 3D segmentation models excelling in capturing complex 3D structures and 2D models offering high computational efficiency. However, segmenting 2.5D images, characterized by high in-plane resolution but lower through-plane resolution, presents significant challenges. While applying 2D models to individual slices of a 2.5D image is feasible, it fails to capture the spatial relationships between slices. On the other hand, 3D models face challenges such as resolution inconsistencies in 2.5D images, along with computational complexity and susceptibility to overfitting when trained with limited data. In this context, 2.5D models, which capture inter-slice correlations using only 2D neural networks, emerge as a promising solution due to their reduced computational demand and simplicity in implementation. In this paper, we introduce CSA-Net, a flexible 2.5D segmentation model capable of processing 2.5D images with an arbitrary number of slices. CSA-Net features an innovative Cross-Slice Attention (CSA) module that effectively captures 3D spatial information by learning long-range dependencies between the center slice (for segmentation) and its neighboring slices. Moreover, CSA-Net utilizes the self-attention mechanism to learn correlations among pixels within the center slice. We evaluated CSA-Net on three 2.5D segmentation tasks: (1) multi-class brain MR image segmentation, (2) binary prostate MR image segmentation, and (3) multi-class prostate MR image segmentation. CSA-Net outperformed leading 2D, 2.5D, and 3D segmentation methods across all three tasks, achieving average Dice coefficients and HD95 values of 0.897 and 1.40 mm for the brain dataset, 0.921 and 1.06 mm for the prostate dataset, and 0.659 and 2.70 mm for the ProstateX dataset, demonstrating its efficacy and superiority. Our code is publicly available at: https://github.com/mirthAI/CSA-Net.
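The Cross-Slice Attention idea above can be sketched as scaled dot-product attention in which queries come from the center slice and keys/values from the whole slice stack. This is a hypothetical numpy illustration with made-up dimensions, not the authors' implementation (which is available at the GitHub link above):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_slice_attention(center, neighbors):
    """center: (n_pix, d) features of the slice to segment;
    neighbors: (n_slices, n_pix, d) features of adjacent slices.
    Each center-slice pixel attends over all pixels of all slices,
    capturing long-range inter-slice dependencies with 2D-only features."""
    d = center.shape[-1]
    kv = neighbors.reshape(-1, d)            # flatten the slice stack
    scores = center @ kv.T / np.sqrt(d)      # (n_pix, n_slices * n_pix)
    return softmax(scores, axis=-1) @ kv     # (n_pix, d)

rng = np.random.default_rng(0)
out = cross_slice_attention(rng.normal(size=(16, 8)),
                            rng.normal(size=(3, 16, 8)))
print(out.shape)  # prints "(16, 8)"
```

Because the output has the same shape as the center-slice features, a block like this can be dropped into an otherwise purely 2D network, which is what keeps the 2.5D approach cheap relative to full 3D convolution.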

PMID:39317055 | DOI:10.1016/j.compbiomed.2024.109173

Categories: Literature Watch

When Metal Nanoclusters Meet Smart Synthesis

Tue, 2024-09-24 06:00

ACS Nano. 2024 Sep 24. doi: 10.1021/acsnano.4c09597. Online ahead of print.

ABSTRACT

Atomically precise metal nanoclusters (MNCs) represent a fascinating class of ultrasmall nanoparticles with molecule-like properties, bridging conventional metal-ligand complexes and nanocrystals. Despite their potential for various applications, synthesis challenges, such as a precise understanding of varied synthetic parameters and property-driven synthesis, persist, hindering their full exploitation and wider application. Incorporating smart synthesis methodologies, including a closed-loop framework of automation, data interpretation, and AI-driven feedback, offers promising solutions to these challenges. In this perspective, we summarize closed-loop smart synthesis as demonstrated across various nanomaterials and explore the research frontiers of smart synthesis for MNCs. We also discuss the inherent challenges and opportunities of smart synthesis for MNCs, aiming to provide insights and directions for future advancements in this emerging field of AI for Science. The integration of deep learning algorithms stands to substantially enrich smart-synthesis research by offering enhanced predictive capabilities, optimization strategies, and control mechanisms, thereby extending the potential of MNC synthesis.

PMID:39316700 | DOI:10.1021/acsnano.4c09597

Categories: Literature Watch

Hyperdimensional computing: A fast, robust, and interpretable paradigm for biological data

Tue, 2024-09-24 06:00

PLoS Comput Biol. 2024 Sep 24;20(9):e1012426. doi: 10.1371/journal.pcbi.1012426. eCollection 2024 Sep.

ABSTRACT

Advances in bioinformatics are primarily due to new algorithms for processing diverse biological data sources. While sophisticated alignment algorithms have been pivotal in analyzing biological sequences, deep learning has substantially transformed bioinformatics, addressing sequence, structure, and functional analyses. However, these methods are incredibly data-hungry, compute-intensive, and hard to interpret. Hyperdimensional computing (HDC) has recently emerged as an exciting alternative. The key idea is that random vectors of high dimensionality can represent concepts such as sequence identity or phylogeny. These vectors can then be combined using simple operators for learning, reasoning, or querying by exploiting the peculiar properties of high-dimensional spaces. Our work reviews and explores HDC's potential for bioinformatics, emphasizing its efficiency, interpretability, and adeptness in handling multimodal and structured data. HDC holds great potential for various omics data searching, biosignal analysis, and health applications.
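The core HDC operations the review describes, random high-dimensional vectors combined with simple operators, fit in a few lines. A toy sketch of binding (elementwise multiply), bundling (majority sign), and similarity search for a short DNA sequence; the encoding scheme here is a generic illustration, not a specific method from the review:

```python
import numpy as np

D = 10_000                     # hypervector dimensionality
rng = np.random.default_rng(42)

def hv():
    """Random bipolar hypervector; two random draws are nearly orthogonal."""
    return rng.choice([-1, 1], size=D)

def bundle(*vs):
    """Superposition of several hypervectors via the majority sign."""
    return np.sign(np.sum(vs, axis=0))

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Bind each symbol to a position vector, then bundle into one sequence vector.
symbols = {c: hv() for c in "ACGT"}
positions = [hv() for _ in range(3)]
seq = bundle(*(positions[i] * symbols[c] for i, c in enumerate("ACG")))

# Querying: unbinding position 0 recovers a vector close to 'A'.
probe = positions[0] * seq
best = max(symbols, key=lambda c: cos(probe, symbols[c]))
print(best)  # prints "A"
```

The same three operators support the learning, reasoning, and querying uses mentioned above, which is why HDC models tend to be fast and comparatively easy to interpret.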

PMID:39316621 | DOI:10.1371/journal.pcbi.1012426

Categories: Literature Watch

Catalyzing innovation in cancer drug discovery through artificial intelligence, machine learning and patency

Tue, 2024-09-24 06:00

Pharm Pat Anal. 2024;13(1-3):1-5. doi: 10.1080/20468954.2024.2347798. Epub 2024 May 21.

NO ABSTRACT

PMID:39316581 | DOI:10.1080/20468954.2024.2347798

Categories: Literature Watch