Deep learning
DSEception: a novel neural network architecture for enhancing pneumonia and tuberculosis diagnosis
Front Bioeng Biotechnol. 2024 Sep 3;12:1454652. doi: 10.3389/fbioe.2024.1454652. eCollection 2024.
ABSTRACT
BACKGROUND: Pneumonia and tuberculosis are prevalent pulmonary diseases globally, each demanding specific care measures. However, distinguishing between these two conditions poses challenges due to the high skill requirements for doctors, the impact of imaging positions and respiratory intensity of patients, and the associated high healthcare costs, emphasizing the imperative need for intelligent and efficient diagnostic methods.
METHOD: This study aims to develop a highly accurate automatic diagnosis and classification method for various lung diseases (Normal, Pneumonia, and Tuberculosis). We propose a hybrid model, which is based on the InceptionV3 architecture, enhanced by introducing Depthwise Separable Convolution after the Inception modules and incorporating the Squeeze-and-Excitation mechanism. This architecture successfully enables the model to extract richer features without significantly increasing the parameter count and computational workload, thereby markedly improving the performance in predicting and classifying lung diseases. To objectively assess the proposed model, external testing and five-fold cross-validation were conducted. Additionally, widely used baseline models in the scholarly community were constructed for comparison.
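The two building blocks named in this abstract can be illustrated with a minimal NumPy sketch (hypothetical, not the authors' code): depthwise separable convolution sharply reduces parameter count relative to a standard convolution, and a Squeeze-and-Excitation (SE) block reweights feature-map channels; the weight shapes here are illustrative assumptions.

```python
import numpy as np

# Parameter counts: standard vs. depthwise separable convolution
def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # depthwise (one k x k filter per input channel) + pointwise (1 x 1)
    return c_in * k * k + c_in * c_out

# Squeeze-and-Excitation: squeeze (global average pool), then excite (channel gating)
def se_reweight(x, w1, w2):
    # x: (C, H, W) feature map; w1, w2: bottleneck weights
    z = x.mean(axis=(1, 2))                                      # squeeze -> (C,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))    # excite -> (C,)
    return x * s[:, None, None]                                  # channel-wise rescaling
```

For example, a 3 × 3 convolution mapping 32 to 64 channels drops from 18,432 weights to 2,336 in the depthwise separable form, which is why the addition barely grows the model.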
RESULT: In the external testing phase, our model achieved an average accuracy (ACC) of 90.48% and an F1-score (F1) of 91.44%, which is an approximate 4% improvement over the best-performing baseline model, ResNet. In the five-fold cross-validation, our model's average ACC and F1 reached 88.27% ± 2.76% and 89.29% ± 2.69%, respectively, demonstrating exceptional predictive performance and stability. The results indicate that our model holds promise for deployment in clinical settings to assist in the diagnosis of lung diseases, potentially reducing misdiagnosis rates and patient losses.
CONCLUSION: Utilizing deep learning for automatic assistance in the diagnosis of pneumonia and tuberculosis holds clinical significance by enhancing diagnostic accuracy, reducing healthcare costs, enabling rapid screening and large-scale detection, and facilitating personalized treatment approaches, thereby contributing to widespread accessibility and improved healthcare services in the future.
PMID:39291256 | PMC:PMC11405223 | DOI:10.3389/fbioe.2024.1454652
Deep learning-based structure segmentation and intramuscular fat annotation on lumbar magnetic resonance imaging
JOR Spine. 2024 Sep 17;7(3):e70003. doi: 10.1002/jsp2.70003. eCollection 2024 Sep.
ABSTRACT
BACKGROUND: Lumbar disc herniation (LDH) is a prevalent cause of low back pain. LDH patients commonly experience paraspinal muscle atrophy and fatty infiltration (FI), which further exacerbates the symptoms of low back pain. Magnetic resonance imaging (MRI) is crucial for assessing paraspinal muscle condition. Our study aims to develop a dual-model for automated muscle segmentation and FI annotation on MRI, assisting clinicians in evaluating LDH conditions comprehensively.
METHODS: The study retrospectively collected data from patients diagnosed with LDH between December 2020 and May 2022. The dataset was split into a 7:3 ratio for training and testing, with an external test set prepared to validate model generalizability. The model's performance was evaluated using average precision (AP), recall, and F1 score. Consistency was assessed using the Dice similarity coefficient (DSC) and Cohen's Kappa. The mean absolute percentage error (MAPE) was calculated to assess the error of the model's measurements of relative cross-sectional area (rCSA) and FI. The MAPE of FI measured by threshold algorithms was also calculated for comparison with the model.
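The two headline metrics in this abstract, MAPE and the Dice similarity coefficient, can be sketched in a few lines of NumPy (illustrative only, not the study's code):

```python
import numpy as np

def mape(y_true, y_pred):
    # mean absolute percentage error, expressed in percent
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def dice(mask_a, mask_b):
    # Dice similarity coefficient between two binary segmentation masks
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

For instance, `mape([100, 200], [110, 180])` is 10.0, and two identical masks have a Dice coefficient of 1.0.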
RESULTS: A total of 417 patients were evaluated, comprising 216 males and 201 females, with a mean age of 49 ± 15 years. In the internal test set, the muscle segmentation model achieved an overall DSC of 0.92 ± 0.10, recall of 92.60%, and AP of 0.98. The fat annotation model attained a recall of 91.30%, F1 score of 0.82, and Cohen's Kappa of 0.76. However, performance decreased on the external test set. For rCSA measurements, except for the longissimus (10.89%), the MAPE of the other muscles was less than 10%. When comparing the errors of FI for each paraspinal muscle, the MAPE of the model was lower than that of the threshold algorithm.
CONCLUSION: The models demonstrate outstanding performance, with lower error in FI measurement compared to thresholding algorithms.
PMID:39291096 | PMC:PMC11406510 | DOI:10.1002/jsp2.70003
Development and validation of deep learning models for identifying the brand of pedicle screws on plain spine radiographs
JOR Spine. 2024 Sep 17;7(3):e70001. doi: 10.1002/jsp2.70001. eCollection 2024 Sep.
ABSTRACT
BACKGROUND: In spinal revision surgery, previous pedicle screws (PS) may need to be replaced with new implants. Failure to accurately identify the brand of PS-based instrumentation preoperatively may increase the risk of perioperative complications. This study aimed to develop and validate an optimal deep learning (DL) model to identify the brand of PS-based instrumentation on plain radiographs of spine (PRS) using anteroposterior (AP) and lateral images.
METHODS: A total of 529 patients who received PS-based instrumentation from seven manufacturers were enrolled in this retrospective study. The postoperative PRS were gathered as ground truths. The training, validation, and testing datasets contained 338, 85, and 106 patients, respectively. YOLOv5 was used to crop out the screws' trajectory, and the EfficientNet-b0 model was used to develop single models (AP, Lateral, Merge, and Concatenated) based on the different PRS images. The ensemble models were different combinations of the single models. Primary outcomes were the models' performance in accuracy, sensitivity, precision, F1-score, kappa value, and area under the curve (AUC). Secondary outcomes were the relative performance of models versus human readers and external validation of the DL models.
RESULTS: The Lateral model had the most stable performance among single models. The discriminative performance was improved by the ensemble method. The AP + Lateral ensemble model had the most stable performance, with an accuracy of 0.9434, F1 score of 0.9388, and AUC of 0.9834. The performance of the ensemble models was comparable to that of experienced orthopedic surgeons and superior to that of inexperienced orthopedic surgeons. External validation revealed that the Lat + Concat ensemble model had the best accuracy (0.9412).
CONCLUSION: The DL models demonstrated stable performance in identifying the brand of PS-based instrumentation based on AP and/or lateral images of PRS, which may assist orthopedic spine surgeons in preoperative revision planning in clinical practice.
PMID:39291095 | PMC:PMC11406509 | DOI:10.1002/jsp2.70001
Virtual Screening and Molecular Docking: Discovering Novel METTL3 Inhibitors
ACS Med Chem Lett. 2024 Aug 6;15(9):1491-1499. doi: 10.1021/acsmedchemlett.4c00216. eCollection 2024 Sep 12.
ABSTRACT
Methyltransferase-like 3 (METTL3) is an RNA methyltransferase that catalyzes the N6-methyladenosine (m6A) modification of mRNA in eukaryotic cells. Past studies have shown that METTL3 is highly expressed in various cancers and is closely related to tumor development. Therefore, METTL3 inhibitors have received widespread attention as effective treatments for different types of tumors. This study proposes a hybrid high-throughput virtual screening (HTVS) protocol that combines structure-based methods with the geometric deep learning-based DeepDock algorithm. We identified inhibitors of METTL3 with unique scaffolds from our self-built internal database. Among them, compound C3 showed significant inhibitory activity on METTL3, and further molecular dynamics simulations were performed to provide more details about the binding conformation. Overall, our research demonstrates the effectiveness of hybrid virtual screening algorithms, which is of great significance for understanding the biological functions of METTL3 and developing treatment methods for related diseases.
PMID:39291017 | PMC:PMC11403746 | DOI:10.1021/acsmedchemlett.4c00216
Enhancing sustainability in the production of palm oil: creative monitoring methods using YOLOv7 and YOLOv8 for effective plantation management
Biotechnol Rep (Amst). 2024 Aug 30;44:e00853. doi: 10.1016/j.btre.2024.e00853. eCollection 2024 Dec.
ABSTRACT
The You Only Look Once (YOLO) deep learning model iterations YOLOv7 and YOLOv8 were put through a rigorous evaluation process to see how well they could recognize oil palm plants. Precision, recall, F1-score, and detection time metrics are analyzed for a variety of configurations, including YOLOv7x, YOLOv7-W6, YOLOv7-D6, YOLOv8s, YOLOv8n, YOLOv8m, YOLOv8l, and YOLOv8x. YOLO label v1.2.1 was used to label a dataset of 80,486 images for training, and 482 drone-captured images, including 5,233 images of oil palms, were used for testing the models. The YOLOv8 series showed notable advancements: YOLOv8m obtained the greatest F1-score, 99.31%, signifying the highest detection accuracy. Furthermore, YOLOv8s showed a notable decrease in detection times, improving its suitability for comprehensive environmental surveys and real-time monitoring. Precise identification of oil palm trees is beneficial for improved resource management and reduced environmental impact; this supports the use of these models in conjunction with drone and satellite imaging technologies for agricultural economic sustainability and optimal crop management.
PMID:39290791 | PMC:PMC11403242 | DOI:10.1016/j.btre.2024.e00853
Evaluation of Ischemic Penumbra in Stroke Patients Based on Deep Learning and Multimodal CT
J Healthc Eng. 2021 Nov 30;2021:3215107. doi: 10.1155/2021/3215107. eCollection 2021.
ABSTRACT
This study investigates the value of multimodal CT for quantitative assessment of collateral circulation, the ischemic penumbra, and core infarct volume in patients with acute ischemic stroke (AIS), and for prognosis assessment in intravenous thrombolytic therapy. Segmentation models based on the self-attention mechanism are prone to generating attention coefficient maps with incorrect regions of interest. Moreover, the stroke lesion is not clearly characterized, and the lesion boundary is poorly differentiated from normal brain tissue, which degrades segmentation performance. To address this problem, a primary and secondary path attention compensation network structure is proposed, based on an improved global attention upsampling U-Net model. The main path network performs accurate lesion segmentation and outputs the segmentation results, while the auxiliary path network generates loose auxiliary attention compensation coefficients that compensate for possible attention coefficient errors in the main path network. Two hybrid loss functions are proposed to realize the respective functions of the main and auxiliary path networks. Experiments demonstrate that both the improved global attention upsampling U-Net and the proposed primary and secondary path attention compensation network show significant improvement in segmentation performance. Moreover, patients with good collateral circulation have a small final infarct volume and a good clinical prognosis after intravenous thrombolysis. Quantitative assessment of collateral circulation and the ischemic penumbra by multimodal CT can better predict the clinical prognosis of intravenous thrombolysis.
PMID:39290779 | PMC:PMC11407880 | DOI:10.1155/2021/3215107
PNNGS, a multi-convolutional parallel neural network for genomic selection
Front Plant Sci. 2024 Sep 3;15:1410596. doi: 10.3389/fpls.2024.1410596. eCollection 2024.
ABSTRACT
Genomic selection (GS) can accomplish breeding faster than phenotypic selection. Improving prediction accuracy is the key to promoting GS. To improve GS prediction accuracy and stability, we introduce parallel convolution to deep learning for GS and call the result a parallel neural network for genomic selection (PNNGS). In PNNGS, information passes through convolutions of different kernel sizes in parallel. The convolutions in each branch are connected with residuals. PNNGS is trained with four different Lp loss functions. Through experiments, the optimal number of parallel paths for rice, sunflower, wheat, and maize is found to be 4, 6, 4, and 3, respectively. Phenotype prediction is performed on 24 cases through ridge-regression best linear unbiased prediction (RRBLUP), random forests (RF), support vector regression (SVR), deep neural network genomic prediction (DNNGP), and PNNGS. The serial DNNGP and parallel PNNGS outperform the other three algorithms. On average, PNNGS prediction accuracy is 0.031 higher than DNNGP prediction accuracy, indicating that parallelism can improve the GS model. Plants are divided into clusters through principal component analysis (PCA) and K-means clustering. The sample sizes of different clusters vary greatly, indicating that the data are unbalanced. Through stratified sampling, the prediction stability and accuracy of PNNGS are improved. When the training samples are reduced in small clusters, the prediction accuracy of PNNGS decreases significantly. Increasing the sample size of small clusters is critical to improving the prediction accuracy of GS.
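The core idea above, parallel convolution branches with different kernel sizes and residual connections whose outputs are merged, can be sketched in NumPy (a hypothetical 1-D simplification, not the PNNGS implementation; odd kernel sizes are assumed so that 'same' padding preserves length):

```python
import numpy as np

def conv1d_same(x, kernel):
    # 'same'-padded 1-D cross-correlation over a marker sequence (odd kernel length)
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, (pad, pad))
    return np.array([np.dot(xp[i:i + k], kernel) for i in range(len(x))])

def parallel_branches(x, kernels):
    # each branch: a convolution of a different kernel size, plus a residual connection
    outs = [conv1d_same(x, kern) + x for kern in kernels]
    return np.mean(outs, axis=0)  # merge the parallel paths
```

With identity-like kernels each branch reduces to `x + x`, so the merged output is `2 * x`, which makes the residual pathway easy to verify.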
PMID:39290743 | PMC:PMC11405342 | DOI:10.3389/fpls.2024.1410596
Corrigendum: Contextual emotion detection in images using deep learning
Front Artif Intell. 2024 Sep 3;7:1476791. doi: 10.3389/frai.2024.1476791. eCollection 2024.
ABSTRACT
[This corrects the article DOI: 10.3389/frai.2024.1386753.].
PMID:39290717 | PMC:PMC11405858 | DOI:10.3389/frai.2024.1476791
An enhanced AlexNet-Based model for femoral bone tumor classification and diagnosis using magnetic resonance imaging
J Bone Oncol. 2024 Aug 3;48:100626. doi: 10.1016/j.jbo.2024.100626. eCollection 2024 Oct.
ABSTRACT
OBJECTIVE: Bone tumors, known for their infrequent occurrence and diverse imaging characteristics, require precise differentiation into benign and malignant categories. Existing diagnostic approaches heavily depend on the laborious and variable manual delineation of tumor regions. Deep learning methods, particularly convolutional neural networks (CNNs), have emerged as a promising solution to tackle these issues. This paper introduces an enhanced deep-learning model based on AlexNet to classify femoral bone tumors accurately.
METHODS: This study involved 500 femoral tumor patients from July 2020 to January 2023, with 500 imaging cases (335 benign and 165 malignant). A CNN was employed for automated classification. The model framework encompassed training and testing stages, with 8 layers (5 Conv and 3 FC) and ReLU activation. Essential architectural modifications included Batch Normalization (BN) after the first and second convolutional filters. Comparative experiments with various existing methods were conducted to assess algorithm performance in tumor staging. Evaluation metrics encompassed accuracy, precision, sensitivity, specificity, F-measure, ROC curves, and AUC values.
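The key architectural modification named above, Batch Normalization after the early convolutional filters, can be sketched in NumPy (an illustrative training-mode version, not the authors' code):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # batch normalization over the batch axis: normalize each feature,
    # then apply a learnable scale (gamma) and shift (beta)
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

After normalization each feature has (approximately) zero mean and unit variance across the batch, which stabilizes training of the convolutional layers that follow.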
RESULTS: The analysis of precision, sensitivity, specificity, and F1 score from the results demonstrates that the method introduced in this paper offers several advantages, including a low feature dimension and robust generalization (with an accuracy of 98.34%, sensitivity of 97.26%, specificity of 95.74%, and an F1 score of 96.37%). These findings underscore its exceptional overall detection capabilities. Notably, when comparing various algorithms, they generally exhibit similar classification performance. However, the algorithm presented in this paper stands out with a higher AUC value (AUC = 0.848), signifying enhanced sensitivity and more robust specificity.
CONCLUSION: This study presents an optimized AlexNet model for classifying femoral bone tumor images based on convolutional neural networks. This algorithm demonstrates higher accuracy, precision, sensitivity, specificity, and F1-score than other methods. Furthermore, the AUC value further confirms the outstanding performance of this algorithm in terms of sensitivity and specificity. This research makes a significant contribution to the field of medical image classification, offering an efficient automated classification solution, and holds the potential to advance the application of artificial intelligence in bone tumor classification.
PMID:39290649 | PMC:PMC11407034 | DOI:10.1016/j.jbo.2024.100626
Development of message passing-based graph convolutional networks for classifying cancer pathology reports
BMC Med Inform Decis Mak. 2024 Sep 17;24(Suppl 5):262. doi: 10.1186/s12911-024-02662-5.
ABSTRACT
BACKGROUND: Applying graph convolutional networks (GCN) to the classification of free-form natural language texts leveraged by graph-of-words features (TextGCN) was studied and confirmed to be an effective means of describing complex natural language texts. However, text classification models based on the TextGCN possess weaknesses in terms of memory consumption and model dissemination and distribution. In this paper, we present a fast message passing network (FastMPN), implementing a GCN with a message passing architecture that provides versatility and flexibility by allowing trainable node embeddings and edge weights, helping the GCN model find a better solution. We applied the FastMPN model to the task of clinical information extraction from cancer pathology reports, extracting the following six properties: main site, subsite, laterality, histology, behavior, and grade.
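A single message-passing step with trainable node embeddings and edge weights, the mechanism this abstract describes, can be sketched in NumPy (a hypothetical simplification, not the FastMPN code):

```python
import numpy as np

def message_pass(node_emb, adj, edge_w, W):
    # one message-passing step: aggregate neighbor embeddings scaled by
    # (trainable) edge weights, then apply a linear transform and ReLU
    msgs = (adj * edge_w) @ node_emb       # weighted neighbor aggregation
    return np.maximum((node_emb + msgs) @ W, 0.0)
```

Because `edge_w` and `node_emb` are both parameters, gradients can flow into the graph structure itself rather than only into the transform `W`, which is the flexibility the abstract credits to the architecture.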
RESULTS: We evaluated the clinical task performance of the FastMPN models in terms of micro- and macro-averaged F1 scores. A comparison was performed with the multi-task convolutional neural network (MT-CNN) model. Results show that the FastMPN model is equivalent to or better than the MT-CNN.
CONCLUSIONS: Our implementation revealed that our FastMPN model, which is based on the PyTorch platform, can train a large corpus (667,290 training samples) with 202,373 unique words in less than 3 minutes per epoch using one NVIDIA V100 hardware accelerator. Our experiments demonstrated that using this implementation, the clinical task performance scores of information extraction related to tumors from cancer pathology reports were highly competitive.
PMID:39289714 | DOI:10.1186/s12911-024-02662-5
Longitudinal deep neural networks for assessing metastatic brain cancer on a large open benchmark
Nat Commun. 2024 Sep 17;15(1):8170. doi: 10.1038/s41467-024-52414-2.
ABSTRACT
The detection and tracking of metastatic cancer over the lifetime of a patient remains a major challenge in clinical trials and real-world care. Advances in deep learning combined with massive datasets may enable the development of tools that can address this challenge. We present NYUMets-Brain, the world's largest longitudinal, real-world dataset of cancer, consisting of the imaging, clinical follow-up, and medical management of 1,429 patients. Using this dataset we developed Segmentation-Through-Time, a deep neural network that explicitly utilizes the longitudinal structure of the data and obtained state-of-the-art results in detecting and segmenting small (<10 mm3) metastases. We also demonstrate that the monthly rate of change of brain metastases over time is strongly predictive of overall survival (HR 1.27, 95% CI 1.18-1.38). We are releasing the dataset, codebase, and model weights for other cancer researchers to build upon these results and to serve as a public benchmark.
PMID:39289405 | DOI:10.1038/s41467-024-52414-2
Integrating neural networks with advanced optimization techniques for accurate kidney disease diagnosis
Sci Rep. 2024 Sep 18;14(1):21740. doi: 10.1038/s41598-024-71410-6.
ABSTRACT
Kidney diseases pose a significant global health challenge, requiring precise diagnostic tools to improve patient outcomes. This study addresses this need by investigating three main categories of renal diseases: kidney stones, cysts, and tumors. Utilizing a comprehensive dataset of 12,446 CT whole abdomen and urogram images, this study developed an advanced AI-driven diagnostic system specifically tailored for kidney disease classification. The innovative approach of this study combines the strengths of a traditional convolutional neural network architecture (AlexNet) with modern advancements in ConvNeXt architectures. By integrating AlexNet's robust feature extraction capabilities with ConvNeXt's advanced attention mechanisms, the paper achieved an exceptional classification accuracy of 99.85%. A key advancement in this study's methodology lies in the strategic amalgamation of features from both networks. This paper concatenated hierarchical spatial information and incorporated self-attention mechanisms to enhance classification performance. Furthermore, the study introduced a custom optimization technique inspired by the Adam optimizer, which dynamically adjusts the step size based on gradient norms. This tailored optimizer facilitated faster convergence and more effective weight updates, improving model performance. The model of this study demonstrated outstanding performance across various metrics, with an average precision of 99.89%, recall of 99.95%, and specificity of 99.83%. These results highlight the efficacy of the hybrid architecture and optimization strategy in accurately diagnosing kidney diseases. Additionally, the methodology of this paper emphasizes interpretability and explainability, which are crucial for the clinical deployment of deep learning models.
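An Adam-style update whose step size is additionally scaled by the gradient norm, the kind of variant this abstract gestures at, can be sketched as follows (a hypothetical illustration of the general idea; the paper's actual scaling rule is not specified in the abstract):

```python
import numpy as np

def adam_normed_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Adam-style moment updates with bias correction
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # hypothetical gradient-norm scaling: shrink the step when gradients are large
    scale = 1.0 / max(1.0, np.linalg.norm(g))
    w = w - lr * scale * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

On the first step with a unit-norm gradient this reduces to a plain Adam update of magnitude roughly `lr`; larger gradient norms dampen the step.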
PMID:39289394 | DOI:10.1038/s41598-024-71410-6
Aerial imagery dataset of lost oil wells
Sci Data. 2024 Sep 17;11(1):1005. doi: 10.1038/s41597-024-03820-0.
ABSTRACT
Orphaned wells are wells for which the operator is unknown or insolvent. The locations of hundreds of thousands of these wells remain unknown in the United States alone. Cost-effective techniques are essential to locate orphaned wells and address the environmental problems they pose. In this paper, we present a dataset consisting of 120,948 aerial images of recently documented orphaned wells. Each of these 512 × 512 images is paired with a segmentation mask that indicates the presence or absence of such wells. These images, sourced from the National Agriculture Imagery Program, cover the continental United States with spatial resolutions ranging from 30 centimeters to 1 meter. Additionally, we included negative examples by selecting locations uniformly across the United States. Accompanying metadata includes the IDs and spatial resolution of the original images, which are available for free through the United States Geological Survey, and the pixel coordinates of documented orphaned wells identified in these images. This dataset is intended to support the development of deep-learning models that can help locate undocumented orphaned wells from such imagery, thereby mitigating the environmental damage they cause.
PMID:39289364 | DOI:10.1038/s41597-024-03820-0
Predicting post-lung transplant survival in systemic sclerosis using CT-derived features from preoperative chest CT scans
Eur Radiol. 2024 Sep 18. doi: 10.1007/s00330-024-11077-9. Online ahead of print.
ABSTRACT
OBJECTIVES: The current understanding of survival prediction of lung transplant (LTx) patients with systemic sclerosis (SSc) is limited. This study aims to identify novel image features from preoperative chest CT scans associated with post-LTx survival in SSc patients and integrate them into comprehensive prediction models.
MATERIALS AND METHODS: We conducted a retrospective study based on a cohort of SSc patients with demographic information, clinical data, and preoperative chest CT scans who underwent LTx between 2004 and 2020. This cohort consists of 102 patients (mean age, 50 years ± 10, 61% (62/102) females). Five CT-derived body composition features (bone, skeletal muscle, visceral, subcutaneous, and intramuscular adipose tissues) and three CT-derived cardiopulmonary features (heart, arteries, and veins) were automatically computed using 3-D convolutional neural networks. Cox regression was used to identify post-LTx survival factors, generate composite prediction models, and stratify patients based on mortality risk. Model performance was assessed using the area under the receiver operating characteristics curve (ROC-AUC).
RESULTS: Muscle mass ratio, bone density, artery-vein volume ratio, muscle volume, and heart volume ratio computed from CT images were significantly associated with post-LTx survival. Models using only CT-derived features outperformed all state-of-the-art clinical models in predicting post-LTx survival. The addition of CT-derived features improved the performance of traditional models at 1-year, 3-year, and 5-year survival prediction with maximum AUC scores of 0.77 (95% CI: 0.67-0.86), 0.85 (95% CI: 0.77-0.93), and 0.90 (95% CI: 0.83-0.97), respectively.
CONCLUSION: The integration of CT-derived features with demographic and clinical features can significantly improve post-LTx survival prediction and identify high-risk SSc patients.
KEY POINTS: Question What CT features can predict post-lung-transplant survival for SSc patients? Finding CT body composition features such as muscle mass, bone density, and cardiopulmonary volumes significantly predict survival. Clinical relevance Our individualized risk assessment tool can better guide clinicians in choosing and managing patients requiring lung transplant for systemic sclerosis.
PMID:39289301 | DOI:10.1007/s00330-024-11077-9
Deep Learning Algorithm-Based MRI Radiomics and Pathomics for Predicting Microsatellite Instability Status in Rectal Cancer: A Multicenter Study
Acad Radiol. 2024 Sep 16:S1076-6332(24)00656-1. doi: 10.1016/j.acra.2024.09.008. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: To develop and validate multimodal deep-learning models based on clinical variables, multiparametric MRI (mp-MRI) and hematoxylin and eosin (HE) stained pathology slides for predicting microsatellite instability (MSI) status in rectal cancer patients.
MATERIALS AND METHODS: A total of 467 surgically confirmed rectal cancer patients from three centers were included in this study. Patients from center 1 were randomly divided into a training set (242 patients) and an internal validation (invad) set (105 patients) in a 7:3 ratio. Patients from centers 2 and 3 (120 patients) were included in an external validation (exvad) set. HE and immunohistochemistry (IHC) staining were analyzed, and MSI status was confirmed by IHC staining. Independent predictive factors were identified through univariate and multivariate analyses based on clinical evaluations and were used to construct a clinical model. Deep learning with ResNet-101 was applied to preoperative MRI (T2WI, DWI, and contrast-enhanced T1WI sequences) and postoperative HE-stained images to calculate a deep-learning radiomics score (DLRS) and a deep-learning pathomics score (DLPS), respectively, and to construct the DLRS and DLPS models. Receiver operating characteristic (ROC) curves were plotted, and the area under the curve (AUC) was used to evaluate and compare the predictive performance of each model.
RESULTS: Among all rectal cancer patients, 82 (17.6%) had MSI. Long diameter (LD) and pathological T stage (pT) were identified as independent predictors and were used to construct the clinical model. After undergoing deep learning and feature selection, a final set of 30 radiomics features and 30 pathomics features were selected to construct the DLRS and DLPS models. A nomogram combining the clinical model, DLRS, and DLPS was created through weighted linear combination. The AUC values of the clinical model for predicting MSI were 0.714, 0.639, and 0.697 in the training, invad, and exvad sets, respectively. The AUCs of DLPS and DLRS ranged from 0.896 to 0.961 across the training, invad, and exvad sets. The nomogram achieved AUC values of 0.987, 0.987, and 0.974, with sensitivities of 1.0, 0.963, and 1.0 and specificities of 0.919, 0.949, and 0.867 in the training, invad, and exvad sets, respectively. The nomogram outperformed the other three models in all sets, with DeLong test results indicating superior predictive performance in the training set.
CONCLUSION: The nomogram, incorporating clinical data, mp-MRI, and HE staining, effectively reflects tumor heterogeneity by integrating multimodal data. This model demonstrates high predictive accuracy and generalizability in predicting MSI status in rectal cancer patients.
PMID:39289097 | DOI:10.1016/j.acra.2024.09.008
Intratumoral and Peritumoral Radiomics for Predicting the Prognosis of High-grade Serous Ovarian Cancer Patients Receiving Platinum-Based Chemotherapy
Acad Radiol. 2024 Sep 16:S1076-6332(24)00655-X. doi: 10.1016/j.acra.2024.09.001. Online ahead of print.
ABSTRACT
RATIONALE AND OBJECTIVES: This study aimed to develop a deep learning (DL) prognostic model to evaluate the significance of intra- and peritumoral radiomics in predicting outcomes for high-grade serous ovarian cancer (HGSOC) patients receiving platinum-based chemotherapy.
MATERIALS AND METHODS: A DL model was trained and validated on retrospectively collected unenhanced computed tomography (CT) scans from 474 patients at two institutions, which were divided into a training set (N = 362), an internal test set (N = 86), and an external test set (N = 26). The model incorporated tumor segmentation and peritumoral region analysis, using various input configurations: original tumor regions of interest (ROIs), ROI subregions, and ROIs expanded by 1 and 3 pixels. Model performance was assessed via hazard ratios (HRs) and receiver operating characteristic (ROC) curves. Patients were stratified into high- and low-risk groups on the basis of the training set's optimal cutoff value.
RESULTS: Among the input configurations, the model using an ROI with a 1-pixel peritumoral expansion achieved the highest predictive accuracy. The DL model exhibited robust performance for predicting progression-free survival, with HRs of 3.41 (95% CI: 2.85, 4.08; P < 0.001) in the training set, 1.14 (95% CI: 1.03, 1.26; P = 0.012) in the internal test set, and 1.32 (95% CI: 1.07, 1.63; P = 0.011) in the external test set. Kaplan-Meier (KM) survival analysis revealed significant differences between the high-risk and low-risk groups (P < 0.05).
CONCLUSION: The DL model effectively predicts survival outcomes in HGSOC patients receiving platinum-based chemotherapy, offering valuable insights for prognostic assessment and personalized treatment planning.
PMID:39289095 | DOI:10.1016/j.acra.2024.09.001
Reply to: "Enhancing diagnostic accuracy for primary bone tumors: The role of expert histological analysis and AI-driven deep learning models"
Eur J Surg Oncol. 2024 Sep 7:108670. doi: 10.1016/j.ejso.2024.108670. Online ahead of print.
NO ABSTRACT
PMID:39289050 | DOI:10.1016/j.ejso.2024.108670
Deep-learning-assisted thermogalvanic hydrogel fiber sensor for self-powered in-nostril respiratory monitoring
J Colloid Interface Sci. 2024 Sep 14;678(Pt C):143-149. doi: 10.1016/j.jcis.2024.09.132. Online ahead of print.
ABSTRACT
Direct and consistent monitoring of respiratory patterns is crucial for disease prognostication. Although the wired clinical respiratory monitoring apparatus can operate accurately, the existing defects are evident, such as the indispensability of an external power supply, low mobility, poor comfort, and limited monitoring timeframes. Here, we present a self-powered in-nostril hydrogel sensor for long-term non-irritant anti-interference respiratory monitoring, which is developed from a dual-network binary-solvent thermogalvanic polyvinyl alcohol hydrogel fiber (d = 500 μm, L=30 mm) with Fe2+/Fe3+ ions serving as a redox couple, which can generate a thermoelectrical signal in the nasal cavity based on the temperature difference between the exhaled gas and skin as well as avoid interference from the external environment. Due to strong hydrogen bonding between solvent molecules, the sensor retains over 90 % of its moisture after 14 days, exhibiting great potential in wearable respiratory surveillance. With the assistance of deep learning, the hydrogel fiber-based respiration monitoring strategy can actively recognize seven typical breathing patterns with an accuracy of 97.1 % by extracting the time sequence and dynamic parameters of the thermoelectric signals generated by respiration, providing an alert for high-risk respiratory symptoms. This work demonstrates the significant potential of thermogalvanic gels for next-generation wearable bioelectronics for early screening of respiratory diseases.
PMID:39288575 | DOI:10.1016/j.jcis.2024.09.132
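The recognition step above rests on extracting the time sequence and dynamic parameters of the thermoelectric signal before classification. A minimal, purely illustrative sketch of such feature extraction in Python; the function, the chosen features (peak count, mean amplitude, range), and the synthetic waveform are hypothetical, not the paper's pipeline:

```python
# Illustrative sketch: simple dynamic features from one window of a
# thermoelectric respiration signal (hypothetical, not the paper's code).

def extract_features(signal):
    """Return (peak_count, mean_amplitude, signal_range) for one window."""
    # Count local maxima as a crude proxy for breaths in the window.
    peaks = [
        i for i in range(1, len(signal) - 1)
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
    ]
    mean_amp = sum(abs(x) for x in signal) / len(signal)
    return len(peaks), mean_amp, max(signal) - min(signal)

# A synthetic triangular "breath" waveform with three peaks.
window = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1, 0]
n_peaks, mean_amp, amp_range = extract_features(window)
```

Features like these, stacked over sliding windows, are the kind of input a downstream classifier could consume.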
Deep learning-assisted distinguishing breast phyllodes tumors from fibroadenomas based on ultrasound images: a diagnostic study
Br J Radiol. 2024 Sep 17:tqae147. doi: 10.1093/bjr/tqae147. Online ahead of print.
ABSTRACT
OBJECTIVES: To evaluate the performance of ultrasound-based deep learning (DL) models in distinguishing breast phyllodes tumors (PTs) from fibroadenomas (FAs) and their clinical utility in assisting radiologists with varying diagnostic experiences.
METHODS: We retrospectively collected 1180 ultrasound images from 539 patients (247 PTs and 292 FAs). Five DL network models with different structures were trained and validated using nodule regions annotated by radiologists on breast ultrasound images. The DL models were trained using transfer learning and 3-fold cross-validation. The model that demonstrated the best evaluation metrics in the 3-fold cross-validation was selected for comparison with radiologists' diagnostic decisions. A two-round reader study was conducted to investigate the value of the DL model in assisting six radiologists with different levels of experience.
RESULTS: Upon testing, the Xception model demonstrated the best diagnostic performance (AUC: 0.87, 95% CI: 0.81-0.92), outperforming all radiologists (all p < 0.05). Additionally, the DL model enhanced radiologists' diagnostic performance, with accuracy improving by 4%, 4%, and 3% for senior, intermediate, and junior radiologists, respectively.
CONCLUSIONS: The DL models showed superior predictive abilities compared to experienced radiologists in distinguishing breast PTs from FAs. Utilizing the model led to improved efficiency and diagnostic performance for radiologists with different levels of experience (6-25 years of work).
ADVANCES IN KNOWLEDGE: We developed and validated a DL model based on the largest available dataset to assist in diagnosing PTs. This model has the potential to help radiologists discriminate between two types of breast tumors that are challenging to identify with precision and accuracy, and subsequently to make more informed decisions about surgical plans.
PMID:39288312 | DOI:10.1093/bjr/tqae147
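The model-selection protocol in the METHODS paragraph above, 3-fold cross-validation over candidate architectures, amounts to index splitting plus a best-mean-score pick. A minimal sketch; the split helper, model names, and per-fold scores below are illustrative placeholders, not the study's code or results:

```python
# Illustrative sketch of 3-fold cross-validation for model selection
# (index bookkeeping only; training code and scores are hypothetical).

def three_fold_splits(n_samples):
    """Yield (train_indices, val_indices) for each of the 3 folds."""
    idx = list(range(n_samples))
    fold = n_samples // 3
    for k in range(3):
        # Last fold absorbs any remainder so every sample is validated once.
        val = idx[k * fold:(k + 1) * fold] if k < 2 else idx[2 * fold:]
        train = [i for i in idx if i not in val]
        yield train, val

# Hypothetical per-fold validation scores for two candidate networks;
# the architecture with the best mean score is carried forward.
fold_scores = {"model_a": [0.86, 0.88, 0.87], "model_b": [0.83, 0.84, 0.85]}
best_model = max(fold_scores, key=lambda m: sum(fold_scores[m]) / 3)
```

Selecting on the cross-validated mean rather than a single split guards against picking a model that merely got a lucky validation fold.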
PPG2RespNet: a deep learning model for respirational signal synthesis and monitoring from photoplethysmography (PPG) signal
Phys Eng Sci Med. 2024 Sep 17. doi: 10.1007/s13246-024-01482-1. Online ahead of print.
ABSTRACT
Breathing conditions affect a wide range of people, including those with respiratory issues such as asthma and sleep apnea. Smartwatches with photoplethysmogram (PPG) sensors can monitor breathing, but current methods are limited by manual parameter tuning and pre-defined features. To address this challenge, we propose the PPG2RespNet deep-learning framework, which draws inspiration from the UNet and UNet++ models. It uses three publicly available PPG datasets (VORTAL, BIDMC, Capnobase) to autonomously and efficiently extract respiratory signals. The datasets contain PPG data from different groups, such as intensive care unit patients, pediatric patients, and healthy subjects. Unlike conventional U-Net architectures, PPG2RespNet introduces layered skip connections, establishing hierarchical and dense connections for robust signal extraction. The bottleneck layer of the model is also modified to enhance the extraction of latent features. To evaluate PPG2RespNet's performance, we assessed its ability to reconstruct respiratory signals and estimate respiration rates. The model outperformed other models in signal-to-signal synthesis, achieving exceptional Pearson correlation coefficients (PCCs) with ground-truth respiratory signals: 0.94 for BIDMC, 0.95 for VORTAL, and 0.96 for Capnobase. With mean absolute errors (MAE) of 0.69, 0.58, and 0.11 for the respective datasets, the model exhibited remarkable precision in estimating respiration rates. We used regression and Bland-Altman plots to analyze the predictions of the model in comparison to the ground truth. PPG2RespNet can thus obtain high-quality respiratory signals non-invasively, making it a valuable tool for calculating respiration rates.
PMID:39287773 | DOI:10.1007/s13246-024-01482-1
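The two headline metrics in the abstract above, the Pearson correlation coefficient (PCC) between synthesized and ground-truth respiratory signals and the mean absolute error (MAE) on respiration-rate estimates, follow the standard formulas. A generic sketch (not the authors' evaluation code):

```python
# Standard PCC and MAE, as reported for PPG2RespNet's signal synthesis
# and respiration-rate estimation (generic formulas, illustrative only).
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mae(pred, true):
    """Mean absolute error between predictions and reference values."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

# A signal correlates perfectly with a constant-offset copy of itself:
# PCC stays 1.0 while MAE reports the offset.
sig = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
shifted = [s + 0.2 for s in sig]
```

The offset example illustrates why both metrics are reported: PCC captures waveform shape agreement, while MAE penalizes absolute deviation.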