Deep learning
Autism spectrum disorder diagnosis with EEG signals using time series maps of brain functional connectivity and a combined CNN-LSTM model
Comput Methods Programs Biomed. 2024 Apr 24;250:108196. doi: 10.1016/j.cmpb.2024.108196. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVE: People with autism spectrum disorder (ASD) often have cognitive impairments. Effective connectivity between different areas of the brain is essential for normal cognition. Electroencephalography (EEG) has been widely used in the detection of neurological diseases. Previous studies on detecting ASD with EEG data have focused on frequency-related features. Most of these studies have augmented data by splitting the dataset into time slices or sliding windows. However, such approaches to data augmentation may cause the testing data to be contaminated by the training data. To solve this problem, this study developed a novel method for detecting ASD with EEG data.
METHODS: This study quantified the functional connectivity of the subject's brain from EEG signals and defined the individual to be the unit of analysis. Publicly available EEG data were gathered from 97 and 92 subjects with ASD and typical development (TD), respectively, while they were at rest or performing a task. Time-series maps of brain functional connectivity were constructed, and the data were augmented using a deep convolutional generative adversarial network. In addition, a combined network for ASD detection, based on convolutional neural network (CNN) and long short-term memory (LSTM), was designed and implemented.
RESULTS: Based on functional connectivity, the network achieved classification accuracies of 81.08% and 74.55% on resting-state and task-state data, respectively. In addition, we found that the functional connectivity of ASD differed from that of TD primarily in the short-distance functional connectivity of the parietal and occipital lobes and in the distant connections from the right temporoparietal junction region to the left posterior temporal lobe.
CONCLUSIONS: This paper provides a new perspective for better utilizing EEG to understand ASD. The method proposed in our study is expected to be a reliable tool to assist in the diagnosis of ASD.
PMID:38678958 | DOI:10.1016/j.cmpb.2024.108196
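The core preprocessing step described above, quantifying brain functional connectivity over time, can be illustrated with a minimal sketch. The abstract does not state the connectivity metric used; Pearson correlation over sliding windows is assumed here, and all array shapes and parameters are illustrative.

```python
import numpy as np

def connectivity_time_series(eeg, win, step):
    """Build a time series of functional-connectivity maps from EEG.

    eeg  : (channels, samples) array
    win  : window length in samples
    step : hop between consecutive windows
    Returns (n_windows, channels, channels); each slice is the
    Pearson correlation matrix of one window.
    """
    n_ch, n_s = eeg.shape
    maps = []
    for start in range(0, n_s - win + 1, step):
        seg = eeg[:, start:start + win]
        maps.append(np.corrcoef(seg))   # (channels, channels) map
    return np.stack(maps)

# Toy example: 4 channels, 1 s of 250 Hz data
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 250))
fc = connectivity_time_series(x, win=100, step=50)
print(fc.shape)   # (4, 4, 4): four windows of 4x4 connectivity maps
```

Stacking these maps over windows yields the "time-series maps" that a CNN-LSTM can consume as image sequences.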
Advancing musculoskeletal tumor diagnosis: Automated segmentation and predictive classification using deep learning and radiomics
Comput Biol Med. 2024 Apr 22;175:108502. doi: 10.1016/j.compbiomed.2024.108502. Online ahead of print.
ABSTRACT
OBJECTIVES: Musculoskeletal (MSK) tumors, given their high mortality rate and heterogeneity, necessitate precise examination and diagnosis to guide clinical treatment effectively. Magnetic resonance imaging (MRI) is pivotal in detecting MSK tumors, as it offers exceptional image contrast between bone and soft tissue. This study aims to enhance the speed of detection and the diagnostic accuracy of MSK tumors through automated segmentation and grading utilizing MRI.
MATERIALS AND METHODS: The research included 170 patients (mean age, 58 years ±12 (standard deviation), 84 men) with MSK lesions, who underwent MRI scans from April 2021 to May 2023. We proposed a deep learning (DL) segmentation model, MSAPN, based on multi-scale attention and pixel-level reconstruction, and compared it with existing algorithms. Radiomic features were then extracted from the MSAPN-segmented lesions for benign versus malignant tumor classification.
RESULTS: Compared to the most advanced segmentation algorithms, MSAPN demonstrates better performance. The Dice similarity coefficients (DSC) are 0.871 and 0.815 in the testing set and independent validation set, respectively. The radiomics model for classifying benign and malignant lesions achieves an accuracy of 0.890. Moreover, there is no statistically significant difference between the radiomics model based on manual segmentation and MSAPN segmentation.
CONCLUSION: This research contributes to the advancement of MSK tumor diagnosis through automated segmentation and predictive classification. The integration of DL algorithms and radiomics shows promising results, and the visualization analysis of feature maps enhances clinical interpretability.
PMID:38678943 | DOI:10.1016/j.compbiomed.2024.108502
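The Dice similarity coefficient (DSC) reported for MSAPN is a standard overlap measure between a predicted and a reference segmentation mask. A minimal sketch (not the authors' code):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 positive pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 pixels, 4 overlap
print(round(dice(a, b), 3))   # 2*4 / (4+6) = 0.8
```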
Adaptive prediction for effluent quality of wastewater treatment plant: Improvement with a dual-stage attention-based LSTM network
J Environ Manage. 2024 Apr 26;359:120887. doi: 10.1016/j.jenvman.2024.120887. Online ahead of print.
ABSTRACT
Accurate effluent prediction plays a crucial role in providing early warning of abnormal effluent and in adjusting feedforward control parameters during wastewater treatment. This study applied a long short-term memory network with a dual-stage attention mechanism (DA-LSTM) to improve the accuracy of effluent quality prediction. The results showed that input attention (IA) and temporal attention (TA) significantly enhanced the prediction performance of LSTM. Specifically, IA could adaptively adjust feature weights to enhance robustness against input noise, increasing R2 by 13.18%. To strengthen the model's long-term memory, TA was used to extend the memory span from 96 h to 168 h. Compared to a single LSTM model, the DA-LSTM model improved prediction accuracy by 5.10%, 2.11%, and 14.47% for COD, TP, and TN, respectively. Additionally, DA-LSTM demonstrated excellent generalization performance in new scenarios, with the R2 values for COD, TP, and TN increasing by 22.67%, 20.06%, and 17.14% respectively, while the MAPE values decreased by 56.46%, 63.08%, and 42.79%. In conclusion, the DA-LSTM model demonstrated excellent prediction performance and generalization ability due to its advantages of feature-adaptive weighting and long-term memory focusing. This has forward-looking significance for achieving efficient early warning of abnormal operating conditions and timely management of control parameters.
PMID:38678908 | DOI:10.1016/j.jenvman.2024.120887
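The input-attention (IA) stage described above can be sketched in simplified form: each input feature is scored against the previous hidden state, and the current input vector is re-weighted by the resulting softmax weights. This follows the general DA-RNN formulation, not the paper's exact implementation; all shapes and parameter names are assumptions.

```python
import numpy as np

def input_attention(x_t, x_hist, h_prev, W, U, v):
    """Simplified input attention for one time step.

    x_t    : (n_feat,) current input values
    x_hist : (n_feat, T) recent history of each feature
    h_prev : (m,) previous LSTM hidden state
    W, U, v: projection parameters, shapes (m, m), (m, T), (m,)
    Returns the attention weights and the re-weighted input.
    """
    scores = np.array([v @ np.tanh(W @ h_prev + U @ x_hist[k])
                       for k in range(x_t.shape[0])])
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()            # softmax: weights sum to 1
    return alpha, alpha * x_t      # noisy features get down-weighted

rng = np.random.default_rng(1)
n_feat, T, m = 5, 8, 6
alpha, x_w = input_attention(rng.standard_normal(n_feat),
                             rng.standard_normal((n_feat, T)),
                             rng.standard_normal(m),
                             rng.standard_normal((m, m)),
                             rng.standard_normal((m, T)),
                             rng.standard_normal(m))
print(alpha.sum())   # 1.0 (up to floating-point rounding)
```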
Use of one-dimensional CNN for input data size reduction in LSTM for improved computational efficiency and accuracy in hourly rainfall-runoff modeling
J Environ Manage. 2024 Apr 27;359:120931. doi: 10.1016/j.jenvman.2024.120931. Online ahead of print.
ABSTRACT
A deep learning architecture, denoted as CNNsLSTM, is proposed for hourly rainfall-runoff modeling in this study. The architecture involves a serial coupling of the one-dimensional convolutional neural network (1D-CNN) and the long short-term memory (LSTM) network. In the proposed framework, multiple layers of the CNN component process long-term hourly meteorological time series data, while the LSTM component handles short-term meteorological time series data and utilizes the extracted features from the 1D-CNN. In order to demonstrate the effectiveness of the proposed approach, it was implemented for hourly rainfall-runoff modeling in the Ishikari River watershed, Japan. A meteorological dataset, including precipitation, air temperature, evapotranspiration, longwave radiation, and shortwave radiation, was utilized as input. The results of the proposed approach (CNNsLSTM) were compared with those of previously proposed deep learning approaches used in hydrologic modeling, such as 1D-CNN, LSTM with only hourly inputs (LSTMwHour), a parallel architecture of 1D-CNN and LSTM (CNNpLSTM), and the LSTM architecture, which uses both daily and hourly input data (LSTMwDpH). Meteorological and runoff datasets were separated into training, validation, and test periods to train the deep learning model without overfitting, and evaluate the model with an independent dataset. The proposed approach clearly improved estimation accuracy compared to previously utilized deep learning approaches in rainfall-runoff modeling. In comparison with the observed flows, the median values of the Nash-Sutcliffe efficiency for the test period were 0.455-0.469 for 1D-CNN, 0.639-0.656 for CNNpLSTM, 0.745 for LSTMwHour, 0.831 for LSTMwDpH, and 0.865-0.873 for the proposed CNNsLSTM. Furthermore, the proposed CNNsLSTM reduced the median root mean square error (RMSE) of 1D-CNN by 50.2%-51.4%, CNNpLSTM by 37.4%-40.8%, LSTMwHour by 27.3%-29.5%, and LSTMwDpH by 10.6%-13.4%.
Particularly, the proposed CNNsLSTM improved the estimations for high flows (≧75th percentile) and peak flows (≧95th percentile). The computational speed of LSTMwDpH is the fastest among the five architectures. Although the computation speed of CNNsLSTM is slower than LSTMwDpH's, it is still 6.9-7.9 times faster than that of LSTMwHour. Therefore, the proposed CNNsLSTM would be an effective approach for flood management and hydraulic structure design, mainly under climate change conditions that require estimating hourly river flows using meteorological datasets.
PMID:38678895 | DOI:10.1016/j.jenvman.2024.120931
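The two evaluation metrics quoted above, Nash-Sutcliffe efficiency (NSE) and root mean square error (RMSE), can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect simulation;
    0 means no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

obs = [10.0, 12.0, 14.0, 16.0]
print(nse(obs, obs))                 # 1.0 for a perfect simulation
print(rmse(obs, [11, 13, 15, 17]))   # 1.0 for a constant +1 bias
```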
DeepKEGG: a multi-omics data integration framework with biological insights for cancer recurrence prediction and biomarker discovery
Brief Bioinform. 2024 Mar 27;25(3):bbae185. doi: 10.1093/bib/bbae185.
ABSTRACT
Deep learning-based multi-omics data integration methods have the capability to reveal the mechanisms of cancer development, discover cancer biomarkers and identify pathogenic targets. However, current methods ignore the potential correlations between samples in integrating multi-omics data. In addition, providing accurate biological explanations still poses significant challenges due to the complexity of deep learning models. Therefore, there is an urgent need for a deep learning-based multi-omics integration method to explore the potential correlations between samples and provide model interpretability. Herein, we propose a novel interpretable multi-omics data integration method (DeepKEGG) for cancer recurrence prediction and biomarker discovery. In DeepKEGG, a biological hierarchical module is designed for local connections of neuron nodes and model interpretability based on the biological relationship between genes/miRNAs and pathways. In addition, a pathway self-attention module is constructed to explore the correlation between different samples and generate the potential pathway feature representation for enhancing the prediction performance of the model. Lastly, an attribution-based feature importance calculation method is utilized to discover biomarkers related to cancer recurrence and provide a biological interpretation of the model. Experimental results demonstrate that DeepKEGG outperforms other state-of-the-art methods in 5-fold cross validation. Furthermore, case studies also indicate that DeepKEGG serves as an effective tool for biomarker discovery. The code is available at https://github.com/lanbiolab/DeepKEGG.
PMID:38678587 | DOI:10.1093/bib/bbae185
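The "biological hierarchical module" with local gene-to-pathway connections can be illustrated as a masked linear layer: a binary membership matrix zeroes out weights between genes and pathways that are not biologically linked, which is what makes the layer interpretable. The membership matrix and shapes below are hypothetical:

```python
import numpy as np

# Hypothetical gene->pathway membership (rows: 4 genes, cols: 2 pathways).
mask = np.array([[1, 0],
                 [1, 0],
                 [0, 1],
                 [1, 1]], dtype=float)

def pathway_layer(x, W, mask):
    """Locally connected layer: a gene only feeds pathways it belongs to,
    because masked-out weights are forced to zero."""
    return np.tanh(x @ (W * mask))

rng = np.random.default_rng(2)
W = rng.standard_normal(mask.shape)
x = rng.standard_normal((3, 4))       # 3 samples, 4 gene features
out = pathway_layer(x, W, mask)
print(out.shape)   # (3, 2): one activation per pathway
```

Because of the mask, perturbing a gene that is absent from a pathway leaves that pathway's activation unchanged, which is the property attribution methods exploit for interpretation.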
Determining individual suitability for neoadjuvant systemic therapy in breast cancer patients through deep learning
Clin Transl Oncol. 2024 Apr 28. doi: 10.1007/s12094-024-03459-8. Online ahead of print.
ABSTRACT
BACKGROUND: The survival advantage of neoadjuvant systemic therapy (NST) for breast cancer patients remains controversial, especially when considering the heterogeneous characteristics of individual patients.
OBJECTIVE: To discern the variability in responses to breast cancer treatment at the individual level and propose personalized treatment recommendations utilizing deep learning (DL).
METHODS: Six models were developed to offer individualized treatment suggestions. Outcomes for patients whose actual treatments aligned with model recommendations were compared to outcomes for those whose treatments did not. The influence of certain baseline features of patients on NST selection was visualized and quantified by multivariate logistic regression and Poisson regression analyses.
RESULTS: Our study included 94,487 female breast cancer patients. The Balanced Individual Treatment Effect for Survival data (BITES) model outperformed the other models, showing a statistically significant protective effect with inverse probability treatment weighting (IPTW)-adjusted baseline features [IPTW-adjusted hazard ratio: 0.51, 95% confidence interval (CI), 0.41-0.64; IPTW-adjusted risk difference: 21.46, 95% CI 18.90-24.01; IPTW-adjusted difference in restricted mean survival time: 21.51, 95% CI 19.37-23.80]. Adherence to BITES recommendations was associated with reduced breast cancer mortality and fewer adverse effects. BITES suggests that patients with TNM stage IIB or IIIB disease, the triple-negative subtype, a higher number of positive axillary lymph nodes, and larger tumors are most likely to benefit from NST.
CONCLUSIONS: Our results demonstrated the potential of BITES to aid in clinical treatment decisions and offer quantitative treatment insights. In our further research, these models should be validated in clinical settings and additional patient features as well as outcome measures should be studied in depth.
PMID:38678522 | DOI:10.1007/s12094-024-03459-8
Optimizing latent graph representations of surgical scenes for unseen domain generalization
Int J Comput Assist Radiol Surg. 2024 Apr 28. doi: 10.1007/s11548-024-03121-2. Online ahead of print.
ABSTRACT
PURPOSE: Advances in deep learning have resulted in effective models for surgical video analysis; however, these models often fail to generalize across medical centers due to domain shift caused by variations in surgical workflow, camera setups, and patient demographics. Recently, object-centric learning has emerged as a promising approach for improved surgical scene understanding, capturing and disentangling visual and semantic properties of surgical tools and anatomy to improve downstream task performance. In this work, we conduct a multicentric performance benchmark of object-centric approaches, focusing on critical view of safety assessment in laparoscopic cholecystectomy, then propose an improved approach for unseen domain generalization.
METHODS: We evaluate four object-centric approaches for domain generalization, establishing baseline performance. Next, leveraging the disentangled nature of object-centric representations, we dissect one of these methods through a series of ablations (e.g., ignoring either visual or semantic features for downstream classification). Finally, based on the results of these ablations, we develop an optimized method specifically tailored for domain generalization, LG-DG, that includes a novel disentanglement loss function.
RESULTS: Our optimized approach, LG-DG, achieves an improvement of 9.28% over the best baseline approach. More broadly, we show that object-centric approaches are highly effective for domain generalization thanks to their modular approach to representation learning.
CONCLUSION: We investigate the use of object-centric methods for unseen domain generalization, identify method-agnostic factors critical for performance, and present an optimized approach that substantially outperforms existing methods.
PMID:38678488 | DOI:10.1007/s11548-024-03121-2
Identification of agricultural surface source pollution in plain river network areas based on 3D-EEMs and convolutional neural networks
Water Sci Technol. 2024 Apr;89(8):1961-1980. doi: 10.2166/wst.2024.122. Epub 2024 Apr 15.
ABSTRACT
Agricultural non-point sources, as major contributors of organic pollution, continue to flow into the river network area of the Jiangnan Plain, posing a serious threat to water quality, the ecological environment, and human health. Therefore, there is an urgent need for a method that can accurately identify various types of agricultural organic pollution to protect the region's water ecosystems from significant organic pollution. In this study, a network model called RA-GoogLeNet is proposed for accurately identifying agricultural organic pollution in the river network area of the Jiangnan Plain. RA-GoogLeNet uses fluorescence spectral data of agricultural non-point source water quality from the Changzhou Changdang Lake Basin, is based on the GoogLeNet architecture, and adds an efficient channel attention (ECA) mechanism to its A-Inception module, which enables the model to automatically learn the importance of individual channel features. Residual (ResNet-style) connections are used to link the A-Inception modules. The experimental results show that RA-GoogLeNet performs well in fluorescence spectral classification of water quality, with an accuracy of 96.3%, which is 1.2% higher than the baseline model, and achieves good recall and F1 scores. This study provides powerful technical support for tracing agricultural organic pollution.
PMID:38678402 | DOI:10.2166/wst.2024.122
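The efficient channel attention (ECA) mechanism added to the A-Inception modules can be sketched as: global average pooling per channel, a small 1D convolution across neighbouring channels, and a sigmoid gate that rescales each channel. The fixed averaging kernel below is a stand-in for the learned convolution weights:

```python
import numpy as np

def eca(feat, k=3):
    """Efficient channel attention (sketch).

    feat : (C, H, W) feature map.
    Pools each channel to a scalar, mixes neighbouring channels with a
    1D convolution of size k, gates with a sigmoid in (0, 1).
    """
    pooled = feat.mean(axis=(1, 2))                    # (C,) channel summary
    kernel = np.ones(k) / k                            # stand-in for learned weights
    padded = np.pad(pooled, (k // 2, k // 2), mode="edge")
    conv = np.convolve(padded, kernel, mode="valid")   # (C,) cross-channel mix
    gate = 1.0 / (1.0 + np.exp(-conv))                 # sigmoid gate
    return feat * gate[:, None, None]                  # rescale channels

rng = np.random.default_rng(3)
f = rng.standard_normal((8, 5, 5))
out = eca(f)
print(out.shape)   # (8, 5, 5): same shape, channel-wise rescaled
```

ECA is attractive here because it adds per-channel weighting with only k extra parameters, versus the two dense layers of squeeze-and-excitation blocks.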
Computed tomography-based automated 3D measurement of femoral version: Validation against standard 2D measurements in symptomatic patients
J Orthop Res. 2024 Apr 27. doi: 10.1002/jor.25865. Online ahead of print.
ABSTRACT
To validate 3D methods for femoral version measurement, we asked: (1) Can a fully automated segmentation of the entire femur and 3D measurement of femoral version be performed using a neck-based method and a head-shaft-based method? (2) How do automatic 3D-based computed tomography (CT) measurements of femoral version compare to the most commonly used 2D-based measurements utilizing four different landmarks? This retrospective study (May 2017 to June 2018) evaluated 45 symptomatic patients (57 hips, mean age 18.7 ± 5.1 years) undergoing pelvic and femoral CT. Femoral version was assessed using four previously described methods (Lee, Reikeras, Tomczak, and Murphy). Fully automated segmentation yielded 3D femur models used to measure femoral version via femoral neck- and head-shaft approaches. Mean femoral version with 95% confidence intervals (CI) and intraclass correlation coefficients were calculated, and Bland-Altman analysis was performed. Automatic 3D segmentation was highly accurate, with mean Dice coefficients of 0.98 ± 0.03 and 0.97 ± 0.02 for femur and pelvis, respectively. The mean difference between the 3D head-shaft (27.4 ± 16.6°) and 3D neck (12.9 ± 13.7°) methods was 14.5 ± 10.7° (p < 0.001). The 3D neck method was closer to the proximal Lee (-2.4 ± 5.9°, 95% CI: -4.4 to 0.5°, p = 0.009) and Reikeras (2 ± 5.6°, 95% CI: 0.2 to 3.8°, p = 0.03) methods. The 3D head-shaft method was closer to the distal Tomczak (-1.3 ± 7.5°, 95% CI: -3.8 to 1.1°, p = 0.57) and Murphy (1.5 ± 5.4°, 95% CI: -0.3 to 3.3°, p = 0.12) methods. The automatic 3D neck- and head-shaft-based methods yielded femoral version angles comparable to the proximal and distal 2D-based methods when applying fully automated segmentations.
PMID:38678375 | DOI:10.1002/jor.25865
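The Bland-Altman analysis used to compare the 3D and 2D version measurements computes the mean difference (bias) between two methods and its 95% limits of agreement. A generic sketch with hypothetical angle values:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement methods:
    mean difference (bias) and 95% limits of agreement (bias ± 1.96 SD)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

auto2d = [12.0, 15.0, 18.0, 21.0]   # hypothetical 2D angles (degrees)
auto3d = [13.0, 14.0, 19.0, 20.0]   # hypothetical 3D angles (degrees)
bias, (lo, hi) = bland_altman(auto3d, auto2d)
print(bias)   # 0.0: no systematic offset in this toy example
```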
Predicting dust pollution from dry bulk ports in coastal cities: A hybrid approach based on data decomposition and deep learning
Environ Pollut. 2024 Apr 25:124053. doi: 10.1016/j.envpol.2024.124053. Online ahead of print.
ABSTRACT
Dust pollution from storage and handling of materials in dry bulk ports seriously affects air quality and public health in coastal cities. Accurate prediction of dust pollution helps identify risks early and take preventive measures. However, challenges remain in handling non-stationary time series and selecting relevant features. Moreover, existing studies rarely consider the impact of port operations on dust pollution. Therefore, a hybrid approach based on data decomposition and deep learning is proposed to predict dust pollution from dry bulk ports. Port operational data is specially integrated into input features. A secondary decomposition and recombination (SDR) strategy is presented to reduce data non-stationarity. A dual-stage attention-based sequence-to-sequence (DA-Seq2Seq) model is employed to adaptively select the most relevant features at each time step, as well as capture long-term temporal dependencies. This approach is compared with baseline models on a dataset from a dry bulk port in northern China. The results reveal the advantages of the SDR strategy and of integrating operational data, and show that this approach has higher accuracy than baseline models. The proposed approach can mitigate adverse effects of dust pollution from dry bulk ports on urban residents and help port authorities control dust pollution.
PMID:38677458 | DOI:10.1016/j.envpol.2024.124053
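The secondary decomposition and recombination (SDR) idea, splitting a non-stationary series into simpler components before prediction, can be illustrated with a two-stage moving-average decomposition. This is a stand-in: the abstract does not specify the actual decomposition algorithm, and the window sizes here are arbitrary.

```python
import numpy as np

def moving_average(x, w):
    """Centred moving average with edge padding (w must be odd)."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(w) / w, mode="valid")

def two_stage_decompose(x, w1=25, w2=7):
    """Two-stage decomposition sketch: trend, medium-scale component,
    and residual.  The components sum back to the original series."""
    trend = moving_average(x, w1)        # stage 1: slow trend
    mid = moving_average(x - trend, w2)  # stage 2: re-decompose the remainder
    resid = x - trend - mid              # what neither stage captured
    return trend, mid, resid

rng = np.random.default_rng(5)
t = np.linspace(0, 6.28, 200)
x = np.sin(t) + 0.3 * rng.standard_normal(200)
tr, mi, re = two_stage_decompose(x)
print(np.allclose(tr + mi + re, x))   # components reconstruct the series
```

Each component is more stationary than the raw series, so downstream predictors (here, the DA-Seq2Seq model) have an easier learning problem; recombination is the sum of per-component predictions.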
Deep learning models for ischemic stroke lesion segmentation in medical images: A survey
Comput Biol Med. 2024 Apr 25;175:108509. doi: 10.1016/j.compbiomed.2024.108509. Online ahead of print.
ABSTRACT
This paper provides a comprehensive review of deep learning models for ischemic stroke lesion segmentation in medical images. Ischemic stroke is a severe neurological disease and a leading cause of death and disability worldwide. Accurate segmentation of stroke lesions in medical images such as MRI and CT scans is crucial for diagnosis, treatment planning and prognosis. This paper first introduces common imaging modalities used for stroke diagnosis, discussing their capabilities in imaging lesions at different disease stages from the acute to chronic stage. It then reviews three major public benchmark datasets for evaluating stroke segmentation algorithms: ATLAS, ISLES and AISD, highlighting their key characteristics. The paper proceeds to provide an overview of foundational deep learning architectures for medical image segmentation, including CNN-based and transformer-based models. It summarizes recent innovations in adapting these architectures to the task of stroke lesion segmentation across the three datasets, analyzing their motivations, modifications and results. A survey of loss functions and data augmentations employed for this task is also included. The paper discusses various aspects related to stroke segmentation tasks, including prior knowledge, small lesions, and multimodal fusion, and then concludes by outlining promising future research directions. Overall, this comprehensive review covers critical technical developments in the field to support continued progress in automated stroke lesion segmentation.
PMID:38677171 | DOI:10.1016/j.compbiomed.2024.108509
All you need is data preparation: A systematic review of image harmonization techniques in Multi-center/device studies for medical support systems
Comput Methods Programs Biomed. 2024 Apr 23;250:108200. doi: 10.1016/j.cmpb.2024.108200. Online ahead of print.
ABSTRACT
BACKGROUND AND OBJECTIVES: Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes appearances to enable reliable AI analysis of multi-source medical imaging.
METHODS: A literature search using PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches.
RESULTS: Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but adoption of machine and deep learning has risen recently. Color imaging modalities such as digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all the modalities covered by this review, image harmonization improved AI performance, with gains of up to 24.42% in classification accuracy and 47% in segmentation Dice scores.
CONCLUSIONS: Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
PMID:38677080 | DOI:10.1016/j.cmpb.2024.108200
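Grayscale normalization, the harmonization technique with the largest reported accuracy gain, can be as simple as z-scoring each image and rescaling it to a common intensity range. A minimal sketch with simulated "bright" and "dim" scanners (the specific normalization pipelines in the reviewed papers vary):

```python
import numpy as np

def zscore_then_rescale(img, out_range=(0, 255)):
    """Harmonize grayscale intensities across scanners: z-score the image,
    then min-max rescale to a common output range."""
    img = img.astype(float)
    z = (img - img.mean()) / (img.std() + 1e-8)
    lo, hi = z.min(), z.max()
    scaled = (z - lo) / (hi - lo + 1e-8)
    a, b = out_range
    return a + scaled * (b - a)

rng = np.random.default_rng(4)
site_a = rng.normal(120, 30, (64, 64))   # simulated bright scanner
site_b = rng.normal(60, 10, (64, 64))    # simulated dim scanner
ha, hb = zscore_then_rescale(site_a), zscore_then_rescale(site_b)
print(round(ha.min()), round(ha.max()))  # 0 255 for both sites after harmonization
```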
Deep learning approach for cardiovascular disease risk stratification and survival analysis on a Canadian cohort
Int J Cardiovasc Imaging. 2024 Apr 28. doi: 10.1007/s10554-024-03100-3. Online ahead of print.
ABSTRACT
The quantification of carotid plaque has been routinely used to predict cardiovascular risk in cardiovascular disease (CVD) and coronary artery disease (CAD). This study aimed to determine how well carotid plaque features predict the likelihood of CAD and cardiovascular (CV) events using deep learning (DL), and to compare the results against the machine learning (ML) paradigm. The participants in this study consisted of 459 individuals who had undergone coronary angiography, contrast-enhanced ultrasonography, and focused carotid B-mode ultrasound. Each patient was tracked for thirty days. The measurements on these patients consisted of maximum plaque height (MPH), total plaque area (TPA), carotid intima-media thickness (cIMT), and intraplaque neovascularization (IPN). CAD risk and CV event stratification were performed by applying eight types of DL-based models. Univariate and multivariate analyses were also conducted to identify the most significant risk predictors. The effectiveness of the DL models was evaluated using the area under the curve, while CV event prediction was evaluated using the Cox proportional hazards model (CPHM) and compared against the DL-based concordance index (c-index). IPN showed a substantial ability to predict CV events (p < 0.0001). The best DL system improved by 21% (0.929 vs. 0.762) over the best ML system. DL-based CV event prediction showed a ~17% increase in the DL-based c-index compared to the CPHM (0.86 vs. 0.73). CAD and CV incidents were linked to IPN and carotid imaging characteristics. For survival analysis and CAD prediction, the DL-based system performed better than the ML-based models.
PMID:38678144 | DOI:10.1007/s10554-024-03100-3
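The concordance index (c-index) used to compare DL-based survival prediction against the Cox model measures how often, among comparable patient pairs, the higher predicted risk corresponds to the earlier observed event. A plain-Python sketch of Harrell's c-index:

```python
def concordance_index(times, events, risks):
    """Harrell's c-index: fraction of comparable pairs in which the
    higher predicted risk had the earlier observed event.

    times  : observed follow-up times
    events : 1 if the event occurred, 0 if censored
    risks  : model risk scores (higher = worse prognosis)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if i had the event before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties count half
    return concordant / comparable

times  = [2, 4, 6, 8]
events = [1, 1, 0, 1]
risks  = [0.9, 0.7, 0.4, 0.2]   # perfectly ordered risk scores
print(concordance_index(times, events, risks))   # 1.0
```

A c-index of 0.5 corresponds to random ranking; the abstract's 0.86 vs. 0.73 comparison is on this scale.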
Efficient diagnosis of psoriasis and lichen planus cutaneous diseases using deep learning approach
Sci Rep. 2024 Apr 27;14(1):9715. doi: 10.1038/s41598-024-60526-4.
ABSTRACT
The tendency of skin diseases to manifest with unique yet similar appearances, the shortage of competent dermatologists, and the urgency of timely and accurate diagnosis and classification make the need for machine-aided diagnosis evident. This study was conducted to broaden research into computer-aided skin disease diagnosis by exploring the capability of deep learning algorithms to classify two skin diseases that are noticeably close in appearance: psoriasis and lichen planus. The resemblance between these two skin diseases is striking, often resulting in their classification within the same category. Despite this, there is a dearth of research focusing specifically on these diseases. A customized 50-layer ResNet-50 convolutional neural network architecture is used, and the results are validated through fivefold cross-validation, threefold cross-validation, and a random split. By utilizing advanced data augmentation and class-balancing techniques, the diversity of the dataset was increased and the dataset imbalance minimized. ResNet-50 achieved an accuracy of 89.07%, sensitivity of 86.46%, and specificity of 86.02%. With these promising results, such algorithms make the potential of machine-aided diagnosis clear. Deep learning algorithms could assist physicians and dermatologists by classifying skin diseases with similar appearances in real time.
PMID:38678100 | DOI:10.1038/s41598-024-60526-4
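The fivefold and threefold cross-validation used above partitions the samples so that every sample appears in exactly one test fold and the remaining folds form the training set. A dependency-free sketch:

```python
import random

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k shuffled, near-equal folds and
    return (train_indices, test_indices) pairs, one per fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]               # round-robin folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = kfold_indices(10, 5)
print(len(splits))   # 5 train/test pairs; each sample is tested exactly once
```

In practice a stratified split (preserving the psoriasis/lichen-planus class ratio per fold) is preferable for imbalanced datasets; this sketch omits stratification for brevity.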
Vision-based estimation of manipulation forces by deep learning of laparoscopic surgical images obtained in a porcine excised kidney experiment
Sci Rep. 2024 Apr 27;14(1):9686. doi: 10.1038/s41598-024-60574-w.
ABSTRACT
In robot-assisted surgery, where haptic feedback is absent, surgeons experience haptics-like sensations known as "pseudo-haptic feedback". As surgeons who routinely perform robot-assisted laparoscopic surgery, we wondered whether we could make these "pseudo-haptics" explicit to surgeons. Therefore, we created a simulation model that estimates manipulation forces using only visual images from surgery. This study aimed to achieve vision-based estimation of the magnitude of forces during forceps manipulation of organs. We also attempted to detect over-force exceeding the threshold of safe manipulation. We created sensorized forceps that can detect precise pressure at the tips along three vectors. Using an endoscopic system that is used in actual surgery, images of the manipulation of excised pig kidneys were recorded with synchronized force data. A force estimation model was then created using deep learning. Effective detection of over-force was achieved when the visual input was restricted to a region of interest around the tips of the forceps. In this paper, we emphasize the importance of limiting the region of interest in vision-based force estimation tasks.
PMID:38678091 | DOI:10.1038/s41598-024-60574-w
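The key finding, restricting the model's input to a region of interest (ROI) around the forceps tips, amounts to cropping each frame around a known tip location before it reaches the force-estimation network. A minimal sketch (tip coordinates and crop size are assumptions; the paper does not give them):

```python
import numpy as np

def crop_roi(frame, tip_xy, half=32):
    """Crop a square region of interest around the forceps tip, clamped
    to the frame borders.  Tip coordinates are assumed to come from an
    upstream tool tracker."""
    h, w = frame.shape[:2]
    x, y = tip_xy
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    return frame[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # one endoscopic frame
roi = crop_roi(frame, (320, 240))
print(roi.shape)   # (64, 64, 3): only the tissue-tool interaction region
```

Cropping discards background motion that is uninformative about tip force, which is consistent with the paper's observation that over-force detection only worked with a restricted ROI.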
Computational tools for plant genomics and breeding
Sci China Life Sci. 2024 Apr 23. doi: 10.1007/s11427-024-2578-6. Online ahead of print.
ABSTRACT
Plant genomics and crop breeding are at the intersection of biotechnology and information technology. Driven by a combination of high-throughput sequencing, molecular biology and data science, great advances have been made in omics technologies at every step along the central dogma, especially in genome assembling, genome annotation, epigenomic profiling, and transcriptome profiling. These advances further revolutionized three directions of development. One is genetic dissection of complex traits in crops, along with genomic prediction and selection. The second is comparative genomics and evolution, which open up new opportunities to depict the evolutionary constraints of biological sequences for deleterious variant discovery. The third direction is the development of deep learning approaches for the rational design of biological sequences, especially proteins, for synthetic biology. All three directions of development serve as the foundation for a new era of crop breeding where agronomic traits are enhanced by genome design.
PMID:38676814 | DOI:10.1007/s11427-024-2578-6
Olecranon bone age assessment in puberty using a lateral elbow radiograph and a deep-learning model
Eur Radiol. 2024 Apr 27. doi: 10.1007/s00330-024-10748-x. Online ahead of print.
ABSTRACT
OBJECTIVES: To improve pubertal bone age (BA) evaluation by developing a precise and practical elbow BA classification using the olecranon, and a deep-learning AI model.
MATERIALS AND METHODS: Lateral elbow radiographs taken for BA evaluation in children under 18 years were retrospectively collected from January 2020 to June 2022. A novel classification and the olecranon BA were established based on the morphological changes in the olecranon ossification process during puberty. The olecranon BA was compared with other elbow and hand BA methods using intraclass correlation coefficients (ICCs), and a deep-learning AI model was developed.
RESULTS: A total of 3508 lateral elbow radiographs (mean age 9.8 ± 1.8 years) were collected. The olecranon BA showed the highest applicability (100%) and interobserver agreement (ICC 0.993) among elbow BA methods. It showed excellent reliability with Sauvegrain (0.967 in girls, 0.969 in boys) and Dimeglio (0.978 in girls, 0.978 in boys) elbow BA methods, as well as Korean standard (KS) hand BA in boys (0.917), and good reliability with KS in girls (0.896) and Greulich-Pyle (GP)/Tanner-Whitehouse (TW)3 (0.835 in girls, 0.895 in boys) hand BA methods. The AI model for olecranon BA showed an accuracy of 0.96 and a specificity of 0.98 with EfficientDet-b4. External validation showed an accuracy of 0.86 and a specificity of 0.91.
CONCLUSION: The olecranon BA evaluation for puberty, requiring only a lateral elbow radiograph, showed the highest applicability and interobserver agreement, and excellent reliability with other BA evaluation methods, along with a high performance of the AI model.
CLINICAL RELEVANCE STATEMENT: This AI model uses a single lateral elbow radiograph to determine bone age for puberty from the olecranon ossification center and can improve pubertal bone age assessment with the highest applicability and excellent reliability compared to previous methods.
KEY POINTS: Elbow bone age is valuable for pubertal bone age assessment, but conventional methods have limitations. Olecranon bone age and its AI model showed high performance for pubertal bone age assessment. The olecranon bone age system is practical and accurate, requiring only a single lateral elbow radiograph.
PMID:38676732 | DOI:10.1007/s00330-024-10748-x
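The interobserver agreement and reliability figures above are intraclass correlation coefficients. As a minimal illustration of how such a coefficient is computed (this is the standard ICC(2,1) two-way random-effects, absolute-agreement formula applied to a subjects × raters score matrix, not the authors' own analysis code):

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: (n_subjects, k_raters) array of ratings.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)  # per-subject means
    col_means = scores.mean(axis=0)  # per-rater means
    # Two-way ANOVA mean squares
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    ss_err = (np.sum((scores - grand) ** 2)
              - k * np.sum((row_means - grand) ** 2)
              - n * np.sum((col_means - grand) ** 2))
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Two raters in perfect agreement yield an ICC of 1.0; rating noise or a systematic offset between raters pulls the coefficient below 1, which is why near-unity values such as 0.993 indicate near-perfect agreement.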
Coronary plaque phenotype associated with positive remodeling
J Cardiovasc Comput Tomogr. 2024 Apr 26:S1934-5925(24)00100-X. doi: 10.1016/j.jcct.2024.04.009. Online ahead of print.
ABSTRACT
BACKGROUND: Positive remodeling is an integral part of the vascular adaptation process during the development of atherosclerosis, which can be detected by coronary computed tomography angiography (CTA).
METHODS: A total of 426 patients who underwent both coronary CTA and optical coherence tomography (OCT) were included. Four machine learning (ML) models, gradient boosting machine (GBM), random forest (RF), deep learning (DL), and support vector machine (SVM), were employed to detect specific plaque features. A total of 15 plaque features assessed by OCT were analyzed. The variable importance ranking was used to identify the features most closely associated with positive remodeling.
RESULTS: In the variable importance ranking, lipid index and maximal calcification arc were consistently ranked high across all four ML models. Both features were correlated with positive remodeling, with pronounced influence at the lower range and diminishing influence at the higher range. Patients with more plaques with positive remodeling throughout their entire coronary trees had higher low-density lipoprotein cholesterol levels and a higher incidence of cardiovascular events during 5-year follow-up (hazard ratio 2.10 [1.26-3.48], P = 0.004).
CONCLUSION: Greater lipid accumulation and less calcium burden were important features associated with positive remodeling in the coronary arteries. The number of coronary plaques with positive remodeling was associated with a higher incidence of cardiovascular events.
PMID:38677958 | DOI:10.1016/j.jcct.2024.04.009
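Ranking features across heterogeneous learners, as done here for the 15 OCT plaque features, requires a model-agnostic importance measure, since kernel SVMs have no built-in feature importances. A minimal sketch using permutation importance on synthetic data (the dataset and feature count are stand-ins; this is not the study's pipeline):

```python
# Model-agnostic variable-importance ranking across several learners,
# loosely mirroring the paper's GBM / RF / SVM comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for 15 OCT-derived plaque features.
X, y = make_classification(n_samples=400, n_features=15,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "GBM": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}
rankings = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Permutation importance: shuffle one feature at a time on held-out
    # data and measure the drop in score; works for any fitted model.
    imp = permutation_importance(model, X_te, y_te,
                                 n_repeats=10, random_state=0)
    rankings[name] = np.argsort(imp.importances_mean)[::-1]
```

Features that rank near the top for every model, as lipid index and maximal calcification arc did in this study, can then be singled out as robust correlates of the outcome.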
A Machine Learning Algorithm Improves the Diagnostic Accuracy of the Histologic Component of Antibody Mediated Rejection (AMR-H) in Cardiac Transplant Endomyocardial Biopsies
Cardiovasc Pathol. 2024 Apr 25:107646. doi: 10.1016/j.carpath.2024.107646. Online ahead of print.
ABSTRACT
BACKGROUND: Pathologic antibody mediated rejection (pAMR) remains a major driver of graft failure in cardiac transplant patients. The endomyocardial biopsy remains the primary diagnostic tool but presents challenges, particularly in distinguishing the histologic component (pAMR-H), defined by 1) intravascular macrophage accumulation in capillaries and 2) activated endothelial cells whose expanded cytoplasm narrows or occludes the vascular lumen. Frequently, pAMR-H is difficult to distinguish from acute cellular rejection (ACR) and healing injury. With the advent of digital slide scanning and advances in machine and deep learning, artificial intelligence technology is under wide investigation in oncologic pathology but still in its infancy in transplant pathology. For the first time, we determined whether a machine learning algorithm could distinguish pAMR-H from normal myocardium, healing injury, and ACR.
MATERIALS AND METHODS: A total of 4,212 annotations (1,053 regions each of normal myocardium, pAMR-H, healing injury, and ACR) were completed from 300 hematoxylin and eosin slides scanned using a Leica Aperio GT450 digital whole slide scanner at 40X magnification. All regions of pAMR-H were annotated from patients with a confirmed previous diagnosis of pAMR2 (>50% positive C4d immunofluorescence and/or >10% CD68 positive intravascular macrophages). Annotations were imported into a Python 3.7 development environment using the OpenSlide™ package, and a convolutional neural network was trained using a transfer learning approach.
RESULTS: The machine learning algorithm showed 98% overall validation accuracy and pAMR-H was correctly distinguished from specific categories with the following accuracies: normal myocardium (99.2%), healing injury (99.5%) and ACR (99.5%).
CONCLUSION: Our novel deep learning algorithm can reach, and possibly surpass, the performance of current diagnostic standards for identifying pAMR-H. Such a tool may serve as an adjunct diagnostic aid for improving the pathologist's accuracy and reproducibility, especially in difficult cases with high inter-observer variability. This is one of the first studies providing evidence that an artificial intelligence machine learning algorithm can be trained and validated to diagnose pAMR-H in cardiac transplant patients. Ongoing studies include multi-institutional verification testing to ensure generalizability.
PMID:38677634 | DOI:10.1016/j.carpath.2024.107646
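The overall and per-class accuracies reported above can be derived from a multi-class confusion matrix, with each class evaluated one-vs-rest. A minimal sketch on synthetic labels (the class names mirror the abstract; the data and numbers do not):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["normal", "pAMR-H", "healing injury", "ACR"]  # illustrative order
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # synthetic ground truth
y_pred = np.array([0, 0, 1, 0, 2, 2, 3, 3])  # synthetic predictions
cm = confusion_matrix(y_true, y_pred)

overall = np.trace(cm) / cm.sum()  # fraction of samples on the diagonal
# One-vs-rest accuracy per class: (TP + TN) / total
per_class = {}
for i, name in enumerate(classes):
    tp = cm[i, i]
    tn = cm.sum() - cm[i, :].sum() - cm[:, i].sum() + tp
    per_class[name] = (tp + tn) / cm.sum()
```

On these toy labels the single misclassification (one pAMR-H region predicted as normal) lowers both the overall accuracy and the one-vs-rest accuracies of the two classes it touches, while leaving the others at 1.0.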
Longitudinal assessment of tumor-infiltrating lymphocytes in primary breast cancer following neoadjuvant radiotherapy
Int J Radiat Oncol Biol Phys. 2024 Apr 26:S0360-3016(24)00566-2. doi: 10.1016/j.ijrobp.2024.04.065. Online ahead of print.
ABSTRACT
BACKGROUND: Tumor-infiltrating lymphocytes (TILs) have prognostic significance in several cancers, including breast. Despite interest in combining radiotherapy with immunotherapy, little is known about the effect of radiotherapy itself on the tumor-immune microenvironment, including TILs. Here, we interrogated longitudinal dynamics of tumor-infiltrating and systemic lymphocytes in patient samples taken before, during, and after neoadjuvant radiotherapy (NART), from XXX and XXX breast clinical trials.
METHODS: We scored stromal TILs (sTILs) from longitudinal tumor samples manually using standardized guidelines, and with deep learning-based scores at the cell level (cTIL) and at a combined cell and tissue level (SuperTIL). In parallel, we interrogated absolute lymphocyte counts from routine blood tests at corresponding timepoints during treatment. Exploratory analyses studied the relationship between TILs and pathological complete response (pCR) and long-term outcomes.
RESULTS: Patients receiving NART experienced a significant and uniform decrease in sTILs that did not recover at the time of surgery (P < 0.0001). This lymphodepletive effect was also mirrored in peripheral blood. Our "SuperTIL" deep learning score showed good concordance with manual sTILs, and importantly performed comparably to manual scores in predicting pCR from diagnostic biopsies. Analysis suggested an association between baseline sTILs and pCR, as well as sTILs at surgery and relapse, in patients receiving NART.
CONCLUSIONS: This study provides novel insights into TIL dynamics in the context of NART in breast cancer, and demonstrates the potential for artificial intelligence to assist routine pathology. We have identified trends which warrant further interrogation and have a bearing on future radio-immunotherapy trials.
PMID:38677525 | DOI:10.1016/j.ijrobp.2024.04.065
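Concordance between an automated score such as SuperTIL and manual sTILs is commonly quantified with Lin's concordance correlation coefficient, which penalizes both scatter and systematic offset. A minimal sketch on hypothetical score vectors (the data are invented; this is not the trial's actual analysis):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()               # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

manual = np.array([5, 10, 20, 40, 60])      # illustrative sTIL percentages
automated = np.array([6, 12, 18, 42, 58])   # hypothetical model scores
```

A coefficient of 1.0 means perfect agreement; unlike Pearson correlation, the CCC drops when one score systematically over- or under-estimates the other, even if the two are perfectly correlated.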