Deep learning
Regional choroidal thickness estimation from color fundus images based on convolutional neural networks
Heliyon. 2024 Feb 23;10(5):e26872. doi: 10.1016/j.heliyon.2024.e26872. eCollection 2024 Mar 15.
ABSTRACT
PURPOSE: This study aims to estimate regional choroidal thickness from color fundus images using convolutional neural networks with different network structures and task-learning schemes.
METHOD: 1276 color fundus photos and their corresponding choroidal thickness values from healthy subjects were obtained with a Topcon DRI Triton optical coherence tomography machine. Initially, ten commonly used convolutional neural networks were deployed to identify the most accurate model, which was then selected for further training. This selected model was employed in combination with single-, multi-, and auxiliary-task training schemes to predict the average and sub-region choroidal thickness in both ETDRS (Early Treatment Diabetic Retinopathy Study) grids and 100-grid subregions. Mean absolute error and the coefficient of determination (R²) were used to evaluate model performance.
RESULTS: The EfficientNet-B0 network outperformed the other networks, with the lowest mean absolute error (25.61 μm) and the highest R² (0.7817) for average choroidal thickness. Incorporating spherical diopter, anterior chamber depth, and lens thickness as auxiliary tasks improved prediction accuracy (p = 6.39×10⁻⁴⁴, 2.72×10⁻³⁸, and 1.15×10⁻³⁶, respectively). For ETDRS regional choroidal thickness estimation, the multi-task model achieved better results than the single-task model (lowest mean absolute error = 31.10 μm vs. 33.20 μm). Multi-task training can also simultaneously predict the choroidal thickness of the 100 grid subregions, with a minimum mean absolute error of 33.86 μm.
CONCLUSIONS: EfficientNet-B0, combined with multi-task and auxiliary-task learning, achieves high accuracy in estimating average and regional macular choroidal thickness directly from color fundus photographs.
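To make the multi-task and auxiliary-task setup concrete, here is a minimal PyTorch sketch (not the authors' code): an EfficientNet-B0 backbone feeding one regression head for the nine ETDRS subregions and a second head for three auxiliary ocular biometrics. The head sizes, dummy targets, and the 0.1 auxiliary loss weight are assumptions chosen only for illustration.

```python
# Hypothetical sketch of a multi-task regressor with auxiliary heads on EfficientNet-B0.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class MultiTaskChoroid(nn.Module):
    def __init__(self, n_regions=9, n_aux=3):
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)   # pretrained weights optional
        self.backbone.classifier = nn.Identity()        # expose the 1280-d feature vector
        self.region_head = nn.Linear(1280, n_regions)   # choroidal thickness per ETDRS subregion
        self.aux_head = nn.Linear(1280, n_aux)          # e.g. diopter, ACD, lens thickness

    def forward(self, x):
        feats = self.backbone(x)
        return self.region_head(feats), self.aux_head(feats)

model = MultiTaskChoroid()
x = torch.randn(2, 3, 224, 224)                         # dummy fundus images
regions, aux = model(x)
# Joint loss: main task plus a down-weighted auxiliary term (the 0.1 weight is an assumption).
dummy_reg, dummy_aux = torch.zeros_like(regions), torch.zeros_like(aux)
loss = nn.functional.l1_loss(regions, dummy_reg) + 0.1 * nn.functional.l1_loss(aux, dummy_aux)
```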
PMID:38468930 | PMC:PMC10925995 | DOI:10.1016/j.heliyon.2024.e26872
A Hybrid Deep Learning CNN model for COVID-19 detection from chest X-rays
Heliyon. 2024 Feb 29;10(5):e26938. doi: 10.1016/j.heliyon.2024.e26938. eCollection 2024 Mar 15.
ABSTRACT
Coronavirus disease 2019 (COVID-19) emerged in Wuhan, China, in 2019 and has spread throughout the world since 2020, affecting millions of people and causing many deaths. To limit the spread of COVID-19, nations have adopted various precautions and restrictions. At the same time, infected persons need to be identified, isolated, and provided with medical treatment. Owing to the limited availability of Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests, chest X-ray imaging has become an effective technique for diagnosing COVID-19. In this work, a hybrid deep learning CNN model is proposed for the diagnosis of COVID-19 from chest X-rays. The proposed model consists of a head model and a base model. The base model utilizes two pre-trained deep learning architectures, VGG16 and VGG19. The feature dimensions from these pre-trained models are reduced by incorporating different pooling layers (max and average). In the head, three dense layers with different activation functions are added, along with a dropout layer to avoid overfitting. Experimental analyses compare the proposed hybrid model with existing transfer learning architectures (VGG16, VGG19, EfficientNetB0 and ResNet50) on a COVID-19 radiology database. Various classification techniques, such as K-Nearest Neighbor (KNN), Naive Bayes, Random Forest, Support Vector Machine (SVM), and neural networks, were also used for performance comparison. The hybrid deep learning model with average pooling layers, combined with either a linear SVM or a neural network, achieved an accuracy of 92%. These models can be employed to assist radiologists and physicians in reducing misdiagnosis rates and validating positive COVID-19 cases.
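A minimal Keras sketch of this kind of base-plus-head design, assuming global average pooling and arbitrarily chosen head sizes and activations; the paper's exact layer configuration (and its max-pooling variant and SVM heads) is not reproduced here.

```python
# Illustrative hybrid model: concatenated VGG16/VGG19 features with a small dense head.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, VGG19

inp = layers.Input(shape=(224, 224, 3))
f16 = layers.GlobalAveragePooling2D()(VGG16(include_top=False, weights="imagenet")(inp))
f19 = layers.GlobalAveragePooling2D()(VGG19(include_top=False, weights="imagenet")(inp))
feats = layers.Concatenate()([f16, f19])        # fused base-model features
x = layers.Dense(256, activation="relu")(feats) # head layer sizes are assumptions
x = layers.Dropout(0.5)(x)                      # dropout to limit overfitting
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(2, activation="softmax")(x)  # COVID-19 vs. normal
model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```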
PMID:38468922 | PMC:PMC10926074 | DOI:10.1016/j.heliyon.2024.e26938
In silico models of the macromolecular NaV1.5-KIR2.1 complex
Front Physiol. 2024 Feb 26;15:1362964. doi: 10.3389/fphys.2024.1362964. eCollection 2024.
ABSTRACT
In cardiac cells, the expression of the cardiac voltage-gated Na+ channel (NaV1.5) is reciprocally regulated with the inward rectifying K+ channel (KIR2.1). These channels can form macromolecular complexes that pre-assemble early during forward trafficking (transport to the cell membrane). In this study, we present in silico 3D models of NaV1.5-KIR2.1, generated by rigid-body protein-protein docking programs and deep learning-based AlphaFold-Multimer software. Modeling revealed that the two channels could physically interact with each other along the entire transmembrane region. Structural mapping of disease-associated mutations revealed a hotspot at this interface with several trafficking-deficient variants in close proximity. Thus, examining the role of disease-causing variants is important not only in isolated channels but also in the context of macromolecular complexes. These findings may contribute to a better understanding of the life-threatening cardiovascular diseases associated with KIR2.1 and NaV1.5 malfunction.
PMID:38468705 | PMC:PMC10925717 | DOI:10.3389/fphys.2024.1362964
Scanning the imaging horizon for hypertrophic cardiomyopathy
Can J Cardiol. 2024 Mar 9:S0828-282X(24)00203-4. doi: 10.1016/j.cjca.2024.02.030. Online ahead of print.
ABSTRACT
This paper discusses some of the recent advances in the use of noninvasive imaging applied to patients with hypertrophic cardiomyopathy (HCM). Echocardiography and cardiac CT are briefly discussed with respect to their power to detect apical aneurysmal disease. Echo phenotype-genotype correlations and the ability of echo to characterize myocardial work are reviewed. Positron emission tomography (PET) is reviewed in the context of ischaemia imaging, along with a new tracer that may allow recognition of early activation of the fibrosis pathway. Next, the technical capabilities of cardiovascular magnetic resonance (CMR) to measure myocardial perfusion, oxygenation and disarray are discussed as they apply to HCM. The application of radiomics to improve prediction of sudden cardiac death is touched upon. Finally, a deep learning approach to distinguishing HCM from its phenocopies is presented as a potential diagnostic aid in the not-too-distant future.
PMID:38467329 | DOI:10.1016/j.cjca.2024.02.030
Application of artificial intelligence in dental implant prognosis: A scoping review
J Dent. 2024 Mar 9:104924. doi: 10.1016/j.jdent.2024.104924. Online ahead of print.
ABSTRACT
OBJECTIVES: The purpose of this scoping review was to evaluate the performance of artificial intelligence (AI) in the prognosis of dental implants.
DATA: Studies that analyzed the performance of AI models in predicting implant prognosis based on medical records or radiographic images were included. Quality assessment was conducted using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies.
SOURCES: This scoping review included studies published in English up to October 2023 in MEDLINE/PubMed, Embase, Cochrane Library, and Scopus. A manual search was also performed.
STUDY SELECTION: Of 892 studies, 36 underwent full-text analysis and 12 met the inclusion criteria. Eight used deep learning models, three applied traditional machine learning algorithms, and one combined both types. Performance was quantified using accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver operating characteristic curve (ROC AUC). Reported prognostic accuracy ranged from 70% to 96.13%.
CONCLUSIONS: AI is a promising tool in evaluating implant prognosis, but further enhancements are required. Additional radiographic and clinical data are needed to improve AI performance in implant prognosis.
CLINICAL SIGNIFICANCE: AI can predict the prognosis of dental implants based on radiographic images or medical records. As a result, clinicians can receive predicted implant prognosis with the assistance of AI before implant placement and make informed decisions.
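For reference, the performance metrics listed in this review can be computed with scikit-learn as shown below; the labels and scores are toy values, purely illustrative of the definitions rather than the reviewed studies' data.

```python
# Toy computation of accuracy, sensitivity, specificity, precision, F1 and ROC AUC.
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, roc_auc_score)

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                    # 1 = implant failure, 0 = survival
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]                    # hard predictions from a hypothetical model
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]    # predicted failure probabilities

print("accuracy   ", accuracy_score(y_true, y_pred))
print("sensitivity", recall_score(y_true, y_pred))              # recall of the positive class
print("specificity", recall_score(y_true, y_pred, pos_label=0)) # recall of the negative class
print("precision  ", precision_score(y_true, y_pred))
print("F1 score   ", f1_score(y_true, y_pred))
print("ROC AUC    ", roc_auc_score(y_true, y_score))
```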
PMID:38467177 | DOI:10.1016/j.jdent.2024.104924
Full-length radiograph based automatic musculoskeletal modeling using convolutional neural network
J Biomech. 2024 Mar 9;166:112046. doi: 10.1016/j.jbiomech.2024.112046. Online ahead of print.
ABSTRACT
Full-length radiographs contain information from which many anatomical parameters of the pelvis, femur, and tibia may be derived, but only a few of these parameters are used for musculoskeletal modeling. This study aimed to develop a fully automatic algorithm to extract anatomical parameters from full-length radiographs and generate a musculoskeletal model that is more accurate than a linearly scaled one. A U-Net convolutional neural network was trained to segment the pelvis, femur, and tibia from the full-length radiograph. Eight anatomical parameters (six for length and width, two for angles) were automatically extracted from the bone segmentation masks and used to generate the musculoskeletal model. The Sørensen-Dice coefficient was used to quantify the consistency of automatic bone segmentation masks with manually segmented labels. Maximum distance error, root mean square (RMS) distance error, and Jaccard index (JI) were used to evaluate the geometric accuracy of the automatically generated pelvis, femur and tibia models against CT bone models. Mean Sørensen-Dice coefficients for the pelvis, femur and tibia 2D segmentation masks were 0.9898, 0.9822 and 0.9786, respectively. The algorithm-driven bone models were geometrically closer to the 3D CT bone models than the scaled generic models, with significantly lower maximum distance error (28.3% average decrease from 24.35 mm) and RMS distance error (28.9% average decrease from 9.55 mm) and higher JI (17.2% average increase from 0.46) (P < 0.001). Algorithm-driven musculoskeletal modeling (107.15 ± 10.24 s) was faster than the manual process (870.07 ± 44.79 s) for the same full-length radiograph. This algorithm provides a fully automatic way to generate a musculoskeletal model from a full-length radiograph with an approximately 30% reduction in distance errors, which could enable personalized musculoskeletal simulation based on full-length radiographs for large-scale osteoarthritis (OA) populations.
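The overlap metrics used above (Sørensen-Dice coefficient and Jaccard index) can be sketched for binary masks as follows; the random masks are stand-ins, not the study's data.

```python
# Minimal NumPy implementations of the Dice coefficient and Jaccard index.
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Soerensen-Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-8):
    """Jaccard index (intersection over union) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.random.rand(512, 512) > 0.5     # stand-in for a U-Net output mask
target = np.random.rand(512, 512) > 0.5   # stand-in for a manual label
print(dice_coefficient(pred, target), jaccard_index(pred, target))
```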
PMID:38467079 | DOI:10.1016/j.jbiomech.2024.112046
Deciphering the Lexicon of Protein Targets: A Review on Multifaceted Drug Discovery in the Era of Artificial Intelligence
Mol Pharm. 2024 Mar 11. doi: 10.1021/acs.molpharmaceut.3c01161. Online ahead of print.
ABSTRACT
Understanding protein sequence and structure is essential for understanding protein-protein interactions (PPIs), which underlie many biological processes and diseases. Targeting protein binding hot spots, which regulate signaling and growth, through rational drug design is promising. Rational drug design uses structural data and computational tools to study protein binding sites and protein interfaces in order to design inhibitors that can alter these interactions, potentially leading to new therapeutic approaches. Artificial intelligence (AI), including machine learning (ML) and deep learning (DL), has advanced drug discovery and design by providing computational resources and methods. Quantum chemistry is essential for modeling drug reactivity, toxicology, drug screening, and quantitative structure-activity relationship (QSAR) properties. This review discusses the methodologies and challenges of identifying and characterizing hot spots and binding sites. It also explores the strategies and applications of AI-based rational drug design technologies that target proteins and PPI binding hot spots, providing valuable insights for drug design with therapeutic implications. In a case study on the discovery of drug molecules for cancer treatment, we also describe the pathological roles of heat shock protein 27 (HSP27) and the matrix metalloproteinases MMP2 and MMP9 and design inhibitors of these proteins using the drug discovery paradigm. Additionally, the implications of benzothiazole derivatives for anticancer drug design and discovery are discussed.
PMID:38466810 | DOI:10.1021/acs.molpharmaceut.3c01161
Automatic detection of cell-cycle stages using recurrent neural networks
PLoS One. 2024 Mar 11;19(3):e0297356. doi: 10.1371/journal.pone.0297356. eCollection 2024.
ABSTRACT
Mitosis is the process by which eukaryotic cells divide to produce two similar daughter cells with identical genetic material. Research into the process of mitosis is therefore of critical importance both for the basic understanding of cell biology and for the clinical approach to the many pathologies, including cancer, that result from its malfunctioning. In this paper, we propose an approach to study mitotic progression automatically using deep learning. We extracted video sequences of cells undergoing division and trained a recurrent neural network (RNN) to extract image features and predict the different mitosis stages. The use of the RNN enabled better extraction of features: the RNN-based approach outperformed classifier-based feature extraction methods that do not use time information. Evaluation of precision, recall, and F-score indicates the superiority of the proposed model over the baseline. To study the loss in performance due to confusion between adjacent classes, we also examined the confusion matrix. In addition, we visualized the feature space to understand why RNNs classify the mitosis stages better than other classifier models; the formation of strong clusters for the different classes clearly confirms the advantage of the proposed RNN-based approach.
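A hypothetical sketch of the sequence-classification idea described above: an LSTM that assigns a mitosis stage to every frame from a sequence of per-frame feature vectors. The feature dimension, hidden size, and number of stages are assumptions, not the authors' settings.

```python
# Illustrative frame-wise stage classifier built on an LSTM over per-frame features.
import torch
import torch.nn as nn

class StageRNN(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_stages=5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_stages)   # one stage label per frame

    def forward(self, feats):                 # feats: (batch, time, feat_dim)
        out, _ = self.lstm(feats)
        return self.classifier(out)           # (batch, time, n_stages)

model = StageRNN()
seq = torch.randn(4, 30, 512)                 # 4 toy sequences of 30 frames
logits = model(seq)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5), torch.randint(0, 5, (4 * 30,)))
```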
PMID:38466708 | DOI:10.1371/journal.pone.0297356
MENet: A Mitscherlich function based ensemble of CNN models to classify lung cancer using CT scans
PLoS One. 2024 Mar 11;19(3):e0298527. doi: 10.1371/journal.pone.0298527. eCollection 2024.
ABSTRACT
Lung cancer is one of the leading causes of cancer-related deaths worldwide. To reduce the mortality rate, early detection and proper treatment should be ensured. Computer-aided diagnosis methods analyze different modalities of medical images to increase diagnostic precision. In this paper, we propose an ensemble model, called the Mitscherlich function-based Ensemble Network (MENet), which combines the prediction probabilities obtained from three deep learning models, namely Xception, InceptionResNetV2, and MobileNetV2, to improve the accuracy of lung cancer prediction. The ensemble approach is based on the Mitscherlich function, which produces a fuzzy rank to combine the outputs of the base classifiers. The proposed method is trained and tested on two publicly available lung cancer datasets, the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) dataset and LIDC-IDRI, both of which are computed tomography (CT) scan datasets. Results on standard metrics show that the proposed method performs better than state-of-the-art methods. The code for the proposed work is available at https://github.com/SuryaMajumder/MENet.
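A rough sketch of fuzzy-rank fusion in the spirit of the abstract; the Mitscherlich-type transform a(1 - b·exp(-c·p)) and the sum-then-argmin fusion rule used here are assumptions and will differ in detail from the exact MENet formulation.

```python
# Illustrative fuzzy-rank fusion of several classifiers' probability outputs.
import numpy as np

def mitscherlich(p, a=1.0, b=1.0, c=2.0):
    """Mitscherlich-type growth curve a*(1 - b*exp(-c*p)) applied to class probabilities."""
    return a * (1.0 - b * np.exp(-c * p))

def fuzzy_rank_fusion(prob_list):
    """prob_list: list of (n_samples, n_classes) probability arrays, one per base CNN."""
    # Lower fused score = better; subtracting the transform makes confident classes small.
    scores = [1.0 - mitscherlich(p) for p in prob_list]
    fused = np.sum(scores, axis=0)
    return np.argmin(fused, axis=1)           # predicted class per sample

# Toy outputs from three base classifiers (e.g. Xception, InceptionResNetV2, MobileNetV2).
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=8) for _ in range(3)]
print(fuzzy_rank_fusion(probs))
```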
PMID:38466701 | DOI:10.1371/journal.pone.0298527
Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images
PLoS One. 2024 Mar 11;19(3):e0299545. doi: 10.1371/journal.pone.0299545. eCollection 2024.
ABSTRACT
Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications. However, previous methods for detecting shoulder abnormalities on X-ray images have suffered from poor performance and a lack of transparency, owing to limited training data and inadequate feature representation. This often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework is proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) in X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate the ImageNet domain mismatch, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves pre-training models on a large number of labelled X-ray images from various body parts and fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several machine learning (ML) classifiers. The proposed framework achieved an excellent accuracy of 99.2%, an F1 score of 99.2%, and a Cohen's kappa of 98.5%. Furthermore, the results were validated using three visualisation tools: gradient-weighted class activation mapping (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods as well as three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The framework has proven effective and robust, improving generalisation and increasing trust in the final results.
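An illustrative feature-fusion sketch only (not the paper's pipeline): features from two pretrained backbones are concatenated and fed to a classical classifier. The choice of backbones, the linear SVM, and the toy data are assumptions.

```python
# Concatenate deep features from two CNN backbones and train a classical classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet50, densenet121
from sklearn.svm import SVC

backbones = [resnet50(weights=None), densenet121(weights=None)]
backbones[0].fc = nn.Identity()            # ResNet-50 -> 2048-d features
backbones[1].classifier = nn.Identity()    # DenseNet-121 -> 1024-d features
for m in backbones:
    m.eval()

@torch.no_grad()
def fused_features(x):
    return torch.cat([m(x) for m in backbones], dim=1).numpy()

x = torch.randn(16, 3, 224, 224)                      # stand-in shoulder X-rays
y = np.array([0, 1] * 8)                              # toy labels: normal vs. abnormal
clf = SVC(kernel="linear").fit(fused_features(x), y)  # ML classifier on fused features
print(clf.score(fused_features(x), y))
```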
PMID:38466693 | DOI:10.1371/journal.pone.0299545
Physics-informed Deep Learning for Muscle Force Prediction with Unlabeled sEMG Signals
IEEE Trans Neural Syst Rehabil Eng. 2024 Mar 11;PP. doi: 10.1109/TNSRE.2024.3375320. Online ahead of print.
ABSTRACT
Computational biomechanical analysis plays a pivotal role in understanding and improving human movements and physical functions. Although physics-based modeling methods can interpret the dynamic interaction between the neural drive to muscle, muscle dynamics, and joint kinematics, they suffer from high computational latency. In recent years, data-driven methods have emerged as a promising alternative due to their fast execution speed, but label information is still required during training, which is not easy to acquire in practice. To tackle these issues, this paper presents a novel physics-informed deep learning method to predict muscle forces without any label information during model training. The proposed method can also identify personalized muscle-tendon parameters. To achieve this, Hill-muscle-model-based forward dynamics is embedded into the deep neural network as an additional loss to further regulate its behavior. Experimental validations on the wrist joint of six healthy subjects are performed, and a fully connected neural network (FNN) is selected to implement the proposed method. The predicted muscle forces show comparable or even lower root mean square error (RMSE) and a higher coefficient of determination than baseline methods, which require labeled surface electromyography (sEMG) signals, and the method also identifies muscle-tendon parameters accurately, demonstrating the effectiveness of the proposed physics-informed deep learning approach.
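A minimal sketch of the physics-informed idea, assuming a grossly simplified placeholder in place of the full Hill-type muscle model: the network's force prediction is penalized by the residual of a physics relation, so no measured force labels are needed. All constants, inputs, and the placeholder muscle model are assumptions.

```python
# Label-free training signal from a (placeholder) physics residual.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))  # sEMG + kinematics -> force

def hill_force(activation, f_max=500.0):
    # Placeholder "forward dynamics": force proportional to activation (NOT the real Hill model).
    return f_max * activation

emg = torch.rand(32, 1)                     # normalized sEMG envelope (activation proxy)
kin = torch.rand(32, 3)                     # joint angle, velocity, etc.
pred_force = net(torch.cat([emg, kin], dim=1))
physics_residual = pred_force - hill_force(emg)
loss = (physics_residual ** 2).mean()       # physics loss replaces a label-based loss
loss.backward()
```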
PMID:38466606 | DOI:10.1109/TNSRE.2024.3375320
Permutation Equivariant Graph Framelets for Heterophilous Graph Learning
IEEE Trans Neural Netw Learn Syst. 2024 Mar 11;PP. doi: 10.1109/TNNLS.2024.3370918. Online ahead of print.
ABSTRACT
The nature of heterophilous graphs differs significantly from that of homophilous graphs, which causes difficulties for early graph neural network (GNN) models and suggests aggregations beyond the one-hop neighborhood. In this article, we develop a new way to implement multiscale extraction by constructing Haar-type graph framelets with the desired properties of permutation equivariance, efficiency, and sparsity for deep learning tasks on graphs. We further design a graph framelet neural network model, the permutation-equivariant graph framelet augmented network (PEGFAN), based on our constructed graph framelets. Experiments are conducted on a synthetic dataset and nine benchmark datasets to compare performance with other state-of-the-art models. The results show that our model achieves the best performance on certain heterophilous graph datasets (including the majority of heterophilous datasets with relatively larger sizes and denser connections) and competitive performance on the remaining datasets.
PMID:38466605 | DOI:10.1109/TNNLS.2024.3370918
Learning Rates of Deep Nets for Geometrically Strongly Mixing Sequence
IEEE Trans Neural Netw Learn Syst. 2024 Mar 11;PP. doi: 10.1109/TNNLS.2024.3371025. Online ahead of print.
ABSTRACT
The great success of deep learning poses an urgent challenge to establish the theoretical basis of its working mechanism. Recently, research on the convergence of deep neural networks (DNNs) has made great progress. However, existing studies are based on the assumption that the samples are independent, which is too strong to hold in many real-world scenarios. In this brief, we establish a fast learning rate for empirical risk minimization (ERM) in DNN regression with dependent samples, where the dependence is expressed in terms of a geometrically strongly mixing sequence. To the best of our knowledge, this is the first convergence result for DNN methods based on mixing sequences. This result is a natural generalization of the independent-sample case.
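For readers unfamiliar with the dependence condition, one common formalization of a geometrically strongly (α-)mixing stationary sequence {Z_t} is sketched below; the brief's exact definition, norm, and constants may differ.

```latex
% Strong (alpha-)mixing coefficient and geometric decay condition (one common variant).
\alpha(n) \;=\; \sup_{k \ge 1}\;
  \sup_{A \in \sigma(Z_1,\dots,Z_k),\; B \in \sigma(Z_{k+n}, Z_{k+n+1},\dots)}
  \bigl|\, \mathbb{P}(A \cap B) - \mathbb{P}(A)\,\mathbb{P}(B) \,\bigr|,
\qquad
\alpha(n) \;\le\; c \, \exp\!\bigl(-b\, n^{\gamma}\bigr)
\quad \text{for some } b, c, \gamma > 0 .
```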
PMID:38466602 | DOI:10.1109/TNNLS.2024.3371025
CollaPPI: A Collaborative Learning Framework for Predicting Protein-Protein Interactions
IEEE J Biomed Health Inform. 2024 Mar 11;PP. doi: 10.1109/JBHI.2024.3375621. Online ahead of print.
ABSTRACT
Exploring protein-protein interactions (PPIs) is of paramount importance for elucidating the intrinsic mechanisms of various biological processes. Nevertheless, experimental determination of PPIs can be both time-consuming and expensive, motivating the exploration of data-driven deep learning technologies as a viable, efficient, and accurate alternative. Nonetheless, when extracting PPI features, most current deep learning-based methods treat the pair of proteins to be predicted as two separate entities, thus neglecting knowledge sharing between the collaborative protein and the target protein. To address this issue, a collaborative learning framework, CollaPPI, was proposed in this study, in which two kinds of collaboration, i.e., protein-level collaboration and task-level collaboration, were incorporated to achieve not only knowledge sharing between a pair of proteins, but also the complementation of such shared knowledge across biological domains closely related to PPI (i.e., protein function and subcellular location). Evaluation results demonstrated that CollaPPI achieved superior performance compared with state-of-the-art methods on two PPI benchmarks. Moreover, CollaPPI's results on an additional PPI type prediction task further proved its excellent generalization ability.
PMID:38466584 | DOI:10.1109/JBHI.2024.3375621
Assessing the Impact of Urban Environments on Mental Health and Perception Using Deep Learning: A Review and Text Mining Analysis
J Urban Health. 2024 Mar 11. doi: 10.1007/s11524-024-00830-6. Online ahead of print.
ABSTRACT
Understanding how outdoor environments affect mental health outcomes is vital in today's fast-paced and urbanized society. Recently, advancements in data-gathering technologies and deep learning have facilitated the study of the relationship between the outdoor environment and human perception. In this systematic review, we investigate how deep learning techniques can contribute to a better understanding of the influence of outdoor environments on human perceptions and emotions, with an emphasis on mental health outcomes. We systematically reviewed 40 articles indexed in the Scopus and Web of Science databases and published between 2016 and 2023. The study presents and utilizes a novel topic modeling method to identify coherent keywords. By extracting the top words of each research topic, we classify current studies into three areas. The first topic, "Urban Perception and Environmental Factors," comprises studies that evaluate perceptions and mental health outcomes; within this topic, studies were grouped by their focus on human emotions, mood, stress, and the impacts of urban features. The second topic, "Data Analysis and Urban Imagery in Modeling," focuses on refining deep learning techniques, data collection methods, and participant variability to understand human perceptions more accurately. The third topic, "Greenery and Visual Exposure in Urban Spaces," focuses on the impact of the amount of, and exposure to, green features on mental health and perceptions. This review provides a guide for subsequent research on using deep learning techniques to understand how urban environments influence mental health, and offers suggestions that should be taken into account when planning outdoor spaces.
PMID:38466494 | DOI:10.1007/s11524-024-00830-6
Artificial intelligence-assisted double reading of chest radiographs to detect clinically relevant missed findings: a two-centre evaluation
Eur Radiol. 2024 Mar 11. doi: 10.1007/s00330-024-10676-w. Online ahead of print.
ABSTRACT
OBJECTIVES: To evaluate an artificial intelligence (AI)-assisted double reading system for detecting clinically relevant missed findings on routinely reported chest radiographs.
METHODS: A retrospective study was performed in two institutions, a secondary care hospital and a tertiary referral oncology centre. Commercially available AI software performed a comparative analysis of chest radiographs and radiologists' authorised reports using deep learning and natural language processing algorithms, respectively. The AI-detected discrepant findings between images and reports were assessed for clinical relevance by an external radiologist, as part of the commercial service provided by the AI vendor. The selected missed findings were subsequently returned to the institution's radiologist for final review.
RESULTS: In total, 25,104 chest radiographs of 21,039 patients (mean age 61.1 years ± 16.2 [SD]; 10,436 men) were included. The AI software detected discrepancies between imaging and reports in 21.1% (5289 of 25,104). After review by the external radiologist, 0.9% (47 of 5289) of cases were deemed to contain clinically relevant missed findings. The institution's radiologists confirmed 35 of 47 missed findings (74.5%) as clinically relevant (0.1% of all cases). Missed findings consisted of lung nodules (71.4%, 25 of 35), pneumothoraces (17.1%, 6 of 35) and consolidations (11.4%, 4 of 35).
CONCLUSION: The AI-assisted double reading system was able to identify missed findings on chest radiographs after report authorisation. The approach required an external radiologist to review the AI-detected discrepancies. The number of clinically relevant missed findings by radiologists was very low.
CLINICAL RELEVANCE STATEMENT: The AI-assisted double reader workflow was shown to detect diagnostic errors and could be applied as a quality assurance tool. Although clinically relevant missed findings were rare, there is potential impact given the common use of chest radiography.
KEY POINTS: • A commercially available double reading system supported by artificial intelligence was evaluated to detect reporting errors in chest radiographs (n=25,104) from two institutions. • Clinically relevant missed findings were found in 0.1% of chest radiographs and consisted of unreported lung nodules, pneumothoraces and consolidations. • Applying AI software as a secondary reader after report authorisation can assist in reducing diagnostic errors without interrupting the radiologist's reading workflow. However, the number of AI-detected discrepancies was considerable and required review by a radiologist to assess their relevance.
PMID:38466390 | DOI:10.1007/s00330-024-10676-w
Trends and Hotspots in Global Radiomics Research: A Bibliometric Analysis
Technol Cancer Res Treat. 2024 Jan-Dec;23:15330338241235769. doi: 10.1177/15330338241235769.
ABSTRACT
Objectives: The purpose of this research is to summarize the structure of radiomics-based knowledge and to explore potential trends and priorities using bibliometric analysis. Methods: Radiomics-related publications from 2012 to October 2022 were selected from the Web of Science Core Collection. VOSviewer (version 1.6.18), CiteSpace (version 6.1.3), Tableau (version 2022), Microsoft Excel, RStudio, and the free online platform http://bibliometric.com were used for co-authorship, co-citation, and co-occurrence analyses of countries, institutions, authors, references, and keywords in the field, together with visual analyses. Results: The study included 6428 articles. Since 2012, there has been an increase in research papers based on radiomics. Judging by publications, China has made the largest contribution in this area. The most productive institution and author were Fudan University and Tianjie, respectively. The top three journals with the most publications were Frontiers in Oncology, European Radiology, and Cancers. According to the reference and keyword analyses, "deep learning," "nomogram," "ultrasound," "F-18-FDG," "machine learning," "COVID-19," and "radiogenomics" were identified as the main future research directions. Conclusion: Radiomics is in a phase of vigorous development with broad prospects. Cross-border cooperation between countries and institutions should be strengthened in the future. It can be predicted that the development of deep learning-based models and multimodal fusion models will be the focus of future research. Advances in knowledge: This study explores the current state of research and hot spots in the field of radiomics from multiple perspectives, comprehensively and objectively reflecting the evolving trends in imaging-related research and providing a reference for future research.
PMID:38465611 | DOI:10.1177/15330338241235769
Deep Learning-Based Spermatogenic Staging in Tissue Sections of Cynomolgus Macaque Testes
Toxicol Pathol. 2024 Mar 11:1926233241234059. doi: 10.1177/01926233241234059. Online ahead of print.
ABSTRACT
The indirect assessment of adverse effects on fertility in cynomolgus monkeys requires that tissue sections of the testis be microscopically evaluated with awareness of the stage of spermatogenesis that a particular cross-section of a seminiferous tubule is in. This difficult and subjective task could benefit greatly from automation. Using digital whole slide images (WSIs) of testis tissue sections, we have developed a deep learning model that can annotate the stage of each tubule with high sensitivity, precision, and accuracy. The model was validated on six WSIs using a six-stage spermatogenic classification system. Whole slide images contained an average of 4938 seminiferous tubule cross-sections. On average, 78% of these tubules were staged, with 29% in stage I-IV, 12% in stage V-VI, 4% in stage VII, 19% in stage VIII-IX, 18% in stage X-XI, and 17% in stage XII. The deep learning model supports pathologists in conducting a stage-aware evaluation of the testis. It also allows derivation of a stage-frequency map. The diagnostic value of this stage-frequency map is still unclear, as further data on its variability and relevance need to be generated for testes with spermatogenic disturbances.
PMID:38465599 | DOI:10.1177/01926233241234059
Association of Blood Pressure With Brain Ages: A Cohort Study of Gray and White Matter Aging Discrepancy in Mid-to-Older Adults From UK Biobank
Hypertension. 2024 Mar 11. doi: 10.1161/HYPERTENSIONAHA.123.22176. Online ahead of print.
ABSTRACT
BACKGROUND: Gray matter (GM) and white matter (WM) impairments are both associated with raised blood pressure (BP), although whether elevated BP is differentially associated with the GM and WM aging process remains inadequately examined.
METHODS: We included 37 327 participants with diffusion-weighted imaging (DWI) and 39 630 participants with T1-weighted scans from the UK Biobank. BP was classified into four categories: normal BP, high-normal BP, grade 1 hypertension, and grade 2 hypertension. Brain age gaps (BAGs) for GM (BAGGM) and WM (BAGWM) were derived separately from the diffusion-weighted and T1-weighted scans using 3-dimensional convolutional neural network deep learning techniques.
RESULTS: Both BAGGM and BAGWM increased with raised BP (P<0.05). BAGWM was significantly larger than BAGGM at high-normal BP (0.195 years older; P=0.006), grade 1 hypertension (0.174 years older; P=0.004), and grade 2 hypertension (0.510 years older; P<0.001), but not at normal BP. Mediation analysis revealed that the association between hypertension and cognitive decline was primarily mediated by WM impairment. Mendelian randomization analysis suggested a causal relationship between hypertension and WM aging acceleration (unstandardized B, 1.780; P=0.016) but not GM aging (P>0.05). Sliding-window analysis indicated that the association between hypertension and brain aging acceleration was moderated by chronological age, with stronger associations in midlife and weaker associations at older ages.
CONCLUSIONS: Compared with GM, WM was more vulnerable to raised BP. Our study provided compelling evidence that concerted efforts should be directed towards WM damage in individuals with hypertension in clinical practice.
PMID:38465593 | DOI:10.1161/HYPERTENSIONAHA.123.22176
WATUNet: a deep neural network for segmentation of volumetric sweep imaging ultrasound
Mach Learn Sci Technol. 2024 Mar 1;5(1):015042. doi: 10.1088/2632-2153/ad2e15. Epub 2024 Mar 8.
ABSTRACT
Limited access to breast cancer diagnosis globally leads to delayed treatment. Ultrasound, an effective yet underutilized method, requires specialized training for sonographers, which hinders its widespread use. Volume sweep imaging (VSI) is an innovative approach that enables untrained operators to capture high-quality ultrasound images. Combined with deep learning, like convolutional neural networks, it can potentially transform breast cancer diagnosis, enhancing accuracy, saving time and costs, and improving patient outcomes. The widely used UNet architecture, known for medical image segmentation, has limitations, such as vanishing gradients and a lack of multi-scale feature extraction and selective region attention. In this study, we present a novel segmentation model known as Wavelet_Attention_UNet (WATUNet). In this model, we incorporate wavelet gates and attention gates between the encoder and decoder instead of a simple connection to overcome the limitations mentioned, thereby improving model performance. Two datasets are utilized for the analysis: the public 'Breast Ultrasound Images' dataset of 780 images and a private VSI dataset of 3818 images, captured at the University of Rochester by the authors. Both datasets contained segmented lesions categorized into three types: no mass, benign mass, and malignant mass. Our segmentation results show superior performance compared to other deep networks. The proposed algorithm attained a Dice coefficient of 0.94 and an F1 score of 0.94 on the VSI dataset and scored 0.93 and 0.94 on the public dataset, respectively. Moreover, our model significantly outperformed other models in McNemar's test with false discovery rate correction on a 381-image VSI set. The experimental findings demonstrate that the proposed WATUNet model achieves precise segmentation of breast lesions in both standard-of-care and VSI images, surpassing state-of-the-art models. Hence, the model holds considerable promise for assisting in lesion identification, an essential step in the clinical diagnosis of breast lesions.
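A generic attention-gate sketch in the spirit of attention-gated U-Nets, showing how a decoder signal can re-weight encoder skip features; WATUNet's actual gates (including the wavelet gates) are more involved, so treat this purely as an illustration with assumed channel sizes.

```python
# Illustrative attention gate that modulates an encoder skip connection.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weights encoder skip features using a gating signal from the decoder."""
    def __init__(self, enc_ch, dec_ch, inter_ch):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_ch, inter_ch, kernel_size=1)
        self.w_dec = nn.Conv2d(dec_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, enc_feat, dec_feat):
        attn = self.psi(torch.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
        return enc_feat * attn                 # suppress irrelevant regions in the skip path

gate = AttentionGate(enc_ch=64, dec_ch=64, inter_ch=32)
enc = torch.randn(1, 64, 128, 128)             # encoder skip-connection features
dec = torch.randn(1, 64, 128, 128)             # upsampled decoder features (same size here)
print(gate(enc, dec).shape)
```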
PMID:38464559 | PMC:PMC10921088 | DOI:10.1088/2632-2153/ad2e15