Deep learning

Correction to: Feasibility study of deep-learning-based bone suppression incorporated with single-energy material decomposition technique in chest X-rays

Tue, 2024-07-02 06:00

Br J Radiol. 2024 Jul 2:tqae120. doi: 10.1093/bjr/tqae120. Online ahead of print.

NO ABSTRACT

PMID:38954833 | DOI:10.1093/bjr/tqae120

Categories: Literature Watch

Enhancing Spatial Resolution in Tandem Mass Spectrometry Ion/Ion Reaction Imaging Experiments through Image Fusion

Tue, 2024-07-02 06:00

J Am Soc Mass Spectrom. 2024 Jul 2. doi: 10.1021/jasms.4c00144. Online ahead of print.

ABSTRACT

We have recently developed a charge inversion ion/ion reaction to selectively derivatize phosphatidylserine lipids via gas-phase Schiff base formation. This tandem mass spectrometry (MS/MS) workflow enables the separation and detection of isobaric lipids in imaging mass spectrometry, but the images acquired using this workflow are limited to relatively poor spatial resolutions due to the current time and limit of detection requirements for these ion/ion reaction imaging mass spectrometry experiments. This trade-off between chemical specificity and spatial resolution can be overcome by using computational image fusion, which combines complementary information from multiple images. Herein, we demonstrate a proof-of-concept workflow that fuses a low spatial resolution (i.e., 125 μm) ion/ion reaction product ion image with higher spatial resolution (i.e., 25 μm) ion images from a full scan experiment performed using the same tissue section, which results in a predicted ion/ion reaction product ion image with a 5-fold improvement in spatial resolution. Linear regression, random forest regression, and two-dimensional convolutional neural network (2-D CNN) predictive models were tested for this workflow. Linear regression and 2D CNN models proved optimal for predicted ion/ion images of PS 40:6 and SHexCer d38:1, respectively.
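
As a rough illustration of the fusion step, the sketch below (with synthetic arrays standing in for real ion images) fits a per-pixel linear regression from downsampled high-resolution full-scan intensities to the low-resolution ion/ion product image, then applies it at full resolution; the 5x factor mirrors the abstract, but all names, shapes, and preprocessing are assumptions.

```python
# A minimal sketch of regression-based image fusion on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical inputs: K full-scan ion images at 25 um (H x W) and one
# ion/ion product image at 125 um (H//5 x W//5), same tissue section.
H, W, K = 100, 100, 8
highres_stack = rng.random((K, H, W))
lowres_target = rng.random((H // 5, W // 5))

# Downsample the high-res stack to the low-res grid (5x5 block means)
# so each low-res pixel has a feature vector of K intensities.
blocks = highres_stack.reshape(K, H // 5, 5, W // 5, 5).mean(axis=(2, 4))
X_train = blocks.reshape(K, -1).T            # (num_lowres_pixels, K)
y_train = lowres_target.ravel()

model = LinearRegression().fit(X_train, y_train)

# Apply the fitted model at full resolution: each 25 um pixel's K
# intensities predict the ion/ion product intensity at that pixel.
X_full = highres_stack.reshape(K, -1).T      # (H*W, K)
predicted = model.predict(X_full).reshape(H, W)
print(predicted.shape)                       # (100, 100): 5x finer grid
```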

PMID:38954826 | DOI:10.1021/jasms.4c00144

Categories: Literature Watch

A deep neural network prediction method for diabetes based on Kendall's correlation coefficient and attention mechanism

Tue, 2024-07-02 06:00

PLoS One. 2024 Jul 2;19(7):e0306090. doi: 10.1371/journal.pone.0306090. eCollection 2024.

ABSTRACT

Diabetes is a chronic disease characterized by abnormally high blood sugar levels. It may affect various organs and tissues, and even lead to life-threatening complications. Accurate prediction of diabetes can significantly reduce its incidence. However, current prediction methods struggle to accurately capture the essential characteristics of nonlinear data, and their black-box nature hampers clinical application. To address these challenges, we propose KCCAM_DNN, a diabetes prediction method that integrates Kendall's correlation coefficient and an attention mechanism within a deep neural network. In KCCAM_DNN, Kendall's correlation coefficient is first employed for feature selection, effectively identifying the key features influencing diabetes prediction. For missing values in the data, polynomial regression is used for imputation, ensuring data completeness. Subsequently, we construct a deep neural network based on the self-attention mechanism, which assigns greater weight to crucial features affecting diabetes and enhances the model's predictive performance. Finally, we employ the SHAP model to analyze the impact of each feature on diabetes prediction, augmenting the model's interpretability. Experimental results show that KCCAM_DNN exhibits superior performance on both the Pima Indian and LMCH diabetes datasets, achieving test accuracies of 99.090% and 99.333%, respectively, approximately 2% higher than the best existing method. These results suggest that KCCAM_DNN is proficient in diabetes prediction, providing a foundation for informed decision-making in the diagnosis and prevention of diabetes.
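
A minimal sketch of the Kendall-based feature-selection step described above, on toy tabular data; the top-k selection rule and all names are illustrative assumptions, not the authors' exact procedure.

```python
# Rank features by |Kendall tau| against the outcome, keep the top k.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
X = rng.random((500, 8))                     # 500 patients, 8 features
y = (X[:, 0] + 0.3 * rng.random(500) > 0.8).astype(int)  # toy label

taus = np.array([kendalltau(X[:, j], y)[0] for j in range(X.shape[1])])
top_k = 4
selected = np.argsort(-np.abs(taus))[:top_k]
print("selected feature indices:", selected)
X_selected = X[:, selected]                  # input to the downstream DNN
```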

PMID:38954714 | DOI:10.1371/journal.pone.0306090

Categories: Literature Watch

Dynamic 3D Point Cloud Sequences as 2D Videos

Tue, 2024-07-02 06:00

IEEE Trans Pattern Anal Mach Intell. 2024 Jul 2;PP. doi: 10.1109/TPAMI.2024.3421359. Online ahead of print.

ABSTRACT

Dynamic 3D point cloud sequences serve as one of the most common and practical representation modalities of dynamic real-world environments. However, their unstructured nature in both spatial and temporal domains poses significant challenges to effective and efficient processing. Existing deep point cloud sequence modeling approaches imitate the mature 2D video learning mechanisms by developing complex spatio-temporal point neighbor grouping and feature aggregation schemes, often resulting in methods lacking effectiveness, efficiency, and expressive power. In this paper, we propose a novel generic representation called Structured Point Cloud Videos (SPCVs). Intuitively, by leveraging the fact that 3D geometric shapes are essentially 2D manifolds, SPCV re-organizes a point cloud sequence as a 2D video with spatial smoothness and temporal consistency, where the pixel values correspond to the 3D coordinates of points. The structured nature of our SPCV representation allows for the seamless adaptation of well-established 2D image/video techniques, enabling efficient and effective processing and analysis of 3D point cloud sequences. To achieve such re-organization, we design a self-supervised learning pipeline that is geometrically regularized and driven by self-reconstructive and deformation field learning objectives. Additionally, we construct SPCV-based frameworks for both low-level and high-level 3D point cloud sequence processing and analysis tasks, including action recognition, temporal interpolation, and compression. Extensive experiments demonstrate the versatility and superiority of the proposed SPCV, which has the potential to offer new possibilities for deep learning on unstructured 3D point cloud sequences. Code will be released at https://github.com/ZENGYIMING-EAMON/SPCV.
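
As a toy illustration of the representation itself (pixel values are 3D point coordinates), the sketch below rasterizes one point cloud frame onto a 2D grid via naive spherical-angle binning; SPCV instead learns this re-organization with a self-supervised pipeline, so this is only a conceptual stand-in.

```python
# Store a point cloud frame as an H x W "image" of 3D coordinates.
import numpy as np

def points_to_grid(points: np.ndarray, H: int = 32, W: int = 64) -> np.ndarray:
    """Rasterize an (N, 3) point cloud into an (H, W, 3) coordinate image."""
    centered = points - points.mean(axis=0)
    x, y, z = centered.T
    r = np.linalg.norm(centered, axis=1) + 1e-9
    theta = np.arccos(np.clip(z / r, -1, 1))     # polar angle in [0, pi]
    phi = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    rows = np.clip((theta / np.pi * H).astype(int), 0, H - 1)
    cols = np.clip(((phi + np.pi) / (2 * np.pi) * W).astype(int), 0, W - 1)
    grid = np.zeros((H, W, 3))
    grid[rows, cols] = centered                  # last point in a cell wins
    return grid

frame = np.random.default_rng(0).normal(size=(2048, 3))
video_frame = points_to_grid(frame)              # one 2D "video" frame
print(video_frame.shape)                         # (32, 64, 3)
```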

PMID:38954587 | DOI:10.1109/TPAMI.2024.3421359

Categories: Literature Watch

Uni4Eye++: A General Masked Image Modeling Multi-modal Pre-training Framework for Ophthalmic Image Classification and Segmentation

Tue, 2024-07-02 06:00

IEEE Trans Med Imaging. 2024 Jul 2;PP. doi: 10.1109/TMI.2024.3422102. Online ahead of print.

ABSTRACT

A large-scale labeled dataset is a key factor for the success of supervised deep learning in most ophthalmic image analysis scenarios. However, limited annotated data is very common in ophthalmic image analysis, since manual annotation is time-consuming and labor-intensive. Self-supervised learning (SSL) methods bring huge opportunities for better utilizing unlabeled data, as they do not require massive annotations. To utilize as many unlabeled ophthalmic images as possible, it is necessary to break the dimension barrier, simultaneously making use of both 2D and 3D images as well as alleviating the issue of catastrophic forgetting. In this paper, we propose a universal self-supervised Transformer framework named Uni4Eye++ to discover the intrinsic image characteristics and capture domain-specific feature embeddings in ophthalmic images. Uni4Eye++ can serve as a global feature extractor, which builds its basis on a Masked Image Modeling task with a Vision Transformer architecture. On the basis of our previous work Uni4Eye, we further employ an image-entropy-guided masking strategy to reconstruct more informative patches and a dynamic head generator module to alleviate modality confusion. We evaluate the performance of our pre-trained Uni4Eye++ encoder by fine-tuning it on multiple downstream ophthalmic image classification and segmentation tasks. The superiority of Uni4Eye++ is successfully established through comparisons to other state-of-the-art SSL pre-training methods. Our code is available on GitHub.
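
A minimal sketch of an entropy-guided masking strategy on a toy grayscale image: per-patch Shannon entropy ranks the patches, and the highest-entropy (most informative) patches are selected for masking. Patch size, bin count, and mask ratio are assumptions, not Uni4Eye++'s settings.

```python
# Rank image patches by Shannon entropy and mask the most informative ones.
import numpy as np

def patch_entropy(patch: np.ndarray, bins: int = 32) -> float:
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img = rng.random((224, 224))                    # toy ophthalmic image in [0, 1]
ps = 16                                         # patch size -> 14 x 14 patches
patches = img.reshape(14, ps, 14, ps).swapaxes(1, 2).reshape(-1, ps, ps)
entropies = np.array([patch_entropy(p) for p in patches])

mask_ratio = 0.75
n_mask = int(mask_ratio * len(patches))
masked_idx = np.argsort(-entropies)[:n_mask]    # mask high-entropy patches
print("masking", len(masked_idx), "of", len(patches), "patches")
```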

PMID:38954581 | DOI:10.1109/TMI.2024.3422102

Categories: Literature Watch

Semi-Supervised Multimodal Representation Learning Through a Global Workspace

Tue, 2024-07-02 06:00

IEEE Trans Neural Netw Learn Syst. 2024 Jul 2;PP. doi: 10.1109/TNNLS.2024.3416701. Online ahead of print.

ABSTRACT

Recent deep learning models can efficiently combine inputs from different modalities (e.g., images and text) and learn to align their latent representations or to translate signals from one domain to another (as in image captioning or text-to-image generation). However, current approaches mainly rely on brute-force supervised training over large multimodal datasets. In contrast, humans (and other animals) can learn useful multimodal representations from only sparse experience with matched cross-modal data. Here, we evaluate the capabilities of a neural network architecture inspired by the cognitive notion of a "global workspace" (GW): a shared representation for two (or more) input modalities. Each modality is processed by a specialized system (pretrained on unimodal data and subsequently frozen). The corresponding latent representations are then encoded to and decoded from a single shared workspace. Importantly, this architecture is amenable to self-supervised training via cycle-consistency: encoding-decoding sequences should approximate the identity function. For various pairings of vision-language modalities and across two datasets of varying complexity, we show that such an architecture can be trained to align and translate between two modalities with very little need for matched data (from four to seven times less than a fully supervised approach). The GW representation can be used advantageously for downstream classification and cross-modal retrieval tasks and for robust transfer learning. Ablation studies reveal that both the shared workspace and the self-supervised cycle-consistency training are critical to the system's performance.
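
A minimal sketch of the cycle-consistency objective, assuming frozen unimodal encoders that already yield latent vectors: the workspace encoders/decoders are trained so that encode-decode round trips approximate the identity. Dimensions and module names are illustrative.

```python
# Cycle-consistency training for a toy two-modality global workspace.
import torch
import torch.nn as nn

d_mod, d_gw = 64, 32                       # unimodal latent dim, workspace dim
enc_v, dec_v = nn.Linear(d_mod, d_gw), nn.Linear(d_gw, d_mod)  # vision side
enc_t, dec_t = nn.Linear(d_mod, d_gw), nn.Linear(d_gw, d_mod)  # text side

z_v = torch.randn(16, d_mod)               # batch of frozen vision latents
z_t = torch.randn(16, d_mod)               # batch of frozen text latents

# Demi-cycle: within one modality, decode(encode(z)) ~ z (needs no pairs).
loss_cycle = nn.functional.mse_loss(dec_v(enc_v(z_v)), z_v) \
           + nn.functional.mse_loss(dec_t(enc_t(z_t)), z_t)

# Full cycle: translate vision -> text -> vision and recover the input.
loss_full = nn.functional.mse_loss(dec_v(enc_t(dec_t(enc_v(z_v)))), z_v)

loss = loss_cycle + loss_full
loss.backward()                            # gradients reach workspace modules only
```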

PMID:38954575 | DOI:10.1109/TNNLS.2024.3416701

Categories: Literature Watch

Multiview Deep Learning-based Efficient Medical Data Management for Survival Time Forecasting

Tue, 2024-07-02 06:00

IEEE J Biomed Health Inform. 2024 Jul 2;PP. doi: 10.1109/JBHI.2024.3422180. Online ahead of print.

ABSTRACT

In recent years, data-driven remote medical management has received much attention, especially for survival time forecasting. By monitoring patients' physical characteristic indexes, intelligent algorithms can be deployed to implement efficient healthcare management. However, such purely medical data-driven scenarios generally lack multimedia information, which poses challenges for analysis tasks. To deal with this issue, this paper introduces the idea of ensemble deep learning to strengthen feature representation and thus improve knowledge discovery in remote healthcare management. We propose a multiview deep learning-based efficient medical data management framework for survival time forecasting, named "MDL-MDM" for short. First, basic monitoring data on patients' body indexes are encoded, serving as the data foundation for forecasting tasks. Then, three different neural network models, a convolutional neural network, a graph attention network, and a graph convolutional network, are combined into a hybrid computing framework, yielding a multiview feature learning scheme for efficient medical data management. In addition, experiments are conducted on a real-world medical dataset of cancer patients in the US. Results show that the proposal predicts survival time with a 1% to 2% reduction in prediction error.
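
A minimal sketch of multiview feature fusion for survival-time regression, with simple MLP branches standing in for the paper's CNN/GAT/GCN models (the graph networks would normally require a library such as torch_geometric); the architecture below is an assumption for illustration.

```python
# Three parallel "view" branches whose features are fused for regression.
import torch
import torch.nn as nn

class MultiViewSurvival(nn.Module):
    def __init__(self, in_dim: int = 32, hid: int = 64):
        super().__init__()
        # One branch per view of the encoded monitoring data.
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU()) for _ in range(3)]
        )
        self.head = nn.Linear(3 * hid, 1)   # fused features -> survival time

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=-1)
        return self.head(feats).squeeze(-1)

model = MultiViewSurvival()
x = torch.randn(8, 32)                      # 8 patients, 32 encoded indexes
print(model(x).shape)                       # torch.Size([8])
```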

PMID:38954570 | DOI:10.1109/JBHI.2024.3422180

Categories: Literature Watch

PSTNet: Enhanced Polyp Segmentation With Multi-Scale Alignment and Frequency Domain Integration

Tue, 2024-07-02 06:00

IEEE J Biomed Health Inform. 2024 Jul 2;PP. doi: 10.1109/JBHI.2024.3421550. Online ahead of print.

ABSTRACT

Accurate segmentation of colorectal polyps in colonoscopy images is crucial for effective diagnosis and management of colorectal cancer (CRC). However, current deep learning-based methods rely primarily on fusing RGB information across multiple scales, limiting their ability to identify polyps accurately because of restricted RGB-domain information and feature misalignment during multi-scale aggregation. To address these limitations, we propose the Polyp Segmentation Network with Shunted Transformer (PSTNet), a novel approach that integrates both RGB and frequency domain cues present in the images. PSTNet comprises three key modules: the Frequency Characterization Attention Module (FCAM) for extracting frequency cues and capturing polyp characteristics, the Feature Supplementary Alignment Module (FSAM) for aligning semantic information and reducing misalignment noise, and the Cross Perception Localization Module (CPM) for synergizing frequency cues with high-level semantics to achieve efficient polyp segmentation. Extensive experiments on challenging datasets demonstrate PSTNet's significant improvement in polyp segmentation accuracy across various metrics, consistently outperforming state-of-the-art methods. The integration of frequency domain cues and the novel architectural design of PSTNet contribute to advancing computer-assisted polyp segmentation, facilitating more accurate diagnosis and management of CRC. Our source code is available for reference at https://github.com/clearxu/PSTNet.
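
One plausible form of a frequency-domain module is sketched below: a learnable filter re-weights the feature map's rFFT bins before transforming back to the spatial domain. This is an illustrative stand-in, not PSTNet's actual FCAM.

```python
# Learnable spectral re-weighting of a feature map via torch.fft.
import torch
import torch.nn as nn

class SpectralFilter(nn.Module):
    def __init__(self, channels: int, h: int, w: int):
        super().__init__()
        # One learnable weight per rFFT frequency bin and channel.
        self.weight = nn.Parameter(torch.ones(channels, h, w // 2 + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(x, norm="ortho")      # complex (B, C, H, W//2+1)
        spec = spec * self.weight                    # re-weight frequency bins
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

x = torch.randn(2, 16, 64, 64)                       # toy polyp feature map
out = SpectralFilter(16, 64, 64)(x)
print(out.shape)                                     # torch.Size([2, 16, 64, 64])
```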

PMID:38954569 | DOI:10.1109/JBHI.2024.3421550

Categories: Literature Watch

Cross-Anatomy Transfer Learning via Shape- Aware Adaptive Fine-Tuning for 3D Vessel Segmentation

Tue, 2024-07-02 06:00

IEEE J Biomed Health Inform. 2024 Jul 2;PP. doi: 10.1109/JBHI.2024.3422177. Online ahead of print.

ABSTRACT

Deep learning methods have recently achieved remarkable performance in vessel segmentation applications, yet they require large amounts of labor-intensive labeled data. To alleviate the requirement of manual annotation, transfer learning methods can potentially be used to acquire related knowledge of tubular structures from public large-scale labeled vessel datasets for target vessel segmentation in other anatomic sites of the human body. However, cross-anatomy domain shift is challenging because of the formidable discrepancy among vessel structures in different anatomies, which limits the performance of transfer learning. Therefore, we propose a cross-anatomy transfer learning framework for 3D vessel segmentation, which first generates a pre-trained model on a public hepatic vessel dataset and then adaptively fine-tunes our target segmentation network initialized from the model for segmentation of other anatomic vessels. In the framework, an adaptive fine-tuning strategy uses a proxy network to dynamically decide which filters of the target network are frozen or fine-tuned for each input sample. Moreover, we develop a Gaussian-based signed distance map that explicitly encodes vessel-specific shape context. The prediction of the map is added as an auxiliary task in the segmentation network to capture geometry-aware knowledge during fine-tuning. We demonstrate the effectiveness of our method through extensive experiments on two small-scale datasets of coronary arteries and brain vessels. The results indicate that the proposed method effectively overcomes cross-anatomy domain shift to achieve accurate vessel segmentation on these two datasets.
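
A minimal sketch of a Gaussian-weighted signed distance map computed from a binary vessel mask, as a plausible auxiliary regression target; the sigma and the exact weighting below are assumptions rather than the paper's definition.

```python
# Signed distance map from a binary mask, squashed by a Gaussian weighting.
import numpy as np
from scipy.ndimage import distance_transform_edt

def gaussian_sdm(mask: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """mask: binary (D, H, W) vessel segmentation -> values in (-1, 1)."""
    inside = distance_transform_edt(mask)            # distance to background
    outside = distance_transform_edt(1 - mask)       # distance to foreground
    signed = inside - outside                        # >0 inside, <0 outside
    return np.sign(signed) * (1.0 - np.exp(-(signed ** 2) / (2 * sigma ** 2)))

mask = np.zeros((8, 64, 64), dtype=np.uint8)
mask[:, 28:36, 28:36] = 1                            # toy tubular structure
sdm = gaussian_sdm(mask)
print(sdm.min(), sdm.max())                          # roughly -1 .. 1
```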

PMID:38954568 | DOI:10.1109/JBHI.2024.3422177

Categories: Literature Watch

The Impact of Artificial Intelligence on Allergy Diagnosis and Treatment

Tue, 2024-07-02 06:00

Curr Allergy Asthma Rep. 2024 Jul 2. doi: 10.1007/s11882-024-01152-y. Online ahead of print.

ABSTRACT

PURPOSE OF REVIEW: Artificial intelligence (AI), be it neural networks, machine learning, or deep learning, has numerous beneficial effects on healthcare systems; however, its potential applications and diagnostic capabilities for immunologic diseases have yet to be explored. Understanding AI systems can help healthcare workers better assimilate artificial intelligence into their practice and unravel its potential in diagnostics, clinical research, and disease management.

RECENT FINDINGS: We reviewed recent advancements in AI systems and their integration in healthcare systems, along with their potential benefits in the diagnosis and management of diseases. We explored machine learning as employed in allergy diagnosis and its learning patterns from patient datasets, as well as the possible advantages of using AI in the field of research related to allergic reactions and even remote monitoring. Considering the ethical challenges and privacy concerns raised by clinicians and patients with regard to integrating AI in healthcare, we explored the new guidelines adapted by regulatory bodies. Despite these challenges, AI appears to have been successfully incorporated into various healthcare systems and is providing patient-centered solutions while simultaneously assisting healthcare workers. Artificial intelligence offers new hope in the field of immunologic disease diagnosis, monitoring, and management and thus has the potential to revolutionize healthcare systems.

PMID:38954325 | DOI:10.1007/s11882-024-01152-y

Categories: Literature Watch

Unraveling Brain Synchronisation Dynamics by Explainable Neural Networks using EEG Signals: Application to Dyslexia Diagnosis

Tue, 2024-07-02 06:00

Interdiscip Sci. 2024 Jul 2. doi: 10.1007/s12539-024-00634-x. Online ahead of print.

ABSTRACT

The electrical activity of the neural processes involved in cognitive functions is captured in EEG signals, allowing the exploration of the integration and coordination of neuronal oscillations across multiple spatiotemporal scales. We propose a novel approach that combines the transformation of EEG signals into image sequences, considering the cross-frequency phase synchronisation (CFS) dynamics involved in low-level auditory processing, with a two-stage deep learning model for the detection of developmental dyslexia (DD). This deep learning model exploits the spatial and temporal information preserved in the image sequences to find discriminative patterns of phase synchronisation over time, achieving a balanced accuracy of up to 83%. This result supports the existence of differential brain synchronisation dynamics between typical and dyslexic seven-year-old readers. Furthermore, we obtained interpretable representations using a novel feature mask to link the most relevant regions during classification with the cognitive processes attributed to normal reading and with the compensatory mechanisms found in dyslexia.
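
A minimal sketch of cross-frequency phase synchronisation between two EEG bands using the Hilbert transform and an n:m phase-locking value; the band edges, filter order, and 1:2 coupling ratio are illustrative assumptions.

```python
# Cross-frequency (n:m) phase-locking value on a synthetic EEG trace.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(sig: np.ndarray, lo: float, hi: float, fs: float) -> np.ndarray:
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return np.angle(hilbert(filtfilt(b, a, sig)))

fs = 250.0
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)

phase_slow = band_phase(eeg, 4, 8, fs)               # theta band
phase_fast = band_phase(eeg, 8, 13, fs)              # alpha band
n, m = 1, 2                                          # assumed coupling ratio
plv = np.abs(np.mean(np.exp(1j * (m * phase_slow - n * phase_fast))))
print(f"cross-frequency PLV: {plv:.3f}")
```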

PMID:38954232 | DOI:10.1007/s12539-024-00634-x

Categories: Literature Watch

Attention module incorporated transfer learning empowered deep learning-based models for classification of phenotypically similar tropical cattle breeds (Bos indicus)

Tue, 2024-07-02 06:00

Trop Anim Health Prod. 2024 Jul 2;56(6):192. doi: 10.1007/s11250-024-04050-7.

ABSTRACT

Accurate breed identification in dairy cattle is essential for optimizing herd management and improving genetic standards. A smart method for correctly identifying phenotypically similar breeds can empower farmers to enhance herd productivity. A convolutional neural network (CNN)-based model was developed for the identification of Sahiwal and Red Sindhi cows. To increase classification accuracy, cow pixels were first segmented from the background using a CNN model. From this segmentation, a masked image was produced by retaining the cow pixels from the original image while eliminating the background. To further improve classification accuracy, models were trained on four different images of each cow: front view, side view, grayscale front view, and grayscale side view. The masked images of these views were fed to a multi-input CNN model that predicts the class of the input images. The segmentation model achieved intersection-over-union (IoU) and F1-score values of 81.75% and 85.26%, respectively, with an inference time of 296 ms. For the classification task, multiple variants of MobileNet and EfficientNet models were used as backbones along with pre-trained weights. The MobileNet model achieved 80.0% accuracy for both breeds, while MobileNetV2 and MobileNetV3 reached 82.0% accuracy. CNN models with EfficientNet backbones outperformed the MobileNet models, with accuracy ranging from 84.0% to 86.0%. The F1-scores for these models were above 83.0%, indicating effective breed classification with few false positives and negatives. The present study thus demonstrates that deep learning models can effectively identify phenotypically similar cattle breeds, and accurate identification of zebu breeds will reduce farmers' dependence on experts.
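
A minimal sketch of the multi-input idea: two views pass through a shared toy CNN encoder, their features are concatenated, and a head predicts the breed. The study used MobileNet/EfficientNet backbones on four views; this two-view encoder is a simplified assumption.

```python
# Two-view classifier with a shared encoder and a fused classification head.
import torch
import torch.nn as nn

class TwoViewBreedNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared across views
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(2 * 32, n_classes)

    def forward(self, front: torch.Tensor, side: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.encoder(front), self.encoder(side)], dim=1)
        return self.head(feats)

model = TwoViewBreedNet()
front = torch.randn(4, 3, 128, 128)                   # masked front-view crops
side = torch.randn(4, 3, 128, 128)                    # masked side-view crops
print(model(front, side).shape)                       # torch.Size([4, 2])
```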

PMID:38954103 | DOI:10.1007/s11250-024-04050-7

Categories: Literature Watch

Pediatric Electrocardiogram-Based Deep Learning to Predict Secundum Atrial Septal Defects

Tue, 2024-07-02 06:00

Pediatr Cardiol. 2024 Jul 2. doi: 10.1007/s00246-024-03540-7. Online ahead of print.

ABSTRACT

Secundum atrial septal defect (ASD2) detection is often delayed, with the potential for late-diagnosis complications. Recent work demonstrated that artificial intelligence-enhanced ECG analysis shows promise for detecting ASD2 in adults; however, its application to pediatric populations remains underexplored. In this study, we trained a convolutional neural network (AI-pECG) on paired ECG-echocardiograms (≤ 2 days apart) to detect ASD2 in patients ≤ 18 years old without major congenital heart disease. Model performance was evaluated on the first ECG-echocardiogram pair per patient for Boston Children's Hospital internal testing and emergency department cohorts using the areas under the receiver operating characteristic (AUROC) and precision-recall (AUPRC) curves. The training cohort comprised 92,377 ECG-echocardiogram pairs (46,261 patients; median age 8.2 years) with an ASD2 prevalence of 6.7%. Test groups included internal testing (12,631 patients; median age 7.4 years; 6.9% prevalence) and emergency department (2,830 patients; median age 7.5 years; 4.9% prevalence) cohorts. Model performance was higher in the internal test cohort (AUROC 0.84, AUPRC 0.46) than in the emergency department cohort (AUROC 0.80, AUPRC 0.30). In both cohorts, AI-pECG outperformed the ECG finding of incomplete right bundle branch block. Model explainability analyses suggest that high-risk features include greater-amplitude limb lead P waves (suggestive of right atrial enlargement) and an RSR' pattern in V1 (suggestive of RBBB). Our findings demonstrate the promise of AI-pECG for inexpensively screening for and/or detecting ASD2 in pediatric patients. Future multicenter validation and prospective trials to inform clinical decision making are warranted.
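
A minimal sketch of the reported evaluation metrics on a toy imbalanced cohort: AUROC and AUPRC (average precision) computed from per-patient predicted probabilities with scikit-learn; the score distribution is synthetic.

```python
# AUROC and AUPRC on a synthetic imbalanced cohort.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
y_true = (rng.random(n) < 0.067).astype(int)          # ~6.7% ASD2 prevalence
# Toy scores that are slightly higher on average for positives.
y_score = np.clip(rng.normal(0.3 + 0.25 * y_true, 0.2), 0, 1)

print(f"AUROC: {roc_auc_score(y_true, y_score):.2f}")
print(f"AUPRC: {average_precision_score(y_true, y_score):.2f}")
```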

PMID:38953953 | DOI:10.1007/s00246-024-03540-7

Categories: Literature Watch

Optimized Wasserstein Deep Convolutional Generative Adversarial Network fostered Groundnut Leaf Disease Identification System

Tue, 2024-07-02 06:00

Network. 2024 Jul 2:1-25. doi: 10.1080/0954898X.2024.2351146. Online ahead of print.

ABSTRACT

Groundnut is a noteworthy oilseed crop. Leaf diseases are among the most important causes of low yield and impaired growth in groundnut plants, directly diminishing yield and quality. Therefore, an Optimized Wasserstein Deep Convolutional Generative Adversarial Network fostered Groundnut Leaf Disease Identification System (GLDI-WDCGAN-AOA) is proposed in this paper. The pre-processed images are fed to Hesitant Fuzzy Linguistic Bi-objective Clustering (HFL-BOC) for segmentation. Using a Wasserstein Deep Convolutional Generative Adversarial Network (WDCGAN), the input leaf images are classified into healthy leaf, early leaf spot, late leaf spot, nutrition deficiency, and rust. Finally, the weight parameters of the WDCGAN are optimized by the Aquila Optimization Algorithm (AOA) to achieve high accuracy. The proposed GLDI-WDCGAN-AOA approach provides 23.51%, 22.01%, and 18.65% higher accuracy and 24.78%, 23.24%, and 28.98% lower error rates compared with existing methods: real-time automated identification and categorization of groundnut leaf disease using hybrid machine learning (GLDI-DNN), online identification of peanut leaf diseases using a data-balancing method with deep transfer learning (GLDI-LWCNN), and a deep learning-driven method based on progressive scaling for precise categorization of groundnut leaf infections (GLDI-CNN), respectively.

PMID:38953316 | DOI:10.1080/0954898X.2024.2351146

Categories: Literature Watch

Accelerating Polymer Discovery with Uncertainty-Guided PGCNN: Explainable AI for Predicting Properties and Mechanistic Insights

Tue, 2024-07-02 06:00

J Chem Inf Model. 2024 Jul 2. doi: 10.1021/acs.jcim.4c00555. Online ahead of print.

ABSTRACT

Deep learning holds great potential for expediting the discovery of new polymers from the vast chemical space. However, accurately predicting polymer properties for practical applications based on their monomer composition has long been a challenge. The main obstacles include insufficient data, ineffective representation encoding, and lack of explainability. To address these issues, we propose an interpretable model called the Polymer Graph Convolutional Neural Network (PGCNN) that can accurately predict various polymer properties. This model is trained using the RadonPy data set and validated using experimental data samples. By integrating evidential deep learning with the model, we can quantify the uncertainty of predictions and enable sample-efficient training through uncertainty-guided active learning. Additionally, we demonstrate that the global attention of the graph embedding can aid in discovering underlying physical principles by identifying important functional groups within polymers and associating them with specific material attributes. Lastly, we explore the high-throughput screening capability of our model by rapidly identifying thousands of promising candidates with low and high thermal conductivity from a pool of one million hypothetical polymers. In summary, our research not only advances our mechanistic understanding of polymers using explainable AI but also paves the way for data-driven trustworthy discovery of polymer materials.
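
A minimal sketch of uncertainty-guided acquisition: score unlabeled candidates by predictive disagreement (here from a small bootstrap ensemble, standing in for the paper's evidential uncertainty) and queue the most uncertain for labeling. Data and model choices are assumptions.

```python
# Uncertainty-guided active learning via bootstrap-ensemble disagreement.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_lab, y_lab = rng.random((200, 16)), rng.random(200)  # toy polymer features
X_pool = rng.random((5000, 16))                        # unlabeled candidates

preds = []
for seed in range(10):                                 # bootstrap ensemble
    idx = rng.integers(0, len(X_lab), len(X_lab))
    m = RandomForestRegressor(n_estimators=50, random_state=seed)
    m.fit(X_lab[idx], y_lab[idx])
    preds.append(m.predict(X_pool))

uncertainty = np.std(preds, axis=0)                    # disagreement per sample
acquire = np.argsort(-uncertainty)[:32]                # label these next
print("next batch to label:", acquire[:5], "...")
```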

PMID:38953249 | DOI:10.1021/acs.jcim.4c00555

Categories: Literature Watch

Deep Learning Enhanced Label-Free Action Potential Detection Using Plasmonic-Based Electrochemical Impedance Microscopy

Tue, 2024-07-02 06:00

Anal Chem. 2024 Jul 2. doi: 10.1021/acs.analchem.4c01179. Online ahead of print.

ABSTRACT

Measuring neuronal electrical activity, such as action potential propagation in cells, requires sensitive detection of weak electrical signals with high spatial and temporal resolution, a need that none of the existing tools can fulfill. Recently, plasmonic-based electrochemical impedance microscopy (P-EIM) was demonstrated for label-free mapping of the ignition and propagation of action potentials in neurons with subcellular resolution. However, limited by the signal-to-noise ratio in the high-speed P-EIM video, action potential mapping was achieved by averaging 90 cycles of signals. Such extensive averaging is undesirable and may not always be feasible due to factors such as neuronal desensitization. In this study, we utilized advanced signal processing techniques to detect action potentials in P-EIM extracted signals with fewer averaged cycles. Matched filtering successfully detected action potential signals from as few as five averaged cycles. A long short-term memory (LSTM) recurrent neural network achieved the best performance and successfully detected single-cycle stimulated action potentials [satisfactory area under the receiver operating characteristic curve (AUC) of 0.855]. Therefore, we show that deep learning-based signal processing can dramatically improve the usability of P-EIM mapping of neuronal electrical signals.
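
A minimal sketch of matched filtering for spike detection in a noisy trace: correlate the signal with a known template and threshold the response. The template shape, sampling rate, and threshold below are assumptions, not the P-EIM settings.

```python
# Matched-filter detection of embedded spikes in a synthetic noisy trace.
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(0)
fs = 10_000                                    # samples per second
trace = rng.normal(0, 1.0, 5 * fs)             # noisy P-EIM-like signal

t = np.arange(0, 0.002, 1 / fs)                # 2 ms windowed template
template = np.sin(2 * np.pi * 500 * t) * np.hanning(t.size)
for onset in (fs, 2 * fs, 4 * fs):             # embed three "spikes"
    trace[onset:onset + t.size] += 4 * template

response = correlate(trace, template, mode="same")
threshold = 5 * np.std(response)
detections = np.flatnonzero(response > threshold)
print("samples above threshold near:", detections[:3], "...")
```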

PMID:38953225 | DOI:10.1021/acs.analchem.4c01179

Categories: Literature Watch

Artificial Intelligence and Machine Learning in Neuroregeneration: A Systematic Review

Tue, 2024-07-02 06:00

Cureus. 2024 May 30;16(5):e61400. doi: 10.7759/cureus.61400. eCollection 2024 May.

ABSTRACT

Artificial intelligence (AI) and machine learning (ML) show promise in various medical domains, including medical imaging, precise diagnoses, and pharmaceutical research. In neuroscience and neurosurgery, AI/ML advancements enhance brain-computer interfaces, neuroprosthetics, and surgical planning. They are poised to revolutionize neuroregeneration by unraveling the nervous system's complexities. However, research on AI/ML in neuroregeneration is fragmented, necessitating a comprehensive review. Adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations, 19 English-language papers focusing on AI/ML in neuroregeneration were selected from a total of 247. Two researchers independently conducted data extraction and quality assessment using the Mixed Methods Appraisal Tool (MMAT) 2018. Eight studies were deemed high quality, 10 moderate, and four low. Primary goals included diagnosing neurological disorders (35%), robotic rehabilitation (18%), and drug discovery (12% each). Methods ranged from analyzing imaging data (24%) to animal models (24%) and electronic health records (12%). Deep learning accounted for 41% of AI/ML techniques, while standard ML algorithms constituted 29%. The review underscores the growing interest in AI/ML for neuroregenerative medicine, with increasing publications. These technologies aid in diagnosing diseases and facilitating functional recovery through robotics and targeted stimulation. AI-driven drug discovery holds promise for identifying neuroregenerative therapies. Nonetheless, addressing existing limitations remains crucial in this rapidly evolving field.

PMID:38953082 | PMC:PMC11215936 | DOI:10.7759/cureus.61400

Categories: Literature Watch

A selective CutMix approach improves generalizability of deep learning-based grading and risk assessment of prostate cancer

Tue, 2024-07-02 06:00

J Pathol Inform. 2024 May 7;15:100381. doi: 10.1016/j.jpi.2024.100381. eCollection 2024 Dec.

ABSTRACT

The Gleason score is an important predictor of prognosis in prostate cancer. However, its subjective nature can result in over- or under-grading. Our objective was to train an artificial intelligence (AI)-based algorithm to grade prostate cancer in specimens from patients who underwent radical prostatectomy (RP) and to assess the correlation of AI-estimated proportions of different Gleason patterns with biochemical recurrence-free survival (RFS), metastasis-free survival (MFS), and overall survival (OS). Training and validation of algorithms for cancer detection and grading were completed with three large datasets containing a total of 580 whole-mount prostate slides from 191 RP patients at two centers and 6218 annotated needle biopsy slides from the publicly available Prostate Cancer Grading Assessment dataset. A cancer detection model was trained using MobileNetV3 on 0.5 mm × 0.5 mm cancer areas (tiles) captured at 10× magnification. For cancer grading, a Gleason pattern detector was trained on tiles using a ResNet50 convolutional neural network and a selective CutMix training strategy involving a mixture of real and artificial examples. This strategy resulted in improved model generalizability in the test set compared with three different control experiments when evaluated on both needle biopsy slides and whole-mount prostate slides from different centers. In an additional test cohort of RP patients who were clinically followed over 30 years, quantitative Gleason pattern AI estimates achieved concordance indexes of 0.69, 0.72, and 0.64 for predicting RFS, MFS, and OS times, outperforming the control experiments and International Society of Urological Pathology system (ISUP) grading by pathologists. Finally, unsupervised clustering of test RP patient specimens into low-, medium-, and high-risk groups based on AI-estimated proportions of each Gleason pattern resulted in significantly improved RFS and MFS stratification compared with ISUP grading. In summary, deep learning-based quantitative Gleason scoring using a selective CutMix training strategy may improve prognostication after prostate cancer surgery.
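
A minimal sketch of standard CutMix on a batch of tiles: paste a random box from a shuffled batch and mix the labels in proportion to the pasted area. The paper's selective variant additionally constrains which tiles get mixed; that selection logic is omitted here.

```python
# Standard CutMix augmentation for a batch of image tiles with soft labels.
import torch

def cutmix(images: torch.Tensor, labels: torch.Tensor, alpha: float = 1.0):
    """images: (B, C, H, W); labels: (B, n_classes) one-hot/soft labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    H, W = images.shape[-2:]
    cut_h, cut_w = int(H * (1 - lam) ** 0.5), int(W * (1 - lam) ** 0.5)
    cy, cx = torch.randint(H, (1,)).item(), torch.randint(W, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)
    images = images.clone()
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (H * W)      # actual kept area
    return images, lam_adj * labels + (1 - lam_adj) * labels[perm]

imgs = torch.randn(8, 3, 224, 224)                     # toy 10x tiles
lbls = torch.eye(4)[torch.randint(4, (8,))]            # 4 Gleason classes
mixed_imgs, mixed_lbls = cutmix(imgs, lbls)
print(mixed_imgs.shape, mixed_lbls.shape)
```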

PMID:38953042 | PMC:PMC11215954 | DOI:10.1016/j.jpi.2024.100381

Categories: Literature Watch

Toward the design of persuasive systems for a healthy workplace: a real-time posture detection

Tue, 2024-07-02 06:00

Front Big Data. 2024 Jun 17;7:1359906. doi: 10.3389/fdata.2024.1359906. eCollection 2024.

ABSTRACT

Persuasive technologies, in connection with human factors engineering requirements for healthy workplaces, have played a significant role in driving changes in human behavior. Healthy workplace guidance covers body posture, proximity to the computer system, movement, lighting conditions, computer system layout, and other significant psychological and cognitive aspects. Most importantly, body posture guidance specifies how users should sit or stand in workplaces in line with healthy best practices. In this study, we developed two study phases (pilot and main) using two deep learning models: a convolutional neural network (CNN) and YOLOv3. To train the two models, we collected posture datasets from Creative Commons-licensed YouTube videos and Kaggle, and classified them into comfortable and uncomfortable postures. Results show that our YOLOv3 model outperformed the CNN model with a mean average precision of 92%. Based on this finding, we recommend that the YOLOv3 model be integrated into the design of persuasive technologies for a healthy workplace. Additionally, we discuss future implications for integrating proximity detection, taking into consideration the ideal distance in centimeters that users should maintain in a healthy workplace.
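
A minimal sketch of the two-class posture setup (comfortable vs. uncomfortable) with a toy CNN classifier; the study's stronger YOLOv3 model additionally localizes the person, which is omitted here, and all layer choices are assumptions.

```python
# Toy binary posture classifier over a single webcam frame.
import torch
import torch.nn as nn

posture_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                       # comfortable / uncomfortable
)

frame = torch.randn(1, 3, 224, 224)         # one webcam frame (toy tensor)
logits = posture_net(frame)
label = ["comfortable", "uncomfortable"][logits.argmax(dim=1).item()]
print(label)
```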

PMID:38953011 | PMC:PMC11215059 | DOI:10.3389/fdata.2024.1359906

Categories: Literature Watch
