Deep learning

Enhancing diabetic retinopathy and macular edema detection through multi scale feature fusion using deep learning model

Mon, 2024-12-16 06:00

Graefes Arch Clin Exp Ophthalmol. 2024 Dec 16. doi: 10.1007/s00417-024-06687-4. Online ahead of print.

ABSTRACT

BACKGROUND: This work tackles the growing problem of early identification of diabetic retinopathy and diabetic macular edema. The deep neural network design uses multi-scale feature fusion to improve automated diagnostic accuracy.

METHODS: The approach uses convolutional neural networks (CNNs) and is designed to combine higher-level semantic inputs with low-level textural characteristics. The complementary contextual and localized abstract representations are combined via a novel fusion technique.
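
The abstract does not spell out the fusion technique itself; as a hedged illustration only, one common multi-scale fusion scheme pools each scale's feature map and concatenates the results into a single descriptor. The array shapes below are invented for the example and are not taken from the paper:

```python
import numpy as np

def fuse_multiscale_features(feature_maps):
    """Global-average-pool each (H, W, C) feature map to a C-vector,
    then concatenate the vectors into one fused descriptor."""
    pooled = [fm.mean(axis=(0, 1)) for fm in feature_maps]
    return np.concatenate(pooled)

# A low-level textural map (high resolution, few channels) and a
# high-level semantic map (low resolution, many channels).
low_level = np.random.rand(64, 64, 32)
high_level = np.random.rand(8, 8, 256)

fused = fuse_multiscale_features([low_level, high_level])
print(fused.shape)  # (288,)
```

Concatenation-style fusion is only one of several possibilities (attention-weighted or element-wise fusion are common alternatives); the sketch shows the general idea of merging complementary scales.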

RESULTS: The MESSIDOR dataset, which comprises retinal images labeled with pathological annotations, was used for model training and validation to ensure robust algorithm development. The proposed model achieves 98% overall accuracy and strong performance for diabetic retinopathy, and nearly 100% accuracy for diabetic macular edema, with particularly high precision (0.99).

CONCLUSION: The model's consistent performance supports its potential to help preserve vision through public screening and broad clinical integration.

PMID:39680112 | DOI:10.1007/s00417-024-06687-4

Categories: Literature Watch

Deep learning can detect elbow disease in dogs screened for elbow dysplasia

Mon, 2024-12-16 06:00

Vet Radiol Ultrasound. 2025 Jan;66(1):e13465. doi: 10.1111/vru.13465.

ABSTRACT

Medical image analysis based on deep learning is a rapidly advancing field in veterinary diagnostics. The aim of this retrospective diagnostic accuracy study was to develop and assess a convolutional neural network (CNN, EfficientNet) to evaluate elbow radiographs from dogs screened for elbow dysplasia. An auto-cropping tool based on the deep learning model RetinaNet was developed for radiograph preprocessing to crop the radiographs to the region of interest around the elbow joint. A total of 7229 radiographs with corresponding International Elbow Working Group scoring were included for training (n = 4000), validation (n = 1000), and testing (n = 2229) of CNN models for elbow diagnostics. The radiographs were classified in a binary manner as normal (negative class) or abnormal (positive class), where abnormal radiographs had various severities of osteoarthrosis and/or visible primary elbow dysplasia lesions. Explainable artificial intelligence analyses were performed on both correctly and incorrectly classified radiographs using VarGrad heatmaps to visualize regions of importance for the CNN model's predictions. The highest-performing CNN model showed excellent test accuracy, sensitivity, and specificity, all achieving a value of 0.98. Explainability analysis showed frequent highlighting along the margins of the anconeal process in both correctly and incorrectly classified radiographs. Uncertainty estimation, using entropy to characterize the uncertainty of the model predictions, showed that radiographs with ambiguous predictions could be flagged for human evaluation. Our study demonstrates robust performance of CNNs for detecting abnormal elbow joints in dogs screened for elbow dysplasia.
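
The entropy-based flagging described above can be sketched in a few lines. The 0.5-nat threshold and the probabilities below are illustrative assumptions, not values from the study:

```python
import numpy as np

def prediction_entropy(p):
    """Binary-classification entropy (in nats) of a predicted probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def flag_for_review(probs, threshold=0.5):
    """Return indices of cases whose prediction entropy exceeds the threshold,
    i.e. ambiguous predictions to route to a human reader."""
    return [i for i, p in enumerate(probs) if prediction_entropy(p) > threshold]

probs = np.array([0.02, 0.97, 0.55, 0.40])
print(flag_for_review(probs))  # → [2, 3]: the confident cases pass through
```

Confident predictions (near 0 or 1) have low entropy; predictions near 0.5 have entropy close to ln 2 ≈ 0.693 nats and get flagged.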

PMID:39679734 | DOI:10.1111/vru.13465

Categories: Literature Watch

Automated Bone Cancer Detection Using Deep Learning on X-Ray Images

Mon, 2024-12-16 06:00

Surg Innov. 2024 Dec 16:15533506241299886. doi: 10.1177/15533506241299886. Online ahead of print.

ABSTRACT

Bone cancer is a life-threatening health issue that can lead to death. Physicians use CT scans, X-rays, or MRI images to recognize bone cancer, but these methods face challenges such as high cost, time consumption, and the risk of misdiagnosis due to the complexity of bone tumor appearances, so techniques are still needed to increase precision and reduce human labor. It is therefore essential to establish an automated system to distinguish healthy bones from cancerous ones. In this regard, artificial intelligence, particularly deep learning, has attracted increasing attention in medical image analysis. This research presents a new Golden Search Optimization along with Deep Learning Enabled Computer Aided Diagnosis for Bone Cancer Classification (GSODL-CADBCC) on X-ray images. The aim of the GSODL-CADBCC approach is to accurately classify input X-ray images as healthy or cancerous. The technique leverages bilateral filtering to remove noise, uses the SqueezeNet model to generate feature vectors, and selects hyperparameters efficiently with the GSO algorithm. Finally, the extracted features are classified by improved cuckoo search with a long short-term memory model. The experimental results demonstrate that the GSODL-CADBCC approach attains the highest performance, with an average accuracy of 95.52% on the training set and 94.79% on the testing set. This automated approach not only reduces the need for manual interpretation but also minimizes the risk of diagnostic errors and provides a viable option for precise medical imaging-based bone cancer screening.

PMID:39679470 | DOI:10.1177/15533506241299886

Categories: Literature Watch

A self-attention-driven deep learning framework for inference of transcriptional gene regulatory networks

Mon, 2024-12-16 06:00

Brief Bioinform. 2024 Nov 22;26(1):bbae639. doi: 10.1093/bib/bbae639.

ABSTRACT

The interactions between transcription factors (TFs) and their target genes provide a basis for constructing gene regulatory networks (GRNs) for mechanistic understanding of complex biological processes. From gene expression data, particularly single-cell transcriptomic data containing rich cell-to-cell variation, it is highly desirable to infer TF-gene interactions (TGIs) using deep learning technologies. Numerous models and software packages, including deep learning-based algorithms, have been designed to identify transcriptional regulatory relationships between TFs and downstream genes. However, these methods do not substantially improve TGI prediction, owing to limitations in how they construct the underlying interactive structures linking regulatory components. In this study, we introduce a deep learning framework, DeepTGI, that encodes gene expression profiles from single-cell and/or bulk transcriptomic data and predicts TGIs with high accuracy. Our approach fuses features extracted by an autoencoder with a self-attention mechanism and other networks, and transforms multihead attention modules to define representative features. Compared with other models and methods, DeepTGI identifies more potential TGIs and better reconstructs GRNs and, therefore, could offer broader perspectives for the discovery of more biologically meaningful TGIs and for understanding transcriptional gene regulatory mechanisms.

PMID:39679439 | DOI:10.1093/bib/bbae639

Categories: Literature Watch

Fast and customizable image formation model for optical coherence tomography

Mon, 2024-12-16 06:00

Biomed Opt Express. 2024 Nov 13;15(12):6783-6798. doi: 10.1364/BOE.534263. eCollection 2024 Dec 1.

ABSTRACT

Optical coherence tomography (OCT) is a technique that performs high-resolution, three-dimensional, imaging of semi-transparent scattering biological tissues. Models of OCT image formation are needed for applications such as aiding image interpretation and validating OCT signal processing techniques. Existing image formation models generally trade off between model realism and computation time. In particular, the most realistic models tend to be highly computationally demanding, which becomes a limiting factor when simulating C-scan generation. Here we present an OCT image formation model based on the first-order Born approximation that is significantly faster than existing models, whilst maintaining a high degree of realism. This model is made more powerful because it is amenable to simulation of phase sensitive OCT, thus making it applicable to scenarios where sample displacement is of interest, such as optical coherence elastography (OCE) or Doppler OCT. The low computational cost of the model also makes it suitable for creating large OCT data sets needed for training deep learning OCT signal processing models. We present details of our novel image formation model and demonstrate its accuracy and computational efficiency.

PMID:39679414 | PMC:PMC11640576 | DOI:10.1364/BOE.534263

Categories: Literature Watch

The impact of body mass index on rehabilitation outcomes after lower limb amputation

Mon, 2024-12-16 06:00

PM R. 2024 Dec 16. doi: 10.1002/pmrj.13292. Online ahead of print.

ABSTRACT

PURPOSE: To determine the effect of obesity on physical function and clinical outcome measures in patients who received inpatient rehabilitation services for lower extremity amputation.

METHODS: A retrospective review was performed on patients with lower extremity amputation (n = 951). Patients were stratified into five categories adjusted for limb loss mass across different levels of healthy body mass index (BMI), overweight, and obesity. Outcomes included the Inpatient Rehabilitation Facility Patient Assessment Instrument functional scores (GG section), discharge home, length of stay (LOS), therapy time, discharge location, medical complications and acute care readmissions. Deep learning neural networks (DLNNs) were developed to learn the relationships between adjusted BMI and discharge home.

RESULTS: The severely obese group (BMI > 40 kg/m2) demonstrated 7%-13% lower toileting hygiene functional scores at discharge compared to the remaining groups (p < .001). The severely obese group also demonstrated 8%-9% lower sit-to-lying and lying-to-sitting bed mobility scores than the other groups (both p < .001). Sit-to-stand scores were 16%-21% worse and toilet transfer scores were 12%-20% worse in the BMI > 40 kg/m2 group than the other groups (all p < .001). Walking 50 ft with two turns was most difficult for the BMI > 40 kg/m2 group, with mean scores 7%-27% lower than the other BMI groups (p = .011). Wheelchair mobility scores for propelling 150 ft were worst for the severely obese group (4.9 points vs. 5.1-5.5 points for all other groups; p = .021). The LOS was longest in the BMI > 40 group and shortest in the BMI < 25 group (15.0 days vs. 13.3 days; p = .032). Logistic regression analysis indicated that BMI > 40 kg/m2 was associated with a lower odds ratio (OR) of discharge to home (OR = 0.504 [0.281-0.904]; p < .022). DLNNs found that adjusted BMI and BMI category were ranked 11th and 12th out of 90 model variables in predicting discharge home.
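
As a reminder of how such odds ratios arise, a logistic regression coefficient β maps to an odds ratio via exp(β). The reported OR of 0.504 for discharge home thus corresponds to β ≈ -0.685; this is illustrative arithmetic only, not a reconstruction of the study's model:

```python
import math

# Recover the logistic regression coefficient implied by the reported
# odds ratio for severe obesity vs. discharge home.
odds_ratio = 0.504
beta = math.log(odds_ratio)

print(round(beta, 3))            # -0.685
print(round(math.exp(beta), 3))  # 0.504, back to the odds ratio
```

An OR below 1 means the odds of the outcome (here, discharge home) are lower for the exposed group; the 95% CI [0.281-0.904] excluding 1 is what makes the association statistically significant.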

CONCLUSION: Patients with severe obesity (BMI > 40 kg/m2) achieved lower functional independence for several tasks and were less likely to be discharged home despite receiving more therapy than the other groups. For patients who are discharged home, severe obesity poses unique demands on caregivers, and resources can be put in place to help reintegrate the patient into daily life.

PMID:39676648 | DOI:10.1002/pmrj.13292

Categories: Literature Watch

Making sense of missense: challenges and opportunities in variant pathogenicity prediction

Mon, 2024-12-16 06:00

Dis Model Mech. 2024 Dec 1;17(12):dmm052218. doi: 10.1242/dmm.052218. Epub 2024 Dec 16.

ABSTRACT

Computational tools for predicting variant pathogenicity are widely used to support clinical variant interpretation. Recently, several models, which do not rely on known variant classifications during training, have been developed. These approaches can potentially overcome biases of current clinical databases, such as misclassifications, and can potentially better generalize to novel, unclassified variants. AlphaMissense is one such model, built on the highly successful protein structure prediction model, AlphaFold. AlphaMissense has shown great performance in benchmarks of functional and clinical data, outperforming many supervised models that were trained on similar data. However, like other in silico predictors, AlphaMissense has notable limitations. As a large deep learning model, it lacks interpretability, does not assess the functional impact of variants, and provides pathogenicity scores that are not disease specific. Improving interpretability and precision in computational tools for variant interpretation remains a promising area for advancing clinical genetics.

PMID:39676521 | DOI:10.1242/dmm.052218

Categories: Literature Watch

Automatic Segmentation of Sylvian Fissure in Brain Ultrasound Images of Pre-Term Infants Using Deep Learning Models

Sun, 2024-12-15 06:00

Ultrasound Med Biol. 2024 Dec 14:S0301-5629(24)00440-X. doi: 10.1016/j.ultrasmedbio.2024.11.016. Online ahead of print.

ABSTRACT

OBJECTIVE: Segmentation of brain sulci in pre-term infants is crucial for monitoring their development. While magnetic resonance imaging has been used for this purpose, cranial ultrasound (cUS) is the primary imaging technique used in clinical practice. Here, we present the first study aiming to automate brain sulci segmentation in pre-term infants using ultrasound images.

METHODS: Our study focused on segmentation of the Sylvian fissure in a single cUS plane (C3), although this approach could be extended to other sulci and planes. We evaluated the performance of deep learning models, specifically U-Net and ResU-Net, in automating the segmentation process in two scenarios. First, we conducted cross-validation on images acquired from the same ultrasound machine. Second, we applied fine-tuning techniques to adapt the models to images acquired from different vendors.

RESULTS: The ResU-Net approach achieved Dice and Sensitivity scores of 0.777 and 0.784, respectively, in the cross-validation experiment. When applied to external datasets, results varied based on similarity to the training images. Similar images yielded comparable results, while different images showed a drop in performance. Additionally, this study highlighted the advantages of ResU-Net over U-Net, suggesting that residual connections enhance the model's ability to learn and represent complex anatomical structures.

CONCLUSION: This study demonstrated the feasibility of using deep learning models to automatically segment the Sylvian fissure in cUS images. Accurate sonographic characterisation of cerebral sulci can improve the understanding of brain development and aid in identifying infants with different developmental trajectories, potentially impacting later functional outcomes.

PMID:39676003 | DOI:10.1016/j.ultrasmedbio.2024.11.016

Categories: Literature Watch

A deep learning approach for the screening of referable age-related macular degeneration - Model development and external validation

Sun, 2024-12-15 06:00

J Formos Med Assoc. 2024 Dec 14:S0929-6646(24)00567-9. doi: 10.1016/j.jfma.2024.12.008. Online ahead of print.

ABSTRACT

PURPOSE: To develop a deep learning image assessment software, VeriSee™ AMD, and to validate its accuracy in diagnosing referable age-related macular degeneration (AMD).

METHODS: For model development, a total of 6801 judgable 45-degree color fundus images from patients, aged 50 years and over, were collected. These images were assessed for AMD severity by ophthalmologists, according to the Age-Related Eye Disease Studies (AREDS) AMD category. Referable AMD was defined as category three (intermediate) or four (advanced). Of these images, 6123 were used for model training and validation. The other 678 images were used for testing the accuracy of VeriSee™ AMD relative to the ophthalmologists. Area under the receiver operating characteristic curve (AUC) for VeriSee™ AMD, and the sensitivities and specificities for VeriSee™ AMD and ophthalmologists were calculated. For external validation, another 937 color fundus images were used to test the accuracy of VeriSee™ AMD.

RESULTS: During model development, the AUC for VeriSee™ AMD in diagnosing referable AMD was 0.961. The accuracy for VeriSee™ AMD for testing was 92.04% (sensitivity 90.0% and specificity 92.43%). The mean accuracy of the ophthalmologists in diagnosing referable AMD was 85.8% (range: 75.93%-97.31%). During external validation, VeriSee AMD achieved a sensitivity of 90.03%, a specificity of 96.44%, and an accuracy of 92.04%.

CONCLUSIONS: VeriSee™ AMD demonstrated good sensitivity and specificity in diagnosing referable AMD from color fundus images. The findings of this study support the use of VeriSee™ AMD in assisting with the clinical screening of intermediate and advanced AMD using color fundus photography.

PMID:39675993 | DOI:10.1016/j.jfma.2024.12.008

Categories: Literature Watch

Conditional generative diffusion deep learning for accelerated diffusion tensor and kurtosis imaging

Sun, 2024-12-15 06:00

Magn Reson Imaging. 2024 Dec 13:110309. doi: 10.1016/j.mri.2024.110309. Online ahead of print.

ABSTRACT

PURPOSE: The purpose of this study was to develop DiffDL, a generative diffusion probabilistic model designed to produce high-quality diffusion tensor imaging (DTI) and diffusion kurtosis imaging (DKI) metrics from a reduced set of diffusion-weighted images (DWIs). This model addresses the challenge of prolonged data acquisition times in diffusion MRI while preserving metric accuracy.

METHODS: DiffDL was trained using data from the Human Connectome Project, including 300 training/validation subjects and 50 testing subjects. High-quality DTI and DKI metrics were generated using many DWIs and combined with subsets of DWIs to form training pairs. A UNet architecture was used for denoising, trained over 500 epochs with a linear noise schedule. Performance was evaluated against conventional DTI/DKI modeling and a reference UNet model using normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), and Pearson correlation coefficient (PCC).
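
The three evaluation metrics named above (NMAE, PSNR, PCC) can be sketched in a few lines. The definitions below follow common conventions and may differ in detail from the paper's implementation; the toy reference and prediction arrays are invented for illustration:

```python
import numpy as np

def nmae(ref, pred):
    """Mean absolute error normalized by the reference dynamic range."""
    return np.abs(ref - pred).mean() / (ref.max() - ref.min())

def psnr(ref, pred):
    """Peak signal-to-noise ratio in dB, using the reference max as peak."""
    mse = ((ref - pred) ** 2).mean()
    return 10 * np.log10(ref.max() ** 2 / mse)

def pcc(ref, pred):
    """Pearson correlation coefficient between flattened metric maps."""
    return np.corrcoef(ref.ravel(), pred.ravel())[0, 1]

# Toy "metric map": a clean reference and a slightly noisy prediction.
ref = np.linspace(0.0, 1.0, 100)
pred = ref + np.random.default_rng(0).normal(0.0, 0.01, 100)

print(nmae(ref, pred) < 0.05, psnr(ref, pred) > 30, pcc(ref, pred) > 0.99)
```

Lower NMAE and higher PSNR/PCC indicate a closer match between accelerated and reference DTI/DKI maps, which is how the comparison in the RESULTS section is scored.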

RESULTS: DiffDL showed significant improvements in the quality and accuracy of fractional anisotropy (FA) and mean diffusivity (MD) maps compared to conventional methods and the baseline UNet model. For DKI metrics, DiffDL outperformed conventional DKI modeling and the UNet model across various acceleration scenarios. Quantitative analysis demonstrated superior NMAE, PSNR, and PCC values for DiffDL, capturing the full dynamic range of DTI and DKI metrics. The generative nature of DiffDL allowed for multiple predictions, enabling uncertainty quantification and enhancing performance.

CONCLUSION: The DiffDL framework demonstrated the potential to significantly reduce data acquisition times in diffusion MRI while maintaining high metric quality. Future research should focus on optimizing computational demands and validating the model with clinical cohorts and standard MRI scanners.

PMID:39675686 | DOI:10.1016/j.mri.2024.110309

Categories: Literature Watch

Online monitoring of Haematococcus lacustris cell cycle using machine and deep learning techniques

Sun, 2024-12-15 06:00

Bioresour Technol. 2024 Dec 13:131976. doi: 10.1016/j.biortech.2024.131976. Online ahead of print.

ABSTRACT

Optimal control and process optimization of astaxanthin production from Haematococcus lacustris is directly linked to its complex cell cycle, ranging from vegetative green cells to astaxanthin-rich cysts. This study developed an automated online monitoring system classifying four different cell cycle stages using a scanning microscope. Decision-tree based machine learning and deep learning convolutional neural network algorithms were developed, validated, and evaluated. SHapley Additive exPlanations was used to examine the most important system requirements for accurate image classification. The models achieved accuracies on unseen data of 92.4 and 90.9%, respectively. Furthermore, both models were applied to a photobioreactor culturing H. lacustris, effectively monitoring the transition from a green culture in the exponential growth phase to a stationary red culture. Therefore, online image analysis using artificial intelligence models has great potential for process optimization and as a data-driven decision support tool during microalgae cultivation.

PMID:39675638 | DOI:10.1016/j.biortech.2024.131976

Categories: Literature Watch

Incorporating patient-specific prior clinical knowledge to improve clinical target volume auto-segmentation generalisability for online adaptive radiotherapy of rectal cancer: A multicenter validation

Sun, 2024-12-15 06:00

Radiother Oncol. 2024 Dec 13:110667. doi: 10.1016/j.radonc.2024.110667. Online ahead of print.

ABSTRACT

BACKGROUND & PURPOSE: Deep learning (DL) based auto-segmentation has been shown to be beneficial for online adaptive radiotherapy (OART). However, auto-segmentation of clinical target volumes (CTVs) is complex, as clinical interpretations are crucial in their definition. The resulting variation between clinicians and institutes hampers the generalisability of DL networks. In OART the CTV is delineated during treatment preparation, which makes the clinician's intent explicitly available during treatment. We studied whether multicenter generalisability improves when using this prior clinical knowledge, the pre-treatment delineation, as a patient-specific prior for DL models for online auto-segmentation of the mesorectal CTV.

MATERIAL & METHODS: We included intermediate risk or locally advanced rectal cancer patients from three centers. Patient-specific weight maps were created by combining the patient-specific CTV delineation on the pre-treatment scan with population-based variation of likely inter-fraction mesorectal CTV deformations. We trained two models to auto-segment the mesorectal CTV on in-house data, one with (MRI + prior) and one without (MRI-only) priors. Both models were applied to two external datasets. An external baseline model was trained without priors from scratch for one external center. Performance was evaluated on the DSC, surface Dice, 95HD and MSD.
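
Of the segmentation metrics listed above, the Dice similarity coefficient (DSC) is the most widely used; it compares the overlap of two binary masks. A minimal sketch on toy masks (the 10x10 grids are invented for illustration, not study data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

# A toy auto-segmentation and a slightly shifted manual delineation.
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True      # 36 pixels
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True    # 36 pixels, offset by one

print(round(dice(auto, manual), 3))  # overlap is 25 px → 50/72 ≈ 0.694
```

Surface Dice, 95HD, and MSD extend this idea to boundary agreement rather than volume overlap, which is why studies usually report several of them together.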

RESULTS: For both external centers, the MRI + prior model outperformed the MRI-only model significantly on the segmentation metrics (p-values < 0.01). There was no significant difference between the external baseline model and the MRI + prior model.

CONCLUSION: Adding patient-specific weight maps makes the CTV segmentation model more robust to institutional preferences. Performance was comparable to a model trained locally from scratch. This makes this approach suitable for generalization to multiple centers.

PMID:39675574 | DOI:10.1016/j.radonc.2024.110667

Categories: Literature Watch

Non-Generative Artificial Intelligence (AI) in Medicine: Advancements and Applications in Supervised and Unsupervised Machine Learning

Sun, 2024-12-15 06:00

Mod Pathol. 2024 Dec 13:100680. doi: 10.1016/j.modpat.2024.100680. Online ahead of print.

ABSTRACT

The use of Artificial Intelligence (AI) within pathology and healthcare has advanced extensively. We have accordingly witnessed increased adoption of various AI tools which are transforming our approach to clinical decision support, personalized medicine, predictive analytics, automation, and discovery. The familiar and more reliable AI tools that have been incorporated within healthcare thus far fall mostly under the non-generative AI domain, which includes supervised and unsupervised machine learning (ML) techniques. This review article explores how such non-generative AI methods, rooted in traditional rules-based systems, enhance diagnostic accuracy, efficiency, and consistency within medicine. Key concepts and the application of supervised learning models (i.e., classification and regression) such as decision trees, support vector machines, linear and logistic regression, K-nearest neighbor, and neural networks are explained along with the newer landscape of neural network-based non-generative foundation models. Unsupervised learning techniques including clustering, dimensionality reduction, and anomaly detection are also discussed for their role in uncovering novel disease subtypes or identifying outliers. Technical details related to the application of non-generative AI algorithms for analyzing whole slide images are also highlighted. The performance, explainability, and reliability of non-generative AI models essential for clinical decision-making are also reviewed, as well as challenges related to data quality, model interpretability, and risk of data drift. An understanding of which AI-ML models to employ and which shortcomings need to be addressed is imperative to safely and efficiently leverage, integrate, and monitor these traditional AI tools in clinical practice and research.

PMID:39675426 | DOI:10.1016/j.modpat.2024.100680

Categories: Literature Watch

Is Human Oversight to AI Systems still possible?

Sun, 2024-12-15 06:00

N Biotechnol. 2024 Dec 13:S1871-6784(24)00563-6. doi: 10.1016/j.nbt.2024.12.003. Online ahead of print.

ABSTRACT

The rapid proliferation of artificial intelligence (AI) systems across diverse domains raises critical questions about the feasibility of meaningful human oversight, particularly in high-stakes areas such as new biotechnology. As AI systems grow increasingly complex, opaque, and autonomous, ensuring responsible use becomes a formidable challenge. During our editorial work for the special issue "Artificial Intelligence for Life Sciences", we placed increasing emphasis on the topic of "human oversight". Consequently, in this editorial we briefly discuss the evolving role of human oversight in AI governance, focusing on the practical, technical, and ethical dimensions of maintaining control. We examine how the complexity of contemporary AI architectures, such as large-scale neural networks and generative AI applications, undermines human understanding and decision-making capabilities. Furthermore, we evaluate emerging approaches, such as explainable AI (XAI), human-in-the-loop systems, and regulatory frameworks, that aim to enable oversight while acknowledging their limitations. Through a comprehensive analysis, the picture emerges that, while complete oversight may no longer be viable in certain contexts, strategic interventions leveraging human-AI collaboration and trustworthy AI design principles can preserve accountability and safety. The discussion highlights the urgent need for interdisciplinary efforts to rethink oversight mechanisms in an era where AI may outpace human comprehension.

PMID:39675423 | DOI:10.1016/j.nbt.2024.12.003

Categories: Literature Watch

Thoughtful Application of Artificial Intelligence Technique Improves Diagnostic Accuracy and Supportive Clinical Decision-Making

Sun, 2024-12-15 06:00

Arthroscopy. 2024 Dec 13:S0749-8063(24)01046-6. doi: 10.1016/j.arthro.2024.12.009. Online ahead of print.

ABSTRACT

Medical research within areas of deep learning, particularly in computer vision for medical imaging, has demonstrated promise over the past decade, with an increasing volume of technical papers published in orthopedics related to imaging artificial intelligence. However, as more tools and models are developed and deployed, it is easy for clinicians to get overwhelmed by the different types of models, leaving "artificial intelligence" as an empty buzzword whose true value can be unclear. As with surgery, the techniques of deep learning require thoughtful application and cannot follow a one-size-fits-all approach, as different problems require different levels of technical complexity in model application. Moreover, the application of AI-based clinical tools should be both adjunctive and transparent in their stepwise integration within clinical medicine to provide additive insight. As a medical profession, we must together decide how, when, and where we want AI-based applications to offer insight.

PMID:39675394 | DOI:10.1016/j.arthro.2024.12.009

Categories: Literature Watch

A systematic review on the impact of artificial intelligence on electrocardiograms in cardiology

Sat, 2024-12-14 06:00

Int J Med Inform. 2024 Dec 9;195:105753. doi: 10.1016/j.ijmedinf.2024.105753. Online ahead of print.

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has revolutionized numerous industries, enhancing efficiency, scalability, and insight generation. In cardiology, particularly through electrocardiogram (ECG) analysis, AI has the potential to improve diagnostic accuracy and reduce the time needed for diagnosis. This systematic review explores the integration of AI, machine learning (ML), and deep learning (DL) in ECG analysis, focusing on their impact on predictive diagnostics and treatment support in cardiology.

METHODS: A systematic literature review was conducted following the PRISMA 2020 framework, using four high-impact databases to identify studies from 2014 to 2024. The inclusion criteria were English-language journal articles and research papers that focused on AI applications in cardiology, specifically ECG analysis. Records were screened, duplicates were removed, and final selections were made on the basis of their relevance to AI-ECG integration for cardiac health.

RESULTS: The review included 46 studies that met the inclusion criteria, covering diverse AI models such as CNNs, RNNs, and hybrid models. These models were applied to ECG data to detect and predict heart conditions such as arrhythmia, myocardial infarction, and heart failure. These findings indicate that AI-driven ECG analysis improves diagnostic accuracy and provides significant support for early diagnosis and personalized treatment.

CONCLUSIONS: AI technologies, especially ML and DL, enhance ECG-based cardiology diagnostics by increasing accuracy, reducing diagnosis time, and supporting timely interventions and personalized care. Continued research in this area is essential to refine algorithms and integrate AI tools into clinical practice for improved patient outcomes in cardiology.

PMID:39674006 | DOI:10.1016/j.ijmedinf.2024.105753

Categories: Literature Watch

Deep learning-assistance significantly increases the detection sensitivity of neurosurgery residents for intracranial aneurysms in subarachnoid hemorrhage

Sat, 2024-12-14 06:00

J Clin Neurosci. 2024 Dec 13;132:110971. doi: 10.1016/j.jocn.2024.110971. Online ahead of print.

ABSTRACT

OBJECTIVE: The purpose of this study was to evaluate the effectiveness of a deep learning model (DLM) in improving the sensitivity of neurosurgery residents to detect intracranial aneurysms on CT angiography (CTA) in patients with aneurysmal subarachnoid hemorrhage (aSAH).

METHODS: In this diagnostic accuracy study, a set of 104 CTA scans of aSAH patients containing a total of 126 aneurysms were presented to three blinded neurosurgery residents (a first-year, third-year, and fifth-year resident), who individually assessed them for aneurysms. After the initial reading, the residents were given the predictions of a dedicated DLM previously established for automated detection and segmentation of intracranial aneurysms. The detection sensitivities for aneurysms of the DLM and the residents with and without the assistance of the DLM were compared.

RESULTS: The DLM had a detection sensitivity of 85.7%, while the residents showed detection sensitivities of 77.8%, 86.5%, and 87.3% without DLM assistance. After being provided with the DLM's results, the residents' individual detection sensitivities increased to 97.6%, 95.2%, and 98.4%, respectively, yielding an average increase of 13.2%. The DLM was particularly useful in detecting small aneurysms. In addition, interrater agreement among residents increased from a Fleiss κ of 0.394 without DLM assistance to 0.703 with DLM assistance.
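
Fleiss' κ, used above to quantify interrater agreement among the three residents, can be computed from a subjects-by-categories matrix of rating counts. The three-rater, two-category ratings below are invented for illustration and are not the study's data:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_subjects, n_categories) count matrix,
    each row summing to the (fixed) number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_sub = counts.shape[0]
    n_rat = counts[0].sum()
    # Overall proportion of ratings per category.
    p_j = counts.sum(axis=0) / (n_sub * n_rat)
    # Per-subject agreement, then mean observed and chance agreement.
    P_i = ((counts ** 2).sum(axis=1) - n_rat) / (n_rat * (n_rat - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two categories (aneurysm detected / not detected),
# five radiographs; each row counts votes per category.
ratings = [[3, 0], [0, 3], [2, 1], [3, 0], [1, 2]]
print(round(fleiss_kappa(ratings), 3))  # 0.444
```

Values around 0.4 are conventionally read as moderate agreement and values around 0.7 as substantial, which matches the interpretation of the 0.394-to-0.703 improvement reported above.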

CONCLUSIONS: The results of this pilot study suggest that deep learning models can help neurosurgeons detect aneurysms on CTA and make appropriate treatment decisions when immediate radiological consultation is not possible.

PMID:39673838 | DOI:10.1016/j.jocn.2024.110971

Categories: Literature Watch

Comparison between two artificial intelligence models to discriminate cancerous cell nuclei based on confocal fluorescence imaging in hepatocellular carcinoma

Sat, 2024-12-14 06:00

Dig Liver Dis. 2024 Dec 13:S1590-8658(24)01116-2. doi: 10.1016/j.dld.2024.11.026. Online ahead of print.

ABSTRACT

BACKGROUND: Hepatocellular carcinoma (HCC) exhibits exceptional intratumoral heterogeneity that might influence diagnosis and outcome. Advances in digital microscopy and artificial intelligence (AI) may improve the identification of liver cancer cells in HCC.

AIM: Two AI algorithms were designed to perform computer-assisted discrimination of tumour from non-tumour nuclei in HCC.

METHODS: Healthy livers and HCCs from commercially available tissue arrays were stained with an antibody against proliferating cell nuclear antigen and with DRAQ5, a dye with high affinity for double-stranded DNA; images were acquired by confocal microscopy and then used to design machine learning (ML) and deep learning (DL) algorithms.

RESULTS: Nuclei were segmented and then used to develop the Model 1 and Model 2 algorithms, using ML and DL respectively. Model 1 was trained with nuclear texture features extracted using the discrete wavelet transform and the grey-level co-occurrence matrix. Model 2 was trained with the segmented images alone, without any additional information. The comparative analysis of the models showed that DL was more effective than ML, achieving an average accuracy of 88% in discriminating healthy from neoplastic nuclei in HCC samples.
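To make the kind of texture features Model 1 relies on concrete, the sketch below computes a grey-level co-occurrence matrix (GLCM) and two classic statistics in plain NumPy. This is a generic illustration, not the study's pipeline: the function names are my own, only one pixel offset is handled, and the discrete-wavelet-transform features the study also used are not shown.

```python
import numpy as np

def glcm_horizontal(image, levels):
    """Normalized grey-level co-occurrence matrix over horizontally
    adjacent pixel pairs; `image` holds integer levels in [0, levels)."""
    img = np.asarray(image, dtype=int)
    mat = np.zeros((levels, levels), dtype=float)
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        mat[a, b] += 1
    return mat / mat.sum()

def glcm_contrast(mat):
    """Weights co-occurrences by squared level difference:
    high for textures with sharp local transitions."""
    i, j = np.indices(mat.shape)
    return float(np.sum(mat * (i - j) ** 2))

def glcm_homogeneity(mat):
    """Inverse difference moment: high for smooth, uniform textures."""
    i, j = np.indices(mat.shape)
    return float(np.sum(mat / (1.0 + (i - j) ** 2)))
```

A flat image gives contrast 0 and homogeneity 1, while a checkerboard of adjacent levels maximizes contrast, which is the property that makes such statistics useful for separating nuclear textures.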

CONCLUSION: Our research shows that AI techniques and nuclear fluorescent staining could be useful tools for automatically detecting HCC cells in liver tissues.

PMID:39674779 | DOI:10.1016/j.dld.2024.11.026

Categories: Literature Watch

Radiomics and Artificial Intelligence Landscape for [<sup>18</sup>F]FDG PET/CT in Multiple Myeloma

Sat, 2024-12-14 06:00

Semin Nucl Med. 2024 Dec 13:S0001-2998(24)00111-9. doi: 10.1053/j.semnuclmed.2024.11.005. Online ahead of print.

ABSTRACT

[18F]FDG PET/CT is a powerful, high-performing imaging modality in multiple myeloma (MM) and is considered the appropriate method for assessing treatment response in this disease. On the other hand, due to the heterogeneous and sometimes complex patterns of bone marrow infiltration in MM, the interpretation of PET/CT can be particularly challenging, hampering interobserver reproducibility and limiting the diagnostic and prognostic ability of the modality. Although many approaches have been developed to address the issue of standardization, none can yet be considered a standard method for interpretation or objective quantification of PET/CT. Therefore, advanced diagnostic quantification approaches are needed to support and potentially guide the management of MM. In recent years, radiomics has emerged as an innovative method for high-throughput mining of image-derived features for clinical decision making, which may be particularly helpful in oncology. In addition, machine learning and deep learning, both subfields of artificial intelligence (AI) closely related to the radiomics process, have been increasingly applied to automated image analysis, offering new possibilities for a standardized evaluation of imaging modalities such as CT, PET/CT and MRI in oncology. In line with this, the initial but steadily growing literature on the application of radiomics and AI-based methods to [18F]FDG PET/CT in MM has already yielded encouraging results, offering a potentially reliable tool towards optimization and standardization of interpretation in this disease. The main results of these studies are presented in this review.
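To give a concrete flavor of the feature mining the review discusses, the sketch below computes a few common first-order PET descriptors (SUVmax, SUVmean, histogram entropy) plus the volumetric metrics MTV and TLG from a fixed SUV threshold. This is a generic illustration of standard PET quantification, not the feature set of any particular study in the review; names and the threshold value are illustrative.

```python
import numpy as np

def first_order_features(suv_roi, bins=16):
    """First-order radiomic features from the SUV values of a segmented ROI."""
    v = np.asarray(suv_roi, dtype=float).ravel()
    hist, _ = np.histogram(v, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    return {
        "SUVmax": float(v.max()),
        "SUVmean": float(v.mean()),
        "entropy": float(-np.sum(p * np.log2(p))),
    }

def mtv_tlg(suv_volume, voxel_ml, threshold=2.5):
    """Metabolic tumor volume (ml) and total lesion glycolysis
    for voxels at or above a fixed SUV threshold."""
    suv = np.asarray(suv_volume, dtype=float)
    mask = suv >= threshold
    mtv = float(mask.sum()) * voxel_ml
    tlg = float(suv[mask].mean()) * mtv if mask.any() else 0.0
    return mtv, tlg
```

Features like these form the input table that ML/DL models then correlate with outcome or treatment response.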

PMID:39674756 | DOI:10.1053/j.semnuclmed.2024.11.005

Categories: Literature Watch

Intraoperative Real-Time IDH Diagnosis for Glioma Based on Automatic Analysis of Contrast-Enhanced Ultrasound Video

Sat, 2024-12-14 06:00

Ultrasound Med Biol. 2024 Dec 13:S0301-5629(24)00432-0. doi: 10.1016/j.ultrasmedbio.2024.11.007. Online ahead of print.

ABSTRACT

OBJECTIVE: Isocitrate dehydrogenase (IDH) is the most important molecular marker of glioma and is closely linked to patients' diagnosis, treatment, and prognosis. We proposed a real-time diagnosis method for IDH status differentiation based on automatic analysis of intraoperative contrast-enhanced ultrasound (CEUS) video.

METHODS: Inspired by the time-intensity curve (TIC) analysis of CEUS used in clinical practice, this paper proposed an automatic CEUS video analysis method called ATAN (Automatic TIC Analysis Network). Based on tumor identification, ATAN automatically selected regions of interest (ROIs) inside and outside the glioma. ATAN preserves the integrity of the dynamic perfusion features at critical locations, resulting in optimal diagnostic performance. A transfer learning mechanism was also introduced, using two auxiliary CEUS datasets to address the small-sample problem of intraoperative glioma data.
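A TIC of the kind ATAN automates is straightforward to extract once an ROI is fixed: average the echo intensity inside the ROI frame by frame, then summarize the curve. The sketch below is illustrative only; the function names and the particular perfusion descriptors are assumptions, not the paper's implementation.

```python
import numpy as np

def time_intensity_curve(frames, roi_mask):
    """Mean echo intensity inside an ROI for each frame of a CEUS clip."""
    frames = np.asarray(frames, dtype=float)   # shape (T, H, W)
    mask = np.asarray(roi_mask, dtype=bool)    # shape (H, W)
    return frames[:, mask].mean(axis=1)        # shape (T,)

def tic_features(tic, frame_interval_s=1.0):
    """Simple perfusion descriptors: peak intensity, time to peak,
    and average wash-in slope up to the peak."""
    tic = np.asarray(tic, dtype=float)
    peak = int(tic.argmax())
    wash_in = (tic[peak] - tic[0]) / (peak * frame_interval_s) if peak > 0 else 0.0
    return {"peak_intensity": float(tic[peak]),
            "time_to_peak_s": peak * frame_interval_s,
            "wash_in_slope": float(wash_in)}
```

ATAN's contribution is learning such dynamics end to end from the video rather than hand-picking the descriptors, but the hand-crafted version clarifies what the network is modeling.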

RESULTS: Through pretraining on 258 patients from two auxiliary cohorts, ATAN produced the IDH diagnosis with an accuracy of 0.90 and an AUC of 0.91 on the main cohort of 60 glioma patients (mean age, 50 ± 14 years; 28 men). Unlike other existing IDH status differentiation methods, ATAN provides a real-time diagnosis without the need for tumor samples.

CONCLUSION: ATAN is an effective automatic analysis model for CEUS; with its help, real-time intraoperative IDH diagnosis can be achieved with high accuracy. Compared with other state-of-the-art deep learning methods, the ATAN model's accuracy is on average 15% higher.

PMID:39674714 | DOI:10.1016/j.ultrasmedbio.2024.11.007

Categories: Literature Watch
