Deep learning
Automatic gait event detection in older adults during perturbed walking
J Neuroeng Rehabil. 2025 Feb 28;22(1):40. doi: 10.1186/s12984-025-01560-9.
ABSTRACT
Accurate detection of gait events in older adults, particularly during perturbed walking, is essential for evaluating balance control and fall risk. Traditional force plate-based methods often face limitations in perturbed walking scenarios due to the difficulty of landing cleanly on the force plates. Consequently, previous studies have not addressed automatic gait event detection methods for perturbed walking. This study introduces an automated gait event detection method using a bidirectional gated recurrent unit (Bi-GRU) model, leveraging ground reaction force, joint angles, and marker data, for both regular and perturbed walking scenarios from 307 healthy older adults. Our marker-based model achieved over 97% accuracy with a mean error of less than 14 ms in detecting touchdown (TD) and liftoff (LO) events for both walking scenarios. The results highlight the efficacy of kinematic approaches, demonstrating their potential in gait event detection for clinical settings. When integrated with wearable sensors or computer vision techniques, these methods enable real-time, precise monitoring of gait patterns, which is helpful for delivering personalized fall-prevention programs. This work takes a significant step forward in automated gait analysis for perturbed walking, offering a reliable method for evaluating gait patterns, balance control, and fall risk in clinical settings.
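As a rough illustration of the modelling approach described in this abstract, the sketch below builds a small bidirectional GRU that emits per-frame event logits (no event / touchdown / liftoff) from windows of kinematic features. The feature count, window length, layer sizes, and class scheme are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class BiGRUEventDetector(nn.Module):
    """Per-frame gait-event classifier over kinematic feature windows."""
    def __init__(self, n_features=12, hidden=64, n_classes=3):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):            # x: (batch, frames, n_features)
        out, _ = self.gru(x)         # (batch, frames, 2 * hidden)
        return self.head(out)        # per-frame logits

# Toy forward pass: 8 windows of 200 frames with 12 marker/joint-angle channels.
model = BiGRUEventDetector()
logits = model(torch.randn(8, 200, 12))
print(logits.shape)                  # torch.Size([8, 200, 3])
```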
PMID:40022199 | DOI:10.1186/s12984-025-01560-9
A computational spectrometer for the visible, near, and mid-infrared enabled by a single-spinning film encoder
Commun Eng. 2025 Feb 28;4(1):37. doi: 10.1038/s44172-025-00379-5.
ABSTRACT
Computational spectrometers enable low-cost, in-situ, and rapid spectral analysis, with applications in chemistry, biology, and environmental science. Traditional filter-based spectral encoding approaches typically use filter arrays, complicating the manufacturing process and hindering device consistency. Here we propose a computational spectrometer spanning visible to mid-infrared by combining the Single-Spinning Film Encoder (SSFE) with a deep learning-based reconstruction algorithm. Optimization through particle swarm optimization (PSO) allows for low-correlation and high-complexity spectral responses under different polarizations and spinning angles. The spectrometer demonstrates single-peak resolutions of 0.5 nm, 2 nm, 10 nm, and dual-peak resolutions of 3 nm, 6 nm, 20 nm for the visible, near, and mid-infrared wavelength ranges. Experimentally, it shows an average MSE of 1.05 × 10⁻³ for narrowband spectral reconstruction in the visible wavelength range, with average center-wavelength and linewidth errors of 0.61 nm and 0.56 nm. Additionally, it achieves an overall 81.38% precision for the classification of 220 chemical compounds, showcasing its potential for compact, cost-effective spectroscopic solutions.
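A minimal sketch of the reconstruction idea behind filter-encoded computational spectrometers: the spinning-film encoder is modelled as a response matrix A (one row per spin angle/polarization state), and a small network learns to invert the measurements y = A·x. The channel count, spectral binning, network, and synthetic narrowband training spectra are all assumptions, not the authors' SSFE design or reconstruction algorithm.

```python
import numpy as np
import torch
import torch.nn as nn

n_channels, n_bins = 32, 301                      # encoder readings vs spectral bins
A = np.abs(np.random.randn(n_channels, n_bins)).astype(np.float32)  # stand-in encoder responses
A_t = torch.from_numpy(A)

decoder = nn.Sequential(nn.Linear(n_channels, 256), nn.ReLU(),
                        nn.Linear(256, n_bins), nn.Softplus())
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for step in range(200):                           # train on synthetic narrowband spectra
    x = torch.zeros(64, n_bins)
    x[torch.arange(64), torch.randint(0, n_bins, (64,))] = 1.0
    y = x @ A_t.T                                 # simulated encoder readings
    loss = nn.functional.mse_loss(decoder(y), x)
    opt.zero_grad(); loss.backward(); opt.step()
```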
PMID:40021937 | DOI:10.1038/s44172-025-00379-5
Software defect prediction based on residual/shuffle network optimized by upgraded fish migration optimization algorithm
Sci Rep. 2025 Feb 28;15(1):7201. doi: 10.1038/s41598-025-91784-5.
ABSTRACT
The study introduces a new method for predicting software defects based on Residual/Shuffle (RS) networks and an upgraded version of Fish Migration Optimization (UFMO). The overall contribution is to improve prediction accuracy and reduce the manual effort needed. The originality of this work lies in the synergistic use of deep learning and metaheuristics to extract semantic and structural properties from software code. The model is tested on a variety of open-source projects, yielding an average accuracy of 93% and surpassing the performance of state-of-the-art models. The results indicate an overall increase in precision (78-98%), recall (71-98%), F-measure (72-96%), and area under the curve (AUC) (78-99%). The proposed model is simple and efficient and proves effective in identifying potential defects, decreasing the chance of missing them and improving overall software quality compared with existing approaches. However, the analysis is limited to open-source projects and warrants further evaluation on proprietary software. The study provides a robust and efficient tool for developers. This approach can revolutionize software development practices by using artificial intelligence to solve difficult issues in software. The model's high accuracy helps reduce development cost, improve user satisfaction, and enhance the overall quality of the software being developed.
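The abstract does not detail the Residual/Shuffle architecture, so the sketch below only illustrates one plausible reading: a residual block whose grouped 1D convolutions are preceded by a channel shuffle, applied to a sequence of code-derived features. Channel counts, group size, and the sequence framing are assumptions, and the UFMO metaheuristic used to tune the network is not reproduced.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    b, c, l = x.shape
    return x.view(b, groups, c // groups, l).transpose(1, 2).reshape(b, c, l)

class RSBlock(nn.Module):
    """Residual block with channel shuffle over code-feature sequences."""
    def __init__(self, channels=64, groups=4):
        super().__init__()
        self.groups = groups
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1, groups=groups),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 1))

    def forward(self, x):                        # x: (batch, channels, length)
        return x + self.conv(channel_shuffle(x, self.groups))

print(RSBlock()(torch.randn(2, 64, 100)).shape)  # torch.Size([2, 64, 100])
```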
PMID:40021906 | DOI:10.1038/s41598-025-91784-5
Exploring the application of deep learning methods for polygenic risk score estimation
Biomed Phys Eng Express. 2025 Feb 28. doi: 10.1088/2057-1976/adbb71. Online ahead of print.
ABSTRACT

Polygenic risk scores (PRS) summarise genetic information into a single number with clinical and research uses. Machine learning (ML) has revolutionised multiple fields; however, its impact on PRSs has been less significant. We explore how ML can improve the generation of PRSs.
METHODS: We train ML models on known PRSs using UK Biobank data. We explore whether the models can recreate human-programmed PRSs, including using a single model to generate multiple PRSs, and examine the difficulties ML faces in PRS generation. We investigate how ML can compensate for missing data, and we examine constraints on performance.
RESULTS: We demonstrate almost perfect generation of multiple PRSs, with little loss of performance when the quantity of training data is reduced. For an example set of missing SNPs, the multilayer perceptron (MLP) produces predictions that separate cases from population samples with an area under the receiver operating characteristic curve of 0.847 (95% CI: 0.828-0.864), compared with 0.798 (95% CI: 0.779-0.818) for the PRS.
CONCLUSIONS: ML can accurately generate PRSs, including with one model for multiple PRSs. The models are transferable and have high longevity. With certain missing SNPs, the ML models can improve on PRS generation. Further improvements will likely require additional input data.
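A toy sketch of the core idea, training an ML model to reproduce a known (here, linear) PRS from genotype dosages; the SNP count, the randomly drawn score weights, and the MLP size are invented purely for illustration and do not reflect the UK Biobank analysis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_snps = 5000, 200
G = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # genotype dosages 0/1/2
w = rng.normal(size=n_snps)                                     # "published" PRS weights
prs = G @ w                                                      # target score

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
model.fit(G[:4000], prs[:4000])
print("held-out R^2:", model.score(G[4000:], prs[4000:]))        # how well the MLP recreates the score
```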
PMID:40020248 | DOI:10.1088/2057-1976/adbb71
Derivation of an artificial intelligence-based electrocardiographic model for the detection of acute coronary occlusive myocardial infarction
Arch Cardiol Mex. 2025 Feb 28. doi: 10.24875/ACM.24000195. Online ahead of print.
ABSTRACT
OBJECTIVES: We aimed to assess the performance of an artificial intelligence-electrocardiogram (AI-ECG)-based model capable of detecting acute coronary occlusion myocardial infarction (ACOMI) in the setting of patients with acute coronary syndrome (ACS).
METHODS: This was a prospective, observational, longitudinal, and single-center study including patients with the initial diagnosis of ACS (both ST-elevation acute myocardial infarction [STEMI] and non-ST-segment elevation myocardial infarction [NSTEMI]). To train the deep learning model in recognizing ACOMI, manual digitization of each patient's ECG was conducted using smartphone cameras of varying quality. We relied on convolutional neural networks as the AI models for the classification of ECG examples. ECGs were also independently evaluated by two expert cardiologists blinded to clinical outcomes; each was asked to determine (a) whether the patient had a STEMI based on universal criteria, or (b) if STEMI criteria were not met, whether any other ECG finding was suggestive of ACOMI. ACOMI was defined by coronary angiography findings meeting any of the following three criteria: (a) total thrombotic occlusion, (b) TIMI thrombus grade 2 or higher + TIMI grade flow 1 or less, or (c) the presence of a subocclusive lesion (> 95% angiographic stenosis) with TIMI grade flow < 3. Patients were classified into four groups: STEMI + ACOMI, NSTEMI + ACOMI, STEMI + non-ACOMI, and NSTEMI + non-ACOMI.
RESULTS: For the primary objective of the study, AI outperformed human experts in both NSTEMI and STEMI, with an area under the curve (AUC) of 0.86 (95% confidence interval [CI] 0.75-0.98) for identifying ACOMI, compared with ECG experts using their experience (AUC: 0.33, 95% CI 0.17-0.49) or applying universal STEMI criteria (AUC: 0.50, 95% CI 0.35-0.54) (p < 0.001 for the receiver operating characteristic AUC comparison). The AI model demonstrated a positive predictive value of 0.84 and a negative predictive value of 1.0.
CONCLUSION: Our AI-ECG model demonstrated a higher diagnostic precision for the detection of ACOMI compared with experts and the use of STEMI criteria. Further research and external validation are needed to understand the role of AI-based models in the setting of ACS.
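A minimal sketch of the kind of image-based classifier described above: a small CNN scores photographed, digitized ECGs for ACOMI. The ResNet-18 backbone, input size, and two-class head are stand-in assumptions, not the authors' model.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)             # stand-in CNN backbone
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # ACOMI vs non-ACOMI head

images = torch.randn(4, 3, 224, 224)                 # batch of digitized ECG photos
probs = torch.softmax(backbone(images), dim=1)[:, 1] # P(ACOMI) per ECG
print(probs.shape)                                   # torch.Size([4])
```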
PMID:40020200 | DOI:10.24875/ACM.24000195
Phantom-metasurface cooperative system trained by a deep learning network driven by a bound state for a magnetic resonance-enhanced system
Opt Lett. 2025 Mar 1;50(5):1723-1726. doi: 10.1364/OL.546727.
ABSTRACT
With the development of medical imaging technology, magnetic resonance imaging (MRI) has become an important tool for diagnosing and monitoring a variety of diseases. However, traditional MRI techniques are limited in terms of imaging speed and resolution. In this study, we developed an efficient phantom-metasurface composite MRI enhancement system based on deep learning network training and realized the design and control of the metasurface in the MHz band. First, a forward neural network is used to rapidly predict the electromagnetic response characteristics. On this basis, the network is inverse-designed and the structural parameters of the metasurface are predicted. The experimental results show that combining deep neural networks with electromagnetic metasurfaces significantly improves metasurface design efficiency and has great application potential in MRI systems.
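A hedged sketch of the forward/inverse workflow the abstract describes: a forward network maps metasurface structural parameters to a predicted electromagnetic response, and inverse design then optimizes the parameters by gradient descent through the (frozen, here untrained) forward model. Parameter and response dimensions, and the target response, are placeholders.

```python
import torch
import torch.nn as nn

forward_net = nn.Sequential(nn.Linear(4, 128), nn.ReLU(),
                            nn.Linear(128, 64))     # 4 structural params -> 64-point response
# In practice forward_net would first be trained on simulated (parameters, response) pairs.

target = torch.rand(64)                             # desired response (placeholder)
params = torch.rand(4, requires_grad=True)          # initial structural guess
opt = torch.optim.Adam([params], lr=1e-2)
for _ in range(500):
    loss = nn.functional.mse_loss(forward_net(params), target)
    opt.zero_grad(); loss.backward(); opt.step()
print(params.detach())                              # inverse-designed parameters
```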
PMID:40020024 | DOI:10.1364/OL.546727
Physics-driven deep learning for high-fidelity photon-detection ghost imaging
Opt Lett. 2025 Mar 1;50(5):1719-1722. doi: 10.1364/OL.541330.
ABSTRACT
Single-photon detection has significant potential in the field of imaging due to its high sensitivity and has been widely applied across various domains. However, achieving high spatial and depth resolution through scattering media remains challenging because of the limitations of low light intensity, high background noise, and inherent time jitter of the detector. This paper proposes a physics-driven, learning-based photon-detection ghost imaging method to address these challenges. By co-designing the computational ghost imaging system and the network, we integrate imaging and reconstruction more closely to surpass the physical resolution limitations. Fringe patterns are employed to encode the depth information of the object into different channels of an image cube. A specialized depth fusion network with attention mechanisms is then designed to extract inter-depth correlation features, enabling super-resolution reconstruction at 256 × 256 pixels. Experimental results demonstrate that the proposed method presents superior imaging performance across various scenarios, offering a more compact and cost-effective alternative for photon-detection imaging.
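A sketch of a depth-fusion block with channel attention in the spirit of the image-cube idea above: per-depth channels are reweighted by a squeeze-and-excitation-style gate before fusion. The channel count, reduction ratio, and fusion head are assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class DepthAttentionFusion(nn.Module):
    """Reweights the depth channels of an image cube, then fuses them."""
    def __init__(self, depth_channels=16, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Linear(depth_channels, depth_channels // reduction), nn.ReLU(),
            nn.Linear(depth_channels // reduction, depth_channels), nn.Sigmoid())
        self.fuse = nn.Conv2d(depth_channels, 1, kernel_size=3, padding=1)

    def forward(self, cube):                          # cube: (batch, depth, H, W)
        w = self.gate(self.pool(cube).flatten(1))     # per-depth attention weights
        return self.fuse(cube * w[:, :, None, None])  # fused (batch, 1, H, W)

print(DepthAttentionFusion()(torch.randn(2, 16, 256, 256)).shape)
```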
PMID:40020023 | DOI:10.1364/OL.541330
Comparing the performance of a large language model and naive human interviewers in interviewing children about a witnessed mock-event
PLoS One. 2025 Feb 28;20(2):e0316317. doi: 10.1371/journal.pone.0316317. eCollection 2025.
ABSTRACT
PURPOSE: The present study compared the performance of a Large Language Model (LLM; ChatGPT) and human interviewers in interviewing children about a mock-event they witnessed.
METHODS: Children aged 6-8 (N = 78) were randomly assigned to the LLM (n = 40) or the human interviewer condition (n = 38). In the experiment, the children were asked to watch a video filmed by the researchers that depicted behavior including elements that could be misinterpreted as abusive in other contexts, and then answer questions posed by either an LLM (presented by a human researcher) or a human interviewer.
RESULTS: Irrespective of condition, recommended (vs. not recommended) questions elicited more correct information. The LLM posed fewer questions overall, but there was no difference in the proportion of questions recommended by the literature. There were no differences between the LLM and human interviewers in the unique correct information elicited, but questions posed by the LLM (vs. humans) elicited more unique correct information per question. The LLM (vs. humans) also elicited less false information overall, but there was no difference in false information elicited per question.
CONCLUSIONS: The findings show that the LLM was competent in formulating questions that adhere to best-practice guidelines, while human interviewers asked more questions following up on the children's responses in trying to find out what they had witnessed. The results indicate that LLMs could potentially be used to support child investigative interviewers. However, substantial further investigation is warranted to ascertain the utility of LLMs in more realistic investigative interview settings.
PMID:40019879 | DOI:10.1371/journal.pone.0316317
MultiKD-DTA: Enhancing Drug-Target Affinity Prediction Through Multiscale Feature Extraction
Interdiscip Sci. 2025 Feb 28. doi: 10.1007/s12539-025-00697-4. Online ahead of print.
ABSTRACT
The discovery and development of novel pharmaceutical agents is characterized by high costs, lengthy timelines, and significant safety concerns. Traditional drug discovery involves pharmacologists manually screening drug molecules against protein targets, focusing on binding within protein cavities. However, this manual process is slow and inherently limited. Given these constraints, the use of deep learning techniques to predict drug-target interaction (DTI) affinities is both significant and promising for future applications. This paper introduces an innovative deep learning architecture designed to enhance the prediction of DTI affinities. The model ingeniously combines graph neural networks, pre-trained large-scale protein models, and attention mechanisms to improve performance. In this framework, molecular structures are represented as graphs and processed through graph neural networks and multiscale convolutional networks to facilitate feature extraction. Simultaneously, protein sequences are encoded using pre-trained ESM-2 large models and processed with bidirectional long short-term memory networks. Subsequently, the molecular and protein embeddings derived from these processes are integrated within a fusion module to compute affinity scores. Experimental results demonstrate that our proposed model outperforms existing methods on two publicly available datasets.
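A simplified sketch of the fusion step only: a molecule embedding (standing in for the GNN output) and per-residue protein embeddings (standing in for ESM-2 features) are passed through a BiLSTM and a small MLP to regress an affinity score. All dimensions are assumptions, and the upstream encoders are replaced by random placeholder tensors.

```python
import torch
import torch.nn as nn

class AffinityHead(nn.Module):
    """Fuses molecule and protein embeddings into one affinity score."""
    def __init__(self, mol_dim=128, prot_dim=320, hidden=256):
        super().__init__()
        self.prot_lstm = nn.LSTM(prot_dim, hidden, batch_first=True,
                                 bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(mol_dim + 2 * hidden, 256),
                                 nn.ReLU(), nn.Linear(256, 1))

    def forward(self, mol_emb, prot_tokens):
        out, _ = self.prot_lstm(prot_tokens)          # (batch, length, 2 * hidden)
        prot_emb = out.mean(dim=1)                    # pooled protein embedding
        return self.mlp(torch.cat([mol_emb, prot_emb], dim=-1)).squeeze(-1)

# Placeholder inputs: 4 molecule embeddings and 4 protein sequences of 500 residues.
score = AffinityHead()(torch.randn(4, 128), torch.randn(4, 500, 320))
print(score.shape)                                    # torch.Size([4])
```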
PMID:40019659 | DOI:10.1007/s12539-025-00697-4
A novel approach for estimating postmortem intervals under varying temperature conditions using pathology images and artificial intelligence models
Int J Legal Med. 2025 Feb 28. doi: 10.1007/s00414-025-03447-9. Online ahead of print.
ABSTRACT
Estimating the postmortem interval (PMI) is a critical yet complex task in forensic investigations, with accurate and timely determination playing a key role in case resolution and legal outcomes. Traditional methods often suffer from environmental variability and subjective biases, emphasizing the need for more reliable and objective approaches. In this study, we present a novel predictive model for PMI estimation that leverages pathological tissue images and artificial intelligence (AI). The model is designed to perform under three temperature conditions: 25 °C, 37 °C, and 4 °C. Using a ResNet50 neural network, patch-level images were analyzed to extract deep learning-derived features, which were integrated with machine learning algorithms for whole slide image (WSI) classification. The model achieved strong performance, with micro and macro AUC values of at least 0.949 at the patch level and 0.800 at the WSI level in both training and testing sets. In external validation, micro and macro AUC values at the patch level exceeded 0.960. These results highlight the potential of AI to improve the accuracy and efficiency of PMI estimation. As AI technology continues to advance, this approach holds promise for enhancing forensic investigations and supporting more precise case resolutions.
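A hedged sketch of the patch-to-slide pipeline outlined above: a ResNet50 backbone produces patch-level features, patch features are mean-pooled into one vector per slide, and a classical classifier assigns the PMI class. The pooling strategy, the three-class labelling, and the random "patches" are illustrative assumptions.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression

backbone = models.resnet50(weights=None)   # pretrained weights would be used in practice
backbone.fc = torch.nn.Identity()          # expose 2048-d patch features
backbone.eval()

def slide_feature(patches):                # patches: (N, 3, 224, 224) tensor
    with torch.no_grad():
        feats = backbone(patches)          # (N, 2048) patch-level features
    return feats.mean(dim=0).numpy()       # mean-pool into one slide vector

# Toy example: 10 slides summarised from random "patches", three PMI classes.
X = np.stack([slide_feature(torch.randn(4, 3, 224, 224)) for _ in range(10)])
y = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
clf = LogisticRegression(max_iter=1000).fit(X, y)
```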
PMID:40019556 | DOI:10.1007/s00414-025-03447-9
Artificial intelligence in otorhinolaryngology: current trends and application areas
Eur Arch Otorhinolaryngol. 2025 Feb 17. doi: 10.1007/s00405-025-09272-5. Online ahead of print.
ABSTRACT
PURPOSE: This study aims to perform a bibliometric analysis of scientific research on the use of artificial intelligence (AI) in the field of Otorhinolaryngology (ORL), with a specific focus on identifying emerging AI trend topics within this discipline.
METHODS: A total of 498 articles on AI in ORL, published between 1982 and 2024, were retrieved from the Web of Science database. Various bibliometric techniques, including trend keyword analysis and factor analysis, were applied to analyze the data.
RESULTS: The most prolific journal was the European Archives of Oto-Rhino-Laryngology (n = 67). The USA (n = 200) and China (n = 61) were the most productive countries in AI-related ORL research. The most productive institution was Harvard University/Harvard Medical School (n = 71). The leading authors in this field were Lechien JR. (n = 18) and Rameau A. (n = 17). The most frequently used keywords in the AI research were cochlear implant, head and neck cancer, magnetic resonance imaging (MRI), hearing loss, patient education, diagnosis, radiomics, surgery, hearing aids, laryngology, and otitis media. Recent trends in otorhinolaryngology research reflect a dynamic focus, progressing from hearing-related technologies such as hearing aids and cochlear implants in earlier years, to diagnostic innovations like audiometry, psychoacoustics, and narrow band imaging. The emphasis has recently shifted toward advanced applications of MRI, radiomics, and computed tomography (CT) for conditions such as head and neck cancer, chronic rhinosinusitis, laryngology, and otitis media. Additionally, increasing attention has been given to patient education, quality of life, and prognosis, underscoring a holistic approach to diagnosis, surgery, and treatment in otorhinolaryngology.
CONCLUSION: AI has significantly impacted the field of ORL, especially in diagnostic imaging and therapeutic planning. With advancements in MRI and CT-based technologies, AI has proven to enhance disease detection and management. The future of AI in ORL suggests a promising path toward improving clinical decision-making, patient care, and healthcare efficiency.
PMID:40019544 | DOI:10.1007/s00405-025-09272-5
Pd-Modified Microneedle Array Sensor Integration with Deep Learning for Predicting Silica Aerogel Properties in Real Time
ACS Appl Mater Interfaces. 2025 Feb 28. doi: 10.1021/acsami.4c17680. Online ahead of print.
ABSTRACT
The continuous global effort to predict material properties through artificial intelligence has predominantly focused on utilizing material stoichiometry or structures in deep learning models. This study aims to predict material properties using electrochemical impedance data, along with frequency and time parameters, that can be obtained during processing stages. The target material, silica aerogel, is widely recognized for its lightweight structure and excellent insulating properties, which are attributed to its large surface area and pore size. However, production is often delayed due to the prolonged aging process. Real-time prediction of material properties during processing can significantly enhance process optimization and monitoring. In this study, we developed a system to predict the physical properties of silica aerogel, specifically pore diameter, pore volume, and surface area. This system integrates a 3 × 3 array Pd/Au sensor, which exhibits high sensitivity to varying pH levels during aerogel synthesis and is capable of acquiring a large data set (impedance, frequency, time) in real-time. The collected data is then processed through a deep neural network algorithm. Because the system is trained with data obtained during the processing stage, it enables real-time predictions of the critical properties of silica aerogel, thus facilitating process optimization and monitoring. The final performance evaluation demonstrated an optimal alignment between true and predicted values for silica aerogel properties, with a mean absolute percentage error of approximately 0.9%. This approach holds great promise for significantly improving the efficiency and effectiveness of silica aerogel production by providing accurate real-time predictions.
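A minimal sketch of the property-prediction step: a feed-forward network maps (impedance, frequency, time) triples to the three targets named above (pore diameter, pore volume, surface area). The layer widths, loss, and synthetic data are placeholders, not the authors' trained system.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3))                # pore diameter, pore volume, surface area
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(256, 3)                              # impedance, frequency, time (toy)
y = torch.randn(256, 3)                              # measured properties (toy)
for _ in range(100):
    loss = nn.functional.l1_loss(net(x), y)          # absolute-error objective
    opt.zero_grad(); loss.backward(); opt.step()
```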
PMID:40019213 | DOI:10.1021/acsami.4c17680
Quantifying Facial Gestures Using Deep Learning in a New World Monkey
Am J Primatol. 2025 Mar;87(3):e70013. doi: 10.1002/ajp.70013.
ABSTRACT
Facial gestures are a crucial component of primate multimodal communication. However, current methodologies for extracting facial data from video recordings are labor-intensive and prone to human subjectivity. Although automatic tools for this task are still in their infancy, deep learning techniques are revolutionizing animal behavior research. This study explores the distinctiveness of facial gestures in cotton-top tamarins, quantified using markerless pose estimation algorithms. From footage of captive individuals, we extracted and manually labeled frames to develop a model that can recognize a custom set of landmarks positioned on the face of the target species. The trained model predicted landmark positions and subsequently transformed them into distance matrices representing landmarks' spatial distributions within each frame. We employed three competitive machine learning classifiers to assess the ability to automatically discriminate facial configurations that cooccur with vocal emissions and are associated with different behavioral contexts. Initial analysis showed correct classification rates exceeding 80%, suggesting that voiced facial configurations are highly distinctive from unvoiced ones. Our findings also demonstrated varying context specificity of facial gestures, with the highest classification accuracy observed during yawning, social activity, and resting. This study highlights the potential of markerless pose estimation for advancing the study of primate multimodal communication, even in challenging species such as cotton-top tamarins. The ability to automatically distinguish facial gestures in different behavioral contexts represents a critical step in developing automated tools for extracting behavioral cues from raw video data.
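A sketch of turning predicted facial landmarks into pairwise-distance features and classifying the resulting configurations, as described above; the landmark count, the binary voiced/unvoiced labels, and the SVM settings are assumptions, and random coordinates stand in for the pose-estimation output.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.svm import SVC

def frame_features(landmarks):          # landmarks: (n_points, 2) pixel coordinates
    return pdist(landmarks)             # condensed pairwise-distance vector

rng = np.random.default_rng(0)
X = np.stack([frame_features(rng.random((15, 2))) for _ in range(200)])
y = rng.integers(0, 2, 200)             # e.g. voiced vs unvoiced configuration
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("toy accuracy:", clf.score(X[150:], y[150:]))
```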
PMID:40019116 | DOI:10.1002/ajp.70013
Deep learning for named entity recognition in Turkish radiology reports
Diagn Interv Radiol. 2025 Feb 28. doi: 10.4274/dir.2025.243100. Online ahead of print.
ABSTRACT
PURPOSE: The primary objective of this research is to enhance the accuracy and efficiency of information extraction from radiology reports. In addressing this objective, the study aims to develop and evaluate a deep learning framework for named entity recognition (NER).
METHODS: We used a synthetic dataset of 1,056 Turkish radiology reports created and labeled by the radiologists in our research team. Due to privacy concerns, actual patient data could not be used; however, the synthetic reports closely mimic genuine reports in structure and content. We employed the four-stage DYGIE++ model for the experiments. First, we performed token encoding using four bidirectional encoder representations from transformers (BERT) models: BERTurk, BioBERTurk, PubMedBERT, and XLM-RoBERTa. Second, we introduced adaptive span enumeration, considering the word count of a sentence in Turkish. Third, we adopted span graph propagation to generate a multidirectional graph crucial for coreference resolution. Finally, we used a two-layered feed-forward neural network to classify the named entity.
RESULTS: The experiments conducted on the labeled dataset showcase the approach's effectiveness. The study achieved an F1 score of 80.1 for the NER task, with the BioBERTurk model, which is pre-trained on Turkish Wikipedia, radiology reports, and biomedical texts, proving to be the most effective of the four BERT models used in the experiment.
CONCLUSION: We show how different dataset labels affect the model's performance. The results demonstrate the model's ability to handle the intricacies of Turkish radiology reports, providing a detailed analysis of precision, recall, and F1 scores for each label. Additionally, this study compares its findings with related research in other languages.
CLINICAL SIGNIFICANCE: Our approach provides clinicians with more precise and comprehensive insights to improve patient care by extracting relevant information from radiology reports. This innovation in information extraction streamlines the diagnostic process and helps expedite patient treatment decisions.
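An illustrative sketch of the span-enumeration step in span-based NER: every contiguous token span up to a maximum width becomes a candidate entity whose representation concatenates its boundary token encodings. The toy tokens, the random stand-in encodings, and the maximum width are assumptions; the real pipeline would use a BERT-family encoder such as BioBERTurk and the DYGIE++ scoring layers.

```python
import torch

tokens = ["karaciğerde", "hipodens", "lezyon", "izlendi"]   # toy report fragment
hidden = torch.randn(len(tokens), 768)                      # stand-in token encodings

max_width = 3
spans, reps = [], []
for start in range(len(tokens)):
    for end in range(start, min(start + max_width, len(tokens))):
        spans.append((start, end))
        reps.append(torch.cat([hidden[start], hidden[end]]))  # boundary features
span_reps = torch.stack(reps)                                 # (n_spans, 1536)
print(len(spans), span_reps.shape)
```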
PMID:40018795 | DOI:10.4274/dir.2025.243100
Diagnostic accuracy of convolutional neural network algorithms to distinguish gastrointestinal obstruction on conventional radiographs in a pediatric population
Diagn Interv Radiol. 2025 Feb 28. doi: 10.4274/dir.2025.242950. Online ahead of print.
ABSTRACT
PURPOSE: Gastrointestinal (GI) dilatations are frequently observed in radiographs of pediatric patients who visit emergency departments with acute symptoms such as vomiting, pain, constipation, or diarrhea. Timely and accurate differentiation of whether there is an obstruction requiring surgery in these patients is crucial to prevent complications such as necrosis and perforation, which can lead to death. In this study, we aimed to use convolutional neural network (CNN) models to differentiate healthy children with normal intestinal gas distribution in abdominal radiographs from those with GI dilatation or obstruction. We also aimed to distinguish patients with obstruction requiring surgery and those with other GI dilatation or ileus.
METHODS: Abdominal radiographs of patients with a surgical, clinical, and/or laboratory diagnosis of GI diseases with GI dilatation were retrieved from our institution's Picture Archiving and Communication System archive. Additionally, abdominal radiographs performed to detect abnormalities other than GI disorders were collected to form a control group. The images were labeled with three tags according to their groups: surgically-corrected dilatation (SD), inflammatory/infectious dilatation (ID), and normal. To determine the impact of standardizing the imaging area on the model's performance, an additional dataset was created by applying an automated cropping process. Five CNN models with proven success in image analysis (ResNet50, InceptionResNetV2, Xception, EfficientNetV2L, and ConvNeXtXLarge) were trained, validated, and tested using transfer learning.
RESULTS: A total of 540 normal, 298 SD, and 314 ID radiographs were used in this study. In the differentiation between normal and abnormal images, the highest accuracy rates were achieved with the ResNet50 (93.3%) and InceptionResNetV2 (90.6%) CNN models. After applying automated cropping preprocessing, the highest accuracy rates were achieved with ConvNeXtXLarge (96.9%), ResNet50 (95.5%), and InceptionResNetV2 (95.5%). The highest accuracy in the differentiation between SD and ID was achieved with EfficientNetV2L (94.6%).
CONCLUSION: Deep learning models can be integrated into emergency department radiograph workflows as a decision support system for pediatric GI obstruction, achieving high accuracy rates and immediately alerting physicians to abnormal radiographs and possible etiologies.
CLINICAL SIGNIFICANCE: This paper describes a novel area of utilization of well-known deep learning algorithm models. Although some studies in the literature have shown the efficiency of CNN models in identifying small bowel obstruction with high accuracy for the adult population or some specific diseases, our study is unique for the pediatric population and for evaluating the requirement of surgical versus medical treatment.
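A transfer-learning sketch in the spirit of the experiments above: a pretrained ResNet50 backbone is frozen and a new three-class head (normal / SD / ID) is attached for fine-tuning. The freezing strategy and head are assumptions; the study also evaluated several other backbones.

```python
import torch.nn as nn
from torchvision import models

# ImageNet weights enable transfer learning (weights=None would skip the download).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                      # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 3)    # normal vs SD vs ID head

# During fine-tuning, only model.fc.parameters() would be passed to the optimizer.
```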
PMID:40018794 | DOI:10.4274/dir.2025.242950
Advancing antibiotic discovery with bacterial cytological profiling: a high-throughput solution to antimicrobial resistance
Front Microbiol. 2025 Feb 13;16:1536131. doi: 10.3389/fmicb.2025.1536131. eCollection 2025.
ABSTRACT
Developing new antibiotics poses a significant challenge in the fight against antimicrobial resistance (AMR), a critical global health threat responsible for approximately 5 million deaths annually. Finding new classes of antibiotics that are safe, have acceptable pharmacokinetic properties, and are appropriately active against pathogens is a lengthy and expensive process. Therefore, high-throughput platforms are needed to screen large libraries of synthetic and natural compounds. In this review, we present bacterial cytological profiling (BCP) as a rapid, scalable, and cost-effective method for identifying antibiotic mechanisms of action. Notably, BCP has proven its potential in drug discovery, demonstrated by the identification of the cellular target of spirohexenolide A against methicillin-resistant Staphylococcus aureus. We present the application of BCP for different bacterial organisms and different classes of antibiotics and discuss BCP's advantages, limitations, and potential improvements. Furthermore, we highlight the studies that have utilized BCP to investigate pathogens listed in the Bacterial Priority Pathogens List 2024 and we identify the pathogens whose cytological profiles are missing. We also explore the most recent artificial intelligence and deep learning techniques that could enhance the analysis of data generated by BCP, potentially advancing our understanding of antibiotic resistance mechanisms and the discovery of novel druggable pathways.
PMID:40018674 | PMC:PMC11865948 | DOI:10.3389/fmicb.2025.1536131
Deep learning models using intracranial and scalp EEG for predicting sedation level during emergence from anaesthesia
BJA Open. 2024 Oct 12;12:100347. doi: 10.1016/j.bjao.2024.100347. eCollection 2024 Dec.
ABSTRACT
BACKGROUND: Maintaining an appropriate depth of anaesthesia is important for avoiding adverse effects from undermedication or overmedication during surgery. Electroencephalography (EEG) has become increasingly used to achieve this balance. Investigating the predictive power of intracranial EEG (iEEG) and scalp EEG for different levels of sedation could increase the utility of EEG monitoring.
METHODS: Simultaneous iEEG, scalp EEG, and Observer's Assessment of Alertness/Sedation (OAA/S) scores were recorded during emergence from anaesthesia in seven patients undergoing placement of intracranial electrodes for medically refractory epilepsy. A deep learning model was constructed to predict an OAA/S score of 0-2 vs 3-5 using iEEG, scalp EEG, and their combination. An additional five patients with only scalp EEG data were used for independent validation. Models were evaluated using the area under the receiver-operating characteristic curve (AUC).
RESULTS: Combining scalp EEG and iEEG yielded significantly better prediction (AUC=0.795, P<0.001) compared with iEEG only (AUC=0.750, P=0.02) or scalp EEG only (AUC=0.764, P<0.001). The validation scalp EEG-only data resulted in an AUC of 0.844. Combining the two modalities appeared to capture the spatiotemporal advantages of each.
CONCLUSIONS: The combination of iEEG and scalp EEG better predicted sedation level than either modality alone. The scalp EEG only model achieved a similar AUC to the combined model and maintained its performance in additional patients, suggesting that scalp EEG models are likely sufficient for real-time monitoring. Deep learning approaches using multiple leads to capture a wider area of brain activity may help augment existing EEG monitors for prediction of sedation.
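A toy sketch of the modality comparison: features from scalp EEG and iEEG are used separately and then concatenated, and each variant is scored by AUC on held-out data. A logistic-regression stand-in replaces the authors' deep network, and the random features are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
scalp, ieeg = rng.random((300, 20)), rng.random((300, 20))   # per-epoch EEG features (toy)
y = rng.integers(0, 2, 300)                                  # OAA/S 0-2 vs 3-5

for name, X in [("scalp", scalp), ("iEEG", ieeg),
                ("combined", np.hstack([scalp, ieeg]))]:
    clf = LogisticRegression(max_iter=500).fit(X[:200], y[:200])
    print(name, roc_auc_score(y[200:], clf.predict_proba(X[200:])[:, 1]))
```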
PMID:40018289 | PMC:PMC11867133 | DOI:10.1016/j.bjao.2024.100347
Automated CT image prescription of the gallbladder using deep learning: Development, evaluation, and health promotion
Acute Med Surg. 2025 Feb 27;12(1):e70049. doi: 10.1002/ams2.70049. eCollection 2025 Jan-Dec.
ABSTRACT
AIM: Most previous research on AI-based image diagnosis of acute cholecystitis (AC) has utilized ultrasound images. While these studies have shown promising outcomes, the results were based on still images captured by physicians, introducing inevitable selection bias. This study aims to develop a fully automated system for precise gallbladder detection among various abdominal structures, aiding clinicians in the rapid assessment of AC requiring cholecystectomy.
METHODS: The dataset comprised images from 250 AC patients and 270 control participants. The VGG-16 architecture was employed for gallbladder recognition. Post-processing techniques such as the flood fill algorithm and centroid calculation were integrated into the model. U-Net was utilized for segmentation and feature extraction. All models were combined to develop a fully automated AC detection system.
RESULTS: The gallbladder identification accuracy among various abdominal organs was 95.3%, with the model effectively filtering out CT images lacking a gallbladder. In diagnosing AC, the model was tested on 120 cases, achieving an accuracy of 92.5%, sensitivity of 90.4%, and specificity of 94.1%. After integrating all components, the ensemble model achieved an overall accuracy of 86.7%. The automated process required 0.029 seconds of computation time per CT slice and 3.59 seconds per complete CT set.
CONCLUSIONS: The proposed system achieves promising performance in the automatic detection and diagnosis of gallbladder conditions in patients requiring cholecystectomy, with robust accuracy and computational efficiency. With further clinical validation, this computer-assisted system could serve as an auxiliary tool in identifying patients requiring emergency surgery.
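A sketch of the post-processing described in the methods: on a binary gallbladder mask, connected-component labelling (a flood-fill equivalent) keeps the largest region, and its centroid gives the detection point. The synthetic mask stands in for the segmentation output.

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((128, 128), dtype=np.uint8)
mask[40:70, 50:90] = 1                                   # toy segmentation output

labels, n = ndimage.label(mask)                          # flood-fill style labelling
sizes = ndimage.sum(mask, labels, index=range(1, n + 1)) # pixels per component
largest = int(np.argmax(sizes)) + 1                      # keep the biggest region
cy, cx = ndimage.center_of_mass(labels == largest)       # centroid of that region
print(f"gallbladder centroid approx ({cy:.1f}, {cx:.1f})")
```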
PMID:40018053 | PMC:PMC11865635 | DOI:10.1002/ams2.70049
Injecting structure-aware insights for the learning of RNA sequence representations to identify m6A modification sites
PeerJ. 2025 Feb 24;13:e18878. doi: 10.7717/peerj.18878. eCollection 2025.
ABSTRACT
N6-methyladenosine (m6A) represents one of the most prevalent methylation modifications in eukaryotes and it is crucial to accurately identify its modification sites on RNA sequences. Traditional machine learning based approaches to m6A modification site identification primarily focus on RNA sequence data but often incorporate additional biological domain knowledge and rely on manually crafted features. These methods typically overlook the structural insights inherent in RNA sequences. To address this limitation, we propose M6A-SAI, an advanced predictor for RNA m6A modifications. M6A-SAI leverages a transformer-based deep learning framework to integrate structure-aware insights into sequence representation learning, thereby enhancing the precision of m6A modification site identification. The core innovation of M6A-SAI lies in its ability to incorporate structural information through a multi-step process: initially, the model utilizes a Transformer encoder to learn RNA sequence representations. It then constructs a similarity graph based on Manhattan distance to capture sequence correlations. To address the limitations of the smooth similarity graph, M6A-SAI integrates a structure-aware optimization block, which refines the graph by defining anchor sets and generating an awareness graph through PageRank. Following this, M6A-SAI employs a self-correlation fusion graph convolution framework to merge information from both the similarity and awareness graphs, thus producing enriched sequence representations. Finally, a support vector machine is utilized for classifying these representations. Experimental results validate that M6A-SAI substantially improves the recognition of m6A modification sites by incorporating structure-aware insights, demonstrating its efficacy as a robust method for identifying RNA m6A modification sites.
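A sketch of the similarity-graph step only: pairwise Manhattan (city-block) distances between learned sequence embeddings are converted into a symmetric k-nearest-neighbour adjacency matrix. The embedding size, k, and kernel scaling are assumptions; the transformer encoder, PageRank-based awareness graph, and fusion GCN are not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
emb = rng.random((100, 64))                        # 100 RNA sequence embeddings (toy)

dist = cdist(emb, emb, metric="cityblock")         # Manhattan distances
k = 10
adj = np.zeros_like(dist)
for i in range(len(emb)):
    nn_idx = np.argsort(dist[i])[1:k + 1]          # k nearest neighbours, skip self
    adj[i, nn_idx] = np.exp(-dist[i, nn_idx] / dist.mean())  # weighted edges
adj = np.maximum(adj, adj.T)                       # symmetrise the similarity graph
```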
PMID:40017651 | PMC:PMC11867033 | DOI:10.7717/peerj.18878
Research trends and hotspots evolution of artificial intelligence for cholangiocarcinoma over the past 10 years: a bibliometric analysis
Front Oncol. 2025 Feb 13;14:1454411. doi: 10.3389/fonc.2024.1454411. eCollection 2024.
ABSTRACT
OBJECTIVE: To analyze the research hotspots and potential of Artificial Intelligence (AI) in cholangiocarcinoma (CCA) through visualization.
METHODS: A comprehensive search of publications on the application of AI in CCA from January 1, 2014, to December 31, 2023, within the Web of Science Core Collection, was conducted, and citation information was extracted. CiteSpace 6.2.R6 was used for the visualization analysis of citation information.
RESULTS: A total of 736 publications were included in this study. Early research primarily focused on traditional treatment methods and care strategies for CCA, but since 2019, there has been a significant shift towards the development and optimization of AI algorithms and their application in early cancer diagnosis and treatment decision-making. China emerged as the country with the highest volume of publications, while Khon Kaen University in Thailand was the academic institution with the highest number of publications. A core group of authors involved in a dense network of international collaboration was identified. HEPATOLOGY was found to be the most influential journal in the field. The field's development exhibits a pattern of multiple disciplines intersecting and integrating.
CONCLUSION: The current research hotspots primarily revolve around three directions: AI in the diagnosis and classification of CCA, AI in the preoperative assessment of cancer metastasis risk in CCA, and AI in the prediction of postoperative recurrence in CCA. The complementarity and interdependence among different AI applications will facilitate future applications of AI in the CCA field.
PMID:40017633 | PMC:PMC11865243 | DOI:10.3389/fonc.2024.1454411