Deep learning

Near-zero photon bioimaging by fusing deep learning and ultralow-light microscopy

Mon, 2025-05-19 06:00

Proc Natl Acad Sci U S A. 2025 May 27;122(21):e2412261122. doi: 10.1073/pnas.2412261122. Epub 2025 May 19.

ABSTRACT

Enhancing the reliability and reproducibility of optical microscopy by reducing specimen irradiance continues to be an important biotechnology target. As irradiance levels are reduced, however, the particle nature of light becomes dominant, giving rise to Poisson noise, or photon sparsity, in which only a small fraction (0.5%) of image pixels contain a photon. Photon sparsity can be addressed by collecting approximately 200 photons per pixel; this, however, requires long acquisitions and, as such, suboptimal imaging rates. Here, we introduce near-zero photon bioimaging, a method that operates at kHz rates and 10,000-fold lower irradiance than standard microscopy. To achieve this level of performance, we uniquely combined a judiciously designed epifluorescence microscope enabling ultralow background levels with AI that learns to reconstruct biological images from as few as 0.01 photons per pixel. We demonstrate that near-zero photon bioimaging captures the structure of multicellular and subcellular features with high fidelity, including features represented by nearly zero photons. Beyond optical microscopy, the near-zero photon bioimaging paradigm can be applied in remote sensing, covert applications, and biomedical imaging that utilize damaging or quantum light.

PMID:40388622 | DOI:10.1073/pnas.2412261122

Categories: Literature Watch

Hybrid deep learning model for accurate and efficient android malware detection using DBN-GRU

Mon, 2025-05-19 06:00

PLoS One. 2025 May 19;20(5):e0310230. doi: 10.1371/journal.pone.0310230. eCollection 2025.

ABSTRACT

The rapid growth of Android applications has led to an increase in security threats, while traditional detection methods struggle to combat advanced malware, such as polymorphic and metamorphic variants. To address these challenges, this study introduces a hybrid deep learning model (DBN-GRU) that integrates Deep Belief Networks (DBN) for static analysis and Gated Recurrent Units (GRU) for dynamic behavior modeling to enhance malware detection accuracy and efficiency. The model extracts static features (permissions, API calls, intent filters) and dynamic features (system calls, network activity, inter-process communication) from Android APKs, enabling a comprehensive analysis of application behavior. The proposed model was trained and tested on the Drebin dataset, which includes 129,013 applications (5,560 malware and 123,453 benign). Performance evaluation against NMLA-AMDCEF, MalVulDroid, and LinRegDroid demonstrated that DBN-GRU achieved 98.7% accuracy, 98.5% precision, 98.9% recall, and an AUC of 0.99, outperforming conventional models. In addition, it exhibits faster preprocessing, feature extraction, and malware classification times, making it suitable for real-time deployment. By bridging static and dynamic detection methodologies, the DBN-GRU enhances malware detection capabilities while reducing false positives and computational overhead. These findings confirm the applicability of the proposed model in real-world Android security applications, offering a scalable and high-performance malware detection solution.
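The reported accuracy, precision, and recall follow the standard confusion-matrix definitions for a binary classifier (positive class = malware). A minimal sketch, using made-up counts rather than the paper's:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)  # of flagged apps, how many are malware
    recall = tp / (tp + fn)     # of actual malware, how many are caught
    return accuracy, precision, recall

# Hypothetical counts, for illustration only
acc, prec, rec = classification_metrics(tp=989, fp=15, tn=1230, fn=11)
```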

PMID:40388500 | DOI:10.1371/journal.pone.0310230

Categories: Literature Watch

Anomaly recognition in surveillance based on feature optimizer using deep learning

Mon, 2025-05-19 06:00

PLoS One. 2025 May 19;20(5):e0313692. doi: 10.1371/journal.pone.0313692. eCollection 2025.

ABSTRACT

Surveillance systems are integral to ensuring public safety by detecting unusual incidents, yet existing methods often struggle with accuracy and robustness. This study introduces an advanced framework for anomaly recognition in surveillance, leveraging deep learning to address these challenges and achieve significant improvements over current techniques. The framework begins by preprocessing input images with histogram equalization to enhance feature visibility. It then employs two deep convolutional neural networks (DCNNs) for feature extraction: a novel 63-layer CNN, "Up-to-the-Minute-Net," and the established Inception-ResNet-v2. The features extracted by both models are fused and optimized through two feature selection techniques: the Dragonfly Algorithm and a Genetic Algorithm (GA). The optimization process involves rigorous experimentation with 5- and 10-fold cross-validation to evaluate performance across various feature sets. The proposed approach achieves 99.9% accuracy in 5-fold cross-validation using the GA optimizer with 2500 selected features, a substantial improvement over existing methods. The study's contribution lies in its combination of deep learning models and advanced feature optimization techniques, setting a new benchmark in anomaly recognition for surveillance systems and showcasing the potential for practical real-world applications.
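A genetic algorithm for feature selection typically evolves binary masks over the feature set. The sketch below is purely illustrative (the abstract does not specify the paper's encoding, fitness function, or hyperparameters); it uses a toy fitness that rewards a known set of "informative" features and penalizes subset size:

```python
import random

def fitness(mask, informative):
    """Toy fitness: reward informative features, penalize subset size."""
    hits = sum(mask[i] for i in informative)
    return hits - 0.1 * sum(mask)

def ga_select(n_features, informative, generations=30, pop_size=20, seed=0):
    """Evolve binary feature masks by elitism, one-point crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, informative), reverse=True)
        parents = pop[: pop_size // 2]        # keep the top half (elitism)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]         # one-point crossover
            child[rng.randrange(n_features)] ^= 1  # one-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, informative))

best = ga_select(16, {1, 4, 9})
```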

PMID:40388481 | DOI:10.1371/journal.pone.0313692

Categories: Literature Watch

Predictive hybrid model of a grid-connected photovoltaic system with DC-DC converters under extreme altitude conditions at 3800 meters above sea level

Mon, 2025-05-19 06:00

PLoS One. 2025 May 19;20(5):e0324047. doi: 10.1371/journal.pone.0324047. eCollection 2025.

ABSTRACT

This study aims to develop a predictive hybrid model for a grid-connected PV system with DC-DC optimizers, designed to operate in extreme altitude conditions at 3800 m above sea level. This approach seeks to address the "curse of dimensionality" by reducing model complexity and improving accuracy, combining the recursive feature elimination (RFE) method with advanced regularization techniques such as Lasso, Ridge, and Bayesian Ridge. The research used a photovoltaic system composed of monocrystalline modules, DC-DC optimizers, and a 3000 W inverter. The data obtained from the system were divided into training and test sets, where RFE identified the most relevant variables, eliminating AC reactive power as a predictor. Subsequently, the three regularization models were trained with these selected variables and evaluated using metrics such as precision, mean absolute error, mean square error, and coefficient of determination. The results showed that RFE-Bayesian Ridge obtained the highest accuracy (0.999935), followed by RFE-Ridge, while RFE-Lasso performed slightly worse; the MASE was also exceptionally low (0.0034 for Bayesian Ridge and Ridge, compared to 0.0065 for Lasso). All models passed the necessary statistical validations, including linearity, error normality, absence of autocorrelation, and homoscedasticity, which guaranteed their reliability. This hybrid approach proved effective in optimizing the predictive performance of PV systems under challenging conditions. Future work will explore the integration of these models with energy storage systems and smart control strategies to improve operational stability. In addition, the application of the hybrid model in extreme climates, such as desert or polar areas, will be investigated, as well as its extension through deep learning techniques to capture non-linear relationships and increase adaptability to abrupt climate variations.
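The regression metrics named above (mean absolute error, mean squared error, and the coefficient of determination R²) have standard definitions; a minimal sketch with made-up values:

```python
def regression_metrics(y_true, y_pred):
    """MAE, MSE, and R^2 for a regression model's predictions."""
    n = len(y_true)
    errors = [yt - yp for yt, yp in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    mean_y = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    r2 = 1 - ss_res / ss_tot  # coefficient of determination
    return mae, mse, r2

# Toy values, for illustration only
mae, mse, r2 = regression_metrics([1, 2, 3, 4], [1.1, 1.9, 3.0, 4.2])
```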

PMID:40388424 | DOI:10.1371/journal.pone.0324047

Categories: Literature Watch

AI-driven educational transformation in ICT: Improving adaptability, sentiment, and academic performance with advanced machine learning

Mon, 2025-05-19 06:00

PLoS One. 2025 May 19;20(5):e0317519. doi: 10.1371/journal.pone.0317519. eCollection 2025.

ABSTRACT

This study contributes significantly to educational technology by deploying state-of-the-art machine learning and deep learning strategies to drive meaningful changes in education. A hybrid stacking approach using Decision Trees, Random Forest, and XGBoost as base learners with Gradient Boosting as a meta-learner achieved an accuracy of 90%, underscoring its potential for accurate prediction in educational settings. The CNN model, with a prediction accuracy of 89%, showed impressive capability in sentiment analysis, providing further insight into students' emotional states. RCNN, Random Forests, and Decision Trees help address the complexity of educational data, offering valuable insight into the interrelationships between ML models and educational contexts. The bagging XGBoost algorithm, which attained an accuracy of 88%, further demonstrates the utility of robust model-aggregation techniques for enhancing academic performance. The dataset used in this study was sourced from Kaggle and comprises 1205 entries with 14 attributes concerning adaptability, sentiment, and academic performance, providing a reliable and rich analytical basis that allows rigorous modeling and validation and ensures the findings are robust. The study has several implications for education along key dimensions: teacher effectiveness, educational leadership, and student well-being. Using the information obtained about student adaptability and sentiment, the developed system helps educators adapt instructional strategies to individual students more efficiently, enhancing teaching effectiveness.
These insights can also help educational leadership devise data-driven strategies that enhance school-wide academic performance and create a caring learning atmosphere. Integrating sentiment analysis into educational structures fosters an inclusive, responsive attitude toward students' well-being and thus a caring educational environment. The study is closely aligned with sustainable ICT-in-education objectives and offers a transformative approach to integrating AI-driven insights with practice in this field. By applying established ML and DL methodologies to educational challenges, the research lays the basis for future innovations and technology in this area. Ultimately, it contributes to sustainable improvement in the educational system.

PMID:40388422 | DOI:10.1371/journal.pone.0317519

Categories: Literature Watch

Transfer learning in ECG diagnosis: Is it effective?

Mon, 2025-05-19 06:00

PLoS One. 2025 May 19;20(5):e0316043. doi: 10.1371/journal.pone.0316043. eCollection 2025.

ABSTRACT

The adoption of deep learning in ECG diagnosis is often hindered by the scarcity of large, well-labeled datasets in real-world scenarios, leading to the use of transfer learning to leverage features learned from larger datasets. Yet the prevailing assumption that transfer learning consistently outperforms training from scratch has never been systematically validated. In this study, we conduct the first extensive empirical study on the effectiveness of transfer learning in multi-label ECG classification, comparing fine-tuning performance with that of training from scratch across a variety of ECG datasets and deep neural networks. First, we confirm that fine-tuning is the preferable choice for small downstream datasets; however, it does not necessarily improve performance. Second, the improvement from fine-tuning declines as the downstream dataset grows. With a sufficiently large dataset, training from scratch can achieve comparable performance, albeit requiring a longer training time to catch up. Third, fine-tuning can accelerate convergence, resulting in a faster training process and lower computing cost. Finally, we find that transfer learning exhibits better compatibility with convolutional neural networks than with recurrent neural networks, which are the two most prevalent architectures for time-series ECG applications. Our results underscore the importance of transfer learning in ECG diagnosis; yet, depending on the amount of available data, researchers may opt not to use it, considering the non-negligible cost associated with pre-training.

PMID:40388401 | DOI:10.1371/journal.pone.0316043

Categories: Literature Watch

LeFood-set: Baseline performance of predicting level of leftovers food dataset in a hospital using MT learning

Mon, 2025-05-19 06:00

PLoS One. 2025 May 19;20(5):e0320426. doi: 10.1371/journal.pone.0320426. eCollection 2025.

ABSTRACT

Monitoring the remaining food on patients' trays is a routine activity in healthcare facilities, as it provides valuable insights into the patients' dietary intake. However, estimating food leftovers through visual observation is time-consuming and biased. To tackle this issue, we have devised an efficient deep learning-based approach that promises to revolutionize how we estimate food leftovers. Our first step was creating the LeFoodSet dataset, a pioneering large-scale open dataset explicitly designed for estimating food leftovers. This dataset is unique in supporting estimation of both leftover rates and food types. To the best of our knowledge, this is the first comprehensive dataset for this type of analysis. The dataset comprises 524 image pairs representing 34 Indonesian food categories, each with images captured before and after consumption. Our prediction models employed a combined visual feature extraction and late fusion approach utilizing soft parameter sharing. Here, we used multi-task (MT) models that simultaneously predict leftovers and food types during training. In the experiments, we tested the single-task (ST) model, the ST model with ground truth (ST-GT), the MT model, and the MT model with inter-task connection (MT-IC). Our AI-based models, particularly the MT and MT-IC models, have shown promising results, outperforming human observation in predicting leftover food. The best results were obtained with the ResNet101 backbone, where the mean absolute error (MAE) for the leftover estimation task and the accuracy for the food classification task were 0.0801 and 90.44% for the MT model, and 0.0817 and 92.56% for the MT-IC model, respectively. These results suggest a bright future for AI-based approaches in medical and nursing applications.
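A common way to train such multi-task models is to optimize a weighted sum of a regression loss on the leftover ratio and a classification loss on the food type. The sketch below is a generic illustration, not the paper's objective; the weight `alpha` and the loss choices are hypothetical:

```python
import math

def multitask_loss(leftover_pred, leftover_true, class_probs, class_label, alpha=1.0):
    """Joint objective for a two-headed model (illustrative sketch).

    Regression branch: absolute error on the leftover ratio in [0, 1].
    Classification branch: cross-entropy over food-type probabilities.
    """
    reg = abs(leftover_pred - leftover_true)
    ce = -math.log(class_probs[class_label])
    return reg + alpha * ce

# Toy example: predicted 30% leftover vs. true 25%, food type 1 at p=0.7
loss = multitask_loss(0.3, 0.25, [0.1, 0.7, 0.2], 1)
```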

PMID:40388400 | DOI:10.1371/journal.pone.0320426

Categories: Literature Watch

DeepProtein: Deep Learning Library and Benchmark for Protein Sequence Learning

Mon, 2025-05-19 06:00

Bioinformatics. 2025 May 19:btaf165. doi: 10.1093/bioinformatics/btaf165. Online ahead of print.

ABSTRACT

MOTIVATION: Deep learning has deeply influenced protein science, enabling breakthroughs in predicting protein properties, higher-order structures, and molecular interactions.

RESULTS: This paper introduces DeepProtein, a comprehensive and user-friendly deep learning library tailored for protein-related tasks. It enables researchers to seamlessly address protein data with cutting-edge deep learning models. To assess model performance, we establish a benchmark that evaluates different deep learning architectures across multiple protein-related tasks, including protein function prediction, subcellular localization prediction, protein-protein interaction prediction, and protein structure prediction. Furthermore, we introduce DeepProt-T5, a series of fine-tuned Prot-T5-based models that achieve state-of-the-art performance on four benchmark tasks, while demonstrating competitive results on six others. Comprehensive documentation and tutorials are available to ensure accessibility and support reproducibility.

AVAILABILITY AND IMPLEMENTATION: Built upon the widely used drug discovery library DeepPurpose, DeepProtein is publicly available at https://github.com/jiaqingxie/DeepProtein.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:40388205 | DOI:10.1093/bioinformatics/btaf165

Categories: Literature Watch

Artificial intelligence based pulmonary vessel segmentation: an opportunity for automated three-dimensional planning of lung segmentectomy

Mon, 2025-05-19 06:00

Interdiscip Cardiovasc Thorac Surg. 2025 May 19:ivaf101. doi: 10.1093/icvts/ivaf101. Online ahead of print.

ABSTRACT

OBJECTIVES: This study aimed to develop an automated method for pulmonary artery and vein segmentation in both left and right lungs from computed tomography (CT) images using artificial intelligence (AI). The segmentations were evaluated using PulmoSR software, which provides 3D visualizations of patient-specific anatomy, potentially enhancing a surgeon's understanding of the lung structure.

METHODS: A dataset of 125 CT scans from lung segmentectomy patients at Erasmus MC was used. Manual annotations for pulmonary arteries and veins were created with 3D Slicer. nnU-Net models were trained for both lungs, assessed using Dice score, sensitivity, and specificity. Intraoperative recordings demonstrated clinical applicability. A paired t-test evaluated statistical significance of the differences between automatic and manual segmentations.

RESULTS: The nnU-Net model, trained at full 3D resolution, achieved a mean Dice score between 0.91 and 0.92. The mean sensitivity and specificity were: left artery: 0.86 and 0.99, right artery: 0.84 and 0.99, left vein: 0.85 and 0.99, right vein: 0.85 and 0.99. The automatic method reduced segmentation time from ∼1.5 hours to under 5 min. Five cases were evaluated to demonstrate how the segmentations support lung segmentectomy procedures. P-values for Dice scores were all below 0.01, indicating statistical significance.
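The Dice score, sensitivity, and specificity reported above can be computed from voxel-wise agreement counts between a predicted and a reference binary mask; a minimal sketch over flattened masks (illustrative, not the study's evaluation code):

```python
def seg_metrics(pred, truth):
    """Dice, sensitivity, specificity for flattened binary masks (0/1)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    dice = 2 * tp / (2 * tp + fp + fn)   # overlap between prediction and truth
    sensitivity = tp / (tp + fn)          # vessel voxels correctly found
    specificity = tn / (tn + fp)          # background voxels correctly left out
    return dice, sensitivity, specificity

# Toy 5-voxel example
dice, sens, spec = seg_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```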

CONCLUSIONS: The nnU-Net models successfully performed automatic segmentation of pulmonary arteries and veins in both lungs. When integrated with visualization tools, these automatic segmentations can enhance preoperative and intraoperative planning by providing detailed 3D views of patients' anatomy.

PMID:40388152 | DOI:10.1093/icvts/ivaf101

Categories: Literature Watch

Single-Protein Determinations by Magnetofluorescent Qubit Imaging with Artificial-Intelligence Augmentation at the Point-Of-Care

Mon, 2025-05-19 06:00

ACS Nano. 2025 May 19. doi: 10.1021/acsnano.5c04340. Online ahead of print.

ABSTRACT

Conventional point-of-care testing (POCT) is limited in sensitivity, with high risks of missed detections or false positives, which restricts its application to routine outpatient analysis and early clinical diagnosis. Drawing on cutting-edge quantum precision metrology, this study devised a mini quantum sensor via magnetofluorescent qubit tagging and tuning on core-shelled fluorescent nanodiamond FND@SiO2. Comprehensive characterizations confirmed the formation of FND biolabels, while spectroscopies verified no degradation in spin-state transition after surface modification. A methodical parametrization achieved a wide-field modulation depth ≥15% at near-zero field, laying the foundation for supersensitive sensing at single-FND resolution. Using viral nucleocapsid protein as a model marker, an ultralow limit of detection (LOD) was obtained by lock-in analysis, outperforming conventional colorimetry and immunofluorescence by >2000-fold. Multianalyte and affinity assays were also enabled on this platform. Further, with artificial-intelligence (AI) augmentation based on a Unet-ConvLSTM-Attention architecture, authentic qubit dots were identified by pixelwise survey through pristine qubit queues. This processing not only markedly improved probing precision but also achieved deterministic detection down to a single protein in human saliva, with an ultimate LOD as much as 7800 times lower than that of the colloidal Au approach, rivaling the RT-qPCR threshold and the certified critical value of SIMOA, the gold standard. Hence, through AI-aided digitization of optical qubits, this REASSURED-compliant device may offer a next-generation POCT solution with unparalleled sensitivity, speed, and cost-effectiveness, providing conclusive proof of the promise of emerging quantum metrics in biosensing.

PMID:40388114 | DOI:10.1021/acsnano.5c04340

Categories: Literature Watch

Assessing fetal lung maturity: Integration of ultrasound radiomics and deep learning

Mon, 2025-05-19 06:00

Afr J Reprod Health. 2025 May 16;29(5s):51-64. doi: 10.29063/ajrh2025/v29i5s.7.

ABSTRACT

This study built a model to predict fetal lung maturity by blending radiomics and deep learning methods. We examined ultrasound images from 263 pregnancies across pregnancy stages. Utilizing the GE VOLUSON E8 system, we captured images to extract and analyze radiomic features. These features were integrated with clinical data by means of deep learning algorithms such as DenseNet121 to enhance the accuracy of assessing fetal lung maturity. The combined model was validated by receiver operating characteristic (ROC) curve, calibration plot, and decision curve analysis (DCA). The accuracy and reliability of the diagnosis indicated that this method significantly improves the prediction of fetal lung maturity. This novel non-invasive diagnostic technology highlights the potential advantages of integrating diverse data sources to enhance prenatal care and infant health. The study lays the groundwork for validation and refinement of the model across various healthcare settings.
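The area under the ROC curve used for validation here equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (ties counted as half); a minimal sketch with toy scores:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney pairwise-comparison formulation."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1      # positive outranks negative
            elif sp == sn:
                wins += 0.5    # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

# Toy scores, for illustration only
a = auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])
```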

PMID:40387939 | DOI:10.29063/ajrh2025/v29i5s.7

Categories: Literature Watch

The Application of Anisotropically Collapsing Gels, Deep Learning, and Optical Microscopy for Chemical Characterization of Nanoparticles and Nanoplastics

Mon, 2025-05-19 06:00

Langmuir. 2025 May 19. doi: 10.1021/acs.langmuir.5c00769. Online ahead of print.

ABSTRACT

The surface chemistry of nanomaterials, particularly the density of functional groups, governs their behavior in applications such as bioanalysis, bioimaging, and environmental impact studies. Here, we report a precise method to quantify carboxyl groups per nanoparticle by combining anisotropically collapsing agarose gels for nanoparticle immobilization with fluorescence microscopy and acid-base titration. We applied this approach to photon-upconversion nanoparticles (UCNPs) coated with poly(acrylic acid) (PAA) and fluorescence-labeled polystyrene nanoparticles (PNs), which serve as models for bioimaging and environmental pollutants, respectively. UCNPs exhibited 152 ± 14 thousand carboxyl groups per particle (∼11 groups/nm2), while PNs were characterized with 38 ± 3.6 thousand groups (∼1.7 groups/nm2). The limit of detection was 6.4 and 1.9 thousand carboxyl groups per nanoparticle, and the limit of quantification was determined at 21 and 6.2 thousand carboxyl groups per nanoparticle for UCNP-PAAs and PNs, respectively. High intrinsic luminescence enabled direct imaging of UCNPs, while PNs required fluorescence staining with Nile Red to overcome low signal-to-noise ratios. The study also discussed the critical influence of nanoparticle concentration and titration conditions on the assay performance. This method advances the precise characterization of surface chemistry, offering insights into nanoparticle structure that extend beyond the resolution of electron microscopy. Our findings establish a robust platform for investigating the interplay of surface chemistry with nanoparticle function and fate in technological and environmental contexts, with broad applicability across nanomaterials.
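As a back-of-envelope check of the reported densities, the groups-per-nm² figure follows from dividing the group count by the particle's surface area, assuming a spherical particle. The diameter below is inferred from the abstract's own numbers (152,000 groups at ~11 groups/nm² implies a sphere of roughly 66 nm), not stated in it:

```python
import math

def groups_per_nm2(n_groups, diameter_nm):
    """Surface density of functional groups, assuming a spherical particle."""
    surface_area = math.pi * diameter_nm ** 2  # sphere: A = pi * d^2
    return n_groups / surface_area

# Inferred diameter (~66.3 nm) reproduces the reported ~11 groups/nm2
density = groups_per_nm2(152_000, 66.3)
```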

PMID:40387864 | DOI:10.1021/acs.langmuir.5c00769

Categories: Literature Watch

Robust automatic train pass-by detection combining deep learning and sound level analysis

Mon, 2025-05-19 06:00

JASA Express Lett. 2025 May 1;5(5):053601. doi: 10.1121/10.0036754.

ABSTRACT

The increasing need to control high noise levels motivates the development of automatic sound event detection and classification methods. Little work addresses automatic train pass-by detection, despite the high degree of annoyance such events cause. To address this, an innovative approach is proposed in this paper. A generic classifier identifies vehicle noise on the raw audio signal. Combined short-term sound level analysis and mel-spectrogram-based classification then refine this outcome to discard everything except train pass-bys. On various long-term signals, a 90% temporal overlap with the reference demarcation is observed. This high detection rate allows proper estimation of the railway noise contribution in different soundscapes.

PMID:40387613 | DOI:10.1121/10.0036754

Categories: Literature Watch

Leukaemia Stem Cells and Their Normal Stem Cell Counterparts Are Morphologically Distinguishable by Artificial Intelligence

Mon, 2025-05-19 06:00

J Cell Mol Med. 2025 May;29(10):e70564. doi: 10.1111/jcmm.70564.

ABSTRACT

Leukaemia stem cells (LSCs) are a rare population among the bulk of leukaemia cells and are responsible for disease initiation, progression/relapse and insensitivity to therapies in numerous haematologic malignancies. Identification of LSCs and monitoring of their quantity before, during, and after treatments would provide guidance for choosing a correct treatment and assessing therapy response and disease prognosis, but such a method is still lacking, simply because there are no distinct morphological features recognisable for distinguishing LSCs from their normal stem cell counterparts. Using artificial intelligence (AI) deep learning and polycythemia vera (PV) as a disease model (a type of human myeloproliferative neoplasm derived from a haematopoietic stem cell harbouring the JAK2V617F oncogene), we combine 19 convolutional neural networks as a whole to build AI models for analysing single-cell images, allowing LSCs from JAK2V617F knock-in mice to be distinguished from normal stem cell counterparts from healthy mice with high accuracy (> 99%). We prove the concept that LSCs possess unique morphological features compared to their normal stem cell counterparts, and that AI, but not microscopic visualisation by pathologists, can extract and identify these features. In addition, we show that LSCs and other cell lineages in PV mice are also distinguishable by AI. Our study opens up a potential AI morphology field for identifying various primitive leukaemia cells, especially LSCs, to help assess therapy responses and disease prognosis in the future.

PMID:40387596 | DOI:10.1111/jcmm.70564

Categories: Literature Watch

Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation

Mon, 2025-05-19 06:00

Med Phys. 2025 May 19. doi: 10.1002/mp.17885. Online ahead of print.

ABSTRACT

BACKGROUND: Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range that the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, therefore limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager to register with the 3D planning CT for patient position verification, which is referred to as 2D-3D registration.

PURPOSE: The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. The domain gap between the generated DRRs and the acquired projections can happen due to the inaccurate geometry modeling in DRR generation and artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans.

METHOD: We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity difference and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method.
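A composite measure combining pixel-wise intensity difference with gradient difference can be sketched as follows. This is illustrative only; the paper's exact weighting, norm, and gradient operator are not specified in the abstract (the weight `w_grad` and forward-difference gradients are assumptions):

```python
def composite_similarity(drr, proj, w_grad=0.5):
    """Intensity + gradient dissimilarity between two 2D images (lower = better)."""
    h, w = len(drr), len(drr[0])
    # pixel-wise intensity term
    intensity = sum(abs(drr[i][j] - proj[i][j]) for i in range(h) for j in range(w))
    # gradient term via forward differences along x and y
    grad = 0.0
    for i in range(h):
        for j in range(w - 1):
            grad += abs((drr[i][j + 1] - drr[i][j]) - (proj[i][j + 1] - proj[i][j]))
    for i in range(h - 1):
        for j in range(w):
            grad += abs((drr[i + 1][j] - drr[i][j]) - (proj[i + 1][j] - proj[i][j]))
    return intensity + w_grad * grad

# Identical images score 0; any mismatch adds intensity and gradient penalties
same = composite_similarity([[1, 2], [3, 4]], [[1, 2], [3, 4]])
diff = composite_similarity([[1, 2], [3, 4]], [[1, 2], [3, 5]])
```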

RESULTS: In the simulation data experiments, two X-ray projections of a head-and-neck image with a 45° angular discrepancy were used for the registration. The accuracy of the registration results was evaluated on experiments set up at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of < 2 mm and sub-degree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angle discrepancies demonstrate the improved accuracy and robustness of the proposed method, compared to both the conventional registration approach and the proposed approach without certain components of the composite similarity measure.

CONCLUSION: We proposed a dataset-free lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations of both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.

PMID:40387508 | DOI:10.1002/mp.17885

Categories: Literature Watch

The Future of Parasomnias

Mon, 2025-05-19 06:00

J Sleep Res. 2025 May 19:e70090. doi: 10.1111/jsr.70090. Online ahead of print.

ABSTRACT

Parasomnias are abnormal behaviours or mental experiences during sleep or the sleep-wake transition. As disorders of arousal (DOA) or REM sleep behaviour disorder (RBD) can be difficult to capture in the sleep laboratory and may need to be diagnosed in large communities, new home diagnostic devices are being developed, including actigraphy, EEG headbands, as well as 2D infrared and 3D time-of-flight home cameras (often with automatic analysis). Traditional video-polysomnographic diagnostic criteria for RBD and DOA are becoming more accurate, and deep learning methods are beginning to accurately classify abnormal polysomnographic signals in these disorders. Big data from vast collections of clinical, cognitive, brain imaging, DNA and polysomnography data have provided new information on the factors that are associated with parasomnia and, in the case of RBD, may predict the individual risk of conversion to an overt neurodegenerative disease. Dream engineering, including targeted reactivation of memory during sleep, combined with image repetition therapy and lucid dreaming, is helping to alleviate nightmares in patients. On a political level, RBD has brought together specialists in abnormal movements and sleep neurologists, and research into nightmares and sleep-wake dissociations has brought together sleep and consciousness scientists.

PMID:40387303 | DOI:10.1111/jsr.70090

Categories: Literature Watch

Development and Validation an Integrated Deep Learning Model to Assist Eosinophilic Chronic Rhinosinusitis Diagnosis: A Multicenter Study

Mon, 2025-05-19 06:00

Int Forum Allergy Rhinol. 2025 May 19:e23595. doi: 10.1002/alr.23595. Online ahead of print.

ABSTRACT

BACKGROUND: The assessment of eosinophilic chronic rhinosinusitis (eCRS) lacks accurate non-invasive preoperative prediction methods, relying primarily on invasive histopathological sections. This study aims to use computed tomography (CT) images and clinical parameters to develop an integrated deep learning model for the preoperative identification of eCRS and further explore the biological basis of its predictions.

METHODS: A total of 1098 patients with sinus CT images were included from two hospitals and were divided into training, internal, and external test sets. The region of interest of sinus lesions was manually outlined by an experienced radiologist. We utilized three deep learning models (3D-ResNet, 3D-Xception, and HR-Net) to extract features from CT images and calculate deep learning scores. The clinical signature and deep learning score were then input into a support vector machine for classification. The receiver operating characteristic curve, sensitivity, specificity, and accuracy were used to evaluate the integrated deep learning model. Additionally, proteomic analysis was performed on 34 patients to explore the biological basis of the model's predictions.
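The fusion step described above (a deep learning score combined with a clinical signature in a support vector machine) can be sketched as follows; the feature names and data here are synthetic assumptions for illustration, not the study's variables or code:

```python
# Illustrative sketch of deep-learning-score + clinical-feature fusion in an SVM.
# dl_score stands in for the CT deep learning score; blood_eos is a hypothetical
# clinical parameter. Labels are generated synthetically.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
dl_score = rng.uniform(0, 1, n)              # stand-in for the image-derived score
blood_eos = rng.normal(0.3, 0.1, n)          # hypothetical clinical parameter
y = (0.7 * dl_score + 0.3 * blood_eos + rng.normal(0, 0.05, n) > 0.5).astype(int)

X = np.column_stack([dl_score, blood_eos])   # clinical signature + DL score
clf = SVC(kernel="rbf").fit(X, y)            # the classifier named in the abstract
train_acc = clf.score(X, y)
```

In practice the deep learning scores would come from the trained 3D networks and the clinical signature from patient records; the SVM only has to learn the low-dimensional fusion.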

RESULTS: The area under the curve of the integrated deep learning model for predicting eCRS was 0.851 (95% confidence interval [CI]: 0.77-0.93) in the internal test set and 0.821 (95% CI: 0.78-0.86) in the external test set. Proteomic analysis revealed that 594 genes were dysregulated in patients predicted to be eCRS, some of which were associated with pathways and biological processes such as the chemokine signaling pathway.

CONCLUSIONS: The proposed integrated deep learning model could effectively predict eCRS patients. This study provided a non-invasive way of identifying eCRS to facilitate personalized therapy, which will pave the way toward precision medicine for CRS.

PMID:40387008 | DOI:10.1002/alr.23595

Categories: Literature Watch

Portable Ultrasound Bladder Volume Measurement Over Entire Volume Range Using a Deep Learning Artificial Intelligence Model in a Selected Cohort: A Proof of Principle Study

Mon, 2025-05-19 06:00

Neurourol Urodyn. 2025 May 19. doi: 10.1002/nau.70057. Online ahead of print.

ABSTRACT

OBJECTIVE: We aimed to prospectively investigate whether bladder volume measured using deep learning artificial intelligence (AI) algorithms (AI-BV) is more accurate than that measured using conventional methods (C-BV) if using a portable ultrasound bladder scanner (PUBS).

PATIENTS AND METHODS: Patients who underwent filling cystometry because of lower urinary tract symptoms between January 2021 and July 2022 were enrolled. The bladder was serially filled with normal saline from 0 mL to maximum cystometric capacity in 50 mL increments, and C-BV was measured with PUBS at each step. Ultrasound images obtained during this process were manually annotated to define the bladder contour, which was used to build a deep learning AI model. The true bladder volume (T-BV) for each bladder volume range was compared with C-BV and AI-BV for analysis.

RESULTS: We enrolled 250 patients (213 men and 37 women), and a deep learning AI model was established using 1912 bladder images. There was a significant difference between C-BV (205.5 ± 170.8 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.001), but no significant difference between AI-BV (197.0 ± 161.1 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.081). In bladder volume ranges of 101-150, 151-200, and 201-300 mL, there were significant differences in the percentage volume differences between [C-BV and T-BV] and [AI-BV and T-BV] (p < 0.05), but no significant difference when the differences were converted to absolute values (p > 0.05). C-BV (R2 = 0.91, p < 0.001) and AI-BV (R2 = 0.90, p < 0.001) were highly correlated with T-BV. The mean difference between AI-BV and T-BV (6.5 ± 50.4) was significantly smaller than that between C-BV and T-BV (15.0 ± 50.9) (p = 0.001).
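The comparison above (paired errors of each method against the true volume) can be sketched with synthetic numbers; the biases and spreads below are illustrative assumptions loosely matching the reported means, not the study's measurements:

```python
# Illustrative sketch of a paired comparison of volume-estimation errors.
# Synthetic data: the conventional method carries a larger bias than the AI one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t_bv = rng.uniform(0, 500, 250)                  # "true" volumes, mL
c_bv = t_bv + rng.normal(15, 50, t_bv.size)      # conventional estimate, +15 mL bias
ai_bv = t_bv + rng.normal(6, 50, t_bv.size)      # AI estimate, smaller bias

c_diff = c_bv - t_bv                             # per-patient paired differences
ai_diff = ai_bv - t_bv
# Paired test on absolute errors: does the AI method err less per patient?
t_stat, p = stats.ttest_rel(np.abs(c_diff), np.abs(ai_diff))
```

A paired design is the right choice here because both estimates are taken on the same bladder at the same fill level, so per-patient differencing removes between-patient variability.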

CONCLUSION: Following image pre-processing, deep learning AI-BV more accurately estimated true BV than conventional methods in this selected cohort on internal validation. Determination of the clinical relevance of these findings and performance in external cohorts requires further study.

TRIAL REGISTRATION: The clinical trial was conducted using an approved product for its approved indication, so approval from the Ministry of Food and Drug Safety (MFDS) was not required. Therefore, there is no clinical trial registration number.

PMID:40384598 | DOI:10.1002/nau.70057

Categories: Literature Watch

Baseline correction of Raman spectral data using triangular deep convolutional networks

Mon, 2025-05-19 06:00

Analyst. 2025 May 19. doi: 10.1039/d5an00253b. Online ahead of print.

ABSTRACT

Raman spectroscopy requires baseline correction to address fluorescence- and instrumentation-related distortions. The existing baseline correction methods can be broadly classified into traditional mathematical approaches and deep learning-based techniques. While traditional methods often require manual parameter tuning for different spectral datasets, deep learning methods offer greater adaptability and enhance automation. Recent research on deep learning-based baseline correction has primarily focused on optimizing existing methods or designing new network architectures to improve correction performance. This study proposes a novel deep learning network architecture to further enhance baseline correction effectiveness, building upon prior research. Experimental results demonstrate that the proposed method outperforms existing approaches by achieving superior correction accuracy, reducing computation time, and more effectively preserving peak intensity and shape.
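The abstract does not describe the proposed network itself, but the manual parameter tuning it contrasts against can be illustrated with a classic traditional baseline corrector, asymmetric least squares (Eilers and Boelens): its lam and p parameters must be re-tuned for each spectral dataset, which is the step learned methods automate. The spectrum below is synthetic:

```python
# Sketch of asymmetric least squares (AsLS) baseline correction, a standard
# traditional method. Not the paper's network: shown as the tuning-heavy baseline.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Estimate a smooth baseline under spectrum y (lam, p need per-dataset tuning)."""
    L = y.size
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))  # 2nd-difference op
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        z = spsolve((W + lam * D @ D.T).tocsc(), w * y)  # weighted smooth fit
        w = p * (y > z) + (1 - p) * (y < z)              # downweight points above z
    return z

# Synthetic Raman-like spectrum: one Gaussian peak on a fluorescence-like drift.
x = np.linspace(0, 1, 500)
y = (2 + 3 * x) + np.exp(-((x - 0.5) ** 2) / 2e-4)
corrected = y - asls_baseline(y)
```

The asymmetry (p much smaller than 1) makes the fit hug the underside of the spectrum, so peaks survive subtraction while the slow fluorescence drift is removed.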

PMID:40384579 | DOI:10.1039/d5an00253b

Categories: Literature Watch

Federated Learning for Renal Tumor Segmentation and Classification on Multi-Center MRI Dataset

Mon, 2025-05-19 06:00

J Magn Reson Imaging. 2025 May 19. doi: 10.1002/jmri.29819. Online ahead of print.

ABSTRACT

BACKGROUND: Deep learning (DL) models for accurate renal tumor characterization may benefit from multi-center datasets for improved generalizability; however, data-sharing constraints necessitate privacy-preserving solutions like federated learning (FL).

PURPOSE: To assess the performance and reliability of FL for renal tumor segmentation and classification in multi-institutional MRI datasets.

STUDY TYPE: Retrospective multi-center study.

POPULATION: A total of 987 patients (403 female) from six hospitals were included for analysis. 73% (723/987) had malignant renal tumors, primarily clear cell carcinoma (n = 509). Patients were split into training (n = 785), validation (n = 104), and test (n = 99) sets, stratified across three simulated institutions.

FIELD STRENGTH/SEQUENCE: MRI was performed at 1.5 T and 3 T using T2-weighted imaging (T2WI) and contrast-enhanced T1-weighted imaging (CE-T1WI) sequences.

ASSESSMENT: The FL and non-FL approaches used nnU-Net for tumor segmentation and ResNet for classification. FL trained models across three simulated institutional clients with central weight aggregation, while the non-FL approach used centralized training on the full dataset.
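The central weight aggregation step can be sketched FedAvg-style; the toy clients and single weight tensor below are assumptions for illustration, not the study's nnU-Net or ResNet parameters:

```python
# Minimal sketch of central weight aggregation across federated clients:
# each client's parameters are averaged, weighted by its local sample count.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client parameter lists into one global parameter list."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three simulated institutional clients, each holding one 2x2 weight tensor.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
sizes = [100, 100, 200]                 # the third client holds twice the data
global_w = fed_avg(clients, sizes)      # 1*0.25 + 2*0.25 + 3*0.5 = 2.25 per entry
```

The aggregated weights are then broadcast back to the clients for the next training round, so raw images never leave an institution.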

STATISTICAL TESTS: Segmentation was evaluated using Dice coefficients, and classification between malignant and benign lesions was assessed using accuracy, sensitivity, specificity, and area under the curves (AUCs). FL and non-FL performance was compared using the Wilcoxon test for segmentation Dice and Delong's test for AUC (p < 0.05).

RESULTS: No significant difference was observed between FL and non-FL models in segmentation (Dice: 0.43 vs. 0.45, p = 0.202) or classification (AUC: 0.69 vs. 0.64, p = 0.959) on the test set. For classification, no significant difference was observed between the models in accuracy (p = 0.912), sensitivity (p = 0.862), or specificity (p = 0.847) on the test set.

DATA CONCLUSION: FL demonstrated comparable performance to non-FL approaches in renal tumor segmentation and classification, supporting its potential as a privacy-preserving alternative for multi-institutional DL models.

EVIDENCE LEVEL: 4.

TECHNICAL EFFICACY: Stage 2.

PMID:40384349 | DOI:10.1002/jmri.29819

Categories: Literature Watch
