Deep learning

Automatic melanoma and non-melanoma skin cancer diagnosis using advanced adaptive fine-tuned convolution neural networks

Wed, 2025-04-30 06:00

Discov Oncol. 2025 Apr 30;16(1):645. doi: 10.1007/s12672-025-02279-8.

ABSTRACT

Skin cancer is a widespread and potentially life-threatening disease that requires early detection for effective treatment. Manual skin cancer screening is both time-intensive and expensive. Deep learning (DL) techniques have shown exceptional performance in various applications and have been applied to systematize skin cancer diagnosis. However, training DL models for skin cancer diagnosis is challenging due to limited available data and the risk of overfitting. High computational costs, a lack of interpretability, numerous hyperparameters, and spatial variation have long been problems for traditional machine learning (ML) and DL approaches. An innovative method called adaptive learning has been developed to overcome these problems. In this research, we propose an intelligent computer-aided system for automatic skin cancer diagnosis using a two-stage transfer learning approach and pre-trained Convolutional Neural Networks (CNNs). CNNs are well-suited for learning hierarchical features from images. Annotated skin cancer photographs are used to detect regions of interest (ROIs) and reinitialize the initial layer of the pre-trained CNN. The lower-level layers learn the characteristics and patterns of lesions and unaffected areas through fine-tuning. To capture high-level, global features specific to skin cancer, we replace the fully connected (FC) layers, which encode such features, with a new FC layer based on principal component analysis (PCA). This unsupervised technique mines discriminative features from the skin cancer images, effectively mitigating overfitting and allowing the model to adapt to the structural features of skin cancer images, facilitating effective detection.
The system shows great potential in facilitating the initial screening of skin cancer patients, empowering healthcare professionals to make timely decisions about referring patients to dermatologists or specialists for further diagnosis and appropriate treatment. Our advanced adaptive fine-tuned CNN approach for automatic skin cancer diagnosis offers a valuable tool for efficient and accurate early detection. By leveraging DL and transfer learning techniques, the system has the potential to transform skin cancer diagnosis and improve patient outcomes.
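
The paper's PCA-based stand-in for the fully connected layer can be sketched roughly as follows. This is a minimal illustration with synthetic feature vectors; the feature dimension, component count, and the way the projection replaces the FC layer are illustrative assumptions, not details taken from the study.

```python
import numpy as np

# Illustrative sketch: CNN feature vectors are projected onto their top
# principal components instead of passing through a learned FC layer.
# The data and dimensions below are synthetic stand-ins.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))     # 200 images x 64 CNN features

def pca_project(X, n_components):
    """Project rows of X onto the top n_components principal directions."""
    Xc = X - X.mean(axis=0)                   # center the features
    cov = Xc.T @ Xc / (len(Xc) - 1)           # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]  # leading principal directions
    return Xc @ top

projected = pca_project(features, n_components=8)
print(projected.shape)  # (200, 8)
```

Because the projection is fitted without labels, it adds no trainable parameters, which is one way an unsupervised layer can reduce the overfitting risk the abstract describes.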

PMID:40304929 | DOI:10.1007/s12672-025-02279-8

Categories: Literature Watch

Association between the retinal age gap and systemic diseases in the Japanese population: the Nagahama study

Wed, 2025-04-30 06:00

Jpn J Ophthalmol. 2025 Apr 30. doi: 10.1007/s10384-025-01205-3. Online ahead of print.

ABSTRACT

PURPOSE: To investigate the retinal age gap, defined as the difference between deep learning-predicted retinal age and chronological age, as a potential biomarker of systemic health in the Japanese population.

STUDY DESIGN: Prospective cohort study.

METHODS: Data from the Nagahama Study, a large-scale Japanese cohort study, were used. Participants were divided into fine-tuning (n=2,261) and analysis (n=6,070) cohorts based on their visit status across the two study periods. The fine-tuning cohort included only individuals without a history of systemic or cardiovascular disease. A deep learning model, originally released in the Japan Ocular Imaging Registry, was fine-tuned on this cohort to predict retinal age from retinal images. The refined model was then applied to the analysis cohort to calculate retinal age gaps. We conducted cross-sectional and longitudinal analyses to examine the association of these gaps with systemic and cardiovascular diseases.

RESULTS: The retinal age-prediction model achieved a mean absolute error of 3.00-3.42 years. Cross-sectional analysis revealed significant associations between the retinal age gap and a history of diabetes (β = 1.08, p < 0.001) and hyperlipidemia (β = -0.67, p < 0.001). Longitudinal analysis showed no significant association between the baseline retinal age gap and disease onset. However, onset of hypertension (β = 0.35, p = 0.049) and hyperlipidemia (β = 0.34, p = 0.035) showed marginal associations with an increase in retinal age gap over time.
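
The retinal age gap itself is a simple difference; a toy computation of the gap and the mean absolute error metric reported above, with all ages invented:

```python
# Retinal age gap = deep-learning-predicted retinal age minus chronological
# age. The four patients below are made up for illustration only.
predicted_age = [54.2, 61.8, 47.5, 70.1]   # model estimates (years)
chronological = [51.0, 65.0, 45.0, 68.0]   # true ages (years)

gaps = [p - c for p, c in zip(predicted_age, chronological)]
mae = sum(abs(g) for g in gaps) / len(gaps)
print([round(g, 1) for g in gaps], round(mae, 2))  # [3.2, -3.2, 2.5, 2.1] 2.75
```

A positive gap means the retina "looks older" than the patient; the cross-sectional β coefficients above describe how such gaps shift with disease history.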

CONCLUSION: The retinal age gap is a promising biomarker for systemic health, particularly in relation to diabetes, hypertension, and hyperlipidemia.

PMID:40304887 | DOI:10.1007/s10384-025-01205-3

Categories: Literature Watch

HoRNS-CNN model: an energy-efficient fully homomorphic residue number system convolutional neural network model for privacy-preserving classification of dyslexia neural-biomarkers

Wed, 2025-04-30 06:00

Brain Inform. 2025 Apr 30;12(1):11. doi: 10.1186/s40708-025-00256-z.

ABSTRACT

Recent advancements in cloud-based machine learning (ML) now allow for the rapid and remote identification of neural-biomarkers associated with common neuro-developmental disorders from neuroimaging datasets. Due to the sensitive nature of these datasets, secure deep learning (DL) algorithms are essential. Although fully homomorphic encryption (FHE)-based methods have been proposed to maintain data confidentiality and privacy, existing FHE deep convolutional neural network (CNN) models still face issues such as low accuracy, high encryption/decryption latency, energy inefficiency, long feature extraction times, and significant cipher-image expansion. To address these issues, this study introduces the HoRNS-CNN model, which integrates the energy-efficient features of the residue number system FHE scheme (RNS-FHE scheme) with the high accuracy of pre-trained deep CNN models in the cloud for efficient, privacy-preserving predictions, and provides proofs of its energy efficiency and homomorphism. The RNS-FHE scheme's FPGA implementation includes embedded RNS pixel-bitstream homomorphic encoder/decoder circuits for encrypting 8-bit grayscale pixels, with cloud CNN models performing remote classification on the encrypted images. In the HoRNS-CNN architecture, the ReLU activation functions of deep CNNs were initially trained for stability and later adapted for homomorphic computations using a degree-3 Taylor polynomial approximation and batch normalization to achieve high accuracy. The findings show that the HoRNS-CNN model effectively manages cipher-image expansion with an asymptotic complexity of O(n^3), offering better performance and faster feature extraction compared to its peers. The model can predict 400,000 neural-biomarker features in one hour, providing an effective tool for analyzing neuroimages while ensuring privacy and security.
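
The polynomial trick behind the homomorphic ReLU can be illustrated as follows. FHE schemes evaluate only additions and multiplications, so ReLU must be replaced by a polynomial. Here a degree-3 polynomial is obtained by least-squares fitting over an assumed input range; the paper uses a Taylor construction, so the interval and coefficients below are illustrative, not the authors'.

```python
import numpy as np

# ReLU cannot be evaluated under FHE directly; a degree-3 polynomial can.
# We fit one by least squares over an assumed activation range [-4, 4].
xs = np.linspace(-4.0, 4.0, 401)
relu = np.maximum(xs, 0.0)

coeffs = np.polyfit(xs, relu, deg=3)   # least-squares degree-3 fit
approx = np.polyval(coeffs, xs)

max_err = float(np.max(np.abs(approx - relu)))
print(round(max_err, 3))               # worst-case gap over the interval
```

Batch normalization, as the abstract notes, helps keep activations inside the interval where such an approximation stays accurate.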

PMID:40304880 | DOI:10.1186/s40708-025-00256-z

Categories: Literature Watch

Explainable CNN for brain tumor detection and classification through XAI based key features identification

Wed, 2025-04-30 06:00

Brain Inform. 2025 Apr 30;12(1):10. doi: 10.1186/s40708-025-00257-y.

ABSTRACT

Despite significant advancements in brain tumor classification, many existing models suffer from complex structures that make them difficult to interpret. This complexity can hinder the transparency of the decision-making process, causing models to rely on irrelevant features or normal soft tissue. Moreover, these models often include additional layers and parameters, which further complicate the classification process. Our work addresses these limitations by introducing a novel methodology that combines Explainable AI (XAI) techniques with a Convolutional Neural Network (CNN) architecture. The major contribution of this paper is ensuring that the model focuses on the most relevant features for tumor detection and classification while simultaneously reducing complexity by minimizing the number of layers. This approach enhances the model's transparency and robustness, giving clear insights into its decision-making process through XAI techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME). Additionally, the approach demonstrates strong performance, achieving 99% accuracy on seen data and 95% on unseen data, highlighting its generalizability and reliability. This balance of simplicity, interpretability, and high accuracy represents a significant advancement in brain tumor classification.
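
Of the three XAI techniques, Grad-CAM is the most mechanical to sketch: the class activation map is a ReLU-gated, gradient-weighted sum of the final convolutional feature maps. The activations and gradients below are random stand-ins, not outputs of the authors' CNN.

```python
import numpy as np

# Grad-CAM sketch: weight each feature map by its spatially averaged
# gradient, sum the weighted maps, then apply ReLU.
rng = np.random.default_rng(1)
activations = rng.normal(size=(8, 7, 7))  # 8 feature maps, 7x7 spatial grid
gradients = rng.normal(size=(8, 7, 7))    # d(class score)/d(activation)

alphas = gradients.mean(axis=(1, 2))      # one importance weight per map
cam = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0.0)
print(cam.shape)  # (7, 7) heatmap, upsampled onto the input image in practice
```

The ReLU keeps only regions that push the class score up, which is how Grad-CAM highlights tumor-relevant areas rather than all salient tissue.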

PMID:40304860 | DOI:10.1186/s40708-025-00257-y

Categories: Literature Watch

Computer-aided diagnosis tool utilizing a deep learning model for preoperative T-staging of rectal cancer based on three-dimensional endorectal ultrasound

Wed, 2025-04-30 06:00

Abdom Radiol (NY). 2025 Apr 30. doi: 10.1007/s00261-025-04966-0. Online ahead of print.

ABSTRACT

BACKGROUND: The prognosis and treatment outcomes for patients with rectal cancer are critically dependent on an accurate and comprehensive preoperative evaluation. Three-dimensional endorectal ultrasound (3D-ERUS) has demonstrated high accuracy in the T staging of rectal cancer. Thus, we aimed to develop a computer-aided diagnosis (CAD) tool using a deep learning model for the preoperative T-staging of rectal cancer with 3D-ERUS.

METHODS: We retrospectively analyzed the data of 216 rectal cancer patients who underwent 3D-ERUS. The patients were randomly assigned to a training cohort (n = 156) or a testing cohort (n = 60). Radiologists interpreted the 3D-ERUS images of the testing cohort with and without the CAD tool. The diagnostic performance of the CAD tool and its impact on the radiologists' interpretations were evaluated.

RESULTS: The CAD tool demonstrated high diagnostic efficacy for rectal cancer tumors of all T stages, with the best diagnostic performance achieved for T1-stage tumors (AUC, 0.85; 95% CI, 0.73-0.93). With assistance from the CAD tool, the AUC for T1 tumors improved from 0.76 (95% CI, 0.63-0.86) to 0.80 (95% CI, 0.68-0.94) (P = 0.020) for junior radiologist 2. For junior radiologist 1, the AUC improved from 0.61 (95% CI, 0.48-0.73) to 0.79 (95% CI, 0.66-0.88) (P = 0.013) for T2 tumors and from 0.73 (95% CI, 0.60-0.84) to 0.84 (95% CI, 0.72-0.92) (P = 0.038) for T3 tumors. The diagnostic consistency (κ value) also improved from 0.31 to 0.64 (P = 0.005) for the junior radiologists and from 0.52 to 0.66 (P = 0.005) for the senior radiologists.
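
The diagnostic-consistency values above are Cohen's κ, which corrects observed agreement for chance agreement. A toy computation with invented T-stage readings from two raters:

```python
# Cohen's kappa for two raters; the T-stage labels below are invented.
rater1 = ["T1", "T2", "T2", "T3", "T1", "T2"]
rater2 = ["T1", "T2", "T3", "T3", "T2", "T2"]

labels = sorted(set(rater1) | set(rater2))
n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n
expected = sum((rater1.count(lab) / n) * (rater2.count(lab) / n)
               for lab in labels)
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 3))  # 0.478
```

κ of 0 means chance-level agreement and 1 means perfect agreement, so the junior radiologists' jump from 0.31 to 0.64 is substantial.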

CONCLUSION: A CAD tool utilizing a deep learning model based on 3D-ERUS images showed strong performance in T staging rectal cancer. This tool could improve the performance of and consistency between radiologists in preoperatively assessing rectal cancer patients.

PMID:40304753 | DOI:10.1007/s00261-025-04966-0

Categories: Literature Watch

Functional blepharoptosis screening with generative augmented deep learning from external ocular photography

Wed, 2025-04-30 06:00

Orbit. 2025 Apr 30:1-7. doi: 10.1080/01676830.2025.2497460. Online ahead of print.

ABSTRACT

PURPOSE: To develop and validate a deep learning model for the detection of functional blepharoptosis from external ocular photographs, and to quantify the impact of augmenting the training data with synthetic images on model performance.

METHODS: External ocular photographs of 771 eyes from patients aged ≥ 21 years seen at a tertiary oculoplastic clinic, including 639 with clinically diagnosed functional blepharoptosis and 132 without, were obtained and cropped. These were then randomly assigned into training (n = 539), validation (n = 76) and test (n = 156) subsets, to train and evaluate a baseline deep learning model. Additional synthetic data from a pretrained StyleGAN model was then used to augment the training set (n = 2000), to train and evaluate an augmented deep learning model. The performance of both models was then analyzed.

RESULTS: Accuracy of the deep learning models was assessed in terms of sensitivity and specificity in identifying eye images with functionally significant blepharoptosis. The baseline model achieved a sensitivity of 0.68 (0.60-0.76), specificity of 0.89 (0.77-1.00), and AUC of 0.87 (0.81-0.93); the GAN-augmented model achieved a sensitivity of 0.95 (0.92-0.99), specificity of 0.67 (0.49-0.84), and AUC of 0.91 (0.86-0.96).
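
Sensitivity and specificity follow the usual confusion-matrix definitions; a small sketch with invented counts chosen to mirror the baseline model's 0.68/0.89 operating point:

```python
# Confusion-matrix counts (invented) for a ptosis screening test set.
tp, fn = 68, 32   # ptosis eyes flagged correctly / missed
tn, fp = 89, 11   # normal eyes passed correctly / falsely flagged

sensitivity = tp / (tp + fn)   # fraction of ptosis eyes detected
specificity = tn / (tn + fp)   # fraction of normal eyes cleared
print(sensitivity, specificity)  # 0.68 0.89
```

Note how the GAN-augmented model trades specificity for sensitivity, a defensible trade-off for a screening tool where missed cases are costlier than false referrals.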

CONCLUSIONS: Functional blepharoptosis can be detected from external ocular photographs with high confidence, and the use of synthetic data from generative models has the potential to further improve the model performance.

PMID:40304715 | DOI:10.1080/01676830.2025.2497460

Categories: Literature Watch

Predicting Mortality with Deep Learning: Are Metrics Alone Enough?

Wed, 2025-04-30 06:00

Radiol Artif Intell. 2025 May;7(3):e250224. doi: 10.1148/ryai.250224.

NO ABSTRACT

PMID:40304577 | DOI:10.1148/ryai.250224

Categories: Literature Watch

Automated Operative Phase and Step Recognition in Vestibular Schwannoma Surgery: Development and Preclinical Evaluation of a Deep Learning Neural Network (IDEAL Stage 0)

Wed, 2025-04-30 06:00

Neurosurgery. 2025 Apr 30. doi: 10.1227/neu.0000000000003466. Online ahead of print.

ABSTRACT

BACKGROUND AND OBJECTIVES: Machine learning (ML) in surgical video analysis offers promising prospects for training and decision support in surgery. The past decade has seen key advances in ML-based operative workflow analysis, though existing applications mostly feature shorter surgeries (<2 hours) with limited scene changes. The aim of this study was to develop and evaluate a ML model capable of automated operative workflow recognition for retrosigmoid vestibular schwannoma (VS) resection. In doing so, this project furthers previous research by applying workflow prediction platforms to lengthy (median >5 hours duration), data-heavy surgeries, using VS resection as an exemplar.

METHODS: A video dataset of 21 microscopic retrosigmoid VS resections was collected at a single institution over 3 years and underwent workflow annotation according to a previously agreed expert consensus (Approach, Excision, and Closure phases; and Debulking or Dissection steps within the Excision phase). The annotations were used to train an ML model consisting of a convolutional neural network and a recurrent neural network. Five-fold cross-validation was used, and performance metrics (accuracy, precision, recall, F1 score) were assessed for phase and step prediction.

RESULTS: Median operative video time was 5 hours 18 minutes (IQR 3 hours 21 minutes-6 hours 1 minute). The "Tumor Excision" phase accounted for the majority of each case (median 4 hours 23 minutes), whereas "Approach and Exposure" (28 minutes) and "Closure" (17 minutes) comprised shorter phases. The ML model accurately predicted operative phases (accuracy 81%, weighted F1 0.83) and dichotomized steps (accuracy 86%, weighted F1 0.86).

CONCLUSION: This study demonstrates that our ML model can accurately predict the surgical phases and intraphase steps in retrosigmoid VS resection. This demonstrates the successful application of ML in operative workflow recognition on low-volume, lengthy, data-heavy surgical videos. Despite this, there remains room for improvement in individual step classification. Future applications of ML in low-volume high-complexity operations should prioritize collaborative video sharing to overcome barriers to clinical translation.
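
The weighted F1 reported above averages per-phase F1 scores weighted by each phase's frequency; a toy computation over invented phase labels:

```python
# Weighted F1 over three surgical phases; the label sequences are invented.
true = ["Approach", "Excision", "Excision", "Excision", "Closure", "Excision"]
pred = ["Approach", "Excision", "Excision", "Closure", "Closure", "Excision"]

weighted_f1 = 0.0
for phase in sorted(set(true)):
    tp = sum(t == phase and p == phase for t, p in zip(true, pred))
    fp = sum(t != phase and p == phase for t, p in zip(true, pred))
    fn = sum(t == phase and p != phase for t, p in zip(true, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    weighted_f1 += f1 * true.count(phase) / len(true)  # frequency weighting
print(round(weighted_f1, 3))  # 0.849
```

Frequency weighting matters here because the Excision phase dominates each case, as the duration figures above show.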

PMID:40304484 | DOI:10.1227/neu.0000000000003466

Categories: Literature Watch

Unlocking Responsive and Unresponsive Signatures: A Transfer Learning Approach for Automated Classification in Cutaneous Leishmaniasis Lesions

Wed, 2025-04-30 06:00

Transbound Emerg Dis. 2025 Jan 21;2025:5018632. doi: 10.1155/tbed/5018632. eCollection 2025.

ABSTRACT

Cutaneous leishmaniasis (CL) remains a significant global public health problem, and the accurate distinction between treatment-responsive and unresponsive cases dictates treatment strategies and patient outcomes. However, image-based methods for differentiating these groups are unexplored. This study addresses this gap by developing a deep learning (DL) model that uses transfer learning to automatically identify treatment response in CL lesions. A dataset of 102 lesion images (51 per class, distributed equally across train, test, and validation sets) was employed. The DenseNet161, VGG16, and ResNet18 networks, pretrained on a massive image dataset, were fine-tuned for our specific task. The models achieved accuracies of 76.47%, 73.53%, and 55.88% on the test data, with sensitivities of 80%, 75%, and 100% and specificities of 73.68%, 72.22%, and 53.12%, respectively. Transfer learning successfully addressed the limited sample size, demonstrating the models' potential for real-world application. This work underscores the significance of automated response detection in CL, paving the way for improved treatment and patient outcomes. While acknowledging limitations such as the small sample size, we emphasize the need for collaborative efforts to expand datasets and further refine the models. This approach points toward a future in which data-driven diagnostics guide effective treatment, and the study could mark a step toward eliminating this widespread public health threat.
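
The equal three-way split described above can be sketched as follows; the filenames are hypothetical placeholders, not the study's data.

```python
import random

# 102 lesion images, 51 per class, divided evenly into train/test/validation
# while keeping the two classes balanced in every subset.
random.seed(0)
responsive = [f"responsive_{i:02d}.png" for i in range(51)]
unresponsive = [f"unresponsive_{i:02d}.png" for i in range(51)]

def split_in_three(items):
    """Shuffle a copy of one class's images and cut it into three folds."""
    items = items[:]
    random.shuffle(items)
    k = len(items) // 3
    return items[:k], items[k:2 * k], items[2 * k:3 * k]

train, test, val = [a + b for a, b in zip(split_in_three(responsive),
                                          split_in_three(unresponsive))]
print(len(train), len(test), len(val))  # 34 34 34
```

Splitting each class separately (a stratified split) keeps the 50/50 class balance in all three subsets, which matters with only 102 images.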

PMID:40302757 | PMC:PMC12016710 | DOI:10.1155/tbed/5018632

Categories: Literature Watch

Enhanced heart disease risk prediction using adaptive botox optimization based deep long-term recurrent convolutional network

Wed, 2025-04-30 06:00

Technol Health Care. 2025 Apr 30:9287329251333750. doi: 10.1177/09287329251333750. Online ahead of print.

ABSTRACT

BACKGROUND: Heart disease is the leading cause of death worldwide, and predicting it is a complex task requiring extensive expertise. Recent advancements in IoT-based illness prediction have enabled accurate classification using sensor data.

OBJECTIVE: This research introduces a methodology for heart disease classification, integrating advanced data preprocessing, feature selection, and deep learning (DL) techniques tailored for IoT sensor data.

METHODS: The work employs Clustering-based Data Imputation and Normalization (CDIN) and Robust Mahalanobis Distance-based Outlier Detection (RMDBOD) for preprocessing, ensuring data quality. Feature selection is achieved using the Improved Binary Quantum-based Avian Navigation Optimization (IBQANO) algorithm, and classification is performed with the Deep Long-Term Recurrent Convolutional Network (DLRCN), fine-tuned using the Adaptive Botox Optimization Algorithm (ABOA).

RESULTS: The proposed models, tested on the Hungarian, UCI, and Cleveland heart disease datasets, demonstrate significant improvements over existing methods. Specifically, the Cleveland dataset model achieves an accuracy of 99.72%, while the UCI dataset model achieves an accuracy of 99.41%.

CONCLUSION: This methodology represents a significant advancement in remote healthcare monitoring, crucial for managing conditions such as high blood pressure, especially in older adults, and offers a reliable and accurate solution for heart disease prediction.
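
The robust outlier-detection step (RMDBOD) builds on the classical Mahalanobis distance. A plain, non-robust sketch on synthetic data with one planted outlier shows the underlying idea; the paper's robust variant is more involved.

```python
import numpy as np

# Mahalanobis distance flags points far from the data's center after
# accounting for feature correlations. Data are synthetic; one outlier
# is planted at (8, 8, 8).
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
X = np.vstack([X, [[8.0, 8.0, 8.0]]])

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
d = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
print(int(np.argmax(d)))  # 500: the planted outlier is the farthest point
```

Robust variants replace the mean and covariance with estimates that outliers cannot distort, which is the refinement RMDBOD's name points to.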

PMID:40302494 | DOI:10.1177/09287329251333750

Categories: Literature Watch

Engaging the Community: CASP Special Interest Groups

Wed, 2025-04-30 06:00

Proteins. 2025 Apr 30. doi: 10.1002/prot.26833. Online ahead of print.

ABSTRACT

The Critical Assessment of Structure Prediction (CASP) brings together a diverse group of scientists, from deep learning experts to NMR specialists, all aimed at developing accurate prediction algorithms that can effectively characterize the structural aspects of biomolecules relevant to their functions. Engagement within the CASP community has traditionally been limited to the prediction season and the conference, with limited discourse in the 1.5 years between CASP seasons. CASP special interest groups (SIGs) were established in 2023 to encourage continuous dialogue within the community. The online seminar series has drawn global participation from across disciplines and career stages. This has facilitated cross-disciplinary discussions fostering collaborations. The archives of these seminars have become a vital learning tool for newcomers to the field, lowering the barrier to entry.

PMID:40304050 | DOI:10.1002/prot.26833

Categories: Literature Watch

Association prediction of lncRNAs and diseases using multiview graph convolution neural network

Wed, 2025-04-30 06:00

Front Genet. 2025 Apr 15;16:1568270. doi: 10.3389/fgene.2025.1568270. eCollection 2025.

ABSTRACT

Long noncoding RNAs (lncRNAs) regulate physiological processes via interactions with macromolecules such as miRNAs, proteins, and genes, forming disease-associated regulatory networks. However, predicting lncRNA-disease associations remains challenging due to network complexity and isolated entities. Here, we propose MVIGCN, a graph convolutional network (GCN)-based method integrating multimodal data to predict these associations. Our framework constructs a heterogeneous network combining disease semantics, lncRNA similarity, and miRNA-lncRNA-disease interactions to address isolation issues. By modeling topological features and multiscale relationships through deep learning with attention mechanisms, MVIGCN prioritizes critical nodes and edges, enhancing prediction accuracy. Cross-validation demonstrated improved reliability over single-view methods, highlighting its potential to identify disease-related lncRNA biomarkers. This work advances network-based computational strategies for decoding lncRNA functions in disease biology and provides a scalable tool for prioritizing therapeutic targets.
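
A single graph-convolution layer, the building block a GCN-based model stacks over its heterogeneous network, can be sketched in the standard normalized form H' = D^(-1/2)(A + I)D^(-1/2) H W. The toy graph and random weights below are illustrative, not MVIGCN's.

```python
import numpy as np

# One GCN layer on a toy 4-node graph: normalize the self-looped adjacency,
# then mix neighbor features through a weight matrix and a ReLU.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                                 # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
norm_adj = D_inv_sqrt @ A_hat @ D_inv_sqrt            # symmetric normalization

rng = np.random.default_rng(3)
H = rng.normal(size=(4, 5))       # node feature vectors
W = rng.normal(size=(5, 2))       # learnable layer weights
H_next = np.maximum(norm_adj @ H @ W, 0.0)            # ReLU activation
print(H_next.shape)  # (4, 2)
```

Each layer lets a node aggregate its neighbors' features, which is how isolated lncRNA or disease nodes gain signal from the wider heterogeneous network.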

PMID:40303981 | PMC:PMC12037633 | DOI:10.3389/fgene.2025.1568270

Categories: Literature Watch

Advanced computational tools, artificial intelligence and machine-learning approaches in gut microbiota and biomarker identification

Wed, 2025-04-30 06:00

Front Med Technol. 2025 Apr 15;6:1434799. doi: 10.3389/fmedt.2024.1434799. eCollection 2024.

ABSTRACT

The gut microbiome is a complex ecosystem containing a wide variety of microbial species and functional capabilities. It has a significant impact on health and disease through its effects on endocrinology, physiology, and neurology, and it can change the progression of certain diseases and enhance treatment response and tolerance. The gut microbiota plays a pivotal role in human health, influencing a wide range of physiological processes. Recent advances in computational tools and artificial intelligence (AI) have revolutionized the study of gut microbiota, enabling the identification of biomarkers that are critical for diagnosing and treating various diseases. This review surveys the cutting-edge computational methodologies that integrate multi-omics data, such as metagenomics, metaproteomics, and metabolomics, to provide a comprehensive understanding of the gut microbiome's composition and function. Additionally, machine learning (ML) approaches, including deep learning and network-based methods, are explored for their ability to uncover complex patterns within microbiome data, offering unprecedented insights into microbial interactions and their link to host health. By highlighting the synergy between traditional bioinformatics tools and advanced AI techniques, this review underscores the potential of these approaches to enhance biomarker discovery and develop personalized therapeutic strategies. The convergence of computational advancements and microbiome research marks a significant step forward in precision medicine, paving the way for novel diagnostics and treatments tailored to individual microbiome profiles. Investigators can discover connections between microbial composition, gene expression, and metabolite profiles, and AI-driven models can predict individual responses to medicines that target gut microbes. Understanding the gut microbiota's impact on disease development is a first step toward personalized and precision medicine, and machine learning allows treatments to be customized to an individual's specific microbial environment.

PMID:40303946 | PMC:PMC12037385 | DOI:10.3389/fmedt.2024.1434799

Categories: Literature Watch

Artificial intelligence in traditional Chinese medicine: advances in multi-metabolite multi-target interaction modeling

Wed, 2025-04-30 06:00

Front Pharmacol. 2025 Apr 15;16:1541509. doi: 10.3389/fphar.2025.1541509. eCollection 2025.

ABSTRACT

Traditional Chinese Medicine (TCM) utilizes multi-metabolite and multi-target interventions to address complex diseases, providing advantages over single-target therapies. However, the active metabolites, therapeutic targets, and especially the combination mechanisms remain unclear. The integration of advanced data analysis and nonlinear modeling capabilities of artificial intelligence (AI) is driving the transformation of TCM into precision medicine. This review concentrates on the application of AI in TCM target prediction, including multi-omics techniques, TCM-specialized databases, machine learning (ML), deep learning (DL), and cross-modal fusion strategies. It also critically analyzes persistent challenges such as data heterogeneity, limited model interpretability, causal confounding, and insufficient robustness validation in practical applications. To enhance the reliability and scalability of AI in TCM target prediction, future research should prioritize continuous optimization of the AI algorithms using zero-shot learning, end-to-end architectures, and self-supervised contrastive learning.

PMID:40303920 | PMC:PMC12037568 | DOI:10.3389/fphar.2025.1541509

Categories: Literature Watch

International Importation Risk Estimation of SARS-CoV-2 Omicron Variant with Incomplete Mobility Data

Wed, 2025-04-30 06:00

Transbound Emerg Dis. 2023 Sep 14;2023:5046932. doi: 10.1155/2023/5046932. eCollection 2023.

ABSTRACT

A novel Omicron subvariant named BQ.1 emerged in Nigeria in July 2022 and has since become a dominant strain, causing a significant number of repeat infections even in countries with high vaccination rates. Due to the high flow of people between Western Africa and other non-African countries, there is a high risk of Omicron BQ.1 being introduced to other countries from Western Africa. In this context, we developed a model based on deep neural networks to estimate the probability of Omicron BQ.1 being introduced to other countries from Western Africa, based on incomplete population mobility data from Western Africa to non-African countries. Our study found that the highest risk during the study period was in France and Spain, while the importation risk of 13 other non-African countries, including Canada and the United States, was also high. Our approach sheds light on how deep learning techniques can assist in the development of public health policies, and it has the potential to be extended to other types of viruses.

PMID:40303718 | PMC:PMC12016809 | DOI:10.1155/2023/5046932

Categories: Literature Watch

Application and research progress of artificial intelligence in allergic diseases

Wed, 2025-04-30 06:00

Int J Med Sci. 2025 Apr 9;22(9):2088-2102. doi: 10.7150/ijms.105422. eCollection 2025.

ABSTRACT

Artificial intelligence (AI), as a new technology that can assist or even replace some human functions, can collect and analyse large amounts of textual, visual and auditory data through techniques such as Reinforcement Learning, Machine Learning, Deep Learning and Natural Language Processing to establish complex, non-linear relationships and construct models. These can support doctors in disease prediction, diagnosis, treatment and management, and play a significant role in clinical risk prediction, improving the accuracy of disease diagnosis, assisting in the development of new drugs, and enabling precision treatment and personalised management. In recent years, AI has been used in the prediction, diagnosis, treatment and management of allergic diseases. Allergic diseases are a type of chronic non-communicable disease that have the potential to affect a number of different systems and organs, seriously impacting people's mental health and quality of life. In this paper, we focus on asthma and summarise the application and research progress of AI in asthma, atopic dermatitis, food allergies, allergic rhinitis and urticaria, from the perspectives of disease prediction, diagnosis, treatment and management. We also briefly analyse the advantages and limitations of various intelligent assistance methods, in order to provide a reference for research teams and medical staff.

PMID:40303497 | PMC:PMC12035833 | DOI:10.7150/ijms.105422

Categories: Literature Watch

Automatic pelvic fracture segmentation: a deep learning approach and benchmark dataset

Wed, 2025-04-30 06:00

Front Med (Lausanne). 2025 Apr 15;12:1511487. doi: 10.3389/fmed.2025.1511487. eCollection 2025.

ABSTRACT

INTRODUCTION: Accurate segmentation of pelvic fractures from computed tomography (CT) is crucial for trauma diagnosis and image-guided reduction surgery. The traditional manual slice-by-slice segmentation by surgeons is time-consuming, experience-dependent, and error-prone. The complex anatomy of the pelvic bone, the diversity of fracture types, and the variability in fracture surface appearances pose significant challenges to automated solutions.

METHODS: We propose an automatic pelvic fracture segmentation method based on deep learning, which effectively isolates hipbone and sacrum fragments from fractured pelvic CT. The method employs two sequential networks: an anatomical segmentation network for extracting hipbones and sacrum from CT images, followed by a fracture segmentation network that isolates the main and minor fragments within each bone region. We propose a distance-weighted loss to guide the fracture segmentation network's attention on the fracture surface. Additionally, multi-scale deep supervision and smooth transition strategies are incorporated to enhance overall performance.
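
The distance-weighted loss can be sketched as a per-voxel cross-entropy scaled by a weight that grows as the distance to the fracture surface shrinks. The weighting function below is an illustrative choice, not the paper's exact formula, and all inputs are synthetic.

```python
import numpy as np

# Toy distance-weighted loss: voxels near the fracture surface (small
# distance) receive weights close to 2, distant voxels close to 1.
rng = np.random.default_rng(4)
prob_fg = rng.uniform(0.05, 0.95, size=100)   # predicted fragment probability
target = rng.integers(0, 2, size=100)         # ground-truth voxel labels
dist = rng.uniform(0.0, 10.0, size=100)       # distance to fracture surface

weights = 1.0 + np.exp(-dist)                 # up-weight near-surface voxels
ce = -(target * np.log(prob_fg) + (1 - target) * np.log(1 - prob_fg))
loss = float(np.mean(weights * ce))
print(round(loss, 3))
```

Concentrating the loss near the fracture surface pushes the network to resolve exactly the boundary that separates the main fragment from minor ones.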

RESULTS: Tested on a curated dataset of 150 CTs, which we have made publicly available, our method achieves an average Dice coefficient of 0.986 and an average symmetric surface distance of 0.234 mm.

DISCUSSION: The method outperformed traditional max-flow and a transformer-based method, demonstrating its effectiveness in handling complex fractures.

PMID:40303367 | PMC:PMC12039937 | DOI:10.3389/fmed.2025.1511487

Categories: Literature Watch

Scoping Review of Deep Learning Techniques for Diagnosis, Drug Discovery, and Vaccine Development in Leishmaniasis

Wed, 2025-04-30 06:00

Transbound Emerg Dis. 2024 Jan 17;2024:6621199. doi: 10.1155/2024/6621199. eCollection 2024.

ABSTRACT

Leishmania, a single-cell parasite prevalent in tropical and subtropical regions worldwide, can cause varying degrees of leishmaniasis, ranging from self-limiting skin lesions to potentially fatal visceral complications. As such, the parasite has been the subject of much interest in the scientific community. In recent years, advances in diagnostic techniques such as flow cytometry, molecular biology, proteomics, and nanodiagnosis have contributed to progress in the diagnosis of this deadly disease. Additionally, the emergence of artificial intelligence (AI), including subbranches such as machine learning and deep learning, has revolutionized the field of medicine. The high accuracy of AI and its potential to reduce human and laboratory errors make it an especially promising tool for diagnosis and treatment. Despite the promising potential of deep learning in the medical field, there has been no review study of its applications in the context of leishmaniasis. To address this gap, we provide a scoping review of deep learning methods in the diagnosis of the disease, drug discovery, and vaccine development. We conducted a thorough search of the available literature and analyzed in detail the articles that applied deep learning methods to various aspects of the disease, including diagnosis, drug discovery, vaccine development, and related proteins. Each study was analyzed individually, and its methodology and results are presented. As the first review study on this topic, this paper serves as a quick and comprehensive resource and guide for future research in this field.

PMID:40303156 | PMC:PMC12019899 | DOI:10.1155/2024/6621199

Impact of synthetic data on training a deep learning model for lesion detection and classification in contrast-enhanced mammography

Wed, 2025-04-30 06:00

J Med Imaging (Bellingham). 2025 Nov;12(Suppl 2):S22006. doi: 10.1117/1.JMI.12.S2.S22006. Epub 2025 Apr 28.

ABSTRACT

PURPOSE: Predictive models for contrast-enhanced mammography often perform better at detecting and classifying enhancing masses than (non-enhancing) microcalcification clusters. We aim to investigate whether incorporating synthetic data with simulated microcalcification clusters during training can enhance model performance.

APPROACH: Microcalcification clusters were simulated in low-energy images of lesion-free breasts from 782 patients, considering local texture features. Enhancement was simulated in the corresponding recombined images. A deep learning (DL) model for lesion detection and classification was trained with varying ratios of synthetic and real (850 patients) data. In addition, a handcrafted radiomics classifier was trained using delineations and class labels from real data, and predictions from both models were ensembled. Validation was performed on internal (212 patients) and external (279 patients) real datasets.
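As an illustrative sketch only (not the authors' code; the mixing rule and the fusion weight are assumptions), building a training set with a target fraction of synthetic cases and late-fusing the DL and radiomics scores might look like:

```python
import numpy as np

def mix_training_sets(real, synthetic, synth_fraction, rng=None):
    """Combine real and synthetic samples so that roughly
    `synth_fraction` of the resulting training set is synthetic."""
    rng = rng or np.random.default_rng(0)
    n_real = len(real)
    # number of synthetic samples needed to hit the requested fraction
    n_synth = int(round(n_real * synth_fraction / (1.0 - synth_fraction)))
    n_synth = min(n_synth, len(synthetic))
    picked = rng.choice(len(synthetic), size=n_synth, replace=False)
    mixed = list(real) + [synthetic[i] for i in picked]
    rng.shuffle(mixed)
    return mixed

def ensemble_probs(p_dl, p_radiomics, w_dl=0.5):
    """Late fusion: weighted average of the two models' malignancy scores."""
    return w_dl * np.asarray(p_dl) + (1.0 - w_dl) * np.asarray(p_radiomics)
```

The abstract's finding that the ensemble underperformed the standalone DL model is a reminder that such a fusion weight would itself need tuning on validation data.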

RESULTS: The DL model trained exclusively on synthetic data detected over 60% of malignant lesions. Adding synthetic data to smaller real training sets improved detection sensitivity for malignant lesions but decreased precision. Performance plateaued at a detection sensitivity of 0.80. The ensembled DL and radiomics models performed worse than the standalone DL model, decreasing the area under the receiver operating characteristic curve from 0.75 to 0.60 on the external validation set, likely due to falsely detected suspicious regions of interest.

CONCLUSIONS: Synthetic data can enhance DL model performance, provided model setup and data distribution are optimized. The possibility to detect malignant lesions without real data present in the training set confirms the utility of synthetic data. It can serve as a helpful tool, especially when real data are scarce, and it is most effective when complementing real data.

PMID:40302983 | PMC:PMC12036226 | DOI:10.1117/1.JMI.12.S2.S22006

Prediction of the Therapeutic Response to Neoadjuvant Chemotherapy for Rectal Cancer Using a Deep Learning Model

Wed, 2025-04-30 06:00

J Anus Rectum Colon. 2025 Apr 25;9(2):202-212. doi: 10.23922/jarc.2024-085. eCollection 2025.

ABSTRACT

OBJECTIVES: Predicting the response to chemotherapy can lead to the optimization of neoadjuvant chemotherapy (NAC). The present study aimed to develop a non-invasive prediction model of therapeutic response to NAC for rectal cancer (RC).

METHODS: A dataset of prechemotherapy computed tomography (CT) images from 57 patients at multiple institutions who underwent rectal surgery after three courses of S-1 and oxaliplatin (SOX) NAC for RC was collected. The therapeutic response to NAC was confirmed pathologically, and patients were classified as pathologic responders or non-responders. Cases were divided into training, validation, and test datasets. A CT patch-based predictive model was developed using a residual convolutional neural network, and its predictive performance was evaluated. Binary logistic regression analysis of prechemotherapy clinical factors showed that none of the independent variables was significantly associated with non-responder status.
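A patch-based pipeline of this kind typically tiles each CT slice into fixed-size patches, scores each patch with the CNN, and aggregates patch-level probabilities into a case-level call. A minimal sketch, with the patch size, stride, and mean-aggregation rule all assumed for illustration (the paper does not specify them here):

```python
import numpy as np

def extract_patches(ct_slice, patch_size=32, stride=32):
    """Tile a 2-D CT slice into patches (non-overlapping when stride == size)."""
    h, w = ct_slice.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(ct_slice[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def case_level_prediction(patch_probs, threshold=0.5):
    """Aggregate patch-level responder probabilities into one case label by
    averaging (other rules, e.g. majority vote, are equally plausible)."""
    mean_prob = float(np.mean(patch_probs))
    return mean_prob, int(mean_prob >= threshold)
```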

RESULTS: Among the 49 patients in the training and validation datasets, 21 (42.9%) were responders and 28 (57.1%) were non-responders. A total of 3,857 patches were extracted from these 49 patients. In the validation dataset, the average sensitivity, specificity, and accuracy were 97.3%, 95.7%, and 96.8%, respectively. Furthermore, the area under the receiver operating characteristic curve (AUC) was 0.994 (95% CI, 0.991-0.997; P<0.001). In the test dataset, which comprised 750 patches from 8 patients, the predictive model demonstrated high specificity (89.9%) and an AUC of 0.846 (95% CI, 0.817-0.875; P<0.001).
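An AUC with a 95% CI, as reported above, is often obtained via a percentile bootstrap. A hedged sketch (a rank-based AUC plus bootstrap resampling; this is an illustration, not the authors' statistical procedure, and the simple ranking assumes no cross-class score ties):

```python
import numpy as np

def auc_score(y_true, y_score):
    """Rank-based AUC (Mann-Whitney U); assumes no cross-class score ties."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def auc_with_bootstrap_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Point-estimate AUC with a percentile bootstrap confidence interval."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        boots.append(auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc_score(y_true, y_score), (float(lo), float(hi))
```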

CONCLUSIONS: The non-invasive deep learning model using prechemotherapy CT images exhibited high predictive performance in predicting the pathological therapeutic response to SOX NAC.

PMID:40302856 | PMC:PMC12035344 | DOI:10.23922/jarc.2024-085
