Article Type: Research Article
Author
Department of Information Dissemination, Islamic Sciences and Culture Research Institute, Qom, Iran
Abstract
Background and Aim: Explainable Artificial Intelligence (XAI) is emerging as a strategic and rapidly growing field within AI research. Its primary goal is to make intelligent systems more transparent, trustworthy, and explainable. In parallel, knowledge graphs (KGs) play a crucial role in improving the conceptual understanding of data by providing a structured framework for representing and organizing complex relationships between data entities. The convergence of XAI and KGs can improve both the quality and the explainability of AI-based systems, especially in critical applications such as aerospace, nuclear technology, and medicine, which demand high operational accuracy and reliability. Despite the significant potential of this integrated approach, few studies have investigated it systematically and comprehensively. This research analyzes the trends and scientific structure of the explainable AI and knowledge graph literature using bibliometric methods.
Materials and Methods: This applied study uses the bibliometric tools VOSviewer and UCINET. It analyzes all indexed records related to knowledge graphs and explainable AI published from 2020 to 2024, comprising 13,818 records (articles, books, abstracts, etc.), in order to identify global research trends, prominent authors, institutions, and research clusters.
Findings: Analysis of the scientific data reveals that China leads research in this domain with 6,027 research documents, accounting for a significant share of scientific output. The United States and Germany follow, indicating that these three countries are the primary global research hubs. From a thematic perspective, the keywords knowledge graph, explainable AI, and machine learning had the highest frequency in articles, underscoring their importance in recent research. Cluster analysis results indicate that research in this field has primarily developed in four main directions: (1) graph-based modeling, which examines structural relationships in data; (2) semantic applications, emphasizing practical aspects; (3) explainability in machine learning, related to making models transparent; and finally, (4) feature extraction and predictive modeling, focused on improving system performance. These findings demonstrate that the scientific community has systematically addressed both the theoretical and practical aspects of this field.
Conclusion: The data analysis suggests that the fields of knowledge graphs and explainable AI have received significant attention in leading AI countries, resulting in an increased volume of scientific output in these nations. This research provides a comprehensive overview of the research landscape and offers valuable insights for researchers and policymakers to advance the development of explainable AI and knowledge graphs, thereby addressing existing gaps in transparency and explainability within AI systems.