Semantic Parsing for Question and Answering over Scholarly Knowledge Graph with Large Language Models
2024; Springer Science+Business Media; Language: English
DOI: 10.1007/978-981-97-3076-6_20
ISSN: 1611-3349
Authors: Le-Minh Nguyen, Le-Nguyen Khang, Kieu Que Anh, Nguyen Dieu Hien, Yukari Nagai
Topic(s): Natural Language Processing Techniques
Abstract: This paper presents a study of how to map a natural language (NL) sentence to a semantic representation, applied to question answering over the DBLP database. We investigate a deep learning approach using pre-trained models fine-tuned on training data for semantic parsing tasks. Experimental results on standard datasets show the effectiveness of pre-trained models in mapping an NL sentence to SPARQL, a query language for semantic databases. The results also show that the T5 and Flan-T5 models outperform other models in terms of translation accuracy. In addition to the empirical results on pre-trained models, we examine large language models (LLMs) such as Llama, Mistral, and Qwen for answering questions over the DBLP database. Experimental results show the potential of using LLMs with chain-of-thought prompting methods. Without using training data, we obtained promising results for some types of questions when translating them to SPARQL.
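The abstract describes translating NL questions into SPARQL queries over the DBLP knowledge graph. As an illustrative sketch only: the paper fine-tunes T5/Flan-T5 to generate such queries, whereas the hand-written template below merely shows what a target SPARQL query might look like. The predicate names (`dblp:title`, `dblp:authoredBy`) follow the public DBLP RDF schema and are an assumption, not taken from the paper.

```python
def authors_of_paper_query(paper_title: str) -> str:
    """Build a SPARQL query asking for the authors of a DBLP paper by title.

    Hypothetical template for illustration; the paper's models *generate*
    queries like this one rather than filling in templates.
    """
    # Escape embedded quotes so the string literal stays well-formed.
    title = paper_title.replace('"', '\\"')
    return (
        "PREFIX dblp: <https://dblp.org/rdf/schema#>\n"
        "SELECT ?author WHERE {\n"
        f'  ?paper dblp:title "{title}" .\n'
        "  ?paper dblp:authoredBy ?author .\n"
        "}"
    )

query = authors_of_paper_query("Attention Is All You Need")
print(query)
```

A semantic parser's job is then to produce an equivalent query for paraphrases such as "Who wrote the paper Attention Is All You Need?", which templates alone cannot cover.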