Changxuan Wan


2024

Enhancing Text-to-SQL Capabilities of Large Language Models through Tailored Promptings
Zhao Tan | Xiping Liu | Qing Shu | Xi Li | Changxuan Wan | Dexi Liu | Qizhi Wan | Guoqiong Liao
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Large language models (LLMs) with prompting have achieved encouraging results on many natural language processing (NLP) tasks based on task-tailored promptings. Text-to-SQL is a critical task that generates SQL queries from natural language questions. However, prompting on LLMs has not shown superior performance on the Text-to-SQL task due to the absence of tailored promptings. In this work, we propose three promptings specifically designed for Text-to-SQL: SL-prompt, CC-prompt, and SL+CC prompt. SL-prompt is designed to guide LLMs to identify relevant tables; CC-prompt directs LLMs to generate SQL clause by clause; and SL+CC prompt combines the strengths of the above promptings. The three prompting strategies yield three solutions for Text-to-SQL. A further prompting strategy, the RS-prompt, then directs LLMs to select the best answer from the results of these solutions. We conducted extensive experiments, and the results show that our method achieved an execution accuracy of 86.2% and a test-suite accuracy of 76.9%, which are 1.1% and 2.7% higher, respectively, than the current state-of-the-art Text-to-SQL methods. The results confirm that the proposed promptings enhance the capabilities of LLMs on Text-to-SQL. Experimental results also show that the granularity of schema linking and the order of clause generation have a great impact on performance, both of which have received little attention in previous research.
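The abstract's four prompting strategies can be sketched roughly as below, assuming a generic `llm(prompt) -> str` callable. The prompt wording, clause list, and selection heuristic are illustrative assumptions, not the authors' exact templates.

```python
# A minimal sketch of the multi-prompt Text-to-SQL pipeline described above.
# The prompts here are placeholders; the paper's templates are not reproduced.

def sl_prompt(llm, question, schema):
    """Schema-linking prompt: ask the model which tables/columns are relevant."""
    return llm(
        f"Database schema:\n{schema}\n"
        f"Question: {question}\n"
        "List only the tables and columns needed to answer the question."
    )

def cc_prompt(llm, question, schema_fragment):
    """Clause-by-clause prompt: build the SQL query one clause at a time."""
    sql_parts = []
    for clause in ("SELECT", "FROM", "WHERE", "GROUP BY", "ORDER BY"):
        part = llm(
            f"Relevant schema:\n{schema_fragment}\n"
            f"Question: {question}\n"
            f"SQL so far: {' '.join(sql_parts)}\n"
            f"Write the {clause} clause, or reply NONE if it is not needed."
        )
        if part.strip().upper() != "NONE":
            sql_parts.append(part.strip())
    return " ".join(sql_parts)

def sl_cc_prompt(llm, question, schema):
    """SL+CC prompt: clause-by-clause generation restricted to the linked schema."""
    return cc_prompt(llm, question, sl_prompt(llm, question, schema))

def rs_prompt(llm, question, candidates):
    """Result-selection prompt: let the model pick the most plausible query."""
    listing = "\n".join(f"{i + 1}. {sql}" for i, sql in enumerate(candidates))
    choice = llm(
        f"Question: {question}\n"
        f"Candidate SQL queries:\n{listing}\n"
        "Reply with the number of the query most likely to be correct."
    )
    return candidates[int(choice.strip()) - 1]
```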

2023

Joint Document-Level Event Extraction via Token-Token Bidirectional Event Completed Graph
Qizhi Wan | Changxuan Wan | Keli Xiao | Dexi Liu | Chenliang Li | Bolong Zheng | Xiping Liu | Rong Hu
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We solve the challenging document-level event extraction problem by proposing a joint exaction methodology that can avoid inefficiency and error propagation issues in classic pipeline methods. Essentially, we address the three crucial limitations in existing studies. First, the autoregressive strategy of path expansion heavily relies on the orders of argument role. Second, the number of events in documents must be specified in advance. Last, unexpected errors usually exist when decoding events based on the entity-entity adjacency matrix. To address these issues, this paper designs a Token-Token Bidirectional Event Completed Graph (TT-BECG) in which the relation eType-Role1-Role2 serves as the edge type, precisely revealing which tokens play argument roles in an event of a specific event type. Exploiting the token-token adjacency matrix of the TT-BECG, we develop an edge-enhanced joint document-level event extraction model. Guided by the target token-token adjacency matrix, the predicted token-token adjacency matrix can be obtained during the model training. Then, extracted events and event records in a document are decoded based on the predicted matrix, including the graph structure and edge type decoding. Extensive experiments are conducted on two public datasets, and the results confirm the effectiveness of our method and its superiority over the state-of-the-art baselines.
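To make the role of the edge-labelled token-token adjacency matrix concrete, the sketch below decodes a toy matrix whose labels follow the eType-Role1-Role2 pattern mentioned in the abstract. The example sentence, label set, and grouping-by-event-type heuristic are illustrative simplifications, not the paper's decoding algorithm (which also recovers the graph structure and handles multiple events of the same type).

```python
# A toy decoding of an eType-Role1-Role2 labelled token-token adjacency matrix.
from collections import defaultdict

tokens = ["AcmeCorp", "acquired", "BetaInc", "in", "2020"]

# adjacency[i][j] holds a label "eType-Role_i-Role_j", meaning token i fills
# Role_i and token j fills Role_j in an event of type eType.
adjacency = defaultdict(dict)
adjacency[0][2] = "Acquisition-Acquirer-Target"
adjacency[0][4] = "Acquisition-Acquirer-Time"
adjacency[2][4] = "Acquisition-Target-Time"

def decode_events(tokens, adjacency):
    """Group labelled edges by event type and collect (role, token) pairs."""
    events = defaultdict(set)
    for i, row in adjacency.items():
        for j, label in row.items():
            etype, role_i, role_j = label.split("-")
            events[etype].add((role_i, tokens[i]))
            events[etype].add((role_j, tokens[j]))
    return {etype: sorted(args) for etype, args in events.items()}

print(decode_events(tokens, adjacency))
# {'Acquisition': [('Acquirer', 'AcmeCorp'), ('Target', 'BetaInc'), ('Time', '2020')]}
```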

2022

基于知识迁移的情感-原因对抽取(Emotion-Cause Pair Extraction Based on Knowledge-Transfer)
Fengyuan Zhao (赵凤园) | Dexi Liu (刘德喜) | Qizhi Wan (万齐智) | Changxuan Wan (万常选) | Xiping Liu (刘喜平) | Guoqiong Liao (廖国琼)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Existing emotion-cause pair extraction models do not incorporate external knowledge to improve the extraction of emotion-cause pairs. This paper proposes an emotion-cause pair extraction model based on knowledge transfer (ECPE-KT). A knowledge base is used to obtain an explicit knowledge encoding of the text; an external emotion-classification corpus is then introduced to transfer an implicit knowledge encoding for each clause. Finally, the two knowledge encodings are concatenated, the emotion (cause) clause prediction probabilities and relative positions are added, a Transformer mechanism fuses the context, and a window mechanism is adopted to reduce the computational burden, realizing emotion-cause pair extraction. Experimental results on the ECPE dataset show that the proposed method outperforms the current state-of-the-art model ECPE-2D.
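As a rough illustration of the fusion step described in the abstract, the following PyTorch sketch concatenates explicit and implicit knowledge encodings with the emotion/cause prediction probabilities and relative positions, then fuses context with a window-restricted Transformer encoder. All dimensions, the window size, and the classification head are assumptions made for the sake of a runnable example, not the paper's architecture.

```python
# A minimal sketch of knowledge-encoding fusion with a windowed Transformer.
import torch
import torch.nn as nn

class ECPEKTFusion(nn.Module):
    def __init__(self, dim_explicit=128, dim_implicit=128, d_model=256, window=3):
        super().__init__()
        # +3 for emotion probability, cause probability, and relative position.
        self.project = nn.Linear(dim_explicit + dim_implicit + 3, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model, 2)  # pair / not a pair
        self.window = window

    def forward(self, explicit, implicit, emo_prob, cause_prob, rel_pos):
        # explicit, implicit: (batch, clauses, dim); the rest: (batch, clauses)
        features = torch.cat(
            [explicit, implicit,
             emo_prob.unsqueeze(-1), cause_prob.unsqueeze(-1), rel_pos.unsqueeze(-1)],
            dim=-1,
        )
        hidden = self.project(features)
        # Window mechanism: each clause attends only to neighbours within
        # `window` positions, limiting the attention cost.
        n = hidden.size(1)
        idx = torch.arange(n)
        mask = (idx[None, :] - idx[:, None]).abs() > self.window
        fused = self.encoder(hidden, mask=mask)
        return self.classifier(fused)

# Example with random clause encodings for two documents of 6 clauses each.
model = ECPEKTFusion()
b, n = 2, 6
logits = model(
    torch.randn(b, n, 128), torch.randn(b, n, 128),
    torch.rand(b, n), torch.rand(b, n), torch.randn(b, n),
)
print(logits.shape)  # torch.Size([2, 6, 2])
```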