Wen Zheng


2023

Contextual Knowledge Learning for Dialogue Generation
Wen Zheng | Natasa Milic-Frayling | Ke Zhou
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Incorporating conversational context and knowledge into dialogue generation models has been essential for improving the quality of the generated responses. The context, comprising utterances from previous dialogue exchanges, is used as a source of content for response generation and as a means of selecting external knowledge. However, to avoid introducing irrelevant content, it is key to enable fine-grained scoring of context and knowledge. In this paper, we present a novel approach to context and knowledge weighting as an integral part of model training. We guide the model training through a Contextual Knowledge Learning (CKL) process that involves latent vectors for context and knowledge, respectively. The CKL latent vectors capture the relationship between context, knowledge, and responses through weak supervision, and enable differential weighting of context utterances and knowledge sentences during training. Experiments with two standard datasets and human evaluation demonstrate that CKL leads to a significant improvement over six strong baseline models and remains robust when the training set size is reduced.
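A minimal, hypothetical sketch of the differential-weighting idea described in the abstract, not the paper's implementation: the module, its names, and the PyTorch framing are assumptions made purely for illustration. Learned latent vectors score each context utterance and knowledge sentence, and the normalized scores reweight their encodings before they reach the decoder.

```python
# Illustrative sketch only (not the authors' CKL code): learned latent vectors
# score context utterances and knowledge sentences; softmax-normalized scores
# reweight their encodings before response generation.
import torch
import torch.nn as nn

class LatentWeighting(nn.Module):
    """Hypothetical differential weighting of context and knowledge encodings."""
    def __init__(self, hidden_size: int):
        super().__init__()
        # One latent vector per source; parameter names are illustrative.
        self.context_latent = nn.Parameter(torch.randn(hidden_size))
        self.knowledge_latent = nn.Parameter(torch.randn(hidden_size))

    def forward(self, context_enc: torch.Tensor, knowledge_enc: torch.Tensor):
        # context_enc: (num_utterances, hidden); knowledge_enc: (num_sentences, hidden)
        ctx_scores = torch.softmax(context_enc @ self.context_latent, dim=0)
        knw_scores = torch.softmax(knowledge_enc @ self.knowledge_latent, dim=0)
        # Weighted encodings feed the generator; in the paper, weak supervision
        # on such scores guides training. Here we simply return them.
        return ctx_scores.unsqueeze(-1) * context_enc, knw_scores.unsqueeze(-1) * knowledge_enc
```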

2021

Knowledge-Grounded Dialogue Generation with Term-level De-noising
Wen Zheng | Natasa Milic-Frayling | Ke Zhou
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Approximation of Response Knowledge Retrieval in Knowledge-grounded Dialogue Generation
Wen Zheng | Natasa Milic-Frayling | Ke Zhou
Findings of the Association for Computational Linguistics: EMNLP 2020

This paper is concerned with improving dialogue generation models through the injection of knowledge, e.g., content relevant to the post that can increase the quality of responses. Past research extends the training of generative models by incorporating statistical properties of posts, responses, and related knowledge, without explicitly assessing the knowledge quality. In our work, we demonstrate the importance of knowledge relevance and adopt a two-phase approach. We first apply a novel method, Transformer & Post based Posterior Approximation (TPPA), to select knowledge, and then use the Transformer with Expanded Decoder (TED) model to generate responses from both the post and the knowledge. The TPPA method processes posts, post-related knowledge, and response-related knowledge at both the word and sentence levels. Our experiments with the TED generative model demonstrate the effectiveness of TPPA, which outperforms a set of strong baseline models. The TPPA method is extensible and supports further optimization of knowledge retrieval and injection.
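A minimal, hypothetical sketch of the two-phase idea described in the abstract, not the paper's TPPA/TED code: TF-IDF cosine similarity stands in for the posterior approximation, and the function and variable names are assumptions made for illustration.

```python
# Illustrative sketch only (not TPPA): rank candidate knowledge sentences
# against the post with TF-IDF cosine similarity, then pass the top sentences
# to a downstream generator together with the post.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_knowledge(post: str, candidates: list[str], top_k: int = 2) -> list[str]:
    vec = TfidfVectorizer().fit([post] + candidates)
    post_v = vec.transform([post])
    cand_v = vec.transform(candidates)
    scores = cosine_similarity(post_v, cand_v)[0]
    ranked = sorted(zip(scores, candidates), reverse=True)
    return [sentence for _, sentence in ranked[:top_k]]

if __name__ == "__main__":
    post = "Who directed the film Inception?"
    knowledge = [
        "Inception is a 2010 film directed by Christopher Nolan.",
        "The Eiffel Tower is in Paris.",
    ]
    # The selected knowledge and the post would then be fed to a generator,
    # e.g., the paper's Transformer with Expanded Decoder (TED).
    print(select_knowledge(post, knowledge, top_k=1))
```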

2013

A Hybrid Model For Grammatical Error Correction
Yang Xiang | Bo Yuan | Yaoyun Zhang | Xiaolong Wang | Wen Zheng | Chongqiang Wei
Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task

Grammatical Error Correction Using Feature Selection and Confidence Tuning
Yang Xiang | Yaoyun Zhang | Xiaolong Wang | Chongqiang Wei | Wen Zheng | Xiaoqiang Zhou | Yuxiu Hu | Yang Qin
Proceedings of the Sixth International Joint Conference on Natural Language Processing