Zhonglei Guo


2024

Linking Adaptive Structure Induction and Neuron Filtering: A Spectral Perspective for Aspect-based Sentiment Analysis
Hao Niu | Maoyi Wang | Yun Xiong | Biao Yang | Xing Jia | Zhonglei Guo
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Recently, it has been discovered that incorporating structure information (e.g., dependency trees) can improve the performance of aspect-based sentiment analysis (ABSA). This structure information is usually obtained from off-the-shelf parsers, which are sub-optimal and unwieldy, so adaptively inducing task-specific structures helps resolve this issue. In this work, we concentrate on adaptive graph structure induction for ABSA and explore how neuron-level manipulation, viewed from a spectral perspective, affects structure induction. Specifically, we treat word representations from pre-trained language models (PLMs) as node features and employ a graph learning module to adaptively generate adjacency matrices, followed by graph neural networks (GNNs) that capture both node features and structural information. Meanwhile, we propose Neuron Filtering (NeuLT), a method for performing neuron-level manipulations on word representations in the frequency domain. We conduct extensive experiments on three public datasets to observe the impact of NeuLT on structure induction and ABSA. The results and further analysis demonstrate that neuron-level manipulation through NeuLT shortens the Aspects-sentiment Distance of the induced structures and helps improve ABSA performance. Our method achieves or comes close to SOTA (state-of-the-art) performance.
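
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of one plausible reading of it: PLM token representations are filtered neuron-wise in the frequency domain, a graph learner induces a soft adjacency matrix, and a GCN layer propagates over the induced structure. All module and parameter names (SpectralNeuronFilter, GraphLearner, keep_low, etc.) are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of the abstract's pipeline:
# PLM token representations -> frequency-domain filtering per neuron ->
# graph learner inducing an adjacency matrix -> one GCN layer.
import torch
import torch.nn as nn


class SpectralNeuronFilter(nn.Module):
    """Masks frequency components of each neuron (feature dim) along the sequence."""

    def __init__(self, keep_low: int = 8):
        super().__init__()
        self.keep_low = keep_low  # keep only the lowest `keep_low` frequencies

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (batch, seq, dim)
        spec = torch.fft.rfft(h, dim=1)           # FFT over the sequence axis
        mask = torch.zeros_like(spec)
        mask[:, : self.keep_low, :] = 1.0         # low-pass: drop high frequencies
        return torch.fft.irfft(spec * mask, n=h.size(1), dim=1)


class GraphLearner(nn.Module):
    """Induces a soft adjacency matrix from (filtered) node features."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        scores = torch.matmul(self.proj(h), h.transpose(1, 2)) / h.size(-1) ** 0.5
        return torch.softmax(scores, dim=-1)      # (batch, seq, seq) adjacency


class GCNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.lin(torch.matmul(adj, h)))  # aggregate neighbours


if __name__ == "__main__":
    h = torch.randn(2, 16, 64)                    # stand-in for PLM outputs
    h_filtered = SpectralNeuronFilter(keep_low=4)(h)
    adj = GraphLearner(64)(h_filtered)
    out = GCNLayer(64)(h_filtered, adj)
    print(out.shape)                              # torch.Size([2, 16, 64])
```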

2023

Adaptive Structure Induction for Aspect-based Sentiment Analysis with Spectral Perspective
Hao Niu | Yun Xiong | Xiaosu Wang | Wenjing Yu | Yao Zhang | Zhonglei Guo
Findings of the Association for Computational Linguistics: EMNLP 2023

Recently, it has been shown that incorporating structure information (e.g., dependency syntax trees) can enhance the performance of aspect-based sentiment analysis (ABSA). However, this structure information is obtained from off-the-shelf parsers and is often sub-optimal and cumbersome, so automatically learning adaptive structures helps address this problem. In this work, we concentrate on structure induction from pre-trained language models (PLMs) and study it from a spectral perspective to explore how scale information in language representations affects structure induction ability. Concretely, our model consists of a commonly used PLM (e.g., RoBERTa) and a simple yet effective graph structure learning (GSL) module (a graph learner plus GNNs). We plug spectral filters with different bands after the PLM to produce filtered language representations and feed them into the GSL module to induce latent structures. We conduct extensive experiments on three public benchmarks for ABSA. The results and further analyses demonstrate that this spectral approach shortens the Aspects-sentiment Distance (AsD) and benefits structure induction. Even with such a simple framework, our results on the three datasets reach or approach SOTA (state-of-the-art) performance. Additionally, our exploration has the potential to generalize to other tasks or to provide inspiration for similar domains.
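
As a companion sketch, the band-limited filtering step described in this abstract might look as follows: the rFFT spectrum of the token representations is split into frequency bands, one band is reconstructed, and the result is what would be fed to the GSL module. The function and argument names (band_filter, cutoff) are assumptions for illustration, not the authors' code.

```python
# Hypothetical band-limited spectral filter applied to PLM token representations.
import torch


def band_filter(h: torch.Tensor, band: str = "low", cutoff: float = 0.25) -> torch.Tensor:
    """Keep one frequency band of h (batch, seq, dim) along the sequence axis."""
    spec = torch.fft.rfft(h, dim=1)
    split = max(1, int(spec.size(1) * cutoff))
    mask = torch.zeros_like(spec)
    if band == "low":
        mask[:, :split, :] = 1.0        # low-frequency (smooth, global) content
    elif band == "high":
        mask[:, split:, :] = 1.0        # high-frequency (local, rapidly varying) content
    else:
        raise ValueError("band must be 'low' or 'high'")
    return torch.fft.irfft(spec * mask, n=h.size(1), dim=1)


# Example: a low-pass filtered representation for the GSL module.
h = torch.randn(2, 16, 64)              # stand-in for RoBERTa outputs
h_low = band_filter(h, band="low", cutoff=0.25)
```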