FlattenQuant: Breaking through the Inference Compute-bound for Large Language Models with Per-tensor Quantization

Yi Zhang, Fei Yang, Shuang Peng, Fangyu Wang, Aimin Pan


Abstract
Large language models (LLMs) have demonstrated state-of-the-art accuracy across various tasks. However, inference latency and large GPU memory consumption restrict their deployment. Recently, there have been efficient attempts to quantize LLMs, yet inference with large batch sizes or long sequences remains compute-bound. Fine-grained quantization methods have demonstrated their ability to achieve low-bit quantization for LLMs, but they require the FP16 data type for linear-layer computation, which is time-consuming with large batch sizes or long sequences. In this paper, we introduce a method called FlattenQuant, which significantly reduces the maximum value of a tensor by flattening its larger channels, achieving low-bit per-tensor quantization with minimal accuracy loss. Our experiments show that FlattenQuant can directly perform 48.29% of the linear-layer computation in LLMs with 4 bits, with the remaining layers using 8 bits. The 4-bit matrix multiplication introduced in FlattenQuant effectively addresses the compute-bound nature of large matrix calculations. Our work achieves up to 2× speedup and 2.3× memory reduction for LLMs with negligible accuracy loss.
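The core idea described in the abstract, flattening outlier channels so a single per-tensor scale suffices, can be illustrated with a small sketch. This is a hypothetical NumPy illustration of the general technique, not the authors' implementation: each activation channel whose absolute maximum exceeds a threshold is split into several scaled-down copies, and the matching weight rows are duplicated so the matrix product is unchanged; the flattened tensor then has a much smaller maximum, which tightens the per-tensor quantization scale.

```python
import numpy as np

def flatten_channels(x, w, threshold):
    """Split activation channels of x (shape [tokens, channels]) whose
    absolute max exceeds `threshold` into k scaled-down copies, and
    duplicate the matching rows of w (shape [channels, out]) so that
    x_flat @ w_flat == x @ w. Hypothetical sketch of the flattening idea."""
    x_cols, w_rows = [], []
    for c in range(x.shape[1]):
        col = x[:, c]
        # number of pieces so that each piece's max fits under the threshold
        k = max(1, int(np.ceil(np.abs(col).max() / threshold)))
        for _ in range(k):
            x_cols.append(col / k)   # flattened (reduced-magnitude) channel
            w_rows.append(w[c])      # duplicated weight row preserves the product
    return np.stack(x_cols, axis=1), np.stack(w_rows, axis=0)

def quantize_per_tensor(t, bits=4):
    """Symmetric per-tensor quantization: one scale for the whole tensor."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(t).max() / qmax
    q = np.clip(np.round(t / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale
```

Because each split channel contributes `k * (col / k) * w[c] = col * w[c]` to the output, the matmul result is preserved exactly, while the smaller tensor maximum shrinks the per-tensor scale and thus the quantization error at 4 bits.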
Anthology ID:
2024.lrec-main.648
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
7356–7365
URL:
https://aclanthology.org/2024.lrec-main.648
Cite (ACL):
Yi Zhang, Fei Yang, Shuang Peng, Fangyu Wang, and Aimin Pan. 2024. FlattenQuant: Breaking through the Inference Compute-bound for Large Language Models with Per-tensor Quantization. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7356–7365, Torino, Italia. ELRA and ICCL.
Cite (Informal):
FlattenQuant: Breaking through the Inference Compute-bound for Large Language Models with Per-tensor Quantization (Zhang et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.648.pdf