Squeeze Every Drop of Performance from Your LLM with AWQ (Activation-aware Weight Quantization)

Adithya S K
7 min read · Oct 21, 2023

A Guide to Quantize LLMs Using AWQ on a Google Colab Notebook

Ever wondered how to quantize LLMs? Here is a comprehensive guide to quantizing LLMs using AWQ. A GGUF quantization blog will be out soon.

Introduction

Large language models (LLMs) like GPT-3, PaLM, and LLaMA have proven tremendously powerful. But their tens to hundreds of billions of parameters also make them incredibly computationally demanding. To deploy these models in production, we need ways to make them more efficient.

This is where quantization comes in. By reducing the precision of weights in a neural network from float32 to lower bitwidths like INT8, INT4, or even INT2, we can shrink the model size and significantly speed up computation. However, naive quantization that simply rounds weights to lower precision can seriously hurt model accuracy. We need smarter quantization techniques optimized specifically for large language models.
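To make the savings concrete, here is a quick back-of-the-envelope calculation of weight memory at different precisions. The 7-billion-parameter count is just an illustrative assumption, not any particular model:

```python
# Rough memory footprint of model weights at different precisions.
# The 7-billion-parameter count is an illustrative assumption.
num_params = 7_000_000_000

bits_per_weight = {"float32": 32, "int8": 8, "int4": 4, "int2": 2}

for dtype, bits in bits_per_weight.items():
    gigabytes = num_params * bits / 8 / 1e9  # bits -> bytes -> GB
    print(f"{dtype:>8}: ~{gigabytes:.1f} GB")

# float32: ~28.0 GB, int8: ~7.0 GB, int4: ~3.5 GB, int2: ~1.8 GB
```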

Enter Activation-Aware Weight Quantization (AWQ) — a method tailored for quantizing LLMs with minimal impact on accuracy. In this post, we’ll dive into what AWQ is, how to use it, and the performance benefits you can realize.

What is Quantization?

Let’s first understand how quantization works. Neural network models typically use 32-bit floating point weights (float32). These weights require 4 bytes or 32 bits of memory per value.

Quantization reduces this precision. For example, INT8 quantization represents each weight using just 8-bit integers rather than 32-bit floats. This immediately shrinks the model size 4x. But more importantly, it also speeds up computation. That’s because modern GPUs and TPUs have specialized 8-bit matrix multiplication kernels optimized for fast INT8 performance.
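As a minimal sketch of what this looks like, here is naive symmetric round-to-nearest INT8 quantization of a weight tensor in PyTorch. The tensor size and the per-tensor scale are illustrative choices, not the exact kernel a framework would use:

```python
import torch

def quantize_int8(w: torch.Tensor):
    # Symmetric round-to-nearest: map the float range [-max|w|, max|w|] onto [-127, 127].
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    # Recover an approximation of the original float weights.
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)            # a float32 weight matrix (illustrative size)
q, scale = quantize_int8(w)

print(w.element_size() * w.nelement() / 1e6, "MB in float32")  # ~67 MB
print(q.element_size() * q.nelement() / 1e6, "MB in int8")     # ~17 MB (4x smaller)
print((w - dequantize(q, scale)).abs().mean())                 # small rounding error
```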

We can quantize to even lower bitwidths like INT4 or INT2 for further compression, at the cost of some accuracy drop. The holy grail is to maintain the original full-precision accuracy as much as possible.
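At these very low bitwidths there are only a handful of quantization levels, so in practice weights are usually quantized in small groups, each with its own scale and zero-point; group-wise 4-bit quantization (commonly with a group size of 128) is also what AWQ builds on. Here is a minimal sketch of group-wise asymmetric INT4 quantization, where the group size of 128 is just a common default rather than a requirement:

```python
import torch

def quantize_int4_grouped(w: torch.Tensor, group_size: int = 128):
    # Split each row into groups of `group_size` weights, each with its own scale/zero-point.
    orig_shape = w.shape
    w = w.reshape(-1, group_size)
    w_min = w.min(dim=1, keepdim=True).values
    w_max = w.max(dim=1, keepdim=True).values
    scale = (w_max - w_min) / 15.0                 # 4 bits -> 16 levels (0..15)
    zero = torch.round(-w_min / scale)
    q = torch.clamp(torch.round(w / scale) + zero, 0, 15)
    # Dequantize to see the approximation error introduced by 4-bit precision.
    w_hat = ((q - zero) * scale).reshape(orig_shape)
    return q.reshape(orig_shape), scale, zero, w_hat

w = torch.randn(4096, 4096)
_, _, _, w_hat = quantize_int4_grouped(w)
print((w - w_hat).abs().mean())  # noticeably larger error than INT8
```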

Naive quantization that simply rounds weights to the nearest quantized value works decently for CNNs. But for large language models, it results in a significant drop in accuracy.
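For orientation, here is a rough sketch of what an AWQ quantization run looks like with the open-source AutoAWQ library. The model path, output directory, and config values below are illustrative placeholders and common defaults, not prescriptions:

```python
# pip install autoawq transformers  (on Colab, a GPU runtime is needed)
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "facebook/opt-125m"   # placeholder model; any supported causal LM works
quant_path = "opt-125m-awq"        # where the quantized weights will be written

# 4-bit weights with group size 128 are common AWQ defaults, not the only valid choice.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run the activation-aware calibration and quantize the weights.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```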
