TurboQuant: Redefining AI efficiency with extreme compression

(research.google)

101 points | by ray__ 2 hours ago

6 comments

  • amitport 3 minutes ago
    This is a great development for KV cache compression. I did notice a missing citation in the related works regarding the core mathematical mechanism, though. The foundational technique of applying a geometric rotation prior to extreme quantization, specifically for managing the high-dimensional geometry and enabling proper bias correction, was introduced in our NeurIPS 2021 paper, "DRIVE" (https://proceedings.neurips.cc/paper/2021/hash/0397758f8990c...). We used this exact rotational approach and a similar bias correction mechanism to achieve optimal distributed mean estimation. I also presented this work and subsequent papers in a private invited talk at Google shortly after publication. Given the strong theoretical overlap with the mechanisms in TurboQuant and PolarQuant, I hope to see this prior art acknowledged in the upcoming camera-ready versions.
  • maurelius2 5 minutes ago
    Beyond the fundamentals, I'm somewhat at a loss here. Can someone tell me how the compression impacts performance?
  • benob 48 minutes ago
    This is the worst lay-people explanation of an AI component I have seen in a long time. It doesn't even seem AI generated.
    • spencerflem 46 minutes ago
      I think it is, though:

      “ TurboQuant, QJL, and PolarQuant are more than just practical engineering solutions; they’re fundamental algorithmic contributions backed by strong theoretical proofs. These methods don't just work well in real-world applications; they are provably efficient and operate near theoretical lower bounds.”

      • benob 41 minutes ago
        Maybe they quantized a bit too much the model parameters...
  • moktonar 19 minutes ago
    Aren’t polar coordinates still n-1 angles plus 1 for the radius for an n-dim vector? If so, I understand that angles can be quantized better, but when the radius r is big, isn't the error large for highly quantized angles? What am I missing?
    • amitport 17 minutes ago
      r is a single value per vector. You don't have to quantize it; you can keep it in full precision and quantize the billion+ other coordinates of the vector.
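A minimal numpy sketch of that point (a hypothetical illustration, not the TurboQuant/PolarQuant code): convert a vector to hyperspherical coordinates, quantize only the n-1 angles, and keep r in full precision. The reconstruction error then scales with r, so the *relative* error does not grow as r gets big.

```python
import numpy as np

def to_polar(x):
    """n-dim vector -> (radius r, n-1 hyperspherical angles)."""
    n = len(x)
    r = np.linalg.norm(x)
    angles = np.empty(n - 1)
    for i in range(n - 2):
        # Angle between x[i] and the norm of the remaining tail; lies in [0, pi].
        angles[i] = np.arctan2(np.linalg.norm(x[i + 1:]), x[i])
    angles[n - 2] = np.arctan2(x[n - 1], x[n - 2])  # last angle in (-pi, pi]
    return r, angles

def from_polar(r, angles):
    """Inverse transform: rebuild the vector from radius and angles."""
    n = len(angles) + 1
    x = np.empty(n)
    s = r  # running product r * sin(a_0) * ... * sin(a_{i-1})
    for i in range(n - 1):
        x[i] = s * np.cos(angles[i])
        s *= np.sin(angles[i])
    x[n - 1] = s
    return x

def quantize(a, lo, hi, bits):
    """Uniform scalar quantizer on [lo, hi] with 2**bits levels."""
    levels = 2 ** bits - 1
    return lo + np.round((a - lo) / (hi - lo) * levels) / levels * (hi - lo)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
r, a = to_polar(x)
aq = a.copy()
aq[:-1] = quantize(a[:-1], 0.0, np.pi, 8)   # 8-bit angles
aq[-1] = quantize(a[-1], -np.pi, np.pi, 8)
x_hat = from_polar(r, aq)                   # r kept in full precision

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
# Scaling the vector scales r but leaves the angles (and their quantization
# error) unchanged, so the relative error is independent of r.
```

Because the angle error only ever gets multiplied by r, a vector 1000x larger has 1000x the absolute error but the same relative error, which is the usual figure of merit.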
  • bluequbit 1 hour ago
    I did not understand what PolarQuant is.

    Is it something like pattern-based compression, where the algorithm finds repeating patterns and creates an index of those common symbols or numbers?

    • Maxious 51 minutes ago
      • spencerflem 40 minutes ago
        I like the visualization, but I don’t understand the grid quantization. If every point is on the unit circle, aren’t all the grid cells near the center unused?
        • vincnetas 12 minutes ago
          I think the grid can be on the surface of the unit sphere.
    • mrugge 54 minutes ago
      1. An efficient recursive transform of KV embeddings into polar coordinates. 2. Quantization of the resulting angles without the need for explicit normalization. This saves memory via a key insight: the angles follow a known distribution with an analytical form.
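That second step can be illustrated with a toy distribution-aware angle quantizer (a hypothetical numpy sketch, not the paper's actual codebook). In high dimensions the first hyperspherical angle of a Gaussian vector concentrates around pi/2, so a codebook matched to the empirical angle distribution beats a uniform grid over [0, pi] at the same bit width:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for KV embeddings: 2000 Gaussian vectors in 64 dimensions.
X = rng.standard_normal((2000, 64))
# First hyperspherical angle of each vector, arccos(x_0 / ||x||), in [0, pi].
angles = np.arccos(X[:, 0] / np.linalg.norm(X, axis=1))

bits = 4
levels = 2 ** bits

# Distribution-aware codebook: centers at the empirical mid-quantiles.
centers = np.quantile(angles, (np.arange(levels) + 0.5) / levels)
edges = (centers[:-1] + centers[1:]) / 2   # decision boundaries between centers
codes = np.searchsorted(edges, angles)     # one 4-bit code per angle
mse_quantile = np.mean((angles - centers[codes]) ** 2)

# Baseline: uniform grid over the full [0, pi] range.
grid = np.linspace(0.0, np.pi, levels)
ucodes = np.argmin(np.abs(angles[:, None] - grid[None, :]), axis=1)
mse_uniform = np.mean((angles - grid[ucodes]) ** 2)
# mse_quantile comes out far below mse_uniform: the angles cluster near pi/2,
# so most uniform grid cells are wasted on values that never occur.
```

Here the codebook is fit empirically; knowing the analytical form of the angle distribution lets you place the same non-uniform levels without any sample data.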
      • quotemstr 35 minutes ago
        Reminds me vaguely of Burrows-Wheeler transformations in bzip2.
  • hikaru_ai 43 minutes ago
    [dead]