So, noticing that linearized models have tiny KV caches (ahem, I mean state spaces), this approach tries to increase their size along the embedding dimension. Blowing that dimension up enormously by applying a different softmax, one that stays compatible with the expanding tensor product, yields a very symmetric mathematical structure that can be exploited to recover some efficiency.
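Roughly, I picture the expansion like this (my own sketch, not the paper's code; the degree-2 tensor-product feature map `phi` is a name I made up):

```python
import numpy as np

d = 16
rng = np.random.default_rng(0)
k, v = rng.standard_normal(d), rng.standard_normal(d)

# Degree-2 tensor-product feature map: lifts a key from R^d to R^(d^2),
# so the per-token state contribution grows along the embedding dimension.
def phi(x):
    return np.outer(x, x).ravel()

S_small = np.outer(k, v)       # plain linear-attention state term: (d, d)
S_big = np.outer(phi(k), v)    # expanded state term: (d^2, d)

print(S_small.shape, S_big.shape)  # (16, 16) (256, 16)
```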
Yes, that is mostly the idea. But calling the state of a linear transformer a KV cache is not quite right. A KV cache grows with the sequence length, whereas the linear transformer state just stores V.T @ K (with tokens as rows), an object of fixed size.
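To make the distinction concrete, here's a minimal sketch (plain NumPy, toy shapes of my choosing; the positive feature map `phi` is a common stand-in, not necessarily what the paper uses):

```python
import numpy as np

d_k, d_v = 64, 64
rng = np.random.default_rng(0)

def phi(x):
    # ELU(x) + 1: keeps features positive so the normalizer is well-behaved
    return np.where(x > 0, x + 1.0, np.exp(x))

# Softmax attention: the KV cache keeps every past key and value,
# so memory grows linearly with sequence length.
cache_k, cache_v = [], []

# Linear attention: the state is a running sum of outer products
# phi(k_t) v_t^T, a fixed (d_k, d_v) matrix at any sequence length.
S = np.zeros((d_k, d_v))
z = np.zeros(d_k)  # running normalizer, sum of phi(k_t)

for t in range(1000):
    k = rng.standard_normal(d_k)
    v = rng.standard_normal(d_v)
    cache_k.append(k)          # grows: t entries after t steps
    cache_v.append(v)
    S += np.outer(phi(k), v)   # stays (d_k, d_v)
    z += phi(k)

q = rng.standard_normal(d_k)
out = (phi(q) @ S) / (phi(q) @ z)  # O(d_k * d_v) readout per query

print(len(cache_k), S.shape)  # 1000 entries vs a constant (64, 64)
```

After 1000 tokens the cache holds 1000 key/value pairs while S is still a single (64, 64) matrix; that fixed footprint is exactly why it isn't a cache.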
Formatted like a formal academic publication, but with no way (that I can tell) to grab a PDF. It comes across as a blog post masquerading as academic literature to me. Am I wrong? Did I miss something, and there's an offline version available?
Pages served up over HTTP are ephemeral. An absolutely essential part of formal academic literature is the archival aspect: self-contained, immutable, and referenceable in an unambiguous manner.
There's also an immediate practical aspect for me: I will likely never get around to reading this, because I will forget it exists; my "reading list" consists of a pile of PDF files.
Is that right?