The keys-as-hash-codes trick only works for fixed-width integers; otherwise you have to store the hash codes explicitly. My implementation assumed unique hash codes, but I think the algorithm could easily be adapted to allow duplicates. Tolerating hash-code collisions would avoid having to increase the hash code size: in their case, a collision would just mean probing multiple offsets in the key/value array.
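A minimal sketch of the keys-as-hash-codes idea, assuming fixed-width integer keys stored in an open-addressed key/value array (the class name, power-of-two capacity, and linear probing here are my assumptions, not anything specific from the original):

```python
# Sketch: integer keys act as their own hash codes, so no hash is stored.
# Collisions are resolved by probing successive offsets (linear probing).
EMPTY = object()  # sentinel marking a free slot

class IntKeyTable:
    def __init__(self, capacity=16):
        self._keys = [EMPTY] * capacity
        self._values = [None] * capacity
        self._mask = capacity - 1  # assumes capacity is a power of two

    def insert(self, key, value):
        i = key & self._mask  # the key itself is the hash code
        while self._keys[i] is not EMPTY and self._keys[i] != key:
            i = (i + 1) & self._mask  # probe the next offset
        self._keys[i] = key
        self._values[i] = value

    def lookup(self, key):
        i = key & self._mask
        while self._keys[i] is not EMPTY:
            if self._keys[i] == key:
                return self._values[i]
            i = (i + 1) & self._mask
        raise KeyError(key)
```

Resizing and deletion are omitted; the point is only that nothing beyond the key array is needed to locate an entry.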
Since the hashes are stored, you could order the entries by hash. That leaves keys sharing a hash unordered among themselves, so if you hit an entry with an equal hash but an unequal key you have to keep probing, but that extra work only matters on a full-hash collision.
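A hedged sketch of that ordering scheme: entries kept sorted by stored hash, with a forward scan past equal-hash, unequal-key entries on a full-hash collision (the class and method names are illustrative assumptions):

```python
import bisect

class HashOrderedTable:
    """Entries kept sorted by hash code; duplicate hashes are tolerated."""

    def __init__(self):
        self._hashes = []  # sorted hash codes, one per entry
        self._keys = []    # parallel to _hashes
        self._values = []  # parallel to _hashes

    def insert(self, key, value):
        h = hash(key)
        i = bisect.bisect_left(self._hashes, h)
        # Scan the run of equal hashes; replace in place if the key exists.
        while i < len(self._hashes) and self._hashes[i] == h:
            if self._keys[i] == key:
                self._values[i] = value
                return
            i += 1
        self._hashes.insert(i, h)
        self._keys.insert(i, key)
        self._values.insert(i, value)

    def lookup(self, key):
        h = hash(key)
        i = bisect.bisect_left(self._hashes, h)
        # Keep probing while hashes match but keys don't (full-hash collision).
        while i < len(self._hashes) and self._hashes[i] == h:
            if self._keys[i] == key:
                return self._values[i]
            i += 1
        raise KeyError(key)
```

The binary search lands on the first entry with the target hash; the linear scan only runs for the length of that equal-hash run, which is one entry except on a full-hash collision.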