Ensure that the dataset containing your postgres data is configured with a recordsize equal to the postgres page size, or close enough (lots of places use an 8kB ZFS recordsize to match postgres's default 8kB pages).
This reduces write amplification from the read-modify-write cycles you get when an 8kB page write forces ZFS to read and rewrite a much larger record.
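A minimal sketch of what that looks like, assuming a hypothetical dataset named tank/pgdata holding the data directory; note that recordsize only applies to files written after the change, so existing tables have to be rewritten (dump/restore or similar) to pick it up:

    # PostgreSQL's compile-time block size is 8kB by default; confirm it:
    psql -c "SHOW block_size;"

    # Create the dataset with a matching recordsize (lz4 compression is cheap and commonly left on):
    zfs create -o recordsize=8K -o compression=lz4 tank/pgdata

    # Or retune an existing dataset; only newly written files get the new recordsize:
    zfs set recordsize=8K tank/pgdata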
So, with the major caveat that I am not an expert and your mileage will vary: after some playing around with it, I intentionally reverted our postgres datasets back to the default ZFS recordsize (EDIT: 128K), because we weren't super performance-sensitive and the smaller records killed our compression ratio. Obviously compression ratio vs. speed is going to depend very heavily on exactly what you're doing, but it seems to have been a good trade for us.
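For what it's worth, reverting and then checking what compression is actually buying you is a one-liner each (dataset name hypothetical):

    # Drop the local override and fall back to the inherited default recordsize (128K):
    zfs inherit recordsize tank/pgdata

    # Show the current recordsize and the achieved compression ratio:
    zfs get recordsize,compression,compressratio tank/pgdata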
An interesting hack is to create two tablespaces, one on a dataset with an 8kB recordsize and one on a dataset with the recordsize set to the maximum, and then assign tables to them according to one's performance needs. Rarely-written (for example historical) data can be put into partitions living on the large-record tablespace (for example 1M recordsize) and have its indexes rebuilt with 100% fillfactor.
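Concretely, something like the sketch below; the dataset, tablespace, table, and index names are placeholders, it assumes the usual postgres system user, and a 1M recordsize needs the large_blocks pool feature (enabled by default on current OpenZFS):

    # Two datasets with different recordsizes:
    zfs create -o recordsize=8K tank/pg_hot
    zfs create -o recordsize=1M tank/pg_cold
    chown postgres:postgres /tank/pg_hot /tank/pg_cold

    # Map them to tablespaces and move a rarely-written partition onto the cold one.
    # SET TABLESPACE physically rewrites the relation, so it picks up the 1M recordsize.
    psql <<'SQL'
    CREATE TABLESPACE hot_ts  LOCATION '/tank/pg_hot';
    CREATE TABLESPACE cold_ts LOCATION '/tank/pg_cold';
    ALTER TABLE measurements_2019 SET TABLESPACE cold_ts;
    -- Rebuild the partition's index densely packed, since it won't see further updates:
    ALTER INDEX measurements_2019_pkey SET (fillfactor = 100);
    REINDEX INDEX measurements_2019_pkey;
    SQL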
Of course all of that should be informed by getting actual data about performance first ;)
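(The stock tooling covers the basics here: run the same pgbench workload before and after a recordsize change and watch the pool while it runs. The pgbench parameters and database name below are arbitrary placeholders.)

    createdb rstest
    pgbench -i -s 50 rstest          # initialize at scale factor 50
    pgbench -c 8 -j 4 -T 120 rstest  # 8 clients, 4 threads, 2 minutes

    # Meanwhile, watch actual I/O on the pool and the compression ratio:
    zpool iostat -v tank 5
    zfs get compressratio tank/pgdata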