> an 8 CPU installation with 64 GB memory will probably be a hundred times faster than Postgres.
"Probably" not.
The way this usually goes down is that a few synthetic benchmarks show a large performance benefit over existing established databases (x2, not x100), while any non-synthetic benchmark shows very poor performance (1/10th, 1/100th, sometimes even worse), and often very unstable performance as well.
The product is then also usually beta quality, as it is hard to compete with the 36 years Postgres has been in development since its inception in 1982 (and that's not counting the 9 years of Ingres development, from which Postgres—"Post-Ingres"—spawned). Important features are usually quite lacking as well.
If someone claims an x10 or x100 performance improvement over established databases, they had better have published a few papers about all the computer science research they must necessarily have done to get there.
Full disclosure - I currently work for Exasol, but I thought I'd clarify that Exasol has been around for over 15 years and is far from 'beta' (it is currently on version 6, with hundreds of production installations worldwide). I've also been in the industry for > 40 years and have worked with many database products (including Ingres and Postgres). All I can say is: download the free community edition from the Exasol website, or the Docker image as described above, and try it for yourself. You will be up and running very quickly, and I think you will be pleasantly surprised by both the functionality and the performance.
My comment was more general in the sense that such a grand performance statement needs some serious backing, and new products claiming to be several orders of magnitude faster than established products are usually unable to deliver anything at all.
Would you mind sharing some of the differences from, say, Postgres, and what to expect when moving from Postgres to Exasol? Porting my applications to Exasol just to benchmark them would be time-consuming (synthetic benchmarks are very uninteresting), and without any information about what to expect, it simply wouldn't be sensible.
I tried to look at the website, but I am not interested in accepting a privacy policy just to get a white paper, which frankly leaves me with no usable information at all. The rest of the website is basically empty, apart from graphs without data and marketing along the lines of "You want to do X? We can do that too! <no additional info>". The only concrete thing I could extract was "in-memory database".
To me, "in-memory database" would appear to be the catch that makes it an entirely different product than Postgres, catering to an entirely different payload with different pros and cons, rather than an faster all-round product. None of my tables fit in RAM anyway.