
> but it's not that the PostgreSQL design is a loser in all regards

the article literally says that pg's mvcc design is from the 90s and no one does it like that any more. That is technology outdated by over 30 years. I'd say that doesn't make it a loser in all regards, but it does in the most important aspects.



When it comes to your data store, some people might consider using technology that’s been reliably used in production by many organizations for 30 years a feature not a bug.

I’d prefer not to be the first person running up against a limit or discovering a bug in my DB software.


Well every product has issues. The question is, do you feel like dealing with those issues or not?

Flat files have also been reliably used in production for decades. That doesn't mean they're ideal...although amusingly enough s3 and its equivalent of flat files is what we've migrated to as a data store.


It would be quite nice to have some of the S3 semantics on local files. Like no one else can see the file until after you've finished writing it and committed it. And being able to put almost any character in the file name (key). That is quite nice in S3.
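The first of those semantics can be approximated on a POSIX filesystem with the classic write-to-temp-then-rename pattern; a minimal sketch in Python (the `atomic_write` helper name is made up here, not an S3 or stdlib API):

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Write data so readers never see a partially written file.

    Roughly mimics S3 put semantics locally: the file only appears
    under `path` once it has been fully written and "committed".
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    # Write to a temp file in the same directory (same filesystem),
    # so the final rename cannot degrade into a copy.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make the data durable before the "commit"
        # os.replace is atomic on POSIX: readers see either the old
        # file or the new one, never a partial write.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise

atomic_write("example.txt", b"hello")
```

This only covers the visibility part; S3's permissive key names have no local equivalent, since the filesystem still forbids characters like `/` and NUL.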


Tell that to the developers of Ariane 5, who reused old, proven software from Ariane 4.

Many people consider it the most expensive bug in history: on Ariane 5's first flight, the rocket entered a velocity range that was hard-prohibited in the Ariane 4 software, which raised a software exception, and about a billion dollars crashed.

Honestly, they could have re-verified all the ranges, but they decided that would cost about as much as writing new software, so to save money the decision was made to just reuse the old software without additional checks.


> the article literally says that pg's mvcc design is from the 90s and...

Actually, it is 1980s. The article:

> Its design is a relic of the 1980s and before the proliferation of log-structured system patterns from the 1990s.


Still, I am very happy to use every day the technology designed in the early 70s by Ken Thompson and colleagues. So far, in that specific field, many have tried to invent something more "modern" and "better" and failed, with the exception of a certain Finnish clone of that tech, also started in the 80s by the way.

So, newer does not always mean better, just saying.


Speaking of which, if you try an actual System V in an emulator, or look at C code in K&R style, certain progress, as in "much more actually usable", can be noticed.

While persisting key architectural ideas certainly has benefits, so does evolving their implementations.


Yes I agree that implementations must evolve. Still, there are cases where old architectures are just brilliant.

Having said that, I should add that I am not expert enough to say whether MVCC is good enough to be considered on par with the other write-concurrency mechanisms in SQL databases. My example was meant only as a caution when judging, especially since the original counterexample mentioned some notoriously bad architectures (hello, MySQL...)


Err, Linux is a child of the 90s...

Linus began work on it in April 1991: https://groups.google.com/g/comp.os.minix/c/dlNtH7RRrGA/m/_R...


I was under the impression that he started around 1989, and also that that's when he had the debate with prof. Tanenbaum, but now I see it was later. My mistake.


Btw, it's hard not to love the line "it's just a hobby, won't be big" from Linus's original announcement... Be careful what you promise ;)


> exception of a certain Finnish clone of that tech

Are you referring to C++? That was actually created by a Danish guy, who was also inspired by the object oriented Simula language created in the 60s


Pretty sure the OP was referring to UNIX and its “Finnish clone” Linux.


At least CouchDB is also append-only with a vacuum-style compaction step. So it's maybe not completely outdated.


High performance has never been a reason to use couchdb.



