
Which is precisely why it matters that I can write functionally pure code that is trivial to parallelize. At $FORMER_WORK, I even wrote (the same!) code as reasonably idiomatic-looking Clojure (essentially, reducer fns) that transparently runs locally on a single thread, locally across multiple threads, and on Hadoop.
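A minimal sketch of that pattern (the function names here are mine, not the parent's actual code): one reducing fn that `reduce` runs sequentially and `clojure.core.reducers/fold` runs across threads via fork/join; on Hadoop, the same reduce/combine split maps onto the mapper/combiner/reducer stages.

```clojure
(require '[clojure.core.reducers :as r])

(defn sum-squares
  "Reducing fn: accumulates the sum of squares of the inputs."
  ([] 0)                        ; identity arity, used by r/fold
  ([acc x] (+ acc (* x x))))

(def data (vec (range 1000)))   ; vectors fold in parallel chunks

;; Single-threaded:
(reduce sum-squares 0 data)

;; Multi-threaded: + combines partial results from parallel chunks,
;; sum-squares reduces within each chunk.
(r/fold + sum-squares data)
```

Both calls return the same value; the only change is which execution strategy consumes the reducing fn, which is the point of writing the logic that way.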

That C program might be fast, but it's not a great tool for processing petabytes of data on a stampeding herd of angry elephants. So, I think performance arguments need a little nuance about what you're doing.

(My experiments include soft real time with deadline scheduling. Clojure's fine.)



I can't remember the source, but I recall reading about a case where someone replaced a large Hadoop cluster with a single node running highly tuned C. Distributed computing comes with a lot of overhead, and you might be surprised at just how much you can get out of a well-tuned C application.



