
I'd do Merge Sort before Quicksort. There's less to say about it, and these days it seems to be edging out Quicksort in the behind-the-scenes-implementation-of-standard-library department (Timsort, a merge sort variant, underlies Python's sort and Java's object sort, for instance).
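And the whole algorithm really does fit in a few lines. A minimal top-down sketch (names `msort`/`merge` are just for illustration; real libraries use a bottom-up variant):

```haskell
-- Top-down merge sort: split in half, sort each half, merge.
msort :: Ord a => [a] -> [a]
msort []  = []
msort [x] = [x]
msort xs  = merge (msort ls) (msort rs)
  where
    (ls, rs) = splitAt (length xs `div` 2) xs
    -- merge two sorted lists into one sorted list; <= keeps it stable
    merge as [] = as
    merge [] bs = bs
    merge (a:as) (b:bs)
      | a <= b    = a : merge as (b:bs)
      | otherwise = b : merge (a:as) bs
```

The merge step is also the one worth dwelling on: it's the only place comparisons happen, and stability falls out of using `<=` there.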

And, yes, including some kind of self-balancing tree would seem to be a good idea. But how the heck does one discuss such things concisely and simply? I couldn't say.
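One candidate answer: Okasaki showed that red-black insertion can be stated concisely, because all four rebalancing cases rebuild to the same shape. A sketch from memory along those lines (insertion only; deletion is the genuinely hairy part):

```haskell
data Color = R | B
data Tree a = E | T Color (Tree a) a (Tree a)

-- Insert, then repaint the root black to restore the root invariant.
insert :: Ord a => a -> Tree a -> Tree a
insert x t = blacken (ins t)
  where
    ins E = T R E x E
    ins s@(T c l y r)
      | x < y     = balance c (ins l) y r
      | x > y     = balance c l y (ins r)
      | otherwise = s
    blacken (T _ l y r) = T B l y r
    blacken E           = E  -- unreachable: ins never returns E

-- The four red-red violation cases all rebuild to the same balanced shape.
balance :: Color -> Tree a -> a -> Tree a -> Tree a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance c l x r = T c l x r
```

Whether that counts as "simple to discuss" is another question, but at least the code side is short.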

In any case, nice site.



Another amazing algorithm (or data structure, really) is the pairing heap. I just wrote this down, so it's possible that it's not correct, but here it goes:

    data Heap k v = Empty | Heap k v [Heap k v]
    merge Empty h = h
    merge (Heap k1 v hs) h@(Heap k2 _ _) | k1 <= k2 = Heap k1 v (h:hs)
    merge a b = merge b a
    -- two-pass delete-min: pair up the children, then merge the pairs
    pairs (a:b:hs) = merge a b : pairs hs
    pairs hs = hs
    extractmin (Heap k v hs) = (v, foldr merge Empty (pairs hs))
    insert h k v = merge h (Heap k v [])
These few lines of code provide a heap data structure with amortized logarithmic time for extractmin and constant time insertion and merging (!). The two-pass pairing in extractmin matters: merging the children one at a time in a single pass breaks the amortized bound.


Nice. Never heard of that.

Honestly, though, I don't think such a thing is what this website is aiming at. It seems to be more for the established, widely used stuff.

Still nice, though. (It's cute how, for so many of the "heap" data structures, the merge operation is really all you need to give much thought to. See also Binomial Heap, Leftist Heap, etc.)

> These few lines of code provide a heap data structure with amortized logarithmic time for extractmin and constant time insertion and merging (!).

Hmmm ... maybe. We need to be exceptionally careful in our reasoning about time complexity when the code uses lazy evaluation (perhaps more careful than people know how to be, yet). Amortized bounds proved assuming strict, single-threaded use don't automatically carry over to a lazily evaluated, persistent structure.



