
The practice of naming things after their inventors, while well-intentioned, introduces so much friction that it ought to just be abolished. Imagine how much worse things would be if a directed acyclic graph were instead an "Euler path", or if a hash table were instead a "Luhn collection". The former terms may be jargon, but at least they're consistent, identifiable, and reasonably self-describing. I'm already frustrated that we're stuck with "boolean" as the most fundamental of all data types.


In reality, names like these are generally not intended as some kind of trophy in the first place.

It's more organic. These names grow out of colleagues speaking amongst themselves, referring to a fellow colleague's particular idea or elaboration. They all share a common base understanding of their field, many know the colleague directly, and all know how to look something up given the colleague's name and the gist of the idea. And so that's how they refer to it.

Once in a while, these so-named insights prove really important or lasting -- after the fact -- and the name continues to stick because it's the one everybody was using. Most of the time, though, the insights just fade back into the baseline body of knowledge and either don't break out at all or evolve through some collaborative work that earns a more formal name.


I had a math teacher who warned us:

> If you don't give your creations good names, they might name them after you.

From his tone it was clear that this was something to be avoided. I don't know whether it's too late to retcon existing names, but let's try to do better going forward.


On the other hand, trying too hard to shoehorn semantic descriptions into names ends up with pathological cases (yes, chemistry, I'm looking at you!).

Jokes aside, words are symbols: even if they carry some semantics through etymology, in general they are quite arbitrary. If I were to choose, I'd rather go with outlandish names that work as mnemonics. Names from people can serve that purpose; I still remember what a Kohonen map is, back from uni, because of its childish resemblance to "cohone" (Andalusian for cojones) and a silly joke from a close friend.


There's no winning with chemistry. It's either 2-ethyl-cis-alpha-nonsenium, or it's "the sonic hedgehog domain".

It's like you get a choice between math hell or cartoon hell.


But if we choose words that are famous names, they are far less likely to be systematically used as building blocks. And woe betide us if that individual invents too many things, because then the meaning of their name will be too ambiguous.


> I'm already frustrated that we're stuck with "boolean" as the most fundamental of all data types.

We do have another name for it: "bit." You could probably roll out a new programming language today that uses something like `let shouldUpdate: bit = true;` without blowing too much of your novelty budget. Or `u1`, if you wanted to allow arbitrary integer sizes.


I think 'bit' risks confusing storage with the type (a Boolean is often stored using at least a byte, not a single bit).
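
To make the storage-vs-type gap concrete, here's a minimal sketch in Python (using `ctypes` only to peek at the C-level representation; the flag layout is just illustrative):

    import ctypes

    # Even a C-level bool occupies a whole byte on typical platforms,
    # not a single bit:
    print(ctypes.sizeof(ctypes.c_bool))  # 1 (byte)

    # Packing eight booleans into one byte takes explicit bit twiddling:
    flags = 0
    flags |= 1 << 3                  # set flag number 3
    is_set = bool(flags & (1 << 3))  # read it back -> True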


My favorite is when we do have two words for the same thing. People out there really think there is a major distinction between "stochastic" and "random".

Or, worse, when people let the names of things keep them from learning them. Imaginary numbers being high on that list.


Imaginary numbers sound more approachable than complex numbers.


A personal matter I suppose, of how comfortable you are with imagining things :^)

The real/imaginary dichotomy is the confusing part; having a rotational component doesn't make them any less real. Oh, that reminds me: I'll just drop this link to my favorite video lecture series on the matter, full of visualizations, 13 parts, "Imaginary numbers are real" https://youtube.com/playlist?list=PLiaHhY2iBX9g6KIvZ_703G3KJ...
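
The quarter-turn idea fits in a couple of lines of plain Python, which has complex numbers built in (just a sketch):

    z = 3 + 4j
    print(z * 1j)   # (-4+3j): the point (3, 4) rotated 90 degrees counterclockwise
    print(1j * 1j)  # (-1+0j): two quarter-turns land on -1

Multiplying by i twice gets you to -1, which is the whole content of i^2 = -1 read geometrically.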

If I had it my way the distinction would be straight and twisted.



