This article completely loses me at the end where it suggests that using square brackets for assignment and collection access is an unnecessary/dangerous convenience.
It might not be the author's preference, but the argument made about it getting abandoned as the language ages anyway is demonstrably false with the most popular languages in the world (Python, JavaScript, Java, C, C++), and in languages with operator overloading it could never be true no matter how far the language evolved.
> How do you disambiguate that from a function call? It's now just abusing ()
You don't. If you apply an integer to an array, it will return an entry. To me, it makes perfect sense to use a(i) instead of a[i]. In fact, in case you need some more complicated data structure to organise your data one day, you could switch to a function call for lookup without rewriting the code that uses a(i).
The question is not 'how do you disambiguate?', but 'why do you want to distinguish?' Because an array is just a special case of a function that maps an integer to something else. It is logically not different from a function with a large contiguous switch statement.
> To me, it makes perfect sense to use a(i) instead of a[i].
I disagree. Square brackets link to memory; parens just return data. Of course you could implement the paren function to return a reference, but that makes the code a lot harder to read, since that is not the normal case.
If it is a list, access like that is a function. And even when it's an array, given the dependence we have these days on function inlining, it could still be a function.
Another way to make that argument is that arrays are an exponential type and therefore are logically equivalent to functions.
That is, the cardinality of the type Array<T> is exactly the same as the cardinality of the type Function<Integer, T>. Any pure function that takes an integer and returns T can be replaced with an array, and vice versa.
Same deal with Map<K, V>, that's logically equivalent to Function<K, V>.
Two problems:
1. People don't think in category theory; they have containers to hold things and functions to calculate things.
2. People are right: the function call really is doing something different from an array index. If a thing is different, it should look different.
You are wrong: collections are mutable and hence different from functions. Index operators are expected to return l-values while function calls are not. That difference is important enough to be worth the compiler overhead, in order to make code easier for readers to follow.
You can't analyze this stuff with the specifics of a language in mind, or even assume there's a heap, let alone talk about what compiler optimizations apply. Because if we try to do that, what's our common basis of understanding? You could be talking about gcc and I'd be talking about clang and we'd go in circles.
If we limit ourselves to the math, we can agree that a type is a domain that contains certain values and excludes others. But we have to throw out mutability because values, by mathematical definition, are immutable.
And once we've got a clear notion of what values are, we can then talk about how many values there are in a type. And that's where I'm coming from in claiming functions are equivalent to containers.
I mean, really? Isn't the whole point of the STL to paper over those differences and supply functions with similar semantics for those kind of accesses regardless of the data structure?
About the only reason I can see for array access syntax existing is that in the days before optimizing compilers, people would have lost their minds if array access called into a function each time (with good reason). It took manual programmer intervention to hint at what compilers couldn't cleanly infer themselves. We don't need that anymore.
One is looking up a value, while the other is (in most of these languages) executing a subroutine.
While they are mathematically equivalent, in these languages they are doing different things. Function application and array access are two different concepts the symbols are trying to convey to a programmer.
This is why, e.g., in Haskell where a string is literally a list of characters, they nevertheless have a different syntax for the two.
I think .get would be more plausible. It's just strange that we can't use [] for indexing, which is an operation that can apply to any container, but << and >> must be reserved for bit-shifting.
If I'm adding shifting to a new language, I'm inclined to use functions just so it's clear what's going on, and to put operations like rotation on an even footing. Also, I don't think people have an intuition for the precedence of shift operators, which makes them less useful.
The article proposes using basic function call syntax. That makes no sense to me. And with .(), you now have 3 characters to type instead of 2 with []. Obviously all of this is pedantic, but then again so is the article.