A lot of people seem to be missing an overarching point, which is the benefit of a language having sum types, so that edge cases can be represented clearly, in a way where the consumer of the API can't fail to know they exist and can't fail to handle them. Anyone thinking of making a new language today should really get some familiarity with Option and Result types. They make so many things not only safer but also nicer to use.
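For anyone who hasn't run into them, here's a rough sketch of what that looks like in Rust (the `lookup_user` and `parse_port` functions are made up purely for illustration):

// Option and Result are ordinary sum types in the standard library:
//   enum Option<T>    { Some(T), None }
//   enum Result<T, E> { Ok(T), Err(E) }
// A match has to cover every variant, so the edge case can't be silently ignored.

fn lookup_user(id: u32) -> Option<String> {
    if id == 42 { Some("alice".to_string()) } else { None }
}

fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>() // "the string might not be a valid port" is right there in the type
}

fn main() {
    match lookup_user(7) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"), // forgetting this arm is a compile error
    }
    match parse_port("80a") {
        Ok(port) => println!("port {port}"),
        Err(e) => println!("bad port: {e}"),
    }
}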
It surprises me that most people here aren't up in arms in agreement with this point. Code that is silently incorrect is an absolute disaster on an enterprise level. I spend a lot of time writing seemingly redundant double and triple error checking into my code, only to have the designers of the LANGUAGE say, "yeah, most filepaths are utf-8 so it seems good enough to me".
There's a huge cult of Golang being the "one true way" right now and any logic that could potentially contradict that is going to cause folks to throw the blinders up.
Your point about that being a disaster in enterprise is exactly correct and I have huge misgivings about these people above writing the large majority of our software architecture. This is after we switched to Go from Java where these same people did some of the same things.
I’ve written go full-time for the last 3.5 years and it still amazes me that by default the linter doesn’t at least warn about unused/uncaptured return error values.
Golang is such a joke of a language. The compiler won't even compile if there is an unused variable but won't warn you if there is an unchecked error! This language is meant to produce buggy incorrect code that can only be mitigated with writing excessive repetitive tests that have nothing to do with business logic itself.
Golang is probably the biggest embarrassment of a modern programming language ever conceived. Again, if you don't believe me, just start writing your first Kubernetes controller.
This doesn't match my experience. Go apps tend to be extremely clean and reliable. It's of course possible to write crap code in Go, but you can write crap code in any language.
The unused variable thing is mildly annoying but fits with the cleanliness philosophy. Not checking errors is very easily detected by a linter such as the one built into the JetBrains GoLand IDE. It highlights failure to check errors and requires that you explicitly ignore the error return with something like "foo, _ = bar.baz()".
Go is spectacularly productive when used properly. It's a very nice language.
Go the language lets Go the standard library play with things that nothing else can. If you can't implement it yourself, is it really a library and not just part of the language runtime?
That's a valid point. Enterprise software can be continually buggy and broken and still be commercially viable.
So things being silently wrong is not a disaster so much as a dumpster fire that corporations are happy to shovel cash into while a whole lot of people huddle around it for warmth.
Checked exceptions were universally rejected not because they are intrinsically bad but because the language support was awful (e.g. you could not wrap or abstract over a nested object possibly rethrowing), they sat right next to unchecked exceptions with limited clarity, guidance, and coherence as to which was which, and they are so god damn ungodly verbose, both to (re)throw and to convert.
Results are so much more convenient it's not even funny, but even without that you could probably build a language with checked exceptions where they're not infuriatingly bad (Swift has something along those lines, though IIRC it doesn't statically check all the error types potentially bubbling up so you know that you have to catch something, not necessarily what).
A very large part of that though is Java not being 'generic' over checked exception types. So if you e.g. build something that supports end-user callback code, you need to either throw Exception (accepting all code but losing all signal as to what's possible) or nothing (forcing RuntimeException boxing).
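For contrast, here's a sketch of what being generic over the error type looks like when errors are plain values; `retry` is a made-up helper, not anything from a real library:

// Generic over E: the callback's concrete error type flows through unchanged,
// instead of being collapsed to "throws Exception" or boxed in a RuntimeException.
fn retry<T, E>(mut attempts: u32, mut f: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    loop {
        match f() {
            Ok(v) => return Ok(v),
            Err(e) if attempts <= 1 => return Err(e),
            Err(_) => attempts -= 1,
        }
    }
}

fn main() {
    // The caller keeps its own error type (here std::num::ParseIntError) in the result.
    let n = retry(3, || "123".parse::<i32>());
    println!("{n:?}"); // Ok(123)
}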
That's Java. And I agree it is a wildly painful and incomplete implementation. I wish we'd stop conflating it with checked exceptions as a language feature.
Basically, exceptions have a "happy path" which is very simple but deviating from that path is often quite inconvenient and painful. A well-built result type makes it easy to opt into the happy path of exceptions, and also quite easy to use different schemes and deviate from that path, all the while being much safer than exceptions because you're not relying on runtime type information and assumptions.
Furthermore, results make it much less likely to "overscope" error handlers (where a try block catches unrelated exceptions from 3 different calls) as the overhead is relatively low and there's necessarily a 1:1 correspondence between calls and results; and it's also less likely to "miscatch" exceptions (e.g. have too broad or too narrow catch clauses) because you should know exactly what the call can fail with at runtime. It's still possible to make mistakes, don't get me wrong, but I think it's easier to get things right.
"Path unification" is a big one in my experience: by design exceptions completely split the path of "success" and "failure" (the biggest split being when you do nothing at all where they immediately return from the enclosing function).
This is by far the most common thing you want, so in a way it makes sense as a default, but it's problematic when you don't want the default, because then things get way worse. E.g. if you have two functions which return a value and can fail and you need to call them both, now you need some sort of sentinel garbage for the result you don't get, and you need a bunch of shenanigans to get all the crap you need out:
int a;
SomeException e_a = null;
try {
    a = something();
} catch (SomeException e) {
    a = -1;
    e_a = e;
}

int b;
SomeException e_b = null;
try {
    b = something();
} catch (SomeException e) {
    b = -1;
    e_b = e;
}

if (e_a != null || e_b != null) { // don't mess that up because both a and b are "valid" here
    …
}
or you duplicate the path in both the rest of the body and the catch clause (possibly creating a function to hold that), etc…
By comparison, results are a reification, so splitting the path is an explicit operation, but at the same time they still don't let you access the success in case of failure, or the failure in case of success:
let result_a = something();
let result_b = something();
if let Err(_) = result_a.and(result_b) { // or pattern matching or something else
    …
}
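And when you actually want to handle each outcome separately, a tuple match keeps everything accessible without sentinels or nullable locals (a sketch continuing the fragment above, assuming `something()` returns a Result whose value and error types implement Display):

match (something(), something()) {
    (Ok(a), Ok(b)) => println!("both ok: {a}, {b}"),
    (Ok(a), Err(e_b)) => println!("second failed ({e_b}), first gave {a}"),
    (Err(e_a), Ok(b)) => println!("first failed ({e_a}), second gave {b}"),
    (Err(e_a), Err(e_b)) => println!("both failed: {e_a} and {e_b}"),
}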
Having a reified object also allows building abstractions on top of it much more easily. E.g. if you call a library and you want to convert its exceptions into yours, you need to remember to write:
try {
    externalCall();
} catch (LibraryException e) {
    throw MyException.from(e); // because that might want to dispatch between various sub-types
}
and if you don't remember to put this everywhere the inner exception will leak out (that's assuming you don't have checked exceptions because Java's are terrible and nobody else has them).
Meanwhile with results the Result from `externalCall` is not compatible with yours so this:
return externalCall();
will fail to compile with a type mismatch, and then you can add convenience utilities to make it easy to convert between the errors of the external library and your own, and further make it easy to opt into an exception-style pattern, e.g. Rust's `?`
(there's actually more involved in it these days, and a second intermediate trait, but you get the point: in case of success it just returns the success value, and in case of failure it converts the failure value into whatever the enclosing function expects, then returns directly from said enclosing function).
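Concretely, a minimal sketch with made-up error types (`LibraryError` and `MyError` are hypothetical, standing in for the library's errors and your own):

// One conversion, written once, instead of a try/catch at every call site.
#[derive(Debug)]
struct LibraryError(String);
#[derive(Debug)]
struct MyError(String);

impl From<LibraryError> for MyError {
    fn from(e: LibraryError) -> Self {
        MyError(format!("library failed: {}", e.0))
    }
}

fn external_call() -> Result<u32, LibraryError> {
    Err(LibraryError("boom".to_string()))
}

fn my_function() -> Result<u32, MyError> {
    // `return external_call();` would be a type mismatch here.
    // `?` either yields the Ok value, or converts the error via From
    // and returns early from my_function.
    let value = external_call()?;
    Ok(value + 1)
}

fn main() {
    println!("{:?}", my_function()); // Err(MyError("library failed: boom"))
}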
People are quick to advocate anything from functional programming / academia here. That doesn't mean it would necessarily improve the life of an ordinary programmer.
Having done a big tour of functional programming ideas in the last couple of years, I've found almost none of them to be generally helpful for the ordinary programmer, except sum types, which enable the option and result types. Though even just special-casing those two, without exposing general sum types in the language, would give most of the benefit.
that's true, but realize that golang is largely shepherded forward by a company with a C++ heritage (and to a lesser extent C/java). you aren't getting legacy C++ programmers on board with "optional" types: they'll riot and pull the purity card (this doesn't look like "my C++"). google is probably grateful these people are no longer returning -1, -2, etc. for errors from their functions.
consider things like the golang date formatting string. to anyone not well versed in a C++/C ecosystem, the golang date formatting string is absolutely nuts. it is complicated as hell and doesn't really make any sense. but consider the reaction of a C++/C developer: they're probably quite comfortable with it, because it's basically stolen from C. functions like itoa and atoi harken back to a """simpler""" time, despite being virtually meaningless to anyone who didn't start their career with that stuff.
Java has Optional as well, but as you well know, you can't just bolt ADTs onto the side of a language and wash your hands of it.
The whole system has to be designed around it to get the benefit of it. Java programmers will be checking for null until the last line of Java is written.
Sure, but the argument was that folks would reject it, when in fact they have explicitly added it. Its effectiveness is another story, not only along the axis you were talking about but also others; that's a separate conversation though.
The language maintainers adding new features to appeal to people newer to the language is not mutually exclusive with veterans of the language not adopting the new features.
The authors just happened to work at Google while having a manager who supported their work (check the Go Time podcast). Most of Google's relevant products keep being written in C++, Java, and Python, and Google is one of the biggest contributors to LLVM/clang and ISO C++.