It's 'interesting' that the author criticizes Go for having a GC, making it 'inappropriate' as a systems programming language, but then also criticizes Go for not having exceptions (which are IMHO at least as 'inappropriate' for that type of language).
If anything, the latest round of systems programming languages which all use error unions instead of exceptions have demonstrated quite clearly that exceptions are actually quite pointless.
Systems programming doesn't mean kernel and embedded only. A huge portion of C++ programs run in user space, because one may still want performance benefits, direct access to memory via well-defined struct layouts, and direct calls to the system libraries to get maximum functionality from the OS. User-space programs can handle exceptions.
On the contrary, that window is huge, and you're assuming exceptions require expensive allocations when they don't.
Exceptions are for exceptional situations, so the performance impact of them when thrown is rarely of concern, both in C++ and in pretty much every other language with exceptions. There's a reason something like Java still uses error return values for an awful lot of things, after all. But importantly, exceptions mean that you're not doing return-value checking for errors that rarely happen, which can get expensive on the whole. As in, exceptions give your code a path to be both cleaner and faster on average; they are really very useful when implemented well.
The problem with C++ exceptions has nothing to do with the memory allocation cost when thrown (as mentioned, there are other strategies you can deploy than the default of just `new`). Rather, it comes from the significant impact on binary size and the reliance on RTTI.
The bloat from the exception tables isn't really ideal, but it is the tradeoff of allowing for exceptions to be zero cost in the non-exceptional cases.
I'm not sure RTTI for exceptions is such a big deal.
I'm guessing you are referring instead to the fact that exception catching needs to pull in most of the dynamic-casting machinery in many cases, since exception hierarchies, at least in theory, can have multiple inheritance, virtual inheritance, and other complications. That means not only is determining at runtime whether a thrown exception is compatible with a given catch clause non-trivial, but even once determined, the cast may be non-trivial, not to mention that catching by value may need to invoke a copy constructor.
If a language catches by type, some form of run time type indication in the exception object is fundamentally needed if the language allows throwing an exception whose concrete type is not known at initial throw time. C++ is such a language, as you can throw an exception that was allocated elsewhere, and which you received via a pointer to a non-final class.
In a language where the concrete type is always known at initial exception throw time, then the stack unwinding code could conceptually simply identify any catch blocks that would apply from data in the exception tables because all superclasses are statically known. And it could pregenerate code for any possible upcasting or copy construction needed, so no dynamic casting machinery would be needed.
(But many other static languages with exceptions only allow single inheritance, don't allow catching by interface, and only allow catching by reference, so no fancy copying or upcasting code is needed. Almost all of them do allow throwing without knowing the concrete type, so they need some form of RTTI data nevertheless.)
The RTTI requirement is more of an issue since that's forcing RTTI on for all types, not just those that can be thrown as exceptions. An explicit "throws" syntax would eliminate that (since the thrown type is now statically known and doesn't require RTTI) and thus significantly cut down on the cost.
Alternatively, and this isn't very "C++" but would work, it could be required that all thrown exceptions must inherit from a base "Exception" type such that you then only need to require RTTI for that chain of types instead of all types.
'Performance benefits' may also include low and/or predictable latency. GC is a real problem there. Not insurmountable by any means (witness the number of people who use java for latency sensitive applications), but a problem nonetheless.
Edit: Also, depending on how they're used, exceptions can be easier to avoid than GC. IOW, if an exception has been thrown at all, you may well have bigger problems than the latency cost of throwing.
Mutators are threads that allocate memory and manipulate pointers; they can work completely independently of the GC. A mutator only needs to tag an object when it copies or moves a pointer to that object. The GC detects this tag and marks the object as alive. Here is a working implementation for C++: https://github.com/pebal/sgcl
I could listen to you or whoever talk about this all day. Just on the chance you know a good one, do you know any good conference talks or podcasts I can listen to on the same topic?
Go does have exceptions, via panic/recover. You could argue that Rust lacks true exceptions because a Rust panic can be configured to be fatal at the whole-program level. (Which is actually great for deep-embedded scenarios where even C++ use is with no-exceptions support.)
Those are not exceptions by most reasonable definitions. Exceptions are a general purpose error handling method. Try, catch and all that. They are almost always objects with multiple types.
Go and Rust's panic is for unrecoverable errors. They both have the ability to catch panics because sometimes you really need to do that (e.g. when interacting with FFI or sometimes multithreaded code).
They are not exceptions though. I've seen this myth repeated a few times lately (always about Go and not Rust for some reason) and I wish it would die.
Maybe that's just me, but I wish more people would say that C "has exceptions", if only because longjmp/setjmp[0] interacts with exception systems of other languages, sometimes badly.
That being said, I think an important difference is that in C setjmp/longjmp do not "unwind" objects, due to the lack of destructors.
However, panics, at least in Rust, are just exceptions by another name. The difference comes solely from culture. Even the fact that unwind can be turned into abort is not significant, considering that pretty much all production implementations of C++ offer -fno-exceptions.
> That being said, I think an important difference is that in C setjmp/longjmp do not "unwind" objects, due to the lack of destructors.
I don't think that is a requirement of exceptions. It's just a sensible thing that most implementations do.
> The difference solely comes from culture.
Maybe, but that is a huge difference! You can implement anything in any Turing complete language but you wouldn't say that they all "have" every feature...
> It is *not* recommended to use this function for a general try/catch mechanism. The Result type is more appropriate to use for functions that can fail on a regular basis. Additionally, this function is not guaranteed to catch all panics, see the “Notes” section below.
Yeah, it's funny how Go does have exceptions, but no notion of exception safety. Manual mutex locks/unlocks are everywhere, and don't even get me started on defer, which is just terrible.
> the latest round of systems programming languages which all use error unions instead of exceptions have demonstrated quite clearly that exceptions are actually quite pointless.
What they have actually shown is that error unions are not a panacea and are a pain to handle manually. And that hardcore killing your app in the presence of even the tiniest of unhandled errors isn't suitable for any programming, especially systems programming.
That's why Rust ended up introducing try!, ? and catching panics. Go also added panic recovery.
- unions are not a panacea and are a pain to handle manually, as you wrote;
- but with a little syntactic sugar, they turn out to work really, really well, much better than C++-style exceptions.
There's something to be said for OCaml-style exceptions, which are actually even closer to zero-cost, but I wouldn't call OCaml a systems programming language.
Writing this as one of the persons who advocated for try! around the time of Rust 0.4 :)
try! and '?' are just syntactic convenience for passing error returns to the caller. It's not even monad-like, because the outer code still has to wrap the happy-path return in Some(...) or Ok(...), which wouldn't be the case with an actual monad.
(I.e. it's quite different from what was done with "async fn" support, where the Future return type is hidden via the 'async' specifier and the control flow is totally changed.)
> try! and '?' is just a syntactic convenience for passing error returns to the caller.
Indeed. Because manually handling those is a pain in the butt. So they made a shortcut. That still needs to be handled by someone up in the hierarchy. Exceptions in all but name.
The crucial thing about Try (the ? operator) is actually pretty easy to see if you look at the Trait which makes it go:
The result of the branch method on Try is ControlFlow<Self::Residual, Self::Output>
Try isn't stable, but ControlFlow is. They've reified the control flow! Rust has a type which represents the idea of a decision whether to return early. This is in my opinion pure genius, and it happened almost by accident. It seemed natural at first that the decision to return early is manifested directly in Result, but it isn't. That's the insight. Failure and returning early are distinct ideas, and we might just as easily want to return early in consequence of success as failure.
In separating "Success versus Failure" from "Return early versus keep going" Rust gets a lot more value here than is encapsulated in C++ Exceptions.
This is a very interesting article that discusses the isomorphism between checked exceptions and error return types. They ended up with checked exceptions where a function Foo that may throw must be invoked as "try Foo()", very similar to Rust's ADT-based macro/syntax.
> There is nothing about exceptions that's inappropriate for system programming.
There's lots about exceptions which is inappropriate for system programming, starting from FFI unsafety and the lack of signaling to callers (which makes resilient use more difficult).
> If anything it enables the enforcement of strong invariants
It doesn't do that.
> and leads to better and safer code.
It only does that in comparison to truly deficient (e.g. c-style) error reporting, and that's being generous.
Well it seems you don't understand exceptions. They eliminate erroneous states entirely, since the objects just don't get created if an error occurs.
The alternative that the parent said was making all of your state be a union with some kind of error, and making sure all accesses handle the fact that the variable might be in an erroneous state. That is a huge explosion of possible states in your program, essentially making every invariant weak everywhere.
Then FFI, I suppose you mean interfacing with C. Problems that arise when interfacing with other programming languages are orthogonal to a language's ability to be used for system programming. Obviously you wouldn't let an exception propagate through some C code, that's forbidden.
> Well it seems you don't understand exceptions. They eliminate erroneous states entirely, since the objects just don't get created if an error occurs.
Error sum types do the exact same thing.
> The alternative that the parent said was making all of your state be a union with some kind of error, and making sure all accesses handle the fact the variable might be in a erroneous state.
Which is a non-issue as it is lifted to the type system. The type system will not let you forget about that.
> That is a huge explosion of possible states in your program, and essentially making every invariant weak everywhere.
You get an error state added to a given value, which you also get via exceptions, except implicitly and without notification of the additional state.
Type-safe error values also provide simpler error handling and recovery in many cases, because they don't require split-path handling.
> Then FFI, I suppose you mean interfacing with C. Problems that arise when interfacing with other programming languages are orthogonal to a language's ability to be used for system programming.
It very much isn't, part of the system programming workload is to provide reusable components.
> Obviously you wouldn't let an exception propagate through some C code, that's forbidden.
And rarely if ever checkable statically, hence unsafe.
> You get an error state added to a given value, which you also get via exceptions, except implicitly and without notification of the additional state.
This is incorrect. With error return values you're adding a branch to every function call which is quite expensive on the whole. You're adding i-cache pressure & you're adding branch prediction pressure.
Exceptions in nearly every language that supports them (including C++) don't go through return values at all. Rather when thrown the stack is walked to find an exception handler. So exceptions are more expensive to throw than return values, but completely free when not thrown unlike return values.
> It very much isn't, part of the system programming workload is to provide reusable components.
There's absolutely no issue with exceptions & library boundaries in general. Statically checked exceptions also exist (see Java - although there's a big debate on if that's a good idea or not, but also see https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p07... ), I'm not sure why you're arguing as if they don't.
> So exceptions […] completely free when not thrown unlike return values.
In C++, they are not. Since C++ allows objects to be created on the stack within each scope, every constructor call for such an object has to register its destructor with the exception handler's object-cleanup jump tables.
Consider an edge case with a loop where 100k local objects are created. Each constructor invocation will incur an overhead of an indexed memory store of the destructor's address in the exception handler's table[0] for each object.
An indexed memory store is typically one instruction on a CISC architecture (it can be more if the data section is located too «far» in the address space and the ISA limits the offset width in the instruction encoding). On a RISC architecture it is typically several instructions (load-low plus shift, then load-low plus store). The L1 D-cache and L2 cache, sometimes the L3 cache as well (if the exception table grows large), and TLB reloads[1] get involved at all times. All of the aforementioned is just to register a destructor for an exception that might not occur. So, no, it is not free, and (occasionally) the time dimension is not even clearly defined.
This is rather unique and specific to C++ because it is an outlier and allows new objects to be created on the heap AND also on the local stack. Languages that allow new objects to be only created on the heap do not incur such an overhead.
[0] Bonus point: 100k local objects being created inside a try/catch block will also blow the jump table out of proportions and add more cache pressure and cache line reloads.
[1] And even page faults might occur – if the exception table crosses memory pages.
Calling destructors on scope exit is a C++ language feature with or without exceptions. I'm no expert, but I believe these addresses can be determined relative to the stack frame pointer, so they don't need to be registered in advance. Instead, extra code is generated to call all those destructors; this code is only ever reached if an exception is thrown. That results in larger executables, but no extra CPU instructions are executed unless an exception is actually thrown.
(In theory a compiler-writer could try to reuse destructor-calling code between the normal exit case and the exception case, but that might force one extra branch in the non-exception case.)
> So exceptions are more expensive to throw than return values, but completely free when not thrown unlike return values
Is that true though? Walking the stack to find an exception handler, i.e. examining every stack frame for one, is surely equivalent to testing the sum type to see if it is in an error state?
My disclaimer: I know very little about compilers, so this is an actual question.
That's only when an exception is thrown, though. If an exception isn't thrown, "unroll the stack" is just a normal `ret` instruction. There's no exception handling code at all when a function returns normally without an exception, which is the point. By contrast when an error sum type returns without an error, you're still doing a branch at the call site to verify that.
In the non-throwing exception path, there's literally no error or exception handling code executed at all. Whereas in the sum-type error-returning version, you have a branch at every call site that's always executed regardless of if there's an error or not.
Now, the compiler generates ".cold" clones of the function for the exception paths, so the total assembly for the exception-handling version is larger. However, that assembly isn't ever executed if an exception isn't thrown, which is the broader point. So it's not taking up CPU cache space and it's not taking up branch-predictor slots.
That's a bad point? Exceptions are not normal control flow. They are rare or, as one might say, exceptional. The performance of them when thrown isn't of key concern, it's the performance when they are not thrown that matters since that's the >90% case. And in that case, code using exceptions is faster than code using sum type return values, especially if those errors propagate deeply across the call stack which they very often do.
You mean destructors? An exception handler would be a catch block.
Anyway, the typical implementation involves two phases: one uses a table to identify the matching catch clause, then another goes through landing pads for each frame of the stack. Just consult the Itanium ABI spec for technical details.
The problem is not "forgetting about it", it's that it increases the possible values of your working set.
If you have 3 variables, each of which can be in 10 states, that's 10^3 states your program can be in.
If instead you have 11 states because all your variables are actually unions with an error, that's 11^3 states (assuming all error states are equivalent to a single state).
Now in practice it's even worse since what you care about isn't the possible states of your values, but rather how many different paths you have in your control flow to handle them.
Then you're comparing 1 (none of my values are in an erroneous state) with 2^3=8 (any of my values can be in an erroneous state or not).
What exceptions do is enforce that your working set does not have to encode any erroneous states, preventing the combinatorial explosion of states. That is of course a net win; there isn't really any valid argument to be made against it.
Where people are debating is that sometimes you do want errors to be part of your working set, in which case you shouldn't use exceptions. But choice is difficult for some, especially those seeking absolute doctrines.
> It very much isn't, part of the system programming workload is to provide reusable components.
That's already somewhat dubious, since a lot of system programming tasks are really purpose-built for a usecase or for specific hardware, and regardless, there is nothing about that which has anything to do with interfacing with C.
I do a lot of system programming and I write it all in C++, which has a lot of advantages over C beyond exceptions.
> If instead you have 11 states because all your variables are actually unions with an error, that's 11^3 states (assuming all error states are equivalent to a single state).
This isn't what actually happens, though. What actually happens is that people declare local and member variables as the_type_i_actually_want instead of Result<the_type_i_actually_want, err> and bubble up their errors like they would exceptions. So they get the benefits that you've claimed, but they don't need to pay the runtime cost of not-thrown exceptions that C++ users have to pay, they don't have to use external tools to tell them which of the functions they're using can throw exceptions, and they don't have to enjoy the wonders of Java, where checked exceptions in function signatures regularly prevent the use of streams.
You're conflating recoverable errors (Result in Rust, status codes or std::expected in C++) with the non-recoverable errors (panic in Rust, exceptions in C++).
If we were to compare Rust panics vs C++ exceptions, then handling of Rust panics is much less flexible. From what I understand, it's essentially a std::abort, and it can hardly be used otherwise, which covers only a subset of how C++ exceptions are commonly used.
If we were to compare Rust Result vs C++ std::expected, they boil down to pretty much the same with the difference of Rust requiring the call-site to unconditionally check for the return value. That may or may not be preferable in every situation.
> they don't need to pay the runtime cost of not-thrown exceptions that C++ users have to pay
If that were true (and in 99% of cases it isn't, unless you can support your claim), do you mind sharing how Rust implemented its zero-cost panics?
> The alternative that the parent said was making all of your state be a union with some kind of error, and making sure all accesses handle the fact that the variable might be in an erroneous state.
This is exactly what `Result` is in Rust. While I haven't used Rust, it seems that panics are generally discouraged and only used as a last resort, whereas exceptions are more commonly used in C++ and Java.
Yes, they are. They are basing their argument on comparing Rust's Result against C++ exceptions in the context of general error handling, whereas I pointed out that there are actually two classes of errors, both of which are addressed by their own appropriate mechanisms in both Rust and C++.
What the parent comment tried to (wrongly) imply, and your comment as well, is that exceptions in C++ are (commonly) used as a control-flow mechanism. And they are not.
There is no cost to exceptions that are not thrown.
On the contrary, the approach you describe introduces a lot of overhead, since it affects all code paths, the function-call ABI, jumps after every function call, etc.
Also, in C++ you have operators that are integrated into the type system and are resolved at compile time to know whether a given expression can throw an exception or not. Do not confuse C++ with Java.
> There is no cost to exceptions that are not thrown.
This is not true for a variety of reasons, but the main ones are maybe missed optimizations and otherwise-unnecessary spills of objects into memory so that their destructors may be called.
> Also, in C++ you have operators that are integrated into the type system and are resolved at compile time to know whether a given expression can throw an exception or not.
Maybe if you only have a single TU or LTO? In general any function from another TU can throw an exception so you don't have this.
> This is not true for a variety of reasons, but the main ones are maybe missed optimizations and otherwise-unnecessary spills of objects into memory so that their destructors may be called.
The missed optimization opportunity you describe only affects the Windows ABI, designed in 1989.
> Maybe if you only have a single TU or LTO?
Whether a function can throw or not is part of its signature.
> There is no cost to exceptions that are not thrown.
Oh yes, there is. The C++ compiler has to emit unwind tables, register destructors for RAII resources, and generate the RTTI information (where applicable).
In this trivial example, consider and compare two versions: the first does not have an exception handler; the second wraps a single constructor call in a dummy try/catch block:
For the latter, the object file size is up by 1 kB instantly, by virtue of adding a no-op exception handler. Exception handling implementation is not standardised and varies across different compilers AND also across different runtimes. Due to the space and time cost the language imposes, C++ exceptions are oftentimes a big no-no in the embedded world. And long gone are the days when a «try» was a «setjmp» plus a few bells and whistles and «throw ecx;» was a «longjmp».
Yeah, that's why exceptions were created back then. They got rid of a lot of extraneous branches in exchange for a small, nearly constant cost on your function calls.
But with decades gone, things changed. That constant cost is not so small anymore, relatively speaking, and those branches are much cheaper now.
Your reasoning is off. You don't have "11^3" states if you always unwrap the return values at the call sites (which implies returning if it fails). It's literally the same as exceptions, just that the errors get encoded by (re-)using the type system. You'll have the exact same types for your local variables; the only difference is that you would put a '?' (or similar) after function calls, to unwrap the return values.
The advantage of this ADT approach is that you can store error unions more permanently when it makes sense. It is not additional syntax, unlike exceptions. In that sense ADTs are the simpler approach of the two. If there is any "explosion of complexity", then it is exceptions where you get that -- because you have to express your code using multiple mechanisms (types vs exceptions), and possibly have to switch between the two when refactoring.
I say that as someone who doesn't think highly of either approach. In my view, plain error values are fine; there isn't any clever language solution needed. If you find yourself checking return values a lot (as opposed to storing error values in handles and checking them at strategic locations), that can hint at an architectural problem.
By unwrapping and returning, you're creating another path down the control flow of your program, which also propagates to your caller, since you have to return an error.
Exceptions don't do that, they stop the flow entirely, then match it to a point arbitrarily higher on the stack, and resume after that whole sub-tree has been destructed.
They're also much more efficient than branching and maybe returning on the result of every single function call.
> Exceptions actually do that, except hidden and unsignaled.
Which IMO is good in exactly one situation: when raising the exception means that the program contains a bug.
Using panics (ah sorry... exceptions) in this case is justified, as it should be really exceptional (if there is a bug, we have more pressing problems than performance anyway), and in the absence of a bug, if we were to use a Result type, it would mean having a "BugError" variant that is dead code anywhere the program is not buggy.
So in my opinion a correct approach is to unwrap whenever you have an invariant that guarantees that there should be a value, with a panic handler set at the boundary of the logical task to fail the entire logical task in case there is a bug. A logical task can be an asynchronous light task, a thread, or the whole process depending on the situation.
I much prefer it not being the whole process when the process is e.g. a web server or a word processor (and the failure occurred somewhere in an ancillary function)
I don't see a reason why the compiler couldn't implement error-sum return values the same way that exceptions are typically implemented (the way you describe).
(I don't see why it should, either. The blanket "efficiency" argument is unconvincing to me).
Ok, I see one reason: the programmer might want to control which implementation is used. That would require an additional mini-feature in the language syntax/function types. But this still wouldn't be an argument for a whole different syntax and forced separate code paths as required for traditional exceptions. And it's theoretical anyway; I don't think it's important to give the user this "control".
This doesn't track at all for me. Rust provides strong guarantees around accessing discriminated unions. The net effect of which is that the code you write has the "railway style" error handling that you get with exceptions in the trivial case (propagate the error). It even has a convenient syntactic shorthand for this `?`.
In non-trivial cases they are equivalent too. For example, collections need to maintain at a minimum a valid state in the presence of types with exception-throwing (fallible) constructors. This is a mess with or without exceptions in basically the same way. It's such a mess that the C++ standard allows for unspecified behavior of `std::vector::push_back` if the contained type has a throwing move constructor. Throwing move constructors are of course ridiculous but nonetheless allowed.
And that I would say is the biggest flaw with exceptions: they presume the fallibility of everything by default. This is not only brain damaging, it actively creates situations where there are no good options.
> If you have 3 variables, each of which can be in 10 states, that's 10^3 states your program can be in.
> If instead you have 11 states because all your variables are actually unions with an error, that's 11^3 states (assuming all error states are equivalent to a single state).
> Now in practice it's even worse since what you care about isn't the possible states of your values, but rather how many different paths you have in your control flow to handle them.
You're really demonstrating that you have no clue about the subject and refuse to think about it.
If the current function does not specifically deal with erroneous results (i.e. it would be a passthrough for exceptions), then it unifies the error states into one, either by pruning their branches through early returns, or by unifying the triplet of results into a result of a triplet.
Hence you don't have 11^3 states but 10^3 + 1.
> What exceptions do is enforcing that your working set does not have to encode any erroneous states, preventing the combinatorial explosion of states, which of course is a net win, there isn't really any valid argument that can be made against it.
The problem is that none of that is actually true, you're literally inventing combinatorial explosions which effectively don't exist.
Unless they would have to in all cases, at which point exceptions would lead to a significantly worse combinatorial explosion, because exceptions would not allow representing the product of 11 states as just that; instead you would need 20^3 states, as every possible value would have to be paired with two error states, success and failure.
> That's already somewhat dubious
It really is not.
> there is nothing about that which has anything to do with interfacing with C.
The C (or system) ABI is the lingua franca of inter-language communication, unless you decide to pay a network cost.
> I do a lot of system programming and I write it all in C++, which has a lot of advantages over C beyond exceptions.
And plenty of drawbacks as well.
But if all you know is C and C++ and you see the entire world through that lens, I can see why you're missing most of the field, you're essentially blind.
> What exceptions do is enforcing that your working set does not have to encode any erroneous states
You do that by having types that encode a guaranteed non-erroneous state. It's not like exceptions are doing anything all that different, they're just trying to establish that guarantee in a language where variant record types and pattern matching are not first-class facilities.
This is something where C and C++ actually regressed from Pascal, which did have support for variant records.
Same for exceptions really. Exceptions don't give any guarantee of non-erroneous state. The guarantees that you're talking about actually come from how construction and deconstruction work in C++ (note how it plays with early returns just fine, no exceptions needed). And these construction semantics can be implemented with variant types as well, it's completely unrelated.
They prevent the control flow from continuing in that direction, which prevents those variables from ever existing.
Early return is nothing like exceptions. Early returns need to return something, which passes the problem to someone else. It's also a choice whether to do it at all.
You're completely missing my point. The point is that both prevent the control flow from continuing in that direction. Both prevent the variables declared later to ever "exist".
>What exceptions do is enforcing that your working set does not have to encode any erroneous states, preventing the combinatorial explosion of states, which of course is a net win, there isn't really any valid argument that can be made against it.
A combinatorial explosion of states is not a bad thing. Integers in C++ for example have 4294967296 possible states. Programming is not descending into complete chaos just because one of the fundamental types has more possible states than the human brain is capable of handling.
You're describing using exceptions as a catch-all fail-safe. It's isomorphic to the "else" statement in your standard if-else structure, which is one of the techniques people use to handle the 4294967296 possible states of int. See the example code below on this amazing technique I use to deal with 4294967296 possible branching possibilities:
if (x == 0) {
    // do something
} else {
    // handle all 4294967295 other states
}
>Where people are debating is that sometimes you do want errors to be part of your working set, in which case you shouldn't use exceptions. But choice is difficult for some, especially those seeking absolute doctrines.
In every other engineering field you do want this as part of your design. You want to know about every possible state your system can be in and handle the states explicitly. Unknown states that are not explicitly encoded into an engineering design are typically a bad thing.
That is not to say you should design your system without acknowledging the possibility of unknown states. You need fail-safes like exceptions to handle them. But make no mistake, it's not good to have fail-safes regularly executing to catch a bunch of states you failed to encode into your system.
A good example of this is the corrected design of the MCAS on the Boeing 737 MAX. The MCAS should not use a fail-safe to handle the crash modes we are now well aware of; it should explicitly be encoded with our knowledge about the new possible error modes. I certainly don't want to sit in a plane where this hasn't been done.
I will also say that much of programming doesn't need the level of safety other engineering products need. Shipping products faster at the cost of quality is something unique to software, as the quality can be improved AFTER shipping. So your way of using exceptions to catch unknown states (states not explicitly encoded into the system) is not completely wrong; but it is certainly not best practice or ideal.
You don't need exceptions for constructors. You can just use static factory functions with an error return as Rust does, and dispense with constructors altogether.
Can you enforce at compile time that only such a static factory can ever be used to create the object in Rust? This is the whole point of constructors: they cannot be bypassed. Otherwise you're just one commit away from creating an object that doesn't respect the invariants - will you even remember to call this specific factory function in 6 months?
To me, constructors are mostly a convenience thing for the simple cases where I declare a quick container on the stack or similar, and don't want to waste keypresses to have it constructed. And to me it's a question of, what does the language want to be -- maybe this kind of code is better served by languages like C#.
There are various practices that handle the problem of having to call a specific function to get an object in a specific state, without requiring language support. You can make the function that should be used stand out in an obvious way. You can hide the definition of a structure, which very effectively prevents it from being constructed any other way.
If it is the whole point of constructors to guarantee that the object is in the right state, it could be the best argument why Rust does not have constructors. Programming is chock-full of "this must be called only by that or in this or that context..." and practically speaking, only a part of them can be handled by language objects and construct/deconstruct semantics.
Plus, C++ gives enough escape hatches to get not-yet-constructed objects or to un-construct objects without them going out of scope. The guarantees that C++ provides (but not really) require a ton of baggage: ridiculous initializer lists, 17 different kinds of constructors (to the point where it's sometimes almost impossible to tell which one will be called), and an out-of-band mechanism (exceptions) to signal construction failure... that's not worth it in my book.
Could you elaborate?
Rust manages pretty well without constructors and I'm nearly certain that it doesn't have any kind of combinatorial explosion of states. Same for ML-family languages.
Exceptions have their uses, including in system programming. There’s in fact nothing about system programming which makes a particular error handling method better or worse. These are the kind of minor points some programmers like to fixate on and then sell as the one true way of doing X, while providing no proof and asking everyone to trust them, because it worked great in a project once for the comment author.
Not letting exceptions escape at API boundaries has been a technique for a few decades. It’s not rocket science.
Writing exception-safe code is likewise an ancient technique by now. Meyers’ books which explained such things were published in the 90s…
Signaling to the caller has already been shown to be a complete mistake with Java checked exceptions; I don't understand why people persist with this. It makes for ridiculous code.
Are you saying that "not signaling" has not been shown to be a total maintenance nightmare as well?
Because the only way it is not ever a maintenance nightmare is when you really don't care that function execution could be aborted at any point without any visual clue. And the only way I can see you wouldn't care is when you go 100% in to everything-context managed / RAII or similar. And that again, sorry but I can't be arsed to prematurely rip everything into little pieces like that. It makes for terrible code IMO.
My experience has been the opposite. Ensuring exception safety in a type that has nontrivial move/copy operators (that is, a type that for whatever reason can’t follow the “rule of zero”) is often a research-level problem. Not having to worry about that in Rust is such a breath of fresh air.
No, you do have to worry about that too in Unsafe Rust if you want to achieve memory safety. (If you're writing only Safe Rust then probably not, but then that also applies to C++ if you're adhering to the rule of zero and extensively use the STL containers for all your memory allocation needs.)
Exceptions (panics) do exist in Rust, and you do need to catch them explicitly at the FFI boundary. [1][2] And the programmer needs to take care that their abstractions are safe with stack unwinding when writing any kind of unsafe code in Rust. (For an example: [3])
Enforcing basic exception safety is trivial, you just have to follow very simple rules.
Enforcing strong exception safety might require some thought, but it's definitely not "a research-level problem".
In any case either of these is miles easier than satisfying the Rust borrow checker, unless you use the cop-out of (A)rc.
In any case, how the invariants of your objects are maintained when an operation fails is something you should be thinking about regardless of the language.
> Enforcing basic exception safety is trivial, you just have to follow very simple rules.
That's not my experience as a C++ developer in a complex, cross-platform, application, which needs to:
1. interact with C;
2. operate with an event loop;
3. operate/interact with a GC;
4. interact with non-trivial system libraries (e.g. Direct3D, Vulkan, ...)
> In any case either of these is miles easier than satisfying the Rust borrow checker, unless you use the cop-out of (A)rc.
The borrow checker is indeed complicated. I'm not sure how you define "satisfying", though. As for (A)rc, it can definitely be interpreted as a "cop-out" or as delaying optimization until you actually have good reasons to believe that you need it.
I'm sorry to hear that you haven't been successful in using C++ features to their full potential in environments tightly coupled with C libraries. Integration with C or C-like code usually requires some effort if you want to be able to use exceptions that could propagate through C.
I do not offer consulting but can refer you to people who do.
Well, from what you were saying, it's mostly a problem of making your asynchronous framework work well with exceptions. It is true that you need to do special things for asynchronous programming to work well in C++, be it for exceptions or even the scope-bound lifetime management of C++ in general, which all have a huge impact on the design of your system.
In particular most multi-threaded C++ code is incorrect, not because it is impossible to do it correctly, but because the standard tooling is too low-level, each third-party framework targets a different niche, and people who roll their own tend to hack it together.
I understand Seastar is supposed to do it somewhat correctly, so you could suggest to Mozilla that they switch to that.
>If anything it enables the enforcement of strong invariants and leads to better and safer code.
How?? If a container has `get(K)->V` and `remove(K)->V`, then how does it preserve this invariant? This is an impossible contract to satisfy once you insert once and remove twice. The container is promising you something that it can't deliver; I would rather have a container that's honest with `get(key)->Maybe(value)`.
Of course. With unchecked exceptions, it's impossible to statically check that this is the case, so for system programming they are not an appropriate error handling mechanism.
> With unchecked exceptions, it's impossible to statically check that this is the case
Aren't checked exceptions just a hint to the programmer?
Also I don't see why static analyzers shouldn't be able to map out which functions throw which exceptions without having hints in the language itself, the information is in the code.
Lastly if the goal is as simple as catch all exceptions, can't you just enforce a catch all on the top level?
Caveat: we may be using different definitions of "system programming". The one I'm using is code that is fairly close to the system, i.e. will call into libc or into the kernel/libSystem/etc. as well as calling more-or-less directly into a bunch of system-specific .so/.dylib/.dll.
In my experience as a system developer, you need to invest some time into understanding the invariants expected/promised by the libraries you're calling, but many of them map nicely to static types. Of course, if you're implementing e.g. a IPC or RPC layer, you need to deserialize (and validate along the way) your inputs, but there are very few systems that do not need to do that regardless.
A mere possibility of a function throwing an exception is a side effect that an optimiser can't ignore, and this inhibits many types of optimisations involving code motion.
This is for example the main cost of bound checking in Rust, not the branches.
I always hear this, but these days entire Linux distros build with -fasynchronous-unwind-tables which has much more overhead than C++ exceptions, since now you have to provide an unwind path for every single instruction boundary. Even Chrome does it (for "accurate stack traces") when otherwise they're on the no exceptions camp. Using this option also completely subsumes the otherwise negligible cost of C++ exceptions, which only affects non-inlined function calls (which already limit code motion anyway).
Unless you care about performance. Exceptions have a performance benefit in that there's no cost if they are not thrown. With returning error values, every caller up the chain has to test the error condition.
We've used exceptions for exactly this reason, to improve performance, albeit this was a long time ago.