Arithmetic is surprisingly hard to get "right". You might try to generalise from this example that "multiplying two N-bit numbers together should give a result 2N bits wide", then discover that for simple examples you run out of machine bits. Then there's overflow/saturation handling, which is a mess everywhere: lots of systems have hardware support for saturating arithmetic, but you can't conveniently specify it in C.
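To make the saturation point concrete, here's a minimal sketch of the wrapper you end up writing by hand, assuming GCC or Clang (it relies on the __builtin_mul_overflow builtin; the helper name mul_sat_i32 is just made up for illustration), because standard C gives you no direct way to ask for the hardware's saturating multiply:

```c
#include <stdint.h>
#include <limits.h>
#include <stdio.h>

/* Saturating 32-bit multiply: return the exact product when it fits,
 * otherwise clamp to the nearest representable extreme. */
static int32_t mul_sat_i32(int32_t a, int32_t b) {
    int32_t r;
    if (!__builtin_mul_overflow(a, b, &r))
        return r;                                   /* fits: exact product */
    /* overflowed: both operands are nonzero here, so the sign test is safe */
    return ((a > 0) == (b > 0)) ? INT32_MAX : INT32_MIN;
}

int main(void) {
    printf("%d\n", mul_sat_i32(100000, 100000));    /* saturates to  2147483647 */
    printf("%d\n", mul_sat_i32(-100000, 100000));   /* saturates to -2147483648 */
    printf("%d\n", mul_sat_i32(46341, 46341));      /* just over INT32_MAX, saturates */
    return 0;
}
```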
If anyone could think of a good, concise way of expressing all these bells and whistles of arithmetic, it could be implemented as a language or language frontend. For now, most languages choose to either ignore the problem entirely or push the user to floating-point or arbitrary-precision arithmetic as an 'improvement'.
For modern applications programming, arbitrary-precision is probably the right way to do integer arithmetic. Python does that, and it doesn't seem to cause any trouble. The people who need to do massive amounts of numeric stuff know who they are and can take the time to learn the relevant arcana, but your typical cat pictures app never has to worry about how big any of its integers are.
So do Ruby and Erlang. The problem, of course, is that it has a cost: trivial arithmetic operations have to be checked and may need to allocate.
And operational coverage can be spotty outside of the trivial range: e.g. when you hand a bignum to an "integer" operation that goes through floating point, bad things can happen, since those floats are generally machine doubles (fp64).
If Python handled overflow with an exception instead of a bignum, I'd bet it still wouldn't cause much trouble. In practice the values generally never get big enough to need bignums anyway.
Yeah, I recently read on some forum someone who was looking for a quick C++ course for a friend and said "he doesn't need deep training, just some basic syntax; he's not a programmer, that doesn't interest him, he's just doing scientific computing".
I was like: oh boy... getting arithmetic calculations right is really tricky and you have to understand many non-obvious details, especially concerning the behaviour of floating-point numbers, and then you have to make choices or compromises depending on your requirements. But if you have no idea of the limitations of CPUs and languages, the naïve requirement is just "exact, full precision, as long as needed" and you'll hit bugs sooner or later.
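For anyone wondering what those non-obvious details look like in practice, here's a tiny example of two classic surprises with machine doubles; nothing exotic, just the sort of thing that breaks the naïve "exact, full-precision" expectation:

```c
#include <stdio.h>

int main(void) {
    /* 0.1 has no exact binary representation, so ten of them don't sum to 1.0 */
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;
    printf("0.1 summed 10 times = %.17g\n", sum);        /* 0.99999999999999989 */
    printf("equal to 1.0? %s\n", sum == 1.0 ? "yes" : "no");

    /* adding a small value to a huge one loses it entirely:
       the spacing between representable doubles near 1e20 is 16384 */
    double big = 1e20;
    printf("1e20 + 1.0 == 1e20? %s\n", (big + 1.0 == big) ? "yes" : "no");
    return 0;
}
```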
The Cambridge (UK) maths undergraduate course used to do exactly this: include a numerical programming section without any training. They effectively just gave people a small volume of API calls and expected them to get on with it. The only reason it worked at all was a small number of undergraduates who could already program running guerrilla assistance courses in C for bewildered mathematicians.
Programming is becoming increasingly important in science, but it's still treated as a skill that people should just casually pick up.
Oh, and then there's a whole bunch of stupid reproducibility issues caused by Intel's x87 FPU using 80-bit FP internally but then rounding results to 64 bits on store. Very important if your algorithm isn't rigorously convergent.
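If you want to check whether a given build is even exposed to that, C99's FLT_EVAL_METHOD macro from <float.h> tells you how intermediates are evaluated; this is just a diagnostic sketch, the actual mitigations are things like compiling for SSE math or forcing stores through volatile:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* 0: operations evaluated at their declared type (typical x86-64 / SSE)
       2: everything evaluated as long double (typical old x87 codegen),
          i.e. 80-bit intermediates that only get rounded to 64 bits when
          spilled or stored -- the source of the reproducibility surprises */
    printf("FLT_EVAL_METHOD  = %d\n", (int)FLT_EVAL_METHOD);
    printf("long double size = %zu bytes\n", sizeof(long double));
    return 0;
}
```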
> then discover that for simple examples you run out of machine bits
Why? I don't think it would be common to run out of machine bits in correct programs (and not really run out, just spill over into bigints in some cases). But the compiler can't calculate those bits from the machine types alone; instead it should track all the possible values/ranges and multiply those too, to decide whether the result fits the type or needs a wider one. And since values don't appear out of thin air, but come from literals or from input that gets checked against specific ranges all the time, it should be possible to choose appropriate types automatically and only warn about possible overflows or performance penalties due to bigints.
The naive approach of just multiplying machine type widths is pointless, of course; on that I agree.
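As a rough illustration of what that range tracking would do (purely a sketch; range_t, mul_range and fits_i32 are invented names, not anything a real compiler exposes): the result range of a multiply is bounded by the products of the operands' endpoints, and that is what decides whether an int32 is enough.

```c
#include <stdint.h>
#include <stdio.h>
#include <limits.h>

/* Interval of possible values for an expression. Endpoint products are
 * computed in int64_t and assumed not to overflow it in this sketch. */
typedef struct { int64_t lo, hi; } range_t;

static int64_t min4(int64_t a, int64_t b, int64_t c, int64_t d) {
    int64_t m = a; if (b < m) m = b; if (c < m) m = c; if (d < m) m = d; return m;
}
static int64_t max4(int64_t a, int64_t b, int64_t c, int64_t d) {
    int64_t m = a; if (b > m) m = b; if (c > m) m = c; if (d > m) m = d; return m;
}

/* The product range is bounded by the products of the endpoints. */
static range_t mul_range(range_t a, range_t b) {
    int64_t p1 = a.lo * b.lo, p2 = a.lo * b.hi, p3 = a.hi * b.lo, p4 = a.hi * b.hi;
    return (range_t){ min4(p1, p2, p3, p4), max4(p1, p2, p3, p4) };
}

static int fits_i32(range_t r) {
    return r.lo >= INT32_MIN && r.hi <= INT32_MAX;
}

int main(void) {
    range_t day  = {1, 31};          /* a value already checked as a day-of-month */
    range_t msec = {0, 86400000};    /* milliseconds in a day */
    range_t prod = mul_range(day, msec);
    printf("product range: [%lld, %lld], fits int32? %s\n",
           (long long)prod.lo, (long long)prod.hi, fits_i32(prod) ? "yes" : "no");
    return 0;
}
```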
Generally, for fixed-point operations you need to explicitly handle two things: overflow and rounding. The former may not be an issue in some applications, but the latter will be. It is easy to introduce cumulative error through incorrect rounding, for instance.
I suppose rounding is solvable too. Depending on how the value is used at the end, it should be possible to deduce how much precision is needed to guarantee no cumulative error.
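Here's a small sketch of the cumulative-error point, using a made-up Q16.16 format: repeatedly applying a gain just below 1.0 with truncating multiplies drifts visibly low compared to round-to-nearest and a double-precision reference.

```c
#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;            /* Q16.16 fixed point: 16 integer, 16 fraction bits */
#define Q_ONE (1 << 16)

static q16_16 mul_trunc(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> 16);                /* truncate: drop low bits */
}
static q16_16 mul_round(q16_16 a, q16_16 b) {
    return (q16_16)((((int64_t)a * b) + (1 << 15)) >> 16);  /* round to nearest */
}

int main(void) {
    q16_16 gain = (q16_16)(0.99999 * Q_ONE);   /* a gain just below 1.0 */
    q16_16 t = 1000 * Q_ONE, r = 1000 * Q_ONE;
    double  g = (double)gain / Q_ONE, exact = 1000.0;

    for (int i = 0; i < 100000; i++) {
        t = mul_trunc(t, gain);     /* truncation bias accumulates, always downwards */
        r = mul_round(r, gain);     /* rounding errors mostly cancel */
        exact *= g;                 /* double-precision reference */
    }
    printf("truncated: %f\n", (double)t / Q_ONE);
    printf("rounded:   %f\n", (double)r / Q_ONE);
    printf("reference: %f\n", exact);
    return 0;
}
```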