Your analogy is wrong, i.e. you are comparing apples to oranges. ML is very different from other "normal" computation systems.
* Non-ML: Input + {Rules} = Output
* ML: Input + Output = {Rules}
where "{Rules}" = Infinite set of possible "Programs" each of which is a trace through a very large state space of variables.
In the first case, we humans use all our ingenuity to write the program and tweak it to get the right results. We already know the difficulties involved in writing "correct" programs but have mastered it to some extent.
In the second case, you cannot do that. Your "Programs" are derived by the system and encoded in numbers. How in the world do you even know that your encodings are correct? This is why you need the techniques of Mathematics to transform (e.g. Linear Algebra) and constrain (e.g. Inferential Statistics/Probability) the output "Rules" so you can have some measure of confidence in them. This is the fundamental challenge inherent in ML.
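The contrast can be made concrete with a toy sketch (plain Python, no ML library; the Fahrenheit example is mine, not from the comment above). In the non-ML case we write the rule as code; in the ML case the "rule" comes back as derived numbers, which is exactly why you need math to check them:

```python
# Non-ML: Input + {Rules} = Output -- we write the rule ourselves.
def fahrenheit(c):
    return 1.8 * c + 32.0

# ML: Input + Output = {Rules} -- the system derives the rule
# (here just two numbers) from example pairs via least squares.
xs = [0.0, 10.0, 20.0, 30.0, 40.0]
ys = [fahrenheit(x) for x in xs]  # pretend these are observed outputs

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# The learned "encoding" of the rule: slope ~ 1.8, intercept ~ 32.0.
# It is recovered as opaque numbers, not readable code -- the only way
# to trust it is statistically (fit quality, residuals, held-out data).
print(slope, intercept)
```

Even in this trivially small case, verifying the learned numbers means comparing predictions against data, which is the statistics the parent comment is pointing at.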
> How in the world do you even know that your encodings are correct?
Easy: you know that they aren't, and never will be, entirely correct for sufficiently complex ML problems, just like humans. Handling those errors is not an ML topic, though; you just have to ensure, via old-fashioned system design, that the system you build doesn't depend on any ML model always outputting correct results.
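That "old-fashioned system design" point can be sketched as a guard around the model: hard business rules bound what the model is allowed to decide, so no path depends on it being right. All names here (`score_transaction`, `handle`, the thresholds) are hypothetical, for illustration only:

```python
def score_transaction(amount):
    # Stand-in for an ML model: returns a fraud score in [0, 1].
    # Assume it is sometimes wrong -- the design below tolerates that.
    return 0.9 if amount > 10_000 else 0.1

def handle(amount):
    score = score_transaction(amount)
    # A non-ML rule fires regardless of the score: the model's
    # authority is bounded by plain old system design.
    if amount > 50_000:
        return "manual_review"
    # The model can escalate to a human, but on its own it can
    # neither block nor irrevocably approve anything.
    if score > 0.8:
        return "manual_review"
    return "approve"
```

The failure mode of a wrong model output is then "a human looks at it" or "a cheap transaction slips through", not a system-level error, which is the reply's point.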