The argument I get against this is that when a higher-level test fails, it's harder to locate the lines of code that broke it than when you have lots of unit tests. I don't find this convincing though. Most of the time it's pretty obvious once you look at what's changed since the last working commit. Plenty of projects get by without any tests at all. I'm not saying to skip testing completely, but testing comes at a cost, and you need to be practical about weighing how much time you put into it against how much time it's going to save. Writing unit tests for everything takes time.
There is no silver bullet. Personally, I let a combination of complexity and importance guide my tests.
The more likely it is that a piece of code will break, and the more business damage it will do if it does break, the more tests I wrap around it.
For self-contained algorithms that have a lot of branches or complex cases, I use more unit tests. When the complexity is in the interaction with other code, I write more high-level tests. When the system is simple but critical, I write more smoke tests.
If I’ve got simple code that’s unlikely to break and it doesn’t matter if it does break, I might have no tests at all.
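Roughly what I mean, as a minimal sketch (assuming pytest; `shipping_cost` is a made-up stand-in for a branchy, self-contained function, and the smoke test is the "simple but critical" case):

```python
import pytest

def shipping_cost(weight_kg: float, express: bool) -> float:
    # Hypothetical branchy pure function: cheap to cover with unit tests.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    base = 5.0 if weight_kg < 2 else 5.0 + (weight_kg - 2) * 1.5
    return base * 2 if express else base

def test_branches():
    # One assertion per interesting branch.
    assert shipping_cost(1.0, express=False) == 5.0
    assert shipping_cost(4.0, express=False) == 8.0
    assert shipping_cost(1.0, express=True) == 10.0
    with pytest.raises(ValueError):
        shipping_cost(0, express=False)

def test_smoke():
    # Smoke test: for simple-but-critical code, just prove the happy path runs.
    assert shipping_cost(2.5, express=True) > 0
```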
100%. In critical areas I'd suggest parameterised tests are worth the effort, especially in conjunction with generators: property-based testing in FP, for example, or just test data generators that'll produce a good range of inputs.
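A rough sketch of what I mean, assuming pytest and the Hypothesis library (the `encode`/`decode` round-trip is just a made-up example):

```python
import pytest
from hypothesis import given, strategies as st

def encode(s: str) -> bytes:
    return s.encode("utf-8")

def decode(b: bytes) -> str:
    return b.decode("utf-8")

# Parameterised test: one test body, a table of hand-picked critical cases.
@pytest.mark.parametrize("text", ["", "ascii", "naïve", "日本語"])
def test_round_trip_examples(text):
    assert decode(encode(text)) == text

# Property-based test: Hypothesis generates a wide range of inputs for free.
@given(st.text())
def test_round_trip_property(text):
    assert decode(encode(text)) == text
```

The parametrised table pins down the cases you know matter; the property test then throws a broad range of generated inputs at the same invariant.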
>The argument I get against this is that when a higher-level test fails, it's harder to locate the lines of code that broke it than when you have lots of unit tests. I don't find this convincing though.
This is a valid observation, but the problem clears up when you start integrating better debugging tools into your testing infrastructure: the ability to intercept and inspect API calls made by your app, to launch a debugger in the middle of the test, etc.
It is also minimized by writing the test at the same time as (or just before) the code that makes it pass.
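On the interception point, a rough sketch of the kind of thing I mean, assuming pytest, `requests`, and the standard-library `unittest.mock` (`fetch_user` is a made-up stand-in for app code that calls an external API):

```python
from unittest import mock
import requests

def fetch_user(user_id: int) -> dict:
    # Hypothetical app code that hits an external service.
    resp = requests.get(f"https://api.example.com/users/{user_id}")
    return resp.json()

def test_fetch_user_with_interception():
    fake = mock.Mock(status_code=200)
    fake.json.return_value = {"id": 42, "name": "Ada"}
    # Intercept the outgoing call so the high-level test can be inspected
    # without hitting the real service.
    with mock.patch("requests.get", return_value=fake) as spy:
        # breakpoint()  # uncomment to drop into the debugger mid-test
        user = fetch_user(42)
    spy.assert_called_once()
    assert user["name"] == "Ada"
```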