I think in a centralized environment (workplace), it could be argued that immediately triggering all the build failures and having good hygiene in cleaning them up is actually not a bad thing. It really depends on how that's set up.
And how is sparse checkout worse for discoverability? With multiple repos it's even harder to find what you want sometimes if you are talking about 100's of random repos that aren't organized well.
> I think in a centralized environment (workplace), it could be argued that immediately triggering all the build failures and having good hygiene in cleaning them up is actually not a bad thing.
In the abstract, I agree. However, when I'm trying to get my code working, having test failures in code that isn't even related to the problem I'm working on is annoying, and I can't switch tasks to work on the new failure when my current code isn't working either.
How could broken code (or broken tests) be merged into master? That is a rhetorical question; of course it happens, and of course it is the root issue you would be facing.
There are multiple ways that code gets merged into master and ends up broken.
First, the case where everyone does everything correctly. CI runs do not execute serially: when many people are producing a lot of code, their pipelines have to run concurrently. So you have two merge requests, A and B, opened around the same time; each is tested against a common base commit C, and neither sees the other. Say merge request A deletes a function (or class, or whatever) that merge request B uses. A deletes all the call sites it can see, but it cannot delete B's new one, since B isn't visible to it. A + C passes all CI checks and merges. B + C passes all CI checks and merges. But A + B + C won't compile, since B calls a function A deleted. If you're lucky, the two touch the same files, B hits a merge conflict, and the rebase catches it; otherwise, broken master.
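The race above can be reduced to a toy model (all names here are hypothetical; a real CI would of course be compiling and testing actual code, not checking a dict):

```python
# Toy model of the CI race: each merge request is validated against the
# base commit C alone; the combination A + B + C is never tested before
# both land on master.
base = {"lib": ["helper", "unrelated"], "main_uses": ["unrelated"]}

def ci_passes(code):
    """The 'build' succeeds iff every name main uses still exists in lib."""
    return all(name in code["lib"] for name in code["main_uses"])

def apply_a(code):
    # MR A: delete helper() and every call site it can see (there are none
    # outside lib at the time A is written).
    return {**code, "lib": [n for n in code["lib"] if n != "helper"]}

def apply_b(code):
    # MR B: add a brand-new call to helper() in main.
    return {**code, "main_uses": code["main_uses"] + ["helper"]}

assert ci_passes(apply_a(base))                # A + C: green, merges
assert ci_passes(apply_b(base))                # B + C: green, merges
assert not ci_passes(apply_b(apply_a(base)))   # A + B + C: broken master
```

Since A and B touch disjoint parts of the codebase, git sees no textual conflict; only rebuilding the combined result would catch it, which is exactly what merge queues (testing each MR against the latest master state before it lands) exist to do.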
Then you will typically have emergency commits to hotfix issues which might break other things.
Then you will have hidden runtime dependencies that, precisely because they are hidden, don't trigger retests before merge, yet cause every subsequent change to that repo to fail.
Then you will have certificates that expire and dependencies on external systems that go away.
As you may be aware, 100% broken code cannot be merged. However, code that works 99.99% of the time can be merged, and then weeks later it fails once; you rebuild and it passes. There are a lot of different ways this can happen.
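A quick back-of-envelope on that last point (the 99.99% figure is from the comment above; the run counts are made up for illustration):

```python
# Probability that a test which passes 99.99% of the time survives N CI
# runs without a single flake: p_pass ** N.
p_pass = 0.9999
for runs in (100, 1000, 10000):
    print(f"{runs:>6} runs: P(no flake) = {p_pass ** runs:.3f}")
```

At a few thousand CI runs, a "99.99% reliable" test has a decent chance of having flaked at least once, so on a busy master this failure mode is a matter of weeks, not a freak event.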