Insecurity is invisible. Users have no way of knowing about the weaknesses in the software they use until it's too late. Disclosure is meant to make it possible for users to see what weaknesses they might be exposed to so they can make informed decisions.
Users still benefit from knowing about issues that can't be fixed (think of Rowhammer, Spectre, and similar), so that as these attacks become more practical (e.g. https://leaky.page or Half-Double) they can adjust their choices accordingly (switching browsers, devices, etc.) if the risk these attacks impose is too high.
Of course, to borrow an analogy for a moment, some might say it would be better for people never to find out that they are at increased risk of some incurable disease, because they can't do anything about it.
But for software, you can't make individual decisions like that. Even if one person doesn't want to know about vulnerabilities in the software they use, others could still benefit from knowing about them, and the benefit of the many outweighs the preferences of the few.
That is, unless the argument is that it's actively damaging for all of the public (or the majority) to know about vulnerabilities in the software they use. If the point is to advocate for complete, unlimited secrecy, and for researchers to sit on unfixed bugs forever, then that's quite an extreme view of software security and vulnerability disclosure (though one that some companies unfortunately still hold).
Disclosure policies like these aim to strike a balance between secrecy and public awareness. They put the onus of disclosure on the finder because it's their finding (they decide how it's shared) and because finders are more independent than the vendor, but I could imagine a world in which disclosure happens by default, by the company, even for unfixed bugs.