I never mentioned users having to know things (what you quoted was about the user getting informed whether their system is compromised, which is the job of a secure boot chain). The user being in control means that the user can decide who to trust. The user may end up choosing Google, Apple, Microsoft etc. and it's fine as long as they have a choice. Most users won't even be bothered to choose and that's fine too, but with remote attestation, it's not the user who decides even if they want to. And we don't need random developers looking at our devices to consider them trustworthy, it's none of their business and it's a big mistake to let them.
> what you quoted was about the user getting informed whether their system is compromised, which is the job of a secure boot chain
The user being informed means they have to know what a compromised system entails. That alone is a huge and frankly impossible thing to expect from regular people.
> Most users won't even be bothered to choose and that's fine too, but with remote attestation, it's not the user who decides even if they want to.
> And we don't need random developers looking at our devices to consider them trustworthy, it's none of their business and it's a big mistake to let them.
Then you can't demand those developers trust your device.
> That alone is a huge and frankly impossible thing to expect from regular people.
The systems used by regular people could just refuse to boot further when detecting a compromise, so I'm not sure where this comes from. We have prior art for that too. This is still orthogonal to letting users who want to patch things patch them, and not letting the apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.
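The "refuse to boot further" behavior is just a hash chain check at each boot stage. A toy sketch of the idea (the stage names and "images" are made up for illustration, not any real firmware):

```python
import hashlib

# Toy model of a verified boot chain: each stage holds the expected
# digest of the next stage's image and refuses to hand off on mismatch.
# Stage names and image contents are illustrative only.

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

STAGES = [("bootloader", b"bootloader-v1"), ("kernel", b"kernel-v1")]
EXPECTED = {name: digest(image) for name, image in STAGES}

def boot(images: dict) -> bool:
    """Return True if every stage verifies; halt at the first mismatch."""
    for name, _ in STAGES:
        if digest(images[name]) != EXPECTED[name]:
            print(f"halt: {name} failed verification")
            return False
    return True
```

Note that nothing here requires a third party: an owner-controlled variant would simply let the user enroll new EXPECTED digests after patching, which is exactly why boot verification stays compatible with user modification.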
> Then you can't demand those developers trust your device.
Somehow we could for decades. Whether we'll still be able to in the future depends only on how much noise and friction we'll make about it now.
> This is still orthogonal to letting users who want to patch things patch them, and not letting the apps verify what environment they run in. It's all compatible with each other, and with both regular and power users.
No, they're fundamentally opposed to each other. The entire point is that developers don't want their apps patched by just anyone, especially not by malicious actors. A small minority of power users will inevitably get caught in the crossfire.
> Somehow we could for decades. Whether we'll still be able to in the future depends only on how much noise and friction we'll make about it now.
No, you really couldn't. A past lack of technical means doesn't mean anyone trusted your device, nor that there were use cases where this was important. (It was also usually solved with external hardware, physical dongles and whatnot.)
> The entire point is that developers don't want their apps patched
That's exactly what I'm trying to say. The entire point is not to secure the user; it's to secure the apps. It works against the user's interest, as letting the user lie to apps is essential to the user's agency. The technical means used to achieve this could also work for the user and ensure their security without compromising their agency, but that's not what happens on mainstream platforms.
> No, you really couldn't.
Yes, you could. Exactly as you describe: it was used only where it mattered, and in other cases they just had no choice. Today the friction is so low that even the McDonald's app will refuse to work on a device it considers untrustworthy. The user does not benefit from that at all.
App attestation does not stop at legally binding identity software, and legally binding identity software can be provided without app attestation. I accept not being able to tamper with my ID card, I may say it's "mine" but it ultimately belongs to the government; I don't accept not being able to tamper with my computers, they wouldn't belong to me anymore if that was the case.
> Not that they wouldn't or didn't want to.
Of course, but my devices' purpose isn't to grant wishes to corporations. In an ideal world they would still have no other choice. Unfortunately, the more people use platforms that let corporations attest the execution environment, the less leverage we have against them.
> I accept not being able to tamper with my ID card, I may say it's "mine" but it ultimately belongs to the government; I don't accept not being able to tamper with my computers, they wouldn't belong to me anymore if that was the case.
So where does a digital ID card fit in your model? It's the government's but on your computer.
I have a digital ID card on my desk right now. It does not need to be stored on the phone, which already has all the means necessary to communicate with the card. In fact, if it were in a slightly different form factor I could even put it physically into my phone, which happens to have a built-in smartcard reader. That would still be a more reasonable solution than apps: it wouldn't be strongly coupled to a complex device that can break or be compromised in various ways (some of which can't be solved with attestation), and it would maintain a clear separation between what's mine and what's the government's. What exactly would I, as a user, gain by muddling that distinction?
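The separation works because the card speaks a standard protocol (ISO 7816 APDUs) to whatever reader hosts it, so the host only needs a dumb byte channel. As a sketch of what that looks like at the byte level — the applet identifier below is invented for illustration, not any real eID AID:

```python
def build_select_apdu(aid: bytes) -> bytes:
    """Build an ISO 7816-4 SELECT-by-AID command APDU.

    CLA=0x00, INS=0xA4 (SELECT), P1=0x04 (select by DF name/AID),
    P2=0x0C (first or only occurrence, no response data requested),
    followed by Lc and the AID itself.
    """
    if not 1 <= len(aid) <= 16:
        raise ValueError("AID must be 1..16 bytes")
    return bytes([0x00, 0xA4, 0x04, 0x0C, len(aid)]) + aid

# Hypothetical applet identifier; a real eID card publishes its own AID.
EXAMPLE_AID = bytes.fromhex("A000000001020304")
apdu = build_select_apdu(EXAMPLE_AID)
```

All the secrets and verification logic live on the card; the phone merely ferries these commands over NFC or a reader, which is why no attestation of the phone's own software stack is required for this design.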