I think the important distinction is that _everything_ should be considered untrusted, because even trustworthy software can become malicious. The XZ Utils backdoor[0] is a case in point.
On Android, everything I run is subject to the permission model and sandboxed. That is not the case on Linux.
Could you be more specific about how to circumvent the Android permission model and sandbox? So far I have only thought of two ways an XZ-like backdoor could do so:
1. By being baked into the OS itself, which is unavoidable since the OS is the thing providing the sandboxing + security model. It still massively reduces the attack surface.
2. By being run through the Android Debug Bridge (adb), which is far from normal use and something users have to explicitly enable. Leaving users an opt-in way to shoot themselves in the foot that 99.9% of them will never touch isn't the same as Linux, where foot-shooting is the default.
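For what it's worth, point 2 is easy to demonstrate: once USB debugging is enabled and the host is authorized, the adb shell can toggle permissions the user never granted through the UI. A minimal sketch, assuming a connected, authorized device (the package name `com.example.app` is a placeholder):

```shell
# Requires developer options + USB debugging enabled, and an authorized host.
# com.example.app is a hypothetical package name.

# Grant a dangerous permission the user never approved in a dialog:
adb shell pm grant com.example.app android.permission.READ_CONTACTS

# Revoke it again:
adb shell pm revoke com.example.app android.permission.READ_CONTACTS

# Or flip the corresponding app-op directly:
adb shell appops set com.example.app READ_CONTACTS allow
```

None of this is reachable until the user has enabled developer options and accepted the host's debugging key, which is exactly the opt-in foot-gun described above.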
The defining aspect of the XZ backdoor was that it was baked into the OS itself: liblzma is linked into the address space of roughly half the system's processes, and the payload only activated when built and packaged a specific way for specific distributions. Since that falls squarely under 1), you would have to choose a different example to make your point.
If you want to confine yourself in a sandbox, feel free to do it. The past decades have demonstrated that it's only necessary for some specific threat models.
[0] https://en.wikipedia.org/wiki/XZ_Utils_backdoor