AFAIK, the worst you could do is serve the victim stale (but validly signed) packages and prevent them from seeing that new updates are available (a so-called freeze attack).
I maintain a (somewhat) popular mirror server at a university, and we actually ran into this issue with one of our mirrors. The Tier 1 we were using as an upstream for a distro closed up shop suddenly, leaving our mirror with stale packages for some time before users told us they never got any updates.
I don't think that would work with most distros, since you're also fetching a signed metadata list, and that metadata typically carries a timestamp and expiry; you'd get notified that the update failed because the list was stale, or that an expected updated package was missing from the mirror.
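To make that concrete: APT-style Release files carry Date: and Valid-Until: headers, and the client refuses metadata past its expiry (after first verifying the InRelease/Release.gpg signature). A minimal sketch of that freshness check, with the parsing heavily simplified:

    from datetime import datetime, timezone
    from email.utils import parsedate_to_datetime

    def check_metadata_freshness(release_text: str) -> None:
        # Pull the Date:/Valid-Until: headers out of an APT-style Release file.
        fields = {}
        for line in release_text.splitlines():
            if line.startswith(("Date:", "Valid-Until:")):
                key, _, value = line.partition(":")
                fields[key] = parsedate_to_datetime(value.strip())
        if "Valid-Until" not in fields:
            raise ValueError("metadata has no expiry; a freeze would go unnoticed")
        if datetime.now(timezone.utc) > fields["Valid-Until"]:
            raise ValueError("signed metadata has expired; mirror is stale or frozen")

Without that Valid-Until bound, a stale-but-validly-signed list is exactly the freeze attack described above.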
You could, but then the signature check would fail. The public keys of the developers or packagers usually ship with the Linux distribution itself.
However, you shouldn't blindly trust this on "Linux" either. The implementation varies between package managers. E.g., DNF on Fedora does not enable signature checks for local package installations by default. There is no warning, nothing. If you want to infect new Fedora users, you MITM the RPMFusion repo (codecs etc.) installation, because that's a package almost everyone installs locally, and the official install instructions don't show how to import the relevant keys beforehand. Arch was also very late to the validation party.
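For DNF specifically, the knob in question is localpkg_gpgcheck in dnf.conf, which has historically defaulted to off. A rough sketch of checking it (stock config path assumed; per-repo overrides ignored):

    import configparser

    def local_gpgcheck_enabled(path: str = "/etc/dnf/dnf.conf") -> bool:
        # localpkg_gpgcheck governs signature checks for `dnf install ./pkg.rpm`.
        cfg = configparser.ConfigParser()
        cfg.read(path)  # a missing file is silently skipped
        if not cfg.has_section("main"):
            return False  # option unset means the check is disabled
        return cfg.getboolean("main", "localpkg_gpgcheck", fallback=False)

    if not local_gpgcheck_enabled():
        print("heads up: `dnf install ./pkg.rpm` won't verify pkg.rpm's signature")

You can also check a downloaded RPM by hand with `rpm -K ./pkg.rpm` before installing it.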
How is Arch vulnerable? While I don't have an Arch system handy, I do have a Steam Deck that I play around with (in an overlay), and I've certainly run into a lot of signature issues there, since Valve's hackish "pin" of otherwise-evergreen Arch means the signatures in the Valve tree's snapshot are often out of date.
Those signatures are also checked for local installs unless you explicitly disable them.
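For reference, that policy lives in /etc/pacman.conf as the SigLevel and LocalFileSigLevel directives (note that a level of Optional verifies a signature if one is present but doesn't require one). A quick sketch of inspecting them, with the file's [section] structure ignored for brevity:

    def pacman_sig_levels(path: str = "/etc/pacman.conf") -> dict:
        # Collect SigLevel/LocalFileSigLevel lines, ignoring section scoping.
        levels = {}
        with open(path) as f:
            for raw in f:
                line = raw.strip()
                if line.startswith(("SigLevel", "LocalFileSigLevel")):
                    key, _, value = line.partition("=")
                    levels[key.strip()] = value.strip()
        return levels

    # e.g. {'SigLevel': 'Required DatabaseOptional', 'LocalFileSigLevel': 'Optional'}
    print(pacman_sig_levels())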
Pacman has had signature checks on by default for over a decade now, I think, but relatively speaking they were ridiculously late to adopt the feature universally. They were still running their machines bareback long after everybody knew the internet was serious business and expected signature checks accordingly.
I realize now it was a stupid question, but the excellent refresher and ensuing discussion of edge cases was well worth the downvote someone felt compelled to leave, haha
This has to do with circularity. If you are building a TLS library that needs to fetch OCSP responses dynamically, you might not have an easy time using HTTPS to do it. Obviously you'd have to disable the use of OCSP when validating the OCSP responder's own TLS server certificate, but even then you have a re-entrance problem, and anyway the OCSP responses are signed, so HTTPS adds no integrity.

(Or, well, you could use OCSP to validate an OCSP responder's TLS certificate if you had code to detect the circular dependency, then stop and consider the certificate validated. That would allow using OCSP for validating OCSP responder TLS server certs, where at the bottom you either fetch over plain HTTP for a non-privacy-sensitive certificate, or elide validation of the responder's own cert while still using HTTPS to fetch OCSP responses, preserving confidentiality about the server names you're visiting.)
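A tiny self-contained sketch of that cycle-detection idea, with hypothetical stubs standing in for real OCSP fetching and chain validation; the only point here is the re-entrance guard:

    # Certs whose revocation check is currently in flight.
    _in_progress: set = set()

    def fetch_ocsp_response(cert_id: str) -> str:
        # Hypothetical stub: fetching over HTTPS means validating the
        # responder's own TLS cert, which re-enters revocation checking;
        # that is the circularity in question.
        responder_cert = "ocsp-responder-cert"
        if not check_revocation(responder_cert):
            raise ValueError("responder certificate failed validation")
        return f"signed-response-for:{cert_id}"

    def check_revocation(cert_id: str) -> bool:
        if cert_id in _in_progress:
            # Cycle detected: we're already mid-check for this cert.
            # Stop here and consider it validated, as described above.
            return True
        _in_progress.add(cert_id)
        try:
            return fetch_ocsp_response(cert_id).startswith("signed-response")
        finally:
            _in_progress.discard(cert_id)

    print(check_revocation("example.com-cert"))  # True; the recursion bottoms out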
The main reason to want to use HTTPS for fetching OCSP responses has to do with privacy, not security against active attacks.