This is probably the most interesting AI experiment I've seen yet. Looking through the codebase has me wondering where all the code is. I don't know if anyone has had the displeasure of going through the Next.js codebase, but I estimate it's at least two orders of magnitude more code than this reimplementation. Which makes me wonder: does it actually handle the edge cases, or does it just pass the tests?
Like compare the two form implementations for example. Vinext is a completely different implementation compared to what the Next.js version does. Is their behaviour actually the same? The rewrite looks incredibly naive.
The behavior isn't entirely the same and reaching 100% parity is a non-goal, but there are a few things to note.
This is still a very early implementation, and there are undoubtedly issues that weren't covered by Next's original test suite (and thus weren't inherited as test cases), yet aren't obvious enough to surface in any of the apps we've tried so far.
As for why it's so much smaller: by building on top of Vite and its React and RSC plugins, there's a whole lot of code we don't need to write. That's where a significant portion of the LOC difference comes from.
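To make the point concrete, here is a hypothetical sketch (not vinext's actual setup; the commented-out RSC plugin name is an assumption) of how a Vite-based framework's config layer can stay tiny by delegating the heavy lifting to existing plugins:

```typescript
// Hypothetical sketch of a Vite-based framework's config, illustrating
// why such a framework needs far less code of its own. Only
// @vitejs/plugin-react is a real, known package here; the RSC plugin
// line is an assumption about where that wiring would plug in.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [
    // JSX transform, Fast Refresh, HMR: none of this is framework code.
    react(),
    // rsc(), // an RSC plugin would own the server/client module graph
  ],
});
```

Everything a plugin owns (bundling, dev server, module graph, HMR) is code the framework author never writes, which is where multi-order-of-magnitude LOC differences can plausibly come from.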
> The result, vinext (pronounced "vee-next"), is a drop-in replacement for Next.js
"Drop-in" in my mind means I can swap the next dependency for the vinext dependency and my app will function the same. If the reality is that I have to spend hours or days debugging obscure edge cases that appear in vinext, I wouldn't exactly call that a drop-in replacement. I understand that this is an early version and that it doesn't have parity yet, but why state that it is a non-goal? For many of us, that makes vinext a non-choice, unless we choose to develop for vinext from the beginning.
Furthermore, if you're making a tool that looks almost like a well-known and well-documented tool, but not quite, how is gen AI going to be able to deal with the edge cases and vinext-specific quirks?
Changing the definition of drop-in definitely has me concerned and makes me take this less seriously than other projects open-sourced by Cloudflare, particularly the ones focused on more critical parts of their systems – e.g. Pingora and ecdysis.
Yeah I'm curious about all the routing edge cases, form actions, server functions etc, since that is where most of the complexity of the app router comes from. Does it encrypt captured values inside closures sent to the client? Stuff like that.
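For context on that last question: when a server action closes over server-side values, the framework has to serialize those values into the payload it sends to the client, and Next.js encrypts them so the client can neither read nor tamper with them. Here's a minimal sketch of the idea (my own illustration, not vinext's or Next.js's actual code) using AES-256-GCM:

```typescript
// Sketch of the "encrypt captured closure values" idea, not actual
// framework code. Variables a server action closes over get serialized
// and encrypted on the server, round-trip through the client as an
// opaque blob, then get decrypted and verified on the next request.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // per-build secret held only on the server

function sealCaptured(values: unknown): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([
    cipher.update(JSON.stringify(values), "utf8"),
    cipher.final(),
  ]);
  // iv + GCM auth tag + ciphertext, base64-encoded for embedding in HTML
  return Buffer.concat([iv, cipher.getAuthTag(), body]).toString("base64");
}

function openCaptured(blob: string): unknown {
  const raw = Buffer.from(blob, "base64");
  const decipher = createDecipheriv("aes-256-gcm", key, raw.subarray(0, 12));
  decipher.setAuthTag(raw.subarray(12, 28)); // auth check rejects tampering
  const body = Buffer.concat([
    decipher.update(raw.subarray(28)),
    decipher.final(),
  ]);
  return JSON.parse(body.toString("utf8"));
}

// The client never sees { userId: 42 } in the clear, and editing the
// blob in devtools fails the GCM auth check instead of hijacking the
// action with attacker-chosen arguments.
const blob = sealCaptured({ userId: 42 });
console.log(openCaptured(blob)); // { userId: 42 }
```

Getting details like this right (and key management, replay, per-deployment rotation) is exactly the kind of invisible work that doesn't show up in LOC comparisons or basic test suites.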
It is the most passive-aggressive thing I’ve ever seen. Did the Cloudflare team have issues with the Next.js team? And they responded with ‘we can do your whole product with an intern and AI’, lol.
Yeah, I'm not even sure what this is, or whether it's even intended to be a serious project. It just comes across to me as deeply unprofessional. I say this as someone who doesn't even like Vercel and has their own gripes with that bizarrely run company.
The docs say that none of the code has been reviewed or tested properly, so how serious is this for people to run in a production setup? Many companies are not going to be super enthused that their production environment was 'vibecoded' in a week. It just reads to me like a not-so-subtle middle finger to the Vercel guys.
On a side note, I find it extremely weird that the current AI era of software development is turning the state of the entire field into a reenactment of Lord of the Flies. Bizarre behavior all around from people who should know better.
> Like compare the two form implementations for example. Vinext is a completely different implementation compared to what the Next.js version does. Is their behaviour actually the same? The rewrite looks incredibly naive.
https://github.com/vercel/next.js/blob/b8cbaad24ca66ec673a7b...
https://github.com/cloudflare/vinext/blob/main/packages/vine...
Either way, pretty impressive.