vinext: One Week, One Engineer, $1,100 in Tokens
A conversation with Steve Faulkner from Cloudflare on recreating Next.js, one test at a time
On 24th February 2026, Cloudflare announced vinext, a complete rewrite of Next.js built on Vite, deployable to Cloudflare with a single command.
For years, Cloudflare has worked to support Next.js on its platform, but the framework’s bespoke build output made that difficult. Running Next.js outside its intended environment requires reshaping that output for each platform.
This pain point led to vinext. By rebuilding Next.js on top of Vite, deployment becomes considerably simpler.
That alone would be an interesting story. But what if I told you that vinext was created in one week, by a single engineer, Steve Faulkner, using AI and around $1,100 in tokens?
I sat down with Steve to understand how vinext came together, and what it tells us about the future of AI-assisted development.
Ashley: Hi Steve, and thank you for agreeing to be interviewed today! Can you briefly introduce yourself and your role at Cloudflare?
Steve: Hey everyone, I'm Steve Faulkner, Director of Engineering for Workers at Cloudflare. I run a roughly 90-person org that covers Workers, Containers, Agents SDK, Sandboxes, Wrangler, Frameworks, and several other teams.
Ashley: vinext is a pretty radical idea. How did it actually come about?
Steve: We’re constantly trying to figure out how to best support Next.js on Cloudflare. We’ve invested heavily in OpenNext and we’re still involved in that project, but OpenNext is fundamentally built on top of Next.js and its Turbopack-based output, which constrains what we can do.
There was always this idea of reimplementing the Next.js API to get a higher ceiling on performance and better output. We actually tried it twice with human engineers.
Then the models got dramatically better last December and January, and I started wondering if AI could just do it. Next.js has a massive test suite. I used that as the spec, started on a Friday night with Opus, and woke up Saturday morning to an app router demo that was kind of working. That kicked this whole project into gear.
Ashley: Rewriting Next.js is no small undertaking, even for AI. How did you structure your workflow with AI on this project?
Steve: Almost all of it was OpenCode with Opus 4.5 and 4.6. I also do a lot of voice-to-text via SuperWhisper. A few markdown files for planning and context: an agents.md file generated and maintained by the agent, a discoveries.md that logged ecosystem gotchas to avoid repeat mistakes, and tracking documents for ported Next.js tests. I didn’t use special sub-agents or custom MCP servers.
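The post doesn't show the contents of that discoveries.md, but an entry of the kind Steve describes might look something like this (purely illustrative wording):

```markdown
## discoveries.md (illustrative entries)

- Vite is ESM-first; some CommonJS-only packages need to be listed in
  ssr.noExternal (or pre-bundled) before they resolve correctly.
- Playwright tests must wait for hydration before asserting on
  client-side navigation, or they flake intermittently.
```

The point of a file like this is that the agent re-reads it at the start of each session, so a gotcha only has to be discovered once.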
Closer to launch I started using context7 and Exa search MCP for better library lookups. I also made use of agent-browser, which was critical for actual debugging in a live browser.
I asked OpenCode to analyze its own sessions and they were surprisingly barbell-shaped: either two-to-three-minute corrections or one-to-two-hour deep dives. My peak token usage was at 3 AM, when I’m sleeping, so I was clearly setting up task lists before bed and letting it run overnight.
This was not a fancy or complex setup. No Ralph Wiggum loops. Just giving OpenCode a list of tasks I thought would keep it busy for a few hours.
Ashley: It’s refreshing to see a relatively simple workflow be so effective, especially when all the discourse online is often buzzword soup. I’m sure it wasn’t all plain sailing though.
Which parts of vinext did AI handle well, and which parts still needed deep human expertise?
Steve: AI excelled at the boring, systemic work: porting tests from the Next.js suite, implementing next module shims against a known API surface, and grinding through compatibility issues. If you give it a failing test and a clear target, it iterates well.
The big human contribution was setting overall direction, such as:
Deciding this should be a Vite plugin rather than a custom bundler
Choosing to port tests rather than run the Next.js harness
Prioritizing which features to tackle first, and finding bugs
I had to do manual QA many times, but AI would surprise me here too. It would get really far on its own with agent-browser.
Ashley: That definitely resonates with how I’m working with AI too: it can churn out code, but in my experience it still lacks some critical thinking, especially when given a larger task.
I’d say recreating Next.js qualifies as a pretty large task, so how did you break down such a large framework into tasks that AI could reliably execute?
Steve: The agent helped a ton. We focused mostly on the test suite, but rather than trying to run their test harness directly, I had the agent port tests one by one into our Vitest and Playwright setup, then implement the code to make them pass. A tracking document helped the AI keep its place along the way.
Ashley: There’s a long-standing idea that test suites are the best form of documentation, and this seems to prove it.
No matter the model, AI isn’t perfect though and I find it often gets things wrong or gets stuck.
What were the biggest failure points of AI during the project? What was the hardest technical problem in vinext where AI struggled the most?
Steve: There were definitely times where the AI just got stuck and I had to open a browser, click around, look at logs, and figure out what was actually going on. That manual debugging loop never fully went away.
I’ve also noticed it struggles with larger files. We have some that are two to three thousand lines, and I suspect smaller, more focused files work better for AI. That’s something we’re actively refactoring toward right now.
The other recurring issue was getting the AI to match Next.js behavior exactly. It would implement something that seemed reasonable but was subtly different from how Next.js actually works.
I’d point out the discrepancy and it would immediately agree and fix it, but it wouldn’t catch it on its own. A lot of iterating on our agents.md file went into teaching it to look at the Next.js implementation first before writing anything, so it was informed by how Next.js actually does things rather than guessing.
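A rule of that kind might read something like this in an agents.md file (hypothetical wording; the actual file isn't quoted in the post):

```markdown
## Matching Next.js behavior

- Before implementing any next/* module or router behavior, read the
  corresponding Next.js source and its tests first.
- Do not guess at semantics. If behavior is ambiguous, port a test
  that pins it down before writing the implementation.
- Record any ecosystem gotchas in discoveries.md so they are not
  rediscovered in a later session.
```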
Ashley: I’ve been playing around with agents.md files a lot more lately, so I can definitely see how that would help. The one that Cloudflare ships with any new project created via npm create is quite good; I noticed the AI often tries to validate its work because it’s very aware of the tools available to do so.
Another common discussion point online is the quality of code in the era of AI.
How would you rate the code quality of vinext, considering it was entirely written by AI which can often be sloppy or overly verbose in some cases?
Steve: It’s not amazing code. It’s verbose, and there are patterns I don’t love, like heavy use of template strings for code generation. However, this project was fundamentally about trusting the tests. If it passes the 2,700+ tests across Vitest and Playwright, it works.
We’re embracing that philosophy in maintenance too. We have AI reviewing code, and if we’re confident it’s the right direction and the AI approves it, we merge it.
This project was born from AI, and I think it will succeed because of AI. We’re trying to embrace that mindset rather than fight it. Almost all activity in the repo is driven by AI, including from the community.
Ashley: If nothing else, it’s a fascinating way to test how far AI can be pushed, along with how well it can maintain a project of considerable size.
The vinext blog post mentions 94% coverage of Next.js’ test suite, so what big gaps remain between vinext and Next.js?
Steve: There are still gaps and they’re well documented in the README. Static pre-rendering was called out in the blog post when we initially released it, but we just merged a PR with the first version of that, so it’s already being tackled.
One thing that’s surprised me is how few reported issues are about actual behavioral differences with Next.js. If you look at our issue tracker, the problems are mostly general JavaScript ecosystem issues: CommonJS versus ESM, bundling and module resolution differences between Vite and Turbopack.
Vite has a very ESM-first view of the world, whereas Turbopack and Webpack are more lax about what they will accept, especially around CommonJS.
Ashley: It’s great to see vinext being iterated on and maintained, as there was some concern online that this would be pushed out and then abandoned as a PR piece.
Looking forward, what’s the plan for vinext in the future? Is the plan to maintain it and continue working on it?
Steve: We are continuing to work on it. It started as an experiment but it’s clearly something people want.
In four weeks we’ve merged over 350 PRs, shipped 19 releases, and have 50+ contributors from the community. The repo has nearly 7,000 stars.
If you’re using Next.js and want to try it, you should. The more people who try it, the better it gets. The most valuable thing you can do is point your agent of choice at the repo, have it use the migration skill, and file issues for anything it hits along the way.
Ashley: That’s an incredible number of PRs for such a new project! The fact that other engineers are contributing shows there is value in the project, and that there’s a real desire to more easily run Next.js in places that were previously difficult.
Something that came to my mind immediately when I read the blog post was how vinext would co-exist with Next.js.
What does maintenance look like? How do you see updates in Next.js flowing into vinext, or will they diverge in future?
Steve: This project was born from AI and it’s going to be maintained by AI. We’ve gone all-in on AI development in this repo.
We have a strong bias toward merging AI-written PRs, we actively encourage contributors to use AI, and we spend a lot of time on our agents.md files to make sure the AI has the context it needs.
We have AI doing code review, AI doing security scanning, and AI keeping up with Next.js commits. That’s been the philosophy from day one.
Our primary focus right now is maintaining feature parity with Next.js. But I’m open to a future where we introduce small divergences. We’ve already shipped one feature that Next.js lacks (traffic-aware pre-rendering), and we’ll continue to look for similar opportunities.
We’re also getting requests to address Next.js bugs or implement behavioral modifications. We’re not actively pursuing those today, but I’m open to it depending on how the project grows.
Ashley: That’s really exciting, and I’m looking forward to seeing what happens with vinext in the future. Given the popularity of Vite, and the ability for AI to migrate from one framework to another at pace, it’ll be interesting to see how much it grows.
I know there are lots of engineers out there who are grappling with AI: some are embracing it, others are resistant, and many are somewhere in between.
What skills do you think matter most for engineers working in this new world with heavy AI assisted coding?
Steve: You still need to know what to build and why, to recognize when AI output is structurally flawed even if the tests pass, and to understand your problem space deeply enough to steer effectively.
That ability to clearly articulate what’s right and course-correct is becoming one of the most valuable engineering skills.
Agents are remarkably good at taking feedback, often better than humans are.
Ashley: There’s a lot to be said for taste in this new world we are in, and I think it’s even more important than it was before.
You’ve stated that you want to be as AI-pilled as possible with this project, so do you think AI will eventually be able to maintain a project like vinext without a human in the loop?
Steve: We're close, but not quite there yet.
For me, the truly interesting part of this project isn't Next.js; it's the potential of AI and pushing its limits. If the models improve just a bit more, and if we can establish a few more robust workflows (like Ralph Wiggum loops and leveraging the emerging auto-research techniques), I believe we can reach a point where AI can maintain projects like this without human intervention.
Given the current trajectory of these tools, that's where I think we’ll end up.
Ashley: Thank you so much for doing this interview with me. It’s been incredibly enlightening, answering a number of questions I had when I read the blog post, and I’m sure others had them too.
Good luck with vinext in the future, and I’ll be watching with eagerness to see how the project develops!
Is there anything you’d like to sign off with?
Steve: Go try vinext! It works everywhere, not just Cloudflare.
Running outside Cloudflare is as simple as vite start, and you’re up on any Node server. The more feedback we get, the faster we can move.
And please, use AI to do the migration and have AI file issues. We’ve had really good luck with this kind of AI-first migration surfacing what problems people hit in the wild.



