Replies: 35 comments 40 replies
-
Thanks for this very detailed summary, Evan You!
This comment was marked as off-topic.
-
Oh wow, thanks for this detailed clarification. I actually fell into the trap without even trying a comparison, and meanwhile I've been longing to test Vite. The repo for the benchmark tests is highly appreciated. Thanks!
-
Thanks for the detailed comparison. It's much appreciated. The atmosphere feels great again, knowing what can be expected from both technologies.
-
Shouldn't the default configuration be used for comparison?
-
As a dev-time tool, it should support SWC as the default.
-
Doing open source is hard. Vercel should do better.
-
This is super concise and clear. I also feel that OSS should be based on mutual respect and open communication.
-
I agree, a touch of scientific rigor would make those bold claims more trustworthy. Otherwise it really just sounds like marketing hogwash. It also damages the credibility of Vercel, as well as Tobias Koppers, which is sad.
-
Well, I really love Vue and Vite, and in my tests Vite comes out ahead. Even if that weren't the case, I would continue with Vue/Vite, because I really believe in their power and the community is the greatest.
-
Would love to see Vercel's 30k-module benchmark GitHub repo. Thanks for the discussion!
This comment was marked as off-topic.
-
Turbopack even claims speeds 20x faster than Vite.
-
Long live OSS and mutual respect 🚀
-
Great job. Evan is absolutely right: competition is good as long as it is healthy and transparent. Thank you for your great contribution to the community.
-
Great explanation.
-
Who cares? By the time Turbopack is fully production-ready, Vite will have reached v4, so much so that "10x faster than Vite" will be moot.
-
Very disappointed to see this kind of debate (0.01 vs 0.09) over something humans can't even perceive, all for marketing and money. Time is a treasure and should be spent on something more meaningful.
-
Given that Vite is not ashamed to build upon other tools (esbuild, Rollup), I wonder if it will, at some point, benefit from using Bun instead of Node 🤔
-
Could you please share your thoughts on
Question:
-
We got a new benchmark in here 😄
-
I'm a frontend JS developer, not a Rust backend person, so why are we comparing a compiled-language tool to debuggable JS code in the first place? If speed is important, then how it is achieved matters too: we all know an assembly bundler would be even faster, but we don't use one; we stuck with webpack instead. We trade a small speed footprint at compile time for better DX and a JS-native process. Vite doesn't currently use WebAssembly, which could increase speed at the cost of debugging and mixing languages. Vite's benefit is what webpack's was before: a JS solution that bundles everything and lets us JS developers play with it the way we like :) Everything in Vite is JS and can be adjusted to our needs. That's what should be showcased when people weigh the benefits of Vite; I'm proud to use Vite because I can be sure I'm coding in JS/TypeScript. My question is: does the comparison account for the time required to deal with Rust when you compile the code and hit an issue? Introduce a bug into the toolchain, and it gets resolved 10x faster in Vite. That's what the test suite doesn't include: the benchmark repository lacks scenarios for fixing Vite or Turbopack issues themselves. "30k modules" is a far less realistic scenario than a simple bug in Rust ;p
-
Not optimistic about Turbopack's server rendering.
-
Thank you, Evan!
-
Completely agree, @yyx990803. IMHO it's often better to listen to the creators/developers than to aggressive marketing departments. Yesterday I was listening to an interview with Tobias Koppers on Devtools.fm; that interview shows a lot more respect and nuance when it comes to comparing Vite and Turbopack.
-
I wanted to test the performance of a hammer vs. Turbopack vs. Vite, so I set up one Raspberry Pi with Vite and another with Turbopack, then started a test to see if they could complete the job before I smashed them with the hammer, or if the hammer was faster. Surprisingly, the Pi running Turbopack exploded almost instantly, destroying both the Pi running Vite and my hammer. So while it never completed the job, I think Turbopack was the winner, since it took out both of the other contestants.
-
I would be curious to see a revisit of these tests. Vite has had a few major performance boosts since, and Turbopack is still in beta. It may well be that the speed difference is almost nonexistent overall by now.
-
A week ago, Vercel announced Turbopack, a Rust-based successor to Webpack.
In the announcement, one of the headlines was that Turbopack is "10x faster than Vite". This line is repeated in various marketing materials from Vercel, including tweets, blog posts, and marketing emails sent to Vercel users. The benchmark graphs are also included in Turbopack's documentation, originally showing that Next 13 with Turbopack is able to perform a React Hot-Module Replacement (HMR) update in 0.01s, while for Vite it takes 0.09s. There were also benchmarks for cold start performance, but since none of the cold start benchmarks showed a 10x advantage, we can only assume the "10x faster" claim was based on HMR performance.
Vercel did not include any links to the benchmarks they used to produce these numbers in the marketing materials or the documentation. So I got curious and decided to verify the claims myself with a benchmark using the freshly released Next 13 and Vite 3.2. The code and methodology are publicly available here.
The gist of my methodology is comparing HMR performance by measuring the delta between the following two timestamps:
1. The time when a source file is modified, recorded via a separate Node.js process watching for file changes;
2. The time when the updated React component is re-rendered, recorded via a `Date.now()` call directly in the component's render output. Note this call happens during the virtual DOM render phase of the component, so it isn't affected by React reconciliation or actual DOM updates.

The benchmark also measures the numbers in two different cases:

1. The "root" case, where the component imports 1,000 different child components and renders them together.
2. The "leaf" case, where the component is imported by the root but has no imports or child components of its own.
Nuances
Before we jump to the numbers, there are a few additional nuances that are worth mentioning:
React Server Components
Next 13 introduced a major architectural shift: components are now server components by default unless the user explicitly opts in to client mode with the `'use client'` directive. Not only is it the default, the Next documentation also recommends that users stay in server mode as much as possible to improve end-user performance.

My initial benchmark measured Next 13's HMR performance with both the root and leaf components in server mode. The result showed that Next 13 was actually slower in both cases, and the difference was significant for leaf components.
Round 1 snapshot (Next w/ RSC, Vite w/ Babel)
When I posted the numbers on Twitter, it was quickly pointed out that I should be benchmarking Next components without RSC to make it equal. So I added the `'use client'` directive to the Next root component to opt in to client mode. Indeed, in client mode Next HMR improved significantly, going 2x faster than Vite:

Round 2 snapshot (Next w/o RSC, Vite w/ Babel)
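For reference, opting a Next 13 component into client mode is a single directive at the top of the module. A hypothetical root component in such a setup might look like this (names are illustrative, not the benchmark's actual code):

```typescript
// app/Root.jsx (hypothetical) -- the directive below opts this module and
// everything it imports out of React Server Components, into client mode.
"use client";

export default function Root() {
  // Rendering Date.now() is how the benchmark obtains its
  // "component re-rendered" timestamp.
  return <p>{Date.now()}</p>;
}
```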
SWC vs. Babel Transforms
Our goal is to make the benchmark focus only on the HMR performance difference. To make sure we are actually comparing apples to apples, we should also eliminate another variable: the fact that Vite's default React preset uses Babel to transform React HMR and JSX.
React HMR and JSX transforms are not features coupled to build tools. They can be done via either Babel (JS-based) or SWC (Rust-based). esbuild can also transform JSX, but lacks support for HMR. SWC is significantly faster than Babel (20x single-threaded, 70x on multiple cores). The reason Vite currently defaults to Babel is a trade-off between install size and practicality: SWC's install size is quite heavy (58MB in `node_modules`, whereas Vite itself is only 19MB), and many users still rely on Babel for other transforms, so a Babel pass is somewhat inevitable for them. However, that may change in the future.

More importantly, the Webpack implementation in the same benchmark is also using SWC.
Vite core does not rely on Babel. Using SWC instead of Babel to handle React transforms does not require anything to be changed in Vite itself -- it is only a matter of replacing the default React plugin with vite-plugin-swc-react-refresh. After switching, we saw significant improvement for Vite in the root case, catching up with Next:
Interestingly, the growth curve here shows that Next/turbo got 4x slower in the root case compared to the leaf case, whereas Vite only got 2.4x slower. This implies a curve where Vite HMR scales better in even larger components.
In addition, switching to SWC should also improve Vite's cold start metrics in Vercel's benchmarks.
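The plugin swap described above is a small config change. A minimal sketch follows; the plugin's export name and the `esbuild.jsx` option are assumptions, so check the package's README for the exact API:

```typescript
// vite.config.ts -- hypothetical config replacing the default Babel-based
// @vitejs/plugin-react with the SWC-based refresh plugin mentioned above.
import { defineConfig } from "vite";
import { swcReactRefresh } from "vite-plugin-swc-react-refresh"; // export name assumed

export default defineConfig({
  // SWC handles React Refresh; JSX itself can still go through esbuild.
  plugins: [swcReactRefresh()],
  esbuild: { jsx: "automatic" },
});
```

Nothing in Vite core changes; only the React plugin is swapped, which is exactly why this substitution is fair game for the benchmark.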
Performance on Different Hardware
Since this is a composite benchmark that involves both Node.js and native Rust parts, there will be non-trivial variance on different hardware. The numbers I posted were gathered on my M1 MacBook Pro. Other users have run the same benchmark on different hardware and reported different results. In some cases Vite is faster in the root case, whereas in others Vite is significantly faster in both cases.
Vercel's Clarification
After I posted my benchmark, Vercel published a blog post clarifying their benchmark methodologies, and made their benchmark available for public verification. While this probably should have been done on day one, it is definitely a step in the right direction.
After reading the post and benchmark code, here are a few key takeaways:
The Vite implementation is still using the default, Babel-based React plugin, while both Turbopack and the Webpack implementation in the same benchmark are using SWC. This alone makes the benchmark a fundamentally unfair comparison.
There were number rounding issues in the original numbers for the 1k component case - Turbopack's 15ms was rounded down to 0.01s while Vite's 87ms was rounded up to 0.09s. This further got marketed as a 10x advantage when the original numbers were close to 6x.
Vercel's benchmark uses the "browser eval time" of the updated module as the end timestamp, instead of React component re-render time. The former is a theoretical timestamp, while the latter reflects end to end HMR update speed perceived by the user. I also found flaws in how the eval vs. update timestamps are gathered for Vite (see update at the end).
The post included a graph showing that Turbopack can be 10x faster than Vite when the total module count surpasses 30k. However, both tools hold near-constant HMR times all the way up to 10k modules; it is only above the 20k module count that Vite's HMR curve starts to climb. Considering that Vite apps pre-bundle their dependencies, a project with 20k+ source modules is extremely unlikely in reality. Using the 30k number to justify the 10x claim feels like cherry-picking.
To sum up, the "10x faster than Vite" claim only holds if all of the following are true:

1. Vite is using the slower Babel-based React transforms, while Turbopack uses SWC;
2. The comparison is based on the theoretical "module evaluation time" rather than the end-to-end update time perceived by the user;
3. The project contains an unrealistically large number (30k+) of modules.
What is a "Fair" Comparison?
If what we are trying to compare is "out-of-the-box default", then we should compare with RSC enabled in Next, since that is the default and is what Next is actively encouraging users to use. Since Vercel's benchmark is not using RSC, and is measuring the "module evaluation time" to exclude the variance caused by React's HMR runtime, it should be fair to assume that the benchmark's goal is to perform an apples-to-apples comparison of the HMR mechanism inherent to Vite and Turbopack.
With that premise, unfortunately, the fact that Vite is still using Babel in the benchmark makes it not an equal scenario, and still leaves the 10x claim invalid. It should be considered inaccurate until the numbers are updated with Vite using SWC transforms.
In addition, I believe most would agree that:
30k modules is an extremely unlikely scenario for the vast majority of users. With Vite using SWC, the number of modules needed to reach the 10x claim will likely grow to be even more unrealistic. While it is theoretically possible, using it to justify the kind of marketing Vercel has been pushing seems disingenuous.
Users care more about end-to-end HMR performance, i.e. the time from saving to seeing changes reflected, compared to the theoretical "module evaluation" timing. When seeing "10x faster updates", an average user would think in terms of the former instead of the latter. Vercel conveniently omitted this caveat in its marketing. In reality, the end-to-end HMR for a server component (the default) in Next is slower than that in Vite.
As the author of Vite, I am glad to see a well-funded company like Vercel making significant investments into improving frontend tooling. We may even leverage Turbopack in Vite in the future where applicable. I believe healthy competition in the OSS space eventually benefits all developers.
However, I also believe that OSS competition should be based on open communication, fair comparisons, and mutual respect. It is disappointing and concerning to see aggressive marketing using cherry-picked, non-peer-reviewed, borderline-misleading numbers typically only found in commercial competition. As a company built on top of its OSS success, I believe Vercel can do better.
Update

I managed to clone and run Vercel's benchmarks myself, and also updated the benchmark to use SWC with Vite. The benchmark also includes an `hmr_to_commit` metric, which is not mentioned anywhere other than in the benchmark's own README. After running the benchmarks, I noticed that the `hmr_to_eval` and `hmr_to_commit` suites show exactly the same numbers for Vite (~100ms), while for Turbopack the number goes from 15ms to 54ms. Since the eval-to-commit cost is real, the only reasonable explanation is that the benchmark is flawed: the `hmr_to_eval` suite fails to correctly capture Vite's module eval time (possibly due to mechanism differences in native ESM) and reports `hmr_to_commit` numbers instead. Otherwise, we cannot explain why the additional overhead of committing changes only applies to Turbopack and not Vite.

Considering that:

- the `hmr_to_eval` suite is flawed, and that
- `hmr_to_commit` better reflects the end-to-end performance perceived by the user,

I believe it makes more sense to use `hmr_to_commit` as the measurement metric. This would bring the speed advantage down to below 2x (~100ms vs. ~54ms), which aligns with the numbers I'm seeing from my own benchmarks.