Feedback on v9.4.0 Core Web Vitals (LightHouse) - Amplitude implementation #12959
-
Thanks for the detailed feedback @Vadorequest 😃
I've never heard of Amplitude before, but it looks great! Thanks for test-driving the current metrics reporting implementation with Amplitude; it's always great to hear how it can be used with different platforms (and whether anything could be improved to make things easier).
Simple was the goal :)
That makes sense. The goal of the API was simply to expose every web vital metric (and a few custom ones) to the user every time one is determined. Since everybody is going to have a different use case for the API, the user decides how to handle the results (send every one to analytics, batch them, only send values that are poor, etc.).
To be honest, I'm sure many folks who use an analytics provider that limits the number of requests will also want to batch instead of sending each metric on its own. Would you like to submit a PR that adds a short section to the docs showing this? I think that would be super useful :) It looks like the logic you laid out works well in your Next.js app, but maybe we could show a super simple example of how to batch metric results together?
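A docs example along those lines could look something like this (a hedged sketch: `makeReportWebVitals`, the `send` callback, and the batch size are illustrative assumptions, not Next.js or Amplitude APIs):

```javascript
// Factory returning a function usable as the `reportWebVitals` export in
// pages/_app. `send` is a placeholder for your analytics call (e.g. one
// Amplitude event carrying several metrics); `batchSize` is an arbitrary choice.
function makeReportWebVitals(send, batchSize = 5) {
  let batch = [];
  return (metric) => {
    // Keep only the fields we care about from the Next.js metric object.
    batch.push({ id: metric.id, name: metric.name, value: metric.value });
    if (batch.length >= batchSize) {
      send({ metrics: batch }); // one analytics event for the whole batch
      batch = [];
    }
  };
}

// In pages/_app you could then export:
// export const reportWebVitals = makeReportWebVitals(myAnalyticsSend);
```

This sends one event per five metrics instead of one per metric, which matters for providers that meter requests.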
-
I have the same problem on my side. LCP is only triggered on first user interaction, even if it happens long before the first user interaction.
-
Since v9.4, Core Web Vitals are available to us so we can track app performance. (Read more at https://nextjs.org/blog/next-9-4#integrated-web-vitals-reporting)
About Amplitude
I've made a first implementation of how to track them using Amplitude analytics. Amplitude is free up to 10 million events per month (then, it's around $50k/year 😨).
Amplitude is much better (IMO) than Google Analytics, both from a developer standpoint (tracking data is much easier and works well with SPAs) and from an analytics standpoint (it's much easier to exploit data, test, iterate, observe the system, etc.).
Demo: https://nrn-v1-hyb-mst-aptd-gcms-lcz-sty-c1-n62chq0lj.now.sh
Feedback:
The current API is very simple. A bit too simple, maybe. Basically, each `metric` is received one by one in the `_app:reportWebVitals` function. While this is not an issue on systems such as Google Analytics, because it doesn't cost anything to send millions of API requests (AFAIK), it's not true with other systems, such as Amplitude (I really, really don't want to reach the 10-million-events limit 😅). Sending them in batches instead of one by one seems like a better solution with Amplitude.
Also, I was surprised by the behaviour, documented below:
There are 3 main "steps": the last one happens on route change (through `<Link>`, basically), and its metrics are received for every link clicked, unlike the 2 other "steps", which only happen once per browser session.
I thought all events would be received at once and I'd just have to send one big analytics event to store them all. Also, some events don't always trigger (like `LCP`: sometimes it fires upon first click, sometimes it doesn't, and I can't tell why).
Eventually, I used batches of 5 and 2 metrics sent at once to avoid sending too many events to Amplitude. My website being embedded as an iframe in other websites, it could have suffered from quite a lot of API requests, which would have eaten my free plan.
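Those partial batches (the leftover 2 metrics) are easy to lose if the user leaves before the batch fills up; a common pattern is an explicit flush wired to the page becoming hidden. A minimal sketch, where `makeMetricsBatcher` and `send` are hypothetical names and only the browser wiring in comments touches real DOM APIs:

```javascript
// Batcher with an explicit flush, so a partial batch (e.g. the leftover
// 2 metrics) is not silently dropped. `send` stands in for the actual
// analytics call.
function makeMetricsBatcher(send, batchSize = 5) {
  let batch = [];
  const flush = () => {
    if (batch.length > 0) {
      send({ metrics: batch });
      batch = [];
    }
  };
  const add = (metric) => {
    batch.push(metric);
    if (batch.length >= batchSize) flush();
  };
  return { add, flush };
}

// In the browser, flush whatever is left when the page is hidden:
// document.addEventListener('visibilitychange', () => {
//   if (document.visibilityState === 'hidden') batcher.flush();
// });
```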
I believe the Next.js team didn't provide those stats in the `_app:render` function itself to avoid useless re-renders (which would have made perfs worse), but I wonder if there isn't a better way for people interested only in the 4-5 most interesting metrics to get them all at once.
The worst thing about this design is that it lives completely outside of the whole Next.js app. I had to rewrite a bunch of stuff that I thought I wouldn't have to duplicate (related to the Amplitude config), because even though what I had done is reusable between all my pages, those metrics live completely outside of the "page" concept.
Also, I'm not sure how to analyse those metrics now that I'm tracking them. I guess it's great to have them around to be able to compare perfs over time, and maybe pinpoint a specific feature that badly impacts performance after being shipped? I have a few ideas, but no particular action plan yet.
Read https://web.dev/vitals/ if you want to dig deeper 😉
Commits:
Branch: https://github.com/UnlyEd/next-right-now/tree/v1-hyb-mst-aptd-gcms-lcz-sty, and more specifically:
- `NextWebVitalsMetricsReport`: https://github.com/UnlyEd/next-right-now/blob/v1-hyb-mst-aptd-gcms-lcz-sty/src/types/nextjs/NextWebVitalsMetricsReport.ts
- `NextWebVitalsMetrics`: https://github.com/UnlyEd/next-right-now/blob/v1-hyb-mst-aptd-gcms-lcz-sty/src/types/nextjs/NextWebVitalsMetrics.ts
- `_app`: https://github.com/UnlyEd/next-right-now/blob/v1-hyb-mst-aptd-gcms-lcz-sty/src/pages/_app.tsx#L80-L119