2024-03-06T05:06:23.921Z ERROR [hero-core/index] UnhandledRejection { context: {}, sessionId: null, sessionName: undefined } RangeError: Maximum call stack size exceeded
at MirrorPage.onPageEvents (/Users/andy/repos/ec2/stag-secret-agent/app/node_modules/timetravel/lib/MirrorPage.ts:461:28)
at Tab.emit (node:events:517:28)
at Tab.emit (/Users/andy/repos/ec2/stag-secret-agent/app/node_modules/commons/lib/TypedEventEmitter.ts:158:18)
at FrameEnvironment.onPageRecorderEvents (/Users/andy/repos/ec2/stag-secret-agent/app/node_modules/core/lib/FrameEnvironment.ts:600:14)
at Tab.onPageCallback (/Users/andy/repos/ec2/stag-secret-agent/app/node_modules/core/lib/Tab.ts:934:47)
at Page.emit (node:events:517:28)
at Page.emit (/Users/andy/repos/ec2/stag-secret-agent/app/node_modules/commons/lib/TypedEventEmitter.ts:158:18)
at /Users/andy/repos/ec2/stag-secret-agent/app/agent/main/lib/Page.ts:263:14
at DevtoolsSession.<anonymous> (/Users/andy/repos/ec2/stag-secret-agent/app/agent/main/lib/FramesManager.ts:229:9)
I found a GitHub URL that exhibits a couple of problems, I believe.
https://gist.github.com/1wErt3r/4048722
The first problem is that the UnhandledRejection handler Hero registers gets triggered, with the stack trace shown at the top of this report.
The thing is, I have found that any trigger of this UnhandledRejection is so often associated with subsequent failures and timeouts when scraping other pages that I have decided to force an immediate restart first, as sketched below.
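Concretely, the handler is roughly along these lines (a sketch, not the verbatim code; the log wording is illustrative):

// Sketch: exit hard on any unhandled rejection so PM2 can respawn the process.
// Hero-core also listens for this event (see the log above); this handler
// additionally exits so the whole node server gets a clean slate.
process.on('unhandledRejection', (reason: unknown) => {
  console.error('UnhandledRejection; exiting so PM2 restarts us:', reason);
  process.exit(1); // non-zero exit: PM2 treats it as a crash and respawns
});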
That way my node server immediately quits and restarts, "resetting" existing scraping sessions of course, but this is better than causing other scrape failures, and to the caller a reset socket is just a retry. For example, for this and certain other scraping exceptions, I have found that I can dramatically reduce my timeout error rate for both hero.goto() and my various waitForLoad calls by forcing such abrupt restarts through the PM2 wrapper around my node server.
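For reference, a minimal PM2 configuration with this quit-and-respawn behavior looks roughly like the following sketch (the app name, script path, and delay are placeholders):

// ecosystem.config.js (sketch): PM2 respawns the server whenever it exits.
module.exports = {
  apps: [
    {
      name: 'scraper',            // placeholder app name
      script: './dist/server.js', // placeholder entry point
      autorestart: true,          // respawn on any exit, crash or clean
      restart_delay: 1000,        // wait 1s before respawning (optional)
    },
  ],
};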
In this particular case, it appears that timetravel may have a recording bug that blows up the stack. It makes me wonder, regardless of how this issue is resolved: is it possible for those of us who don't get value from timetravel to turn it off? Would that speed things up? Reduce the risk of bugs?
The other interesting thing about my scrape of this page is that within just 20 seconds of page load (on my localhost Mac), the page reports "screen" dimensions of:
{
  scrollHeight: 333057,
  viewport: {
    screenWidth: 1920,
    screenHeight: 1080,
    windowWidth: 1241,
    windowHeight: 905
  },
  offsetWidth: 1241,
  offsetHeight: 333056
}
and the outerHtml is about 7 million characters. I took a quick look at this page, and it is quite huge, but is it really that big?
Let me share my snippet of test code:
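It is roughly along these lines (a sketch against Hero's documented API, not the verbatim snippet; the waits and the properties measured are assumptions):

import Hero from '@ulixee/hero-playground';

async function main(): Promise<void> {
  const hero = new Hero();
  try {
    await hero.goto('https://gist.github.com/1wErt3r/4048722');
    await hero.waitForPaintingStable();

    // AwaitedDOM property reads resolve inside the browser page.
    const scrollHeight = await hero.document.body.scrollHeight;
    const outerHtml = await hero.document.documentElement.outerHTML;
    console.log({ scrollHeight, outerHtmlLength: outerHtml.length });
  } finally {
    await hero.close();
  }
}

main().catch(err => {
  console.error(err);
  process.exit(1);
});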
Hi Andy, thanks for the report. The maximum call stack is definitely an issue, and possibly the source of your CPU issue from the other bug. Unfortunately, the DOM recording isn't only for Time Travel; it's also used for navigation tracking on the page. I think we could certainly optimize this, and maybe at least add a way to skip DOM recording entirely.
As for the dimensions and character count of the page, those are the same values I get loading it into a regular browser, so they seem valid.