Client: Implement eth_getLogs RPC endpoint #1520
Comments
Here is a really nice deep-dive article with a good overview on how
Would there be any issue for compatibility with The Merge if we followed the same rules as Alchemy, as outlined in the post shared above (or perhaps made them stricter or looser depending on our needs)? Namely:
EDIT: actually it looks like they've implemented a response size limit since then (from their docs):
Yeah, that sounds reasonable. I guess the most important thing is that we generally have some constants in place to actually set these values; then it will be easy to adapt later on if we see that some limits are impractical for some reason.
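A minimal sketch of what such constants could look like, with hypothetical names and values (the actual limits would have to be tuned once we see how the endpoint behaves in practice):

```ts
// Hypothetical eth_getLogs limits -- names and values here are illustrative only.
const GET_LOGS_MAX_BLOCK_RANGE = 2500 // max blocks per query
const GET_LOGS_MAX_RESULTS = 10000 // max log entries returned per call

function validateGetLogsRange(fromBlock: bigint, toBlock: bigint): void {
  if (toBlock - fromBlock + BigInt(1) > BigInt(GET_LOGS_MAX_BLOCK_RANGE)) {
    throw new Error(`block range too large (max ${GET_LOGS_MAX_BLOCK_RANGE} blocks)`)
  }
}
```

Keeping these as named constants makes it straightforward to adjust a limit later if it turns out to be impractical.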
After doing a bit of research into how Geth handles logs:
Update on this relational DB idea: just had another look at the API. I have to say that I didn't get the whole picture before, since I just read "optional" on all the parameters and assumed that a "context-less" search on whatever topic would be possible (so over the whole range of logs from genesis to present), which would likely not be doable with a key-value store. 😜 I overlooked the default parameters for the block (being set to "latest"). That makes it a totally different story, of course. So I guess one implements this by looking at the respective blocks, and - from reading the Geth code - the log entries are then stored by block. If that's correct, I guess the relational DB idea is very much off the table and we can stick to what you guys suggested (one additional LevelDB for this kind of thing).
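To make the block-scoped lookup concrete, here is a rough sketch of how a getLogs query could walk the requested block range and filter per block. The `getLogsForBlock` helper is a placeholder for whatever per-block index the database ends up exposing, and the filter semantics are a simplified version of the JSON-RPC ones (single topic per position rather than lists of alternatives):

```ts
// Simplified log shape and byte comparison helper for the sketch
type Log = { address: Uint8Array; topics: Uint8Array[]; data: Uint8Array }

const equalBytes = (a: Uint8Array, b: Uint8Array) =>
  a.length === b.length && a.every((v, i) => v === b[i])

async function getLogs(
  fromBlock: bigint,
  toBlock: bigint,
  addresses: Uint8Array[], // empty array = match any address
  topics: (Uint8Array | null)[], // null = wildcard at that position
  getLogsForBlock: (n: bigint) => Promise<Log[]> // hypothetical per-block index
): Promise<Log[]> {
  const matches: Log[] = []
  for (let n = fromBlock; n <= toBlock; n += BigInt(1)) {
    for (const log of await getLogsForBlock(n)) {
      const addressOk =
        addresses.length === 0 || addresses.some((a) => equalBytes(a, log.address))
      const topicsOk = topics.every(
        (t, i) => t === null || (log.topics[i] !== undefined && equalBytes(t, log.topics[i]))
      )
      if (addressOk && topicsOk) matches.push(log)
    }
  }
  return matches
}
```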
I think we can safely call this one closed via #1185
During the Merge interop event it turned out that Eth2 clients need the eth_getLogs JSON-RPC endpoint, so we should implement it (as a quick hack during the work on #1512 we just mocked the call and returned a static response).
For this kind of historical data we would need some database to store it along client sync. The latest state after some discussion with Ryan is that we likely wouldn't want to store it in the configuration database (it would be misplaced there), but we also don't want to add a new database for every new type of data stored. So we settled on adding one additional database for all current and future historical data (e.g. receipts as well, if needed) and using some prefixing for access (log_* or similar). If there are drawbacks we might have overlooked, let us know.

Update: from looking at the API in the RPC description linked above, with the different possible filter parameters (block numbers, address, topic), I wonder if we would rather need a small relational database here? 🤔
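To make the prefixing idea a bit more concrete, here is a rough sketch of how a single shared LevelDB could hold several kinds of historical data under per-type key prefixes. The prefixes, key layout, and use of the level package are assumptions for illustration, not the scheme that was eventually implemented:

```ts
import { Level } from 'level'

// One shared database for all historical data, with per-type key prefixes.
// Prefixes and key layout are illustrative assumptions.
const metaDB = new Level<string, Uint8Array>('./datadir/meta', { valueEncoding: 'view' })

const LOG_PREFIX = 'log_'
const RECEIPT_PREFIX = 'receipt_'

// Zero-padded hex block numbers keep keys lexicographically sorted,
// so a block range maps to a contiguous key range.
const blockKey = (prefix: string, blockNumber: bigint) =>
  `${prefix}${blockNumber.toString(16).padStart(16, '0')}`

async function putBlockLogs(blockNumber: bigint, encodedLogs: Uint8Array) {
  await metaDB.put(blockKey(LOG_PREFIX, blockNumber), encodedLogs)
}

async function getBlockLogs(blockNumber: bigint): Promise<Uint8Array | undefined> {
  try {
    return await metaDB.get(blockKey(LOG_PREFIX, blockNumber))
  } catch {
    return undefined // not found
  }
}
```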
Logs are produced during VM execution in runTx.ts and runBlock.ts respectively.
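As a sketch of where the data could be captured, the logs carried in the receipts returned from block execution could be encoded and written to the shared database right after each block is run. The receipt/log shapes assumed below are illustrative, and putBlockLogs is the hypothetical helper from the sketch above:

```ts
import { RLP } from '@ethereumjs/rlp'

// Assumed shapes for illustration: a log as [address, topics, data] and a
// receipt carrying its logs, roughly following how the VM surfaces them.
type LogTuple = [Uint8Array, Uint8Array[], Uint8Array]
interface ReceiptLike {
  logs: LogTuple[]
}

async function persistBlockLogs(blockNumber: bigint, receipts: ReceiptLike[]) {
  // Flatten per-receipt logs into one list for the block and RLP-encode it
  const blockLogs = receipts.flatMap((r) => r.logs)
  await putBlockLogs(blockNumber, RLP.encode(blockLogs)) // hypothetical helper from above
}
```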