
Nitro keeps crashing when serving eth_call #2255

Closed
Johnaverse opened this issue Apr 24, 2024 · 5 comments

Comments

@Johnaverse

Describe the bug
Nitro keeps crashing when serving eth_call. The crashes are affecting RPC call performance.
Here is the error log:

ERROR[04-17|14:25:15.510] RPC method eth_call crashed: runtime error: invalid memory address or nil pointer dereference
goroutine 906698369 [running]:
github.com/ethereum/go-ethereum/rpc.(*callback).call.func1()
        /workspace/go-ethereum/rpc/service.go:207 +0x89
panic({0x3fd2220, 0x6862710})
        /usr/local/go/src/runtime/panic.go:884 +0x213
github.com/ethereum/go-ethereum/arbitrum.(*APIBackend).StateAndHeaderByNumberOrHash(0xc09627dc50, {0x4e9cb88, 0xc0365ee550}, {0x0, 0xc0959c6aa0, 0x0})
        /workspace/go-ethereum/arbitrum/apibackend.go:538 +0x112
github.com/ethereum/go-ethereum/internal/ethapi.DoCall({0x4e9cb88, 0xc0365ee550}, {0x4ed6408?, 0xc09627dc50?}, {0x0, 0xc0a0be68b8, 0x0, 0x0, 0x0, 0x0, ...}, ...)
        /workspace/go-ethereum/internal/ethapi/api.go:1196 +0x174
github.com/ethereum/go-ethereum/internal/ethapi.(*BlockChainAPI).Call(0xc09625b180, {0x4e9cb88, 0xc0365ee550}, {0x0, 0xc0a0be68b8, 0x0, 0x0, 0x0, 0x0, 0x0, ...}, ...)
        /workspace/go-ethereum/internal/ethapi/api.go:1254 +0x1a5
reflect.Value.call({0xc0967a8200?, 0xc0967ab010?, 0x7f8d8c127340?}, {0x446e037, 0x4}, {0xc096848cf0, 0x6, 0x0?})
        /usr/local/go/src/reflect/value.go:586 +0xb07
reflect.Value.Call({0xc0967a8200?, 0xc0967ab010?, 0x16?}, {0xc096848cf0?, 0x2?, 0x4?})
        /usr/local/go/src/reflect/value.go:370 +0xbc
github.com/ethereum/go-ethereum/rpc.(*callback).call(0xc0967e9b00, {0x4e9cb88?, 0xc0365ee550}, {0xc0ad1fbd78, 0x8}, {0xc0afc4af00, 0x4, 0x15a0a77?})
        /workspace/go-ethereum/rpc/service.go:213 +0x3c5
github.com/ethereum/go-ethereum/rpc.(*handler).runMethod(0xc04239d0a0?, {0x4e9cb88?, 0xc0365ee550?}, 0xc1256a8a10, 0x4?, {0xc0afc4af00?, 0x151abb0?, 0x4030a20?})
        /workspace/go-ethereum/rpc/handler.go:565 +0x45
github.com/ethereum/go-ethereum/rpc.(*handler).handleCall(0xc07ef71360, 0xc15bae9320, 0xc1256a8a10)
        /workspace/go-ethereum/rpc/handler.go:512 +0x239
github.com/ethereum/go-ethereum/rpc.(*handler).handleCallMsg(0xc07ef71360, 0xc15bae9380?, 0xc1256a8a10)
        /workspace/go-ethereum/rpc/handler.go:470 +0x237
github.com/ethereum/go-ethereum/rpc.(*handler).handleNonBatchCall(0xc07ef71360, 0xc15bae9320, 0xc1256a8a10)
        /workspace/go-ethereum/rpc/handler.go:296 +0x1ae
github.com/ethereum/go-ethereum/rpc.(*handler).handleMsg.func1.1(0x4e9cb88?)
        /workspace/go-ethereum/rpc/handler.go:269 +0x27
github.com/ethereum/go-ethereum/rpc.(*handler).startCallProc.func1()
        /workspace/go-ethereum/rpc/handler.go:387 +0xc5
created by github.com/ethereum/go-ethereum/rpc.(*handler).startCallProc
        /workspace/go-ethereum/rpc/handler.go:383 +0x8d
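
For context, here is a minimal sketch of the kind of request that exercises this code path: a plain eth_call over JSON-RPC against the node's HTTP endpoint (port 9657 per the flags below). The contract address and call data are placeholders, not taken from this report:

```go
// Minimal eth_call probe against the Nitro HTTP endpoint.
// The target address and call data are placeholders for illustration only.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload := []byte(`{
		"jsonrpc": "2.0",
		"id": 1,
		"method": "eth_call",
		"params": [
			{"to": "0x0000000000000000000000000000000000000000", "data": "0x"},
			"latest"
		]
	}`)

	resp, err := http.Post("http://127.0.0.1:9657", "application/json", bytes.NewReader(payload))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```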

Flags

--execution.caching.archive
--persistent.chain=/var/lib/nitro/
--persistent.global-config=/tmp/
--parent-chain.connection.url=l1_url
--parent-chain.blob-client.beacon-url=l1_beacon_url
--chain.name=arb1
--http.api=net,web3,eth,debug
--http.corsdomain=*
--http.addr=0.0.0.0
--http.port=9657
--http.vhosts=*
--log-level=3
--execution.rpc.classic-redirect=http://0.0.0.0:9656
--metrics
--metrics-server.addr=0.0.0.0
--metrics-server.port=6060

Build version
Built locally with v2.3.3.

@Johnaverse
Author

It looks like this occurs when Nitro is serving a high volume of eth_call requests.
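
If it helps with reproduction, here is a rough sketch of generating a burst of concurrent eth_call requests. The endpoint, concurrency level, and call parameters are placeholders; the actual calls coming from graph-node are not known here:

```go
// Rough load sketch: fire many eth_call requests at the node concurrently.
// Endpoint and call parameters are placeholders, not taken from this report.
package main

import (
	"bytes"
	"net/http"
	"sync"
)

func main() {
	payload := []byte(`{"jsonrpc":"2.0","id":1,"method":"eth_call",` +
		`"params":[{"to":"0x0000000000000000000000000000000000000000","data":"0x"},"latest"]}`)

	var wg sync.WaitGroup
	for i := 0; i < 500; i++ { // 500 concurrent calls, purely illustrative
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Post("http://127.0.0.1:9657", "application/json", bytes.NewReader(payload))
			if err != nil {
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait()
}
```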

@matthewdarwin

We have Nitro integrated with graph-node, and graph-node is sending a huge volume of eth_call requests.

We checked to make sure we are not running out of RAM.

Please advise if you need any additional info to help troubleshoot this issue.

@PlasmaPower
Collaborator

This will be fixed by OffchainLabs/go-ethereum#312, but it shouldn't affect anything other than the error message right now. The RPC method crashing does not cause the node to crash.
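
For readers wondering why the panic does not take the process down: the go-ethereum RPC layer wraps each method invocation in a deferred recover (that is the callback.call.func1 frame at the top of the trace), so a panic inside eth_call is caught and logged rather than propagating. A simplified sketch of that pattern, not the exact go-ethereum code:

```go
// Simplified sketch of the recover-and-log pattern around an RPC method call.
// This mirrors the shape of the handler seen in the trace (rpc/service.go),
// but it is not the exact go-ethereum implementation.
package main

import (
	"fmt"
	"log"
)

// callMethod runs fn and converts a panic into a logged error,
// so the serving goroutine (and the process) keeps running.
func callMethod(name string, fn func() (string, error)) (result string, err error) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("RPC method %s crashed: %v", name, r)
			err = fmt.Errorf("method handler crashed")
		}
	}()
	return fn()
}

func main() {
	// A handler that dereferences a nil pointer, like the trace above.
	var header *struct{ Number uint64 }
	_, err := callMethod("eth_call", func() (string, error) {
		return fmt.Sprint(header.Number), nil // nil pointer dereference -> panic
	})
	fmt.Println("node still running, got error:", err)
}
```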

@Johnaverse
Author

Agreed. The node is still running; it just shows the error message, and not 100% of eth_call requests are served.

@joshuacolvin0
Member

Thank you for confirming that the node is not actually crashing and is just outputting the error message.
eth_call is exceptionally resource intensive, and running an indexer like graph-node is especially so. Usually a single node is not enough to handle the large volume of calls from an indexer, so commands are likely timing out or simply going unhandled.
Each incoming command is handled by a separate thread, so saturating all CPU cores could cause slowdowns. The more likely issue is that disk latency is slowing things down; we recommend local NVMe drives for the lowest latency. If you are already using NVMe, then you probably need to run a load-balanced cluster of Nitro nodes to handle the volume required by graph-node.
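
For anyone who ends up needing the cluster route, here is a minimal sketch of round-robin load balancing across several Nitro RPC endpoints using Go's standard reverse proxy. The backend URLs are placeholders, and any dedicated load balancer works just as well:

```go
// Minimal round-robin reverse proxy across several Nitro RPC endpoints.
// Backend URLs are placeholders; a dedicated load balancer works equally well.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	backends := []string{
		"http://nitro-1:9657",
		"http://nitro-2:9657",
		"http://nitro-3:9657",
	}

	var targets []*url.URL
	for _, b := range backends {
		u, err := url.Parse(b)
		if err != nil {
			log.Fatal(err)
		}
		targets = append(targets, u)
	}

	var counter uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Pick the next backend in round-robin order.
			t := targets[atomic.AddUint64(&counter, 1)%uint64(len(targets))]
			req.URL.Scheme = t.Scheme
			req.URL.Host = t.Host
			req.Host = t.Host
		},
	}

	log.Println("load balancing on :8545")
	log.Fatal(http.ListenAndServe(":8545", proxy))
}
```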

If you have further issues, feel free to followup on the Discord #node-runners channel https://discord.gg/arbitrum
