64bit memory calculations #19210
Conversation
Force-pushed from 938258b to 1f5c28d.
func calcMemSize(off, l *big.Int) *big.Int {
// calcMemSize64 calculates the required memory size, and returns
// the size and whether the result overflowed uint64
func calcMemSize64(off, l *big.Int) (uint64, bool) {
if !l.IsUint64() {
return 0, true
}
return calcMemSizeWithUint(off, l.Uint64())
core/vm/common.go (outdated)
return 0, true
}
// Check that offset doesn't overflow
if off.BitLen() > 64 {
IsUint64()
core/vm/common.go (outdated)
if val < offset64 {
	return 0, true
}
return val, false
`return val, val < offset64` instead of the condition
Updated, PTAL
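Putting the snippets and suggestions from this thread together, the resulting helpers could look roughly like the following. This is a sketch assembled from the review comments above, not necessarily the exact merged code:

```go
package vm // illustrative package name

import "math/big"

// calcMemSize64 calculates the required memory size and reports whether the
// result overflowed uint64.
func calcMemSize64(off, l *big.Int) (uint64, bool) {
	if !l.IsUint64() {
		return 0, true
	}
	return calcMemSizeWithUint(off, l.Uint64())
}

// calcMemSizeWithUint is the same calculation with the length already a uint64.
func calcMemSizeWithUint(off *big.Int, length64 uint64) (uint64, bool) {
	// a zero length needs no memory, regardless of the offset
	if length64 == 0 {
		return 0, false
	}
	// an offset that does not fit into a uint64 always overflows
	if !off.IsUint64() {
		return 0, true
	}
	offset64 := off.Uint64()
	val := offset64 + length64
	// if the sum is smaller than one of its parts, the addition wrapped around
	return val, val < offset64
}
```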
core/vm/interpreter.go (outdated)
// for a call operation is the value. Transferring value from one
// account to the others means the state is modified and should also
// return with an error.
if operation.writes || (op == CALL && stack.Back(2).BitLen() > 0) {
Could we use `stack.Back(2).Sign()` here instead of `BitLen()`?
Updated, I removed a couple of occurrences where …
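For reference, a minimal sketch of the check with `Sign()` in place of `BitLen()`; the function and parameter names are stand-ins for the expressions in the diff above:

```go
package vm // illustrative package name

import "math/big"

// isStateModifyingCall mirrors the condition from the diff: callValue stands
// in for stack.Back(2) and opWrites for operation.writes. Stack values are
// non-negative, so Sign() != 0 is equivalent to BitLen() > 0 and reads as a
// plain "is non-zero" check.
func isStateModifyingCall(opWrites, isCall bool, callValue *big.Int) bool {
	return opWrites || (isCall && callValue.Sign() != 0)
}
```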
LGTM
* core/vm: remove function call for stack validation from evm runloop
* core/vm: separate gas calc into static + dynamic
* core/vm: optimize push1
* core/vm: reuse pooled bigints for ADDRESS, ORIGIN and CALLER
* core/vm: use generic error message for jump/jumpi, to avoid string interpolation
* testdata: fix tests for new error message
* core/vm: use 64-bit memory calculations
* core/vm: fix error in memory calculation
* core/vm: address review concerns
* core/vm: avoid unnecessary use of big.Int:BitLen()
This PR replaces #19203.
This PR contains six separate optimizations to the evm runloop. For review, it's probably simplest to verify each commit separately, since they are completely standalone, and some are larger than others. For the larger ones, I've tried to make them as easy to verify as possible, by making the changes verifiable side-by-side with the original code.
The changes are:
1. Remove function call for stack validation

Previously, we always called `operation.validateStack`. Since `operation.validateStack` is a pointer, this call cannot be inlined by the compiler. It has been replaced with `minStack` and `maxStack` on the `operation`, which can be checked immediately in the runloop.

Additionally, I removed the call to `enforceRestrictions`, by inlining it.
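To illustrate the idea, here is a hedged sketch of the inlined check; the `stackLimit` constant, helper names and error texts are illustrative, not copied from the PR:

```go
package vm // illustrative package name

import "fmt"

const stackLimit = 1024 // protocol-defined maximum stack depth

// Precomputed once per operation when the jump table is built, so the
// runloop only has to compare two ints against the current stack length.
func minStack(pops, pushes int) int { return pops }
func maxStack(pops, pushes int) int { return stackLimit + pops - pushes }

// validateStackBounds shows the check that the runloop now performs inline,
// without going through a function pointer on the operation.
func validateStackBounds(sLen, min, max int) error {
	if sLen < min {
		return fmt.Errorf("stack underflow (have %d, need %d)", sLen, min)
	}
	if sLen > max {
		return fmt.Errorf("stack limit reached (%d, limit %d)", sLen, max)
	}
	return nil
}
```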
2. Separate gas calculation into constant and dynamic

Previously, we always called `operation.gasCost(in.gasTable, in.evm, contract, stack, mem, memorySize)` (again, cannot be inlined). For a lot of operations, this method only returned a constant. I instead added the constant gas values to those operations, which can be checked immediately. This makes a difference particularly for all cheap operations which have no dynamic part, which are the majority of the operations.
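A hedged sketch of what the two-phase charge could look like; the shape of the operation fields (a constant cost plus a nilable dynamic-gas function) and the error value are assumptions for illustration:

```go
package vm // illustrative package name

import "errors"

var errOutOfGas = errors.New("out of gas") // illustrative

// chargeGas sketches the split: constGas is a precomputed value on the
// operation, dynGas is nil for opcodes without a dynamic component, and
// useGas stands in for contract.UseGas.
func chargeGas(constGas uint64, dynGas func() (uint64, error), useGas func(uint64) bool) error {
	if !useGas(constGas) { // constant part: a plain field read, no call
		return errOutOfGas
	}
	if dynGas != nil { // dynamic part only where the opcode actually needs it
		cost, err := dynGas()
		if err != nil {
			return err
		}
		if !useGas(cost) {
			return errOutOfGas
		}
	}
	return nil
}
```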
3. Optimize push1

This adds a custom `opPush1` method for push1, where we can convert a byte directly into a `uint64`, and don't have to do the padding which the generic method does.
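A hedged sketch of the fast path; the function name and surrounding plumbing are illustrative:

```go
package vm // illustrative package name

import "math/big"

// push1Immediate sketches the PUSH1 fast path: the single immediate byte at
// code[pc+1] is converted straight to a uint64 and written into a big.Int
// taken from the interpreter's pool (dst), with none of the byte-slice
// padding that the generic pushN path performs.
func push1Immediate(code []byte, pc uint64, dst *big.Int) *big.Int {
	pc++
	if pc < uint64(len(code)) {
		return dst.SetUint64(uint64(code[pc]))
	}
	// an immediate that falls past the end of the code is implicitly zero
	return dst.SetUint64(0)
}
```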
4. Reuse pooled bigints for `ADDRESS`, `ORIGIN`, and `CALLER`

This adds a convenience method to `Address` to convert into an existing bigint, and the ops above can thus use the int pool instead of allocating new bigints.
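A hedged sketch of the convenience method; the method name `setBig` and the usage line are illustrative, not the exact API added by the PR:

```go
package common // illustrative stand-in for the package that defines Address

import "math/big"

// Address is the 20-byte account address type (repeated here only to keep
// the sketch self-contained).
type Address [20]byte

// setBig writes the address bytes into an existing big.Int instead of
// allocating a fresh one (as a Big()-style helper would), so callers can
// pass in an integer taken from the interpreter's int pool.
func (a Address) setBig(dst *big.Int) *big.Int {
	return dst.SetBytes(a[:])
}

// Usage in an opcode handler would then look roughly like:
//
//	stack.push(contract.Address().setBig(interpreter.intPool.get()))
```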
5. Avoid string interpolation for jump errors

On a trial run with metrics, it was found that the second largest time-sink was 7m30s in `JUMPI` (1m31s for `JUMP`). Now, since Solidity used to do intentional invalid jumps to abort all the time, most of that time is probably spent on `fmt.Errorf("invalid jump destination (%v) %v", nop, pos)`. This PR replaces the string interpolation with a generic `errInvalidJump` error.
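A hedged sketch of the replacement; the exact error message wording is illustrative:

```go
package vm // illustrative package name

import "errors"

// A single shared error value replaces the per-failure fmt.Errorf call, so
// the (historically very common) invalid-jump path no longer formats a string.
var errInvalidJump = errors.New("invalid jump destination")

// In the JUMP/JUMPI handlers, the failure path then becomes roughly:
//
//	if !contract.validJumpdest(pos) {
//		return nil, errInvalidJump
//	}
```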
6. Avoid allocating bigints during memory calc

It changes the signature of the memory calculation function to use `uint64` instead of bigints. Without this PR, for some operations (`MSTORE8`, `MLOAD`, `MSTORE`), an additional bigint was allocated since the `length` does not come from the stack. As a side-effect, overflows are now detected and aborted earlier.
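As a concrete example of the new shape, here is a hedged sketch of a caller for `MSTORE`; it reuses the `calcMemSizeWithUint` helper sketched earlier in the review thread, and the wrapper name is illustrative:

```go
package vm // illustrative package name

import "math/big"

// memoryMStore computes the memory required by MSTORE: a fixed 32 bytes at
// the given offset. The constant length is passed directly as a uint64, so
// no big.Int has to be allocated for it, and the bool reports overflow.
func memoryMStore(offset *big.Int) (uint64, bool) {
	return calcMemSizeWithUint(offset, 32)
}
```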