
64bit memory calculations #19210

Merged: 10 commits into ethereum:master, Mar 12, 2019
Conversation

@holiman (Contributor) commented Mar 5, 2019

This PR replaces #19203 .

This PR contains six separate optimizations to the EVM run loop. For review, it's probably simplest to verify each commit separately, since they are completely standalone, and some are larger than others. For those, I've tried to make them as easy to verify as possible, by making the changes verifiable side-by-side with the original code.

The changes are:

1. Remove function call for stack validation

Previously, we always called operation.validateStack. Since operation.validateStack is a pointer, this call cannot be inlined by the compiler. It has been replaced with minStack and maxStack on the operation, which can be checked immediately in the runloop.
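A minimal sketch of the idea, using hypothetical simplified types (not the actual geth definitions): the operation carries precomputed minStack/maxStack bounds that the run loop compares directly, instead of dispatching through a function pointer the compiler cannot inline.

```go
package main

import (
	"errors"
	"fmt"
)

var (
	errStackUnderflow = errors.New("stack underflow")
	errStackOverflow  = errors.New("stack limit reached")
)

// operation is a stand-in for the interpreter's per-opcode record.
type operation struct {
	minStack int // fewest items the op needs on the stack
	maxStack int // most items allowed on entry, so pushes cannot overflow
}

// checkStack is what the run loop now does inline for every opcode,
// replacing the call through the validateStack pointer.
func checkStack(op operation, stackLen int) error {
	if stackLen < op.minStack {
		return errStackUnderflow
	}
	if stackLen > op.maxStack {
		return errStackOverflow
	}
	return nil
}

func main() {
	add := operation{minStack: 2, maxStack: 1024}
	fmt.Println(checkStack(add, 1)) // too few operands
	fmt.Println(checkStack(add, 2)) // ok
}
```

Because the bounds are plain integers on the operation struct, the two comparisons are trivially inlinable.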

Additionally, I removed the call to enforceRestrictions, by inlining it.

2. Separate gas calculation into constant and dynamic

Previously, we always called operation.gasCost(in.gasTable, in.evm, contract, stack, mem, memorySize) (again, cannot be inlined).

For a lot of operations, this method only returned a constant. I instead added the constant gas values to those operations, which can be checked immediately. This makes a difference particularly for all cheap operations which have no dynamic part, which are the majority of the operations.
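A hedged sketch of the split, again with illustrative types rather than the real geth ones: the constant part is a plain uint64 charged inline, and a dynamic-gas function is only invoked for the minority of ops that actually have one.

```go
package main

import (
	"errors"
	"fmt"
)

var errOutOfGas = errors.New("out of gas")

// operation is a simplified stand-in for the interpreter's opcode record.
type operation struct {
	constantGas uint64
	// dynamicGas is nil for ops whose cost is fully static, which is the
	// common case; those never pay for a function call.
	dynamicGas func(memSize uint64) (uint64, error)
}

// chargeGas deducts the op's cost from the remaining gas.
func chargeGas(op operation, gasLeft *uint64, memSize uint64) error {
	cost := op.constantGas
	if op.dynamicGas != nil {
		d, err := op.dynamicGas(memSize)
		if err != nil {
			return err
		}
		cost += d
	}
	if *gasLeft < cost {
		return errOutOfGas
	}
	*gasLeft -= cost
	return nil
}

func main() {
	gas := uint64(100)
	add := operation{constantGas: 3} // purely static cost
	fmt.Println(chargeGas(add, &gas, 0), gas)
}
```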

3. Optimize push1

This adds a custom opPush1 method for push1, where we can convert a byte directly into a uint64, and don't have to do padding which the generic method does.
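A sketch of the specialization (simplified signature, not the exact geth one): where the generic pushN helper copies and right-pads a slice of code before converting it to an integer, push1 can read its single operand byte straight into a uint64.

```go
package main

import "fmt"

// opPush1 reads the one-byte operand following the PUSH1 opcode.
// pc points at the PUSH1 opcode itself on entry.
func opPush1(pc *uint64, code []byte) uint64 {
	*pc++ // advance to the operand byte
	if *pc < uint64(len(code)) {
		return uint64(code[*pc])
	}
	// Reading past the end of code yields zero, matching the
	// zero-padding behavior of the generic path.
	return 0
}

func main() {
	code := []byte{0x60, 0x2a} // PUSH1 0x2a
	pc := uint64(0)
	fmt.Println(opPush1(&pc, code)) // 42
}
```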

4. Reuse pooled bigints for ADDRESS, ORIGIN, and CALLER

This adds a convenience-method to Address to convert into an existing bigint, and the ops above can thus use the int pool instead of allocating new bigints.

5. Avoid string interpolation for jump errors

On a trial run with metrics, the second-largest time sink was 7m30s in JUMPI (1m31s for JUMP). Since solidity used to abort via intentional invalid jumps all the time, most of that time was probably spent in fmt.Errorf("invalid jump destination (%v) %v", nop, pos).

This PR replaces the string interpolation with a generic errInvalidJump error.
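The pattern, sketched: a single shared sentinel error replaces the per-failure fmt.Errorf, so the common abort path no longer pays for string formatting and allocation (the helper and map below are illustrative, not the geth internals).

```go
package main

import (
	"errors"
	"fmt"
)

// One shared error value, created once at init time.
var errInvalidJump = errors.New("invalid jump destination")

// checkJumpdest validates a jump target against the set of valid
// JUMPDEST positions, returning the sentinel on failure.
func checkJumpdest(dest uint64, validDests map[uint64]bool) error {
	if !validDests[dest] {
		return errInvalidJump // no per-failure interpolation of dest
	}
	return nil
}

func main() {
	valid := map[uint64]bool{4: true}
	fmt.Println(checkJumpdest(9, valid)) // invalid jump destination
	fmt.Println(checkJumpdest(4, valid)) // <nil>
}
```

The trade-off is that the error no longer names the offending destination; callers that need it can wrap the sentinel on the slow path.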

6. Avoid allocating bigints during memory calc

This changes the signature of the memory calculation function to use uint64 instead of bigints.

Without this change:

  • A new bigint was always allocated when the memory size was calculated (except when the length was zero).
  • In some cases (MSTORE8, MLOAD, MSTORE), an additional bigint was allocated, since the length does not come from the stack.

As a side-effect, overflows are now detected and aborted earlier.

@holiman holiman requested a review from karalabe as a code owner March 5, 2019 08:27
@holiman (author) commented Mar 6, 2019

Benchmarks after ~24hours of full sync. This PR is roughly 1 hour ahead.

[screenshot: sync benchmark, 2019-03-06 09:57]

@holiman (author) commented Mar 7, 2019

After roughly 46 hours, this PR is about 2 hours ahead
[screenshot: sync benchmark, 2019-03-07 08:15]

@holiman holiman force-pushed the optimize_memcalc branch from 938258b to 1f5c28d Compare March 7, 2019 11:02
@holiman (author) commented Mar 8, 2019

After 3 days (and two hours):
[screenshot: sync progress, 2019-03-08 12:43]
The PR is about 3 hours ahead:
[screenshot: sync benchmark, 2019-03-08 12:44]

@holiman (author) commented Mar 9, 2019

After almost 4.5 days, it's almost 4.5 hours ahead. I actually would have thought that the linearity would break on the disk-heavy sections (5-6M and later), but that does not appear to be the case.
[screenshot: sync benchmark, 2019-03-09 20:16]

So it's about one hour every 24 hours, or a ~4% performance increase.

@holiman (author) commented Mar 10, 2019

They both ran out of disk before completion[1], but made it to 7.13M and 7.11M respectively.
mon12 at 7118879 at Mar 10 07:23:46
mon13 at 7118975 at Mar 10 11:43:03, 4h 40m

[screenshot: sync progress, 2019-03-10 21:21]

[1] They are m5d.2xlarge instances, with only 300 GB of NVMe SSD.

Review thread on the diff replacing the old helper

```go
func calcMemSize(off, l *big.Int) *big.Int {
```

with

```go
// calcMemSize64 calculates the required memory size, and returns
// the size and whether the result overflowed uint64
func calcMemSize64(off, l *big.Int) (uint64, bool) {
```

@fjl (Contributor) commented Mar 11, 2019, suggesting:

```go
if !l.IsUint64() {
    return 0, true
}
return calcMemSizeWithUint(off, l.Uint64())
```

Review thread on:

```go
	return 0, true
}
// Check that offset doesn't overflow
if off.BitLen() > 64 {
```

A reviewer (Contributor) commented: IsUint64() — i.e., use off.IsUint64() instead of checking BitLen() > 64.

Review thread on:

```go
if val < offset64 {
	return 0, true
}
return val, false
```

@fjl (Contributor) commented Mar 11, 2019: return val, val < offset64 instead of the condition.

@holiman holiman mentioned this pull request Mar 11, 2019
@holiman (author) commented Mar 11, 2019

Updated, PTAL

Review thread on:

```go
// for a call operation is the value. Transferring value from one
// account to the others means the state is modified and should also
// return with an error.
if operation.writes || (op == CALL && stack.Back(2).BitLen() > 0) {
```

A reviewer (Member) commented: Could we use stack.Back(2).Sign here instead of BitLen?

@holiman (author) commented Mar 11, 2019

Updated; I removed a couple of occurrences where BitLen could be replaced by Sign or IsUint64.

@karalabe (Member) left a review Mar 12, 2019:

LGTM

@karalabe karalabe added this to the 1.9.0 milestone Mar 12, 2019
@karalabe karalabe merged commit 7504dbd into ethereum:master Mar 12, 2019
kiku-jw pushed a commit to kiku-jw/go-ethereum that referenced this pull request Mar 29, 2019
* core/vm: remove function call for stack validation from evm runloop

* core/vm: separate gas calc into static + dynamic

* core/vm: optimize push1

* core/vm: reuse pooled bigints for ADDRESS, ORIGIN and CALLER

* core/vm: use generic error message for jump/jumpi, to avoid string interpolation

* testdata: fix tests for new error message

* core/vm: use 64-bit memory calculations

* core/vm: fix error in memory calculation

* core/vm: address review concerns

* core/vm: avoid unnecessary use of big.Int:BitLen()
gzliudan added a commit to gzliudan/XDPoSChain that referenced this pull request Dec 19, 2024
gzliudan added a commit to gzliudan/XDPoSChain that referenced this pull request Dec 21, 2024
gzliudan added a commit to gzliudan/XDPoSChain that referenced this pull request Dec 26, 2024
gzliudan added a commit to gzliudan/XDPoSChain that referenced this pull request Dec 28, 2024