From c1a852e9647cdefcbcff6e47dba6d2fe5df1dccf Mon Sep 17 00:00:00 2001
From: Darkfire_rain <67558925+darkfire-rain@users.noreply.github.com>
Date: Mon, 19 Jun 2023 18:00:41 -0400
Subject: [PATCH 1/4] Create Call_111.md

---
 AllCoreDevs-CL-Meetings/Call_111.md | 476 ++++++++++++++++++++++++++++
 1 file changed, 476 insertions(+)
 create mode 100644 AllCoreDevs-CL-Meetings/Call_111.md

diff --git a/AllCoreDevs-CL-Meetings/Call_111.md b/AllCoreDevs-CL-Meetings/Call_111.md
new file mode 100644
index 00000000..4ec343dc
--- /dev/null
+++ b/AllCoreDevs-CL-Meetings/Call_111.md
@@ -0,0 +1,476 @@
+# Consensus Layer Meeting 111 [2023-06-15]
+### Meeting Date/Time: Thursday 2023/6/15 at 14:00 UTC
+### Meeting Duration: 1.5 hours
+### Moderator: Alex Stokes
+### [GitHub Agenda](https://github.com/ethereum/pm/issues/809)
+## [Audio/Video of the Meeting](https://www.youtube.com/watch?v=ybgQuRcz9sg)
+### Next Meeting Date/Time: June 29, 2023
+
+# Agenda (issue #809)
+## Exchanging the engine API for Cancun
+## Merging into Deneb release
+## Max effective balance change

Stokes: This is consensus layer call 111. I'll drop the agenda in the chat; it's issue #809. Danny can't make it, so I'll be moderating, and we can go ahead and get started. First up is Deneb, and then we'll touch on some more research, forward-looking issues. To get started with Deneb, I think essentially we just want to understand what is finally going into the spec release soon. So we'll just go through the agenda. There's this issue for EIP-6988; I think Mikhail has been driving it.

Mikhail: Yeah, okay, so about EIP-6988. This EIP is about preventing a slashed validator from being elected as a proposer, which basically prevents us from having a lot of empty slots in case of mass slashing. One of the problems we encountered while working on this EIP is that it breaks the invariant that the proposer shuffling for an epoch is fixed: it can change throughout the epoch, and that's apparently a problem for some clients and probably for some other tooling. So I've made a PR which takes the proposed approach of storing the proposer shuffling in the state. The footprint is not that big, and it doesn't seem that costly in terms of state transition complexity to keep it stored in the state. There is an update_proposer_shuffling function which basically takes the state, computes the shuffling for the next epoch, and then we read the proposer index from this vector; this function is called during every epoch processing. What's not so nice about introducing this function is that, if we are bootstrapping from genesis for instance, we have to call it before our state becomes usable, before we process any further slots or propose any blocks, otherwise it will just not have the proper shuffling in it. But I don't think that's a big issue. The main question here is whether, considering this additional complexity, we want to include this change in Deneb. If anyone has taken a look at the PR, it would be great to discuss it now; otherwise I would keep it open for a couple more days so people can take a look, and then we can make a final decision on whether this EIP goes into Deneb or not.
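For readers following along, here is a rough sketch of the approach Mikhail describes, caching the next epoch's proposer indices in the state during epoch processing so the shuffling cannot change mid-epoch. The `proposer_shuffling` field and the exact function body are illustrative assumptions, not the actual EIP-6988 PR; the helpers are from the existing consensus specs.

```python
def update_proposer_shuffling(state: BeaconState) -> None:
    # Illustrative only: compute and cache the next epoch's proposer for each slot.
    # `state.proposer_shuffling` is a hypothetical field; the helpers exist in the specs.
    next_epoch = Epoch(get_current_epoch(state) + 1)
    indices = get_active_validator_indices(state, next_epoch)
    start_slot = compute_start_slot_at_epoch(next_epoch)
    state.proposer_shuffling = Vector[ValidatorIndex, SLOTS_PER_EPOCH]([
        compute_proposer_index(
            state,
            indices,
            hash(get_seed(state, next_epoch, DOMAIN_BEACON_PROPOSER) + uint_to_bytes(Slot(start_slot + i))),
        )
        for i in range(SLOTS_PER_EPOCH)
    ])
```

Reading the proposer for a slot then becomes a lookup into this vector rather than a recomputation, which is also why bootstrapping from genesis would need one extra call before the state is usable.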
Stokes: So we do have a precedent of having some initialization function for the genesis state, so yeah, I agree with you, I don't think that's a big problem. It says in the description it only increases the extra state that we have to track by a small amount, so not too wild there. I know there's been some back and forth on this, trying to achieve the aim of this slashing invariant, so it's good to see this here. Personally it feels a little bit late to me to add something this big, because it is kind of a change to the mental model of the protocol, and we're late in the process. I don't know if anyone else has taken a look.

Mikhail: Yeah, I'll take that.

Stokes: So maybe we leave the PR open for another few days, and if anyone feels strongly they should voice support or not, and we'll go from there. I would imagine it's a bit too late to decide this on the next CL call, but if that's the next time we get to discuss it, that might be when it happens.

Mikhail: Yeah, probably we can briefly mention it on the next CL call if it makes sense. I don't know.

Lion Dapplion: So I think here it would make sense to do a study like we did for removing SELFDESTRUCT, to understand the implications of breaking the dependent root, which may not be that bad, but I think that's the main question we have to answer here. Definitely it feels tight for Deneb.

Mikhail: Yeah, so definitely, if we want to explore breaking this dependent root thing, I would say it is too late for Deneb as well.

Stokes: Dapplion, can you explain a bit what you mean by breaking the dependent root?

Lion Dapplion: So this doesn't show up in the spec, but in the beacon APIs all the duties include a dependent root, and the idea is that if the dependent root of that specific epoch doesn't change, because there is no reorg deeper than a certain number of slots, you are good and the duties should be the same. I know a few clients, at least Lodestar I think, rely on that. We can get around that and just poll, but it would be good to find out who else may be using that dependent root, because we don't know.

Stokes: Right, so I thought the whole reason we have the EIP is basically to maintain the invariant that the shufflings wouldn't change. So it seems you'd still have that guarantee for the dependent root, right?

Lion Dapplion: Well, my point is that if no one is using the dependent root, maybe we can just break it.

Stokes: Oh, you just mean get rid of it entirely; okay, sure. But my point then is that it sounds like this change doesn't have any bearing on that, or am I not following?

Lion Dapplion: So EIP-6988 is a reaction to the simpler version that just adds an extra condition in compute_proposer_index. If we don't mind breaking the invariant, that's a much simpler change.

Stokes: I see, okay, so you're saying we could skip this, just make the few-lines change in the other PR, and lose the guarantee. Okay, yep. When I was talking about changing the mental model: having shufflings that could change possibly every slot seems even more chaotic to me than what we have today, so I think we should not go in that direction.
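For context, the duties responses Dapplion refers to look roughly like this. This is the abridged shape of a standard beacon API duties response (for example `/eth/v1/validator/duties/proposer/{epoch}`); the field values below are placeholders. If `dependent_root` is unchanged between polls, the duties computed from it are guaranteed not to have changed.

```python
# Abridged, placeholder-valued shape of a beacon API proposer-duties response.
proposer_duties_response = {
    "dependent_root": "0x...",        # block root the duties were computed from
    "execution_optimistic": False,
    "data": [
        {"pubkey": "0x...", "validator_index": "123", "slot": "7364072"},
    ],
}
```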
Mikhail: I know, maybe this invariant is also used by some other tooling, I don't know, and that could have some implications. But as for me, not having the proposer shuffling stored in the state is the cleaner solution, and breaking the invariant, if it is possible, is possible.

Lion Dapplion: Another point is, if you have to process old blocks you need to retrieve the shuffling, so you have to reprocess the blocks after that specific slot. That's the cost, but I don't know, I will look into it.

Mikhail: You would have a state anyway, so it's not going to be a problem.

Stokes: Okay, so I think we should stay focused on what's actually going to go into the final Deneb release, and it sounds like there are still a few design questions and some more research we want to do around this particular feature, which suggests to me that we table it for now. Does that sound good?

Mikhail: Makes sense to me.

Stokes: Okay, next up on the agenda there is another issue, this one for exchanging the engine API for Cancun. I think Mikhail opened this one too. Anything, Mikhail, we should know? I went the other day and added stuff for EIP-4788, which we'll get to later in the call, and generally it looks good when I look at it.

Mikhail: Yeah, so basically this spec consists of the blob extension spec that everyone is familiar with, also the parent beacon block root, which is for the other EIP, and one more change that is in this PR for the proposed Cancun specification, probably not the last one: the deprecation of exchangeTransitionConfiguration. I would just like to quickly go through it. First of all, I think the spec alone is not enough, in terms of tooling, to deprecate this gracefully. The spec says that execution layer clients must not surface any error messages to the user if this method is not called. If we remove that error message, then we can break the dependency between CL and EL: we remove it on the EL side, and then consensus clients can remove the method call entirely. The way I see the removal procedure: EL clients remove the error message right away, as soon as possible. The method itself should still exist, because otherwise CL clients would surface an error when the call to this method goes unanswered. So EL clients remove the error message, we wait for every EL client to ship that, and then the consensus clients also release software without this method being called. And then, after Cancun, everyone can just remove it at any point in time, so we use Cancun as the point of coordination for the software upgrades. That is the graceful way to do it. Alternatively, we could just cut the cord and remove it right away, but then users will see some issues: when one client stops supporting it, the other one will complain with error messages. So there are two potential ways to do this. I would prefer the first, graceful one, but maybe there are other opinions on that.

Stokes: Yeah, I mean, the graceful approach sounds better. Is that what you have in this PR, or something else?
Mikhail: No, it's not described in the PR, because this procedure sits outside the spec itself; that's why I say the spec alone is not enough. So probably we should just raise this on the call and ask EL clients to start removing this error message if they're okay with that. How does that sound?

Stokes: Yeah, it sounds good to me. I got a plus-one in the chat, so let's make a note to bring this up on the call next week, but otherwise, yeah, sounds good. I know this is something we've wanted to do for a while, so it makes sense to go ahead and do it.

Mikhail: Great.

Stokes: Okay, that was pretty straightforward. So there's a list of EIPs here that are essentially slated to go into Deneb, and we'll just take them in turn. The first one is EIP-7044. This refers to changing how we process voluntary exits so that, once made, they are valid forever. This was essentially a UX improvement, because before they expired, and that was not so nice for people, especially in custodial staking setups. It looks like it's been merged into the specs already, and I think it's on here just to call it out; I don't think we really need to rehash it. The next one is EIP-7045. This one changes how we process attestations, and I know it came out of the confirmation rule work that some of the researchers have been doing. Does anyone here want to give an overview? The PR itself basically says that you have, I believe, the current and also the previous epoch to include an attestation on chain, whereas before it was just one epoch's worth of slots, a rolling SLOTS_PER_EPOCH window. Now it's extended back out, rounding down, so to speak, to whole epochs, if that makes sense. Are any of the confirmation rule people on the call?

Mikhail: I can try to give an overview. It's not only about the confirmation rule; it's also about some other properties of the protocol that we want to maintain under some edge cases. Essentially, this PR removes the constraint and gives us the guarantee that attestations produced in the previous epoch will be includable until the end of the current epoch, which is important for the confirmation rule and for some other things, as I mentioned. This PR also fixes the incentives part, because previously there was the same constraint, a number of slots equal to the slots in an epoch, on rewarding the target vote, as far as I remember. So now the proposer will be rewarded for including an attestation even if it comes from the beginning of the previous epoch while the block is proposed at the end of the current epoch. That part is also fixed, and I would be pretty much in favour of this change going into Deneb, as I think it is straightforward. There is also the P2P part: the propagation window for attestations is also changed, from a number of slots equal to the slots in an epoch, to the previous and current epoch, so attestations produced in those two epochs are free to be propagated.

Stokes: Okay, thanks for the overview. Yeah, I don't really see anything blocking inclusion. Has anyone looked at this in terms of implementation? That would probably be the only reason we wouldn't go ahead with it, if there were some issue. So, does anyone object to implementing this? I'll assume no if no one speaks up.
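For reference, here is a rough sketch of the inclusion-window change Mikhail describes. It is illustrative rather than the exact spec diff, but the shape matches the Deneb change: the one-epoch rolling upper bound is dropped, so only the minimum-inclusion-delay lower bound and the target-epoch check remain.

```python
def check_attestation_inclusion(state: BeaconState, data: AttestationData) -> None:
    # Sketch of the EIP-7045 rule: an attestation from anywhere in the previous
    # epoch remains includable through the end of the current epoch.
    assert data.target.epoch in (get_previous_epoch(state), get_current_epoch(state))
    assert data.target.epoch == compute_epoch_at_slot(data.slot)
    # Pre-Deneb there was also an upper bound:
    #     assert state.slot <= data.slot + SLOTS_PER_EPOCH
    # With this change only the lower bound remains.
    assert data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot
```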
Arnetheduck: Okay, I have a quick question: does this affect the aggregation subscription in P2P at all, like which subnet you're supposed to be subscribed to, and for how long?

Mikhail: I don't think so.

Stokes: Yeah, I think this is mainly a state transition change.

Arnetheduck: No, it's a peer-to-peer change as well, and it extends the time that somebody is allowed to send attestations on a particular subnet in peer-to-peer.

Stokes: Right, but I don't think it changes anything about the aggregation structure.

Mikhail: That's actually a good question, and the question is: even before this change, how does aggregation work in terms of subscription? Do aggregators stay subscribed for longer to expect some attestations from the past, or not? Because the spec does not seem to say anything about this edge case, even with the 32 slots as it is today.

Arnetheduck: Yeah, I think that might be a gap, actually. I think we kind of unsubscribe early.

Mikhail: Yeah.

Arnetheduck: And then the wrong aggregators will be listening to these attestations, and they won't aggregate them anyway, so they'll be lost anyway. So I think that merits at least thinking about.

Mikhail: Yeah, so this change does not break that much in terms of aggregation, and I think there is this probabilistic function for whether someone is the right aggregator, so there is still a probability that somebody subscribed to the right subnet, I mean the current subnet, can aggregate those, or not. Maybe I'm wrong.

Arnetheduck: Well, that's the thing: nowadays we will only be subscribing to one aggregation subnet per validator anymore, and to be honest, I don't know. I'm raising the question because I have no idea, but it feels like the kind of thing that would in practice possibly break this, or not break it but just render it pointless to propagate these attestations, unless the aggregation pipeline is in tune with this change.

Mikhail: Yeah, so again, it doesn't seem to break anything, but the problem we're discussing probably already exists in some form for older attestations.

Stokes: Okay, well, it sounds like we should do a little digging to look into this a bit further, and I suppose we'll discuss it on the next call. Even if we needed to change the subscription logic, I don't think it would make this twice as complex or anything, so I would lean towards considering it included, but yeah, definitely something we should take a look at. Okay, so we'll make a note about that. The next one up for discussion is EIP-4788. I opened a PR for this: basically there was a standalone feature spec for it, and now in this PR it's been migrated into Deneb formally. There was some feedback from some of you, so thank you for that on the PR. This one is a pretty straightforward change we've been discussing for a while; I think it's pretty much ready to go. There is some feedback here that I still have to get to, but it won't be super substantial in terms of the spec or implementation. So, any final thoughts? Otherwise we'll get this ready for the Deneb release. I guess it is worth calling out that there was also an engine API change for this in the PR from Mikhail that we discussed earlier.
Mikhail: One thing that I commented on in the PR, and that at first glance can look a bit odd, is that we are duplicating the parent beacon block root on the CL side. The other way around would be to not introduce it into the execution payload on the CL side, because we already have the parent root in the outer structure, the beacon block structure. The CL passes this data to the EL, and the EL includes the parent beacon block root in the execution payload, in the actual execution block. There was a comment about it, and I think it's fine to keep it there, considering the data overhead isn't really an issue here. I'm just emphasising it because somebody else may take a look at that and have similar thoughts.

Stokes: Right, yeah, thanks for bringing that up. You could say, very strictly, that we're duplicating some data in the block with this extra parent root, and while that's true, the reason it's there is that we essentially want this sort of symmetry: whatever we have in the execution payload in the block is what is passed to the EL, and the EL definitely needs it there. So I think it's easier to reason about if we just put it there. As Mikhail pointed out, we could save 32 bytes, but yeah. I don't know if anyone here has a strong opinion either way; if you do, perhaps take it to the PR, unless you want to talk about it now. It seems okay in the chat, thank you. Okay, so those are the PRs here on the agenda. I think the intention is essentially to get them merged into a Deneb release in the next week or two, and that would sort of be our final Deneb spec from the CL side, so very exciting to see. Everyone here, please take a look at everything we've discussed, and if there are any final comments, especially beyond what we've discussed on the call so far, let them be known. With that being said, the next item here is discussing the blob counts. This was mainly just a note to call out that the EIP currently says we have a target of two blobs and a max of four. There's been some conversation between different researchers and devs about bumping that up to, say, a target of three blobs and a max of six. I think the call to action here is just to say that people on this call and others are looking at all the data we can and thinking about potential ramifications; please keep doing that and join the conversation to the extent that you're able. There's a note here that we're trying to make an informed decision in the next two weeks, so if you have thoughts or feelings about this, bring them up. I would imagine this will be a topic on the next execution call, but this is just a reminder that it's happening, or at least that it's a discussion point that may happen, so don't let it fall off your plate, so to speak.

Dankrad Feist: Yeah, and following the Monday call, I also wanted to mention that I did more tests at 768 kilobytes, and I'm going to share the dashboards in the chat for anyone who's interested in looking at them. As a summary, we didn't see any problematic behaviour, any instabilities. We did have some problems with our own nodes syncing, as far as I know. Some of the blocks we created, while we hit the average, were more like 1.5 megabytes and zero megabytes, alternating, which is actually a stronger stress test, but yeah, our own endpoints were the problem there.
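For context on the numbers: each blob is 4096 field elements of 32 bytes, so the 768 kilobyte figure corresponds to a six-blob maximum. A quick sketch of the arithmetic; the bumped target and max are the values floated above, not an agreed change.

```python
# EIP-4844 blob size arithmetic; bumped target/max are the floated values, not a decision.
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
BLOB_SIZE = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT   # 131072 bytes = 128 KiB

TARGET_BLOBS_PER_BLOCK = 3   # proposed bump from 2
MAX_BLOBS_PER_BLOCK = 6      # proposed bump from 4

MAX_BLOB_BYTES_PER_BLOCK = MAX_BLOBS_PER_BLOCK * BLOB_SIZE      # 786432 bytes, about 768 KB
```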
Stokes: Okay, great, and thanks, Dankrad, for leading the charge on those experiments. It seems like, from what we've seen, there isn't any immediate issue with the bigger blob size, so that's a strong argument to make them bigger. Okay, great, there's some data in the chat. Arnetheduck is asking how late into the process we can make this decision. I would say the sooner the better; I don't think we want to be two weeks out from the fork and say "oh, let's change this," just because it ripples into a bunch of things.

Arnetheduck: I'll mention one thing which is only weakly related, but it came out of looking at graphs around this. What's been happening over the past six months is that we've gone from practically no reorgs at all to a few per hour, and there's no great answer to why this is happening; there are a couple of theories. It looks a little bit like it's growing with the number of validators, and it definitely became worse after the complexity increase in Capella.

Dankrad Feist: Well, what about the late block reorgs? Wouldn't that be a strong reason why we're seeing this?

Arnetheduck: Because?

Dankrad Feist: Before that, even if you got your block out as late as 11 seconds, your block was still going to be on the chain, whereas now we reorg those blocks. So maybe it's not that surprising that this has gone up as two clients have introduced that.

Arnetheduck: Yeah, it's possible, let's say. But if you look at the graph it keeps growing, so that could be explained by more people starting to use newer versions of the clients, but it also looks very similar to the validator count growing. I don't want to draw any conclusions here, really; I'm just highlighting that, when looking at experiments, this is an interesting thing that has changed over the past few weeks, sorry, months. And it definitely became worse with Capella: right at the point where we switched to Capella it's markedly higher on average. Why am I mentioning it right now? Because we're packing more and more stuff into the first four seconds, before we're supposed to send attestations, and my gut feeling is that it might actually be an excellent time to rebalance the timing of sending the attestation and the aggregate. I just wanted to poll the call on whether anybody is strongly opposed. I'm pulling numbers out of thin air here, but say we send the attestation at six seconds and send the aggregate appropriately in the middle between that and the next block, instead of at four seconds. Has anybody explored this and found strong reasons not to do it, or to do it? I'd be curious, because that would certainly help with reorgs.

Dankrad Feist: So I'm strongly in favour of this, and I also have the feeling that this is exactly one of our big problems; we keep putting more into the first third of the slot. I think the interesting question would be whether people have data on when attestations are arriving in the second third, and when aggregations are arriving. If anyone has that data, that would be very interesting.
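For reference, these are the timings being debated as they stand in the honest validator guide, alongside the hypothetical rebalanced numbers Arnetheduck floats; the rebalanced values are illustrative only, not an agreed change.

```python
# Current slot timing from the honest validator guide, plus hypothetical rebalanced values.
SECONDS_PER_SLOT = 12
INTERVALS_PER_SLOT = 3

# Today the slot is split into equal thirds.
block_deadline = 0                                                # block proposed at slot start
attestation_deadline = SECONDS_PER_SLOT // INTERVALS_PER_SLOT     # attest at 4s
aggregate_deadline = 2 * SECONDS_PER_SLOT // INTERVALS_PER_SLOT   # aggregate at 8s

# One possibility floated on the call: give block propagation more room up front,
# e.g. attest at 6s and send aggregates midway between that and the next slot.
rebalanced_attestation_deadline = 6   # hypothetical
rebalanced_aggregate_deadline = 9     # hypothetical
```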
Arnetheduck: We have histograms that tell us when attestations come in.

Dankrad Feist: Can you share that?

Arnetheduck: Yeah, I can share it in the consensus dev channel later. The general trend is, well, there are two things. There is a rule that allows us to send attestations as soon as we've observed the block; clients are generally not doing this. There's a large concentration of attestations coming in shortly after the four-second mark, and then it spreads out, but there's still a large number of attestations coming in within the second after the four-second mark. So, as a client dev, if we're pushing the timing back, I would still strongly suggest that people implement the feature where we send the attestation a little bit earlier, to spread out the traffic; that would help. I'm going to post the exact numbers in the consensus dev channel, but my gut feeling is that they're pretty good.

Stokes: Sorry, what do you mean by "pretty good", just the ones that you gave as actual numbers?

Arnetheduck: No, as in: we're supposed to send the attestation at the four-second mark, and by five seconds most of the ones that are going to be sent have been sent.

Stokes: Okay, right.

Arnetheduck: But again, this is just me looking at the graph. I'll pull it out in a slightly different format to give better numbers.

Stokes: Sure, yeah, data would be helpful. I also agree this is something worth serious investigation. My concern with deploying this in Deneb is just delaying the fork; this is a somewhat involved change. That being said, I do think it's pretty important, so it sounds like we should probably spend some resources looking into this ASAP. I don't know if anyone has thoughts on Deneb and its relation to this change, but it sounds like we probably just want some more data first.

Mikeneuder: I just wanted to bring up a...

Sean: Yeah, go ahead, Mike.

Mikeneuder: Sure, yeah, just from the relay perspective: some of the issues around relay stability were around that four-second deadline, and in particular getting all the relay checks done in time to hit it. If a validator sends a signed header to the relay at T equals two, then the relay really only has two seconds to get the block published in time. So with bigger blocks that timeline is going to get even tighter for relays, and six seconds might help in terms of stability there too. Just thought that was worth bringing up.

Arnetheduck: For relays, how much of that is, you know, trying to publish the block as late as possible to make more profit, versus starting the work on time?

Mikeneuder: Well, the relay can't start the work until the validator sends the signed header, right? So I guess the validator could play some timing games. There's a paper from Caspar and the RIG group about this; let me get the link, it's called "Time is Money." I think the takeaway is that generally validators aren't playing these timing games; I'll post a link in the chat. But the relay still has some amount of latency it has to deal with; for example, simulating the block takes on the order of two to three hundred milliseconds, and then receiving all the bytes might take another 200 milliseconds.
So the latency starts to add up quickly, especially if someone calls getPayload later in the slot. The relay can do things like rejecting any getHeader request past three seconds, but if they have a valid header that the validator signed, the relay is kind of obliged to try to get the block published, even if the signed header isn't received until T equals 3.5 or something like that.

Arnetheduck: Cool, thanks, Mikeneuder.

Mikeneuder: I think Terence is raising his hand.

Terence: Yeah, I just want to call out that Prysm also has a release coming either today or early on Monday. We did find an issue where, if there's a late block, then in the subsequent slot after the late block Prysm will have some additional latency in blob production, and that may cause issues with the relayer. That's why, if you're a Prysm validator, you may see that a blob-carrying proposed block has a higher chance of getting reorged, but the fix is coming, so hopefully that improves things a little bit from the Prysm side.

Stokes: Okay, that's good to hear. So, zooming out a bit: I think it's actually really important that we look into changing these intra-slot timings, so let's keep this thread going, and, again, I won't speak to when exactly it gets included, but I do think it's very important for us to dig in here. Yeah, Potuz, you have your hand up; we can't hear you, you're muted, if you're speaking. Can't unmute? I'm sorry. Maybe try to get it working and just speak up when you can figure it out. Yeah, I'm not sure; there are some messages in the chat, but I'm not sure. Tim, do you know if you have the ability to unmute people?

Tim Beiko: Let me check. I cannot. I'm getting that you are the only one; I don't know what that means. I mean, why couldn't he? People have been talking.

Stokes: Yeah, but I think Danny's the host technically; he's just not here.

Tim Beiko: Well, Danny's just been pinged about Potuz. I'll see if that does the trick.

Stokes: Yeah, wait, okay, he's back now, incredibly. Yeah, I don't know; I'm not sure who the actual host is. Okay, sorry, if you want to send a message in the chat we can try to go about it that way, but otherwise I will keep moving things along.

Potuz: Oh wait, I can probably be unmuted now.

Stokes: Yeah, we can hear you.

Potuz: Oh, good. So I just want to mention this thing on the sub-slot timings: it's not so clear that you can actually take time out of the first part of the slot, because aggregation becomes a problem, an actual problem. We have a very large validator set that is getting larger, and aggregating is taking over two seconds on a normal computer, especially if you're subscribed to all subnets. I've been monitoring this because we changed the way Prysm aggregates unaggregated attestations, and on a normal NUC like mine it can take up to four seconds to aggregate all unaggregated attestations if you are subscribed to all subnets. That means you cannot really, realistically, be a good aggregator if you are hosting more than 30 to 32 keys on a NUC. So what happens is that very large validators can run on much faster hardware.
But for small home stakers, I think you are not going to get good aggregation if you reduce the middle part of the slot. So that leads us to shifting everything and taking seconds only from the last part of the slot, and the last part of the slot, I think, is safer to take some time out of. But the problem is that now, with the reorg feature, we place a bet before the end of the slot on whether we are going to reorg or not, and I'm afraid we're going to see a lot of split views if we make that last part much smaller. So I think it's going to take a long time to actually get these numbers right and to get good experiments that vouch for how much we can increase the first part.

Dankrad Feist: So what I don't understand is: why is the relevant number all subnets? Why isn't it...

Potuz: Because the number of attestations that you need to aggregate is going to depend on how many subnets you're subscribed to, for sure, and if you run more than 30 validators you are going to be subscribed to all subnets anyway. So if you're running on a home computer you can't really, realistically, run more than two or three.

Dankrad Feist: You wouldn't be an aggregator, or something.

Potuz: Well, you are an aggregator quite often, and you are going to get many more attestations and aggregated attestations that you need to aggregate. So by subscribing to all subnets you're going to be getting a lot more unaggregated attestations.

Dankrad Feist: Right, but you can do all that work in parallel, right? Presumably if you are running tens or hundreds of validators you don't just have one CPU to run them.

Potuz: Yeah, but even when we parallelize it, the bottleneck is in the BLST library, and it doesn't matter. I see Terence asking whether clients aggregate all at once or as attestations come in; I benchmarked this and it doesn't make much difference. Lighthouse aggregates as they come, and Prysm aggregates them all at prescribed times, but the number of additions you make is exactly the same, so it doesn't really change anything. So I truly don't think we can subtract time from the middle part of the slot. We can measure it and try to benchmark it, but I would clearly expect degradation if we subtract from the middle part of the slot. And measuring the split views that would come from subtracting from the last part of the slot is the painful bit. So I think we should keep our minds open that, if we are forced to increase the first four seconds, we may need to increase the slot.

Dankrad Feist: I didn't understand why you can't parallelize it, sorry; that doesn't make sense.

Potuz: No, we can parallelize it, we are parallelizing it anyway, and we're hitting the four-second mark. The typical aggregation on my computer is about 200 milliseconds, but then from time to time, when there's a missed block for example and you need to aggregate more, you get up to two seconds, and I'm subscribed...

Dankrad Feist: Why do you need to aggregate more when there's a missed block?
Potuz: Well, because you get a lot of attestations that weren't included before. If there's a missed slot you have to aggregate the attestations from the previous slot, and then you need to aggregate them with the attestations from this slot to include more attestations, and my computer takes up to two seconds.

Dankrad Feist: That's what I don't understand, because the previous ones you could already have done in the previous slot.

Potuz: No, but you still need to aggregate them with the current ones, and there are always late attestations as well.

Dankrad Feist: Sure, but those you can just add, right? You've already aggregated the ones...

Potuz: Well, the algorithm to add is not so simple. It's simple to add when you only have one bit, but then you need to start aggregating aggregates, and that's not trivial.

Dankrad Feist: No you don't; I don't understand.

Potuz: Yes you do. Okay, so, I like algorithms.

Dankrad Feist: I had, say, 200 aggregated signatures, and now I receive 10 more that were late.

Potuz: Yeah, but then the problem is that of the 10 more you're receiving, two of them have an intersection with the 100 you had before, and you can't just add them; you had a group of, say, seven or ten.

Dankrad Feist: Okay, why do you have to aggregate the aggregates? I didn't get that.

Potuz: Because you want to have a better block.

Dankrad Feist: Yeah, okay.

Arnetheduck: Compared to our numbers, at least, your numbers seem on the high end of things.

Potuz: Yeah, I'm giving you the worst cases; my computer takes very little in the normal case. The biggest chunk is in aggregating the one-bit attestations, because we take them all at four seconds and we aggregate them at eight. On my three validators that takes about 25 milliseconds normally, but it gets to two seconds occasionally.

Arnetheduck: So I'm looking at some numbers here, actually; specifically I'm looking at the delay from the start of the slot when attestations and aggregates arrive, which was basically the number asked for before, and just eyeballing it, it's in the 97% range that both attestations and aggregates are in within two seconds of when they're due.

Potuz: Well, that's a different issue; I'm talking about a different thing. In order for us to submit the aggregate at eight seconds, what we do is start aggregating before, so that the aggregate is already ready at eight seconds. Eight seconds is our deadline to submit the aggregate, so what we do now, and this is adjustable by the user, is aggregate everything some time before eight seconds. You need that head start to have enough time so that by eight seconds you can actually send an aggregate, because at eight seconds we're going to send whatever the node has; that's the deadline, so it's always going to be early.

Dankrad Feist: I still feel like you're misrepresenting the issue, because if nodes with two validators can manage in that time, then I feel like that's fine, that's great; those are just going to get their aggregations in, and we don't need everyone to aggregate, we just need someone to aggregate, right? And if people who run more validators actually need larger machines to run them, that's not the end of the world, in my opinion.
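As a rough illustration of the work Potuz is describing, combining many one-bit attestations for the same AttestationData into a single aggregate, here is a naive sketch in consensus-spec style. `bls.Aggregate` and the `Attestation` container come from the specs; the helper itself is hypothetical and skips the harder part the discussion turns on, namely merging overlapping aggregates.

```python
def naive_aggregate(unaggregated: Sequence[Attestation]) -> Attestation:
    # Naive illustration only: assumes every input is a single-bit attestation
    # for the same AttestationData, with no deduplication or overlap handling.
    data = unaggregated[0].data
    bits = list(unaggregated[0].aggregation_bits)
    for att in unaggregated[1:]:
        assert att.data == data
        bits = [a or b for a, b in zip(bits, att.aggregation_bits)]
    return Attestation(
        aggregation_bits=Bitlist[MAX_VALIDATORS_PER_COMMITTEE](bits),
        data=data,
        # One elliptic-curve addition per signature; this is where the time goes.
        signature=bls.Aggregate([att.signature for att in unaggregated]),
    )
```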
Potuz: Well, I do think that this is a centralising force, if people who want to be home stakers can only stake one or two validators.

Dankrad Feist: You were talking about someone who subscribes to all subnets, right?

Potuz: That's correct, but this is what we need to assume when we ship our clients with default values. We typically look at the worst situation, which is a node that runs at least 30 keys, and there are many of these; those are going to be subscribed to all subnets, and this is what needs to drive our defaults. So our timing is going to be this regardless of whether you're a home staker or not.

Dankrad Feist: I think it's okay to require someone who is running 30 validators, which is what, a few million in capital, to afford a machine with a few more CPUs to make this fast enough.

Caspar: Something to note here, though, is that aggregation is not incentivized. We rely on aggregation, but we don't actually incentivize it; the only implicit incentive is that you aggregate your own view.

Arnetheduck: I mean, there's another point, which is that when you're running 30 validators you're not aggregating 30 subnets; you're aggregating far fewer. There are 16 aggregators for every subnet, so your chance...

Dankrad Feist: Yeah, we're really talking about the node that's running a thousand validators or something, which would actually need to aggregate.

Arnetheduck: Yeah, my point is that the number is fixed; there are 16 aggregators per subnet.

Potuz: If the blocks are full and you have many more aggregates that are worse aggregates, like what a validator with only two subnets would produce, then you're going to fill the blocks with fewer attestations.

Dankrad Feist: Why would a node with two subnets have worse aggregates?

Potuz: Because they see fewer attestations; they get fewer peers.

Dankrad Feist: Yeah, but aggregations are per subnet.

Potuz: Yeah, but smaller nodes see many fewer peers and many fewer unaggregated attestations than larger nodes that have far more peers.

Dankrad Feist: Is that true?

Potuz: My node is a much worse aggregator than any node in our Prysm Kubernetes cluster.

Dankrad Feist: I don't understand. If I'm subscribed to subnet 1, why would subscribing to subnet 2 as well make me see more attestations from subnet 1?

Potuz: No, it also depends on the number of peers that you have. If you're a home staker like myself, on home bandwidth, and I'm restricting my number of peers, I don't remember if the default is 30 or 50 now, then I see many, many fewer attestations than someone running with 200 peers on a cluster.

Arnetheduck: That's not how the protocol works.

Dankrad Feist: So, yeah.

Arnetheduck: The number of peers is completely irrelevant. The only thing that's relevant is the gossipsub concept of the mesh, and that one is capped at, let's say, 8 to 12, depending on whether you look at the average or the max. So you can be subscribed to eight peers and see the exact same traffic as if you're subscribed to 200 peers; it doesn't really matter. And your aggregate, when you're creating a block, is created from listening to the aggregate channel, not from listening to the attestation channel, typically. So the aggregators are doing that work for you; there are basically 16 of them per subnet, right, so...
Potuz: No, no, but I'm talking about aggregating the one-bit attestations, which is what you do when you are an aggregator, not when you're aggregating aggregates. The largest chunk for us is aggregating one-bit attestations.

Arnetheduck: Yeah, and the risk of you being one of those guys is pretty small; there are 16 aggregators per subnet, and you're never subscribed to all the subnets. And again, this is not a function of how many peers you're connected to; that is completely irrelevant. So I think this deserves more investigation, and I think we should take it offline. We can go through the flow, but it certainly merits in-depth investigation of all these issues if we're going to change these timings.

Potuz: And I'll post, maybe in the channels, the benchmarks that we have, because we changed these algorithms because of the numbers we were seeing, both in our clusters and on common computers, and we were seeing very large times for aggregations.

Arnetheduck: Well, yeah, I'm just saying your peer count is not relevant; that's not how the protocol works. But on changing the timings, the other thing I wanted to say is that we don't have to split the time evenly between the point where we do the attestation and the point where we do the aggregate. We can play around with that a little; it doesn't have to be evenly divided. Right now it is evenly divided, and if we want to divide it differently, it doesn't have to be, you know, six and six.

Dankrad Feist: I mean, it would even be reasonable to make this flexible, right, to not have a fixed division between attestations and aggregation.

Arnetheduck: Flexible, I don't know, because that feels like something somebody could exploit, but it could be two and six, or it could be three and three, or whatever.

Stokes: Right, right.

Dankrad Feist: I mean, even one second added to the first third, I think, would be huge, because in my observations, under normal networking conditions, blocks arriving 3.5 seconds into the slot is not rare to see. So if you add one more second, that's actually going to have a huge benefit to the network already.

Stokes: Yeah, I think we can all agree it would make sense to have more breathing room on the front end, so the question now is just what those numbers should actually be. There's a lot of data people have been referencing, and moving this to the consensus dev or some async channel is a great idea. Obviously this is important and we should keep looking into it, but we need a lot more investigation before we can just say "let's make this six seconds, let's make this two seconds," or however it ends up. So let's keep the conversation going, but we'll take it offline from here. Next up, we have an agenda item to discuss this proposal; Mike, do you want to talk about the max effective balance change?

Mikeneuder: Yeah, sure. And actually this flows really nicely from the previous discussion, because the goal of the proposal, which I'll link in the chat, is to reduce the validator set, which hopefully would help with aggregation: not only reducing the validator set as it currently stands, but also slowing down the rate of growth from new incoming validators. So I'll just give a high-level overview of the proposal.
We have a few docs that I'll link that outline the pros and cons, and then maybe we can open it up to discussion as far as some of the design decisions go. The TL;DR of the proposal is increasing the max effective balance. This doesn't change the 32 ETH minimum balance to become a validator, but it allows validators to go above that. We've proposed 2048 ETH as a potential upper bound; we don't want it to be infinite as far as how big a validator can get, but going up to 2048 we think could be a reasonable choice. Some of the benefits we outline: from the roadmap perspective, slowing the growth of the validator set will be important for single slot finality, though Dapplion brought up the point that whether we're actually blocked on the current validator set size is a little more under debate. We also talk about the benefits for the current consensus and P2P layers, which is what we were just discussing, aggregation taking a really long time. There's a post from Aditya on the unnecessary stress on the P2P network; I'll link that here. He wrote that before we published the maxEB proposal, but it goes through some of the numbers on the P2P layer, how many messages are being passed around, and the kind of unnecessary bloat from all of the validators. Then we also talk about some of the benefits for validators. From the solo staker perspective it gives this auto-compounding benefit, which people seem really interested in. The key takeaway is that with the current 32 ETH max, the sweep just takes all your rewards and withdraws them, so a solo staker would have to redeploy that capital somewhere else to earn any yield on it; whereas if we increase the max effective balance, they immediately start compounding that ETH, so they're earning rewards on more than just the 32 ETH they initially deployed. We also talk about the potential benefit for larger node operators, who wouldn't have to run as many validators. A big part of the consolidation would depend on the larger operators actually doing the consolidation. There is some risk associated with it, because the slashing conditions would result in a potentially larger penalty if they accidentally double-attest or double-propose. But in general, for large operators, Coinbase operates something like 70,000 validators, and that 32 ETH cap kind of artificially inflates that number for them; various staking operators have expressed interest in reducing the number of validators that they run, or consolidating them into fewer, higher-stake validators. So that's the high-level overview. A few of the big questions, I think, are the design trade-off of UX versus the complexity of the spec change. In the proposal we link to a kind of minimal-viable spec PR, let me get the link for that, and it's super tiny: 58 lines added, 21 lines removed. The goal there was to show how small the spec change could be, but it has some UX inefficiencies: if someone wants to get the auto-compounding effect, they have to actually withdraw and redeploy with the new withdrawal credential, with the 0x02 prefix instead of the 0x01 prefix.
There's also the issue that staking pools wouldn't be able to consolidate without pulling their validators out and redeploying them as a 2048 ETH validator, so to deploy one 2048 ETH validator they'd have to exit sixty-four 32 ETH validators and then deploy a single 2048 one. So I think it's worth discussing a bigger potential change to the spec that makes the UX better and more desirable for people. And then the other big question that's come up, and Dankrad has mentioned this a number of times, is how we actually get the consolidation to happen. If we make the change, it's only worthwhile if it results in a meaningful difference in the validator set size, and if the UX is bad and there's no real incentive, the big stakers might not do it. There might be some social capital they gain by doing something that, quote-unquote, looks healthy for the network and improves the overall P2P layer, but it's not totally clear that they would take advantage of the consolidation and make it worthwhile. So those are the big questions in my mind. Happy to open up the discussion here, and I can also take questions in the Discord channel later if that's useful, but that's the high level.

Stokes: Thanks. I have a question, just on your last point: is the auto-compounding not enough of an incentive to migrate to this regime?

Mikeneuder: Well, big stakers can kind of take advantage of compounding already; they have the automatic withdrawal sweep, and then they can just redeploy. For Coinbase, running something like 60,000 validators, the withdrawal sweep gives them enough to deploy something like nine new validators every day. So the auto-compounding doesn't benefit them as much as it benefits the little guys. There could be a case to be made that, because the withdrawal sweep takes a long time, you know, 40 days or whatever, that capital is effectively dead while it's waiting to get out, and so the auto-compounding might help them. But other than that, it's not obvious that it's strictly financially better for them to consolidate. Oh yeah, sorry, the activation queue, sure, thanks, that's the 40-day thing I was meaning to reference.

EthDreamer: Yeah, especially since for solo stakers or small stakers one of the bigger benefits was auto-compounding. I'm with you on improvements to user experience making things easier for solo stakers, but at least as currently proposed, that 2048 max effective balance is so high that a small staker would essentially never hit it, so the sweep would never pay anything out; they'd have to manually withdraw the whole thing at that point to get a withdrawal. So I definitely think the UX needs to be improved for small stakers to take advantage of this. Another point I noticed, in deposit processing, apply_deposit: basically people can top up their validator if they opt into being a compounding validator, and there is a limit that you can only top up by 32 ETH. But that seems to be a limit per deposit, so it seems like they could get around the activation queue; they don't really have to wait for activation, they just wait the 16-to-24-hour deposit queue.
Mikeneuder: Yeah, so this is actually something we tried to cover in the spec, because you're right: if they are able to get around the activation queue, then all of the churn invariants are broken. The way we got around that is just to say you can't top up past 32 ETH, basically. That's a brute-force way of doing it, but yeah, this did come up and it should be covered.

EthDreamer: Isn't that still a limit per deposit? Like, you could submit a deposit for 32 ETH, wait for that to process, and then submit another deposit for 32 ETH.

Mikeneuder: So basically how it works is: if we see that the deposit would push the effective balance of that validator above 32 ETH, then we just say, okay, the effective balance stays at 32 ETH, even if the deposit is processed like that.

Potuz: So how do you implement this? You're going to have a validator, and you'd need to keep track of how much of this validator's balance was added this way versus deposited correctly. Are you sure about this?

Mikeneuder: No, it's just a mechanism to stop them from topping up past 32 ETH. We don't care...

Potuz: How do you keep track of this? What is the mechanism, you just don't allow it?

Lion Dapplion: So the logic is: if the pubkey is known and the effective balance is already 32, we ignore the value of the deposit, so it's effectively burned.

Mikeneuder: Right.

Potuz: Oh, so you're declaring that deposit to be an invalid deposit.

Mikeneuder: Yeah.

Potuz: The contract is already deployed; that sounds like something that is going to lead to a lot of people getting burnt.

EthDreamer: That's kind of true, yeah.

Mikeneuder: Yeah, we talked about this. I think: why would someone top up past 32 ETH? People can burn ETH in a lot of ways; why would they top up past 32 ETH if we tell them explicitly that the ETH is going to be burnt? I agree it would be something we'd have to call out, but I also don't think it would be a normal pattern.

EthDreamer: At the very least, the tooling, the front ends that deal with topping up a balance, would have to tell users that it's going to burn it.

Mikeneuder: For sure, yeah.

EthDreamer: Because a lot of people aren't going to know this. A lot of people didn't know there was a reorg around the proof-of-stake activation change; people just don't know these things about the protocol.

Mikeneuder: Yeah, I mean, we could also maybe think about ways that the spec, we tried to make this spec super minimal, but maybe there's a way we can make deposits part of the churn limit, and keep the invariant that something like one over 65,536 per epoch can't change, or whatever.

Dankrad Feist: I mean, this sounds extremely drastic; I think it's a no-go. It has actually happened a lot that people accidentally deposit above 32 ETH, so I think there's no way we're going to do this; it's too drastic a consequence for a small mistake. But I also don't understand: why do you want to stop people from topping up?

EthDreamer: Just getting around the activation queue.
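A hypothetical sketch of the top-up handling being debated here, under one reading of the minimal proposal. The helper name, the compounding-prefix check, and the ignore-the-deposit behaviour are all illustrative assumptions, and as the exchange above shows, far from settled.

```python
def apply_deposit_top_up(state: BeaconState, index: ValidatorIndex, amount: Gwei) -> None:
    # Hypothetical illustration only, not the actual PR: top-ups that would push a
    # non-compounding validator past 32 ETH are ignored, i.e. effectively burned,
    # so they cannot be used to bypass the activation churn.
    validator = state.validators[index]
    is_compounding = validator.withdrawal_credentials[:1] == COMPOUNDING_WITHDRAWAL_PREFIX  # hypothetical 0x02 prefix
    if not is_compounding and state.balances[index] + amount > MAX_EFFECTIVE_BALANCE:  # 32 ETH in Gwei
        return
    increase_balance(state, index, amount)
```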
Potuz: Yeah, the problem is, if you get a lot of deposits like this, then you get a big change in the validator set quickly, but I think we can just churn this and that's it.

EthDreamer: That, or, especially, something that would help...

Dankrad Feist: Right, then deposits just need to replace the activation queue; the churn just needs to be applied to deposits.

Potuz: Right, and I think we can do this at the same time as we get rid of Eth1 data voting, because we're already thinking of churning those, so we can just mix these two things together.

EthDreamer: And then, in terms of user experience, I know this has historically added a lot of complexity, but a completely different route: is it possible to add a beacon chain operation to combine validators?

Stokes: So, I think the proposal right now is trying to keep things as simple as possible, and adding new types of operations would definitely be more complexity.

EthDreamer: Right.

Mikeneuder: Yeah, I think that's the trade-off we keep circling around: spec change complexity versus UX. But I think it's worth exploring many different avenues and seeing what makes the most sense.

Potuz: And I think everyone more or less agreed about getting rid of Eth1 data voting, and there's already a spec towards that. So as long as we move towards putting the churn on the deposit queue instead of the activation queue, I think we can just merge these two PRs into one.

EthDreamer: That's a good idea.

Mikeneuder: Yeah, that's helpful. I haven't been keeping up with the Eth1 data voting thing, so I'll have to do a little research there, but that sounds promising to me.

Lion Dapplion: I mean, the latest spec doesn't have a queue, but it can be brought back.

Potuz: That's just a matter of terminology, right? Because you still have the churn, so whether the queue is in the state or not in the state is minor here, I think.

EthDreamer: And the other thing: is it also pretty heavily planned, or leaned into, that we're going to enable execution layer initiated withdrawals?

Mikeneuder: Yes, as of yesterday I think it seems like that will happen. I'm still not totally clear on the relationship between that and this, though.

EthDreamer: Oh, it's just that if we also combine that with this, it would drastically improve the UX.

Mikeneuder: Yeah, how do the execution layer withdrawals impact the churn? But maybe that's a question for offline.

EthDreamer: It's just the fact that, if you're a small staker, you basically never withdraw; if you have one validator you'd have to grow it to 2048 ETH, and even compounded, that will take forever. So you need another way of initiating a withdrawal.

Mikeneuder: You're right, but I was just asking: if you do an execution-layer-triggered withdrawal, does it have to go through the withdrawal queue too? That seems okay; it would have to, yeah.

Lion Dapplion: I think the point is that, yeah...

Stokes: The proposal is for exits, and they would just move into the exit queue like they do today.

Lion Dapplion: Okay, the point here is: if we can do partial withdrawals triggered from the execution layer, then we could get rid of the automatic partial withdrawals, and...

Stokes: You're saying you would add that back into the proposal that Mike is presenting?

EthDreamer: Well, I was actually talking about partial withdrawals.
+ +Lion Dapplion: Yeah, so the point is, today if you are a solo staker with one validator, you need to at least extract some value to pay for expenses and whatnot. If we disable the partial withdrawal sweep so that you have automatic compounding with the feature Mike is presenting, there has to be some way for you to extract a fractional value of your validator without having to exit the full thing. + +EthDreamer: Right. + +Mikeneuder: So execution-layer-initiated partial withdrawals, not full exits. + +EthDreamer: Basically, right — although we would presumably need both. + +Mikeneuder: Right. + +Dankrad Feist: I mean, that would be cool — then we could get rid of needing an extra address, right? We could just switch all validators to this functionality and everyone just withdraws whenever they want. That seems a lot cleaner than the current proposal. + +Potuz: Yeah, one problem I see with this is that you need to bound the amount that you can actually withdraw in a partial withdrawal, otherwise you're going to get a large change in effective balance in one slot, which might be a problem. + +Mikeneuder: But wouldn't the partial withdrawal have to go — so the proposal is written to rate-limit the activation and the withdrawal queue based on stake rather than number of validators. So if the execution layer partial withdrawal goes through the normal withdrawal queue, then that rate limiting should be fine, right? We just have to make sure the rate limiting is correct. + +EthDreamer: Yeah, but as currently written there is no limit for partial withdrawals, only for full withdrawals, because those actually affect the validator set. But when you actually have — right. + +Potuz: So the proposal is only to trigger exits, not to trigger withdrawals. + +EthDreamer: Right, but if we do enable partial withdrawals this way, now you can withdraw even more than 32 ETH and still not technically exit your validator, so you'd be drawing down a validator's balance without going through the exit queue. + +Mikeneuder: Yeah, sounds like there are some more details to work out, but this is potentially promising. The one thing about this is it would change the default behaviour. Part of our design goal for the first spec was that if people don't want to change anything, we leave them as they are — that's why we left the 0x01 credentials alone. But if we switch everyone to compounding with these execution-layer-triggered partial withdrawals, then a lot of workflows would have to be updated. I don't know if that's a big enough reason not to do it, but it's a consideration. + +Stokes: Okay, well, thanks for bringing this up, Mike — hopefully that was helpful feedback, and thanks everyone for the conversation. Is there anywhere, Mike, you'd want to direct further feedback? I guess just to the research post. + +Mikeneuder: Yeah, the ethresear.ch post, or I think Danny suggested the PoS consensus channel on Discord could be a good place. I should be pretty easy to get in touch with and I'd be happy to hear more feedback, so thanks everyone. + +Stokes: Okay, great. Are there any closing or final comments for this call? Otherwise we'll go ahead and wrap up. + +Paritosh: I wanted to bring up an update on the testnet call we had just before this. + +Stokes: Yeah, please.
+ +Paritosh: Yeah, so we had the first Holesky testnet call about an hour ago, and we're going to have the next one on June 29th with a couple of asks; I can link the summary here. One of the big questions still open is that the current idea is to start with about a million and a half validators, so that we have significantly more than mainnet and we don't have to rush to immediately make deposits to keep ahead. We're just not sure if all clients think they'd be ready for such a big validator set at genesis, or such a big genesis state. So we're just looking to hear some thoughts on that. + +Lion Dapplion: Will we ever reach that on mainnet? What percentage of total ETH supply would that stake represent? + +Paritosh: I mean, we're at six hundred thousand now with a queue of about a hundred thousand, so we'll already be at 700,000 in a couple of months — so it's about double what we have right now. We're also open to starting with a smaller number, like a million, but that won't give us as big a difference. + +Stokes: Has anyone on this call tried a state that big? I would say if we can get away with the million and a half, we may as well. + +Lion Dapplion: But I mean, on mainnet that would be 40 million ETH staked — 30% of supply. That would be pretty crazy. + +Stokes: I don't know — I think if you talk to some of these liquid staking people, they want all the stake, so… + +Lion Dapplion: I don't think those numbers are completely out of the question, then, and no one can do anything about it. + +Paritosh: I think when we started Prater, if someone had said 15% of all ETH was going to be staked, we would also have thought it was crazy, but we're already here. + +Lion Dapplion: Yeah, I'm not opposed — seeing the difficulties, I would rather start with a big one so we can optimize the clients and be done with it. + +Sean: Okay, yeah, I also support the one-and-a-half-million size. + +Potuz: I have a suggestion as well — I'm not sure how hard it is to do. One of the things we're seeing on mainnet is that we now have some validators that are exited. Even if we start with a large number of deposits, we could increase the validator set size further by adding validators that are already exited at genesis. That way we don't need a large number of validators sending attestations, but the set itself is still large. + +Lion Dapplion: That's a great point. + +Paritosh: Yeah, we can also take that into account, thanks. Besides that, we're looking for client team involvement to run at least the majority of the validators. There's a Holesky planning doc that's been shared in the chat already that states what the requirements would be — what you can expect in terms of machines. It is a bit of an investment in terms of money, so we're looking for solid commitments from client teams by the 29th. If that's not possible, then we're going to look at node operators to help us get to the one and a half million, or one million, or whatever number we decide on. So please talk to your infrastructure teams and try to get back to us before the next call. + +Stokes: I'm sorry — you said that was the 29th of June? + +Paritosh: Yeah. I think that's it on the Holesky topic, thanks. + +Stokes: Yeah, thanks for bringing it up. It's very exciting to see progress there. Okay, anything else? Otherwise we'll wrap up a few minutes early. Okay, I'll call it — thanks everyone. + +Thanks Alex, bye. + +Stokes: Thanks everyone.
+ + +# Attendees + +Alex Stokes +Marius +Terence +Ansgar Dietrichs +Arnetheduck +Ben Edgington +Pooja Ranjan +Roberto B +Barnabas Busa +Lion Dapplion +Ahmad Bitar +Justin Florentine +Phil Ngo +Sean +Dankrad Feist +Paritosh +Mario Vega +Tim Beiko +Mikeneuder +Hsiao-Wei Wang +Lightclient +Caspar Schwarz-Schilling +Zahary +Matt Nelson +Nico Flaig +EthDreamer +Fabio Di Fabio +Matthew Keil +Maintainer.Eth +Enrico Del Fante +Anna Thiesar +Potuz +Guillaume +Saulius Grigaitis From de2c59c19a4110571901b1a5137173edf389b118 Mon Sep 17 00:00:00 2001 From: Darkfire_rain <67558925+darkfire-rain@users.noreply.github.com> Date: Mon, 19 Jun 2023 18:04:16 -0400 Subject: [PATCH 2/4] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index cae09971..79688ee9 100644 --- a/README.md +++ b/README.md @@ -213,7 +213,7 @@ The audio files of the Previous Meetings are stored permanently on [Permacast](. № | Date | Notes | Recording | --- | -------------------------------- | -------------- | -------------------- | -108| Thursday 2023/5/4 at 14:00 UTC |[agenda]((https://github.com/ethereum/pm/issues/771) \| [notes](AllCoreDevs-CL-Meetings/Call_108.md) \| no reddit | [video](https://www.youtube.com/watch?v=RZnf3K1i3NM) +111| Thursday 2023/6/15 at 14:00 UTC |[agenda](https://github.com/ethereum/pm/issues/809) \| [notes](AllCoreDevs-CL-Meetings/Call_111.md) \| no reddit | [video](https://www.youtube.com/watch?v=ybgQuRcz9sg) 107| 106| Thursday 2023/4/6 at 14:00 UTC |[agenda](https://github.com/ethereum/pm/issues/752) \| [notes](AllCoreDevs-CL-Meetings/Call_106.md) \| no reddit | [video](https://youtu.be/MrHh_jS4lZY) 105| Thursday 2023/2/23 at 14:00 UTC |[agenda](https://github.com/ethereum/pm/issues/747) \| [notes](AllCoreDevs-CL-Meetings/Call_105.md) \| no reddit | [video](https://youtu.be/Xc6Ss-m_nlE) From f2dff9e4f0959692e53c835ef67b8afeab504d26 Mon Sep 17 00:00:00 2001 From: Darkfire_rain <67558925+darkfire-rain@users.noreply.github.com> Date: Mon, 19 Jun 2023 18:16:26 -0400 Subject: [PATCH 3/4] Create Meeting 162.md --- AllCoreDevs-EL-Meetings/Meeting 162.md | 303 +++++++++++++++++++++++++ 1 file changed, 303 insertions(+) create mode 100644 AllCoreDevs-EL-Meetings/Meeting 162.md diff --git a/AllCoreDevs-EL-Meetings/Meeting 162.md b/AllCoreDevs-EL-Meetings/Meeting 162.md new file mode 100644 index 00000000..e38882f3 --- /dev/null +++ b/AllCoreDevs-EL-Meetings/Meeting 162.md @@ -0,0 +1,303 @@ +### Meeting Date/Time: May 25, 2023 +## Execution Layer Meeting 162 [2023-05-25] +### Meeting Duration: 90 mins +### [ Audio Video of the Meeting] (https://www.youtube.com/watch?v=jJvS6QjhPlM) +### Administrator: Tim Beiko +### Note taker: Darkfire_rain +### [Agenda](https://github.com/ethereum/pm/issues/781) + +# 1. Cancun Scope planning +## SELFDESTRUCT removal impact analysis + +**Tim Beiko**: Okay, we are live. Welcome everyone to Execution Layer Meeting 162. So, today, to start, we've commissioned an impact analysis by Dedaub around SELFDESTRUCT removal, so they'll be here to give us an overview of the analysis. There's already been some conversation in the last day or so about it — some potential issues with the proposal of EIP-6780, and then some counterpoints to those issues — so we can have all of that conversation. And then there's a bunch of stuff around EIP-4844 that's in the works.
So it makes sense to cover all of those, and after that, think about how Cancun shapes up to be — whether there are some things we want to add to or remove from CFI. There are already some comments about that in the chat. And then Danno has put together a proposal to align the opcodes used across all of the proposed EIPs, for both Cancun and the next forks, and then a couple more updates on some other EIPs. So hopefully we get through all of this in 90 minutes. But yeah, to kick us off — Neville, do you want to give us a rundown of your impact analysis of SELFDESTRUCT removal? And I'll post the link in the chat. + +**Neville (Dedaub)**: Sure, yeah. Can I share my screen as well? Okay, cool. So thanks for hosting. I'm Neville Grech from Dedaub; we were commissioned to do a study for the Ethereum Foundation regarding the removal of the SELFDESTRUCT opcode, or the change in semantics of that opcode. The team was basically three of us, and all three of us are on the call, so feel free to ask us questions. All right. So the scope of the study was basically to determine the impact of changing the semantics of SELFDESTRUCT. SELFDESTRUCT has been used throughout its lifetime either to safely transfer ETH or, ever since CREATE2 came about, to perform contract upgrades. But there are also some question marks — things like, is it actually being used to burn ERC-20 tokens, things like that? So we wanted to find the affected projects and the impact. All of these things are coloured by the fact that some projects have not been recently used, or they don't have a high balance, or they're not a known contract, so the impact has to be considered against all these factors. And the other thing is that we have two different proposals, and we wanted to help determine which one to select, if any at all, out of these two. Just to give a little bit of an overview of how SELFDESTRUCT currently works, for those who are external: SELFDESTRUCT pops one item from the stack, which is an address, and it sends all the ETH in the current contract to that beneficiary address. But, unlike CALL, it does not actually create a call frame to execute the beneficiary's code — so it can be used to send ETH even to smart contracts that block the receive function; the SELFDESTRUCT transfer still works. It also clears the runtime bytecode of the current address, resets the nonce to 0, and resets all the storage vars to 0. It no longer issues a gas refund — that changed, I think, a year and a half or a couple of years ago. So how is this going to change? In EIP-4758, very simply, it's renamed SENDALL: it only sends all ETH to the beneficiary address, but it does not clear the runtime bytecode, it does not reset the nonce, and it does not reset the storage vars. So that's the simple proposal. The way I remember which one is simple and which is more complicated is by the first number: the simple one starts with a 4, the more complicated one starts with a 6. In EIP-6780, the semantics of SELFDESTRUCT are essentially the same as they are now, except that there's a condition: if the contract — and by that I mean specifically the address executing the opcode — was created in the same transaction, then you do the same thing SELFDESTRUCT does today: you clear the runtime bytecode, reset the nonce, and reset all storage vars. Otherwise, those things do not happen — only the ETH transfer does.
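For reference, a rough pseudocode sketch of the two behaviours just described (illustrative, not the EIP text):

```python
def apply_selfdestruct(evm, contract, beneficiary, spec: str) -> None:
    """Pseudocode sketch of the two proposals described above."""
    # Common to both: the full balance moves to the beneficiary, and no call
    # frame is created for it (the beneficiary's code does not run).
    evm.transfer_all_balance(contract, beneficiary)

    if spec == "EIP-4758":
        return  # "SENDALL": code, storage and nonce are always left in place

    if spec == "EIP-6780" and contract in evm.created_in_current_transaction:
        # Only a contract created earlier in the same transaction is destroyed.
        evm.clear_code(contract)
        evm.clear_storage(contract)
        evm.reset_nonce(contract)
```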
Okay, so now that we know what each of these proposals does, here's a summary of what we found in the study. First of all, some of these things are subjective: these are protocols which we think are affected. For some of them we estimated the impact as low because they're more likely to be a false positive. We still have to clarify, for instance, in the case of Celer over here, because we've seen some weird behaviour if the SELFDESTRUCT instruction changes — but essentially the impact is quite minimal, especially with the more complicated proposal, 6780. I'll go through these one by one and give a little summary of what we saw. In the case of Axelar Network: Axelar creates contracts and then destroys them in order to do safe Ether transfers. We think the impact is high, but it is upgradable, so that can be fixed. And there is no impact for Axelar Network if the second EIP is chosen — that's 6780 — so it can remain operational without any upgrades under the second EIP. In the case of Sorbet, it's similar to Gelato: it's not upgradable — I mean the part of the smart contract that uses SELFDESTRUCT — but it really isn't used much. In the case of Gelato, it's used in conjunction with Pine Finance, and Pine Finance we think is affected as well. Now, I skipped Celer. That one is interesting because it seems to replay messages from another chain, but the messages ought to be unique, and the uniqueness of the message is used as part of the salt when creating a smart contract using CREATE2; those smart contracts are subsequently destroyed. So we think that in this case, if the messages are indeed unique, it's not going to be affected. But it's going to take a while, obviously, to go through the entire Celer protocol — it's very complicated — and we'd have to look at the way it interfaces with the change as well. That's what we think at this point; we'll try to confirm with the developers. Chainhop actually works in a very similar way to Celer, at least the part that has this issue. JPEG'd is affected, and then 1000-Ether — their homepage is also affected, in theory, but we haven't seen an instance in the past where, if we replayed these sequences of transactions, we would find the same effect; still, it is impacted by this. So note that all these estimated impacts are subjective, but from the point of view of finding potential protocols that are affected, that part is not subjective. We conducted this by looking at past transactions and, in some cases, by doing static analysis of the contracts' bytecode as well, so we did cover MEV bots a little bit too, even though there weren't sources available. Now, if we look at this from a quantitative point of view — the usage patterns — we looked at blocks between roughly block 15 million and 17.23 million.
We measured the number of times, for instance, that CREATE or CREATE2 is used — quite a few times, as you can see over here; CREATE2 pretty much dominates. There's an order of magnitude fewer SELFDESTRUCTs, but interestingly, most of the SELFDESTRUCTs are used in conjunction with short-lived smart contract creations. Now, potentially, this can be impacted by EIP-4758, because the short-lived smart contract would not be removed at the end of the transaction, and so someone else could interact with it. But then this is even more worrying: metamorphic patterns, where a contract is created at an address and destroyed in the same transaction, and then in another transaction — it could be weeks or months later — the same smart contract is recreated and then destroyed again. There have been 22,000 instances of this. Now, this pattern will not be impacted by EIP-6780, but it will be outlawed by the other proposal. And actually Axelar Network is responsible for a few of these. And then, finally, this is what we were mostly worried about: long-lived metamorphic patterns. "Long-lived" refers to a contract being destroyed in one transaction and then recreated in another transaction; when I say short-lived, it means a contract is created and destroyed within the same transaction. In this case, as you can see, the number of times this has happened is almost two orders of magnitude less than in the previous case. So, just looking at this quantitatively, it seems like, especially if EIP-6780 is selected, most of these usage patterns would remain valid. And most of these 735 upgrades are not actually done in mainstream protocols — they are done by unknown contracts, or MEV bots, or things like that. Let me give you an example of a short-lived metamorphic contract: a contract which is created and destroyed at an address, and then created and destroyed again in another transaction. I'll share these slides with you — they're just a summary of the full document. You can see the SELFDESTRUCT here: this contract is created and destroyed, and in the next transaction the same contract is created and destroyed. These are two separate transactions, but as you can see the same smart contract is created and destroyed. This is just an example, from Axelar Network. We've also looked at some protocols where this is done, or could potentially be done, according to how the smart contract code operates. For instance, this is a protocol called Revest; we can look at a sample transaction, but there's no need. Basically what happens is that Revest creates a smart contract, and then I think it transfers an NFT when you withdraw, and it actually uses the NFT id within the CREATE2 salt. Again, this is just an example — if you want to see more of these examples, look at the full report. But essentially, this cloneDeterministic call over here is going to pass some entropy from the NFT id, and that call, down the line, creates a new smart contract.
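For readers unfamiliar with the pattern, a small sketch of why CREATE2 plus SELFDESTRUCT enables "metamorphic" redeployment: the deployment address depends only on the deployer, the salt, and the init code hash (EIP-1014), so a destroyed contract can later be recreated at the same address. The keccak helper below is assumed to come from the eth-hash package.

```python
from eth_hash.auto import keccak  # assumed dependency; any keccak-256 works

def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    """EIP-1014: address = keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]"""
    assert len(deployer) == 20 and len(salt) == 32
    return keccak(b"\xff" + deployer + salt + keccak(init_code))[12:]
```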
And it's going to use the NFT id as part of the salt, and we think that, since the NFT id is monotonically increasing, there are not going to be clashes in the smart contracts that are created. So even though the smart contracts cannot be destroyed if the proposal goes through, there are always going to be new smart contracts created because of that. It takes quite a while — this study was conducted over two weeks — to go through all of these protocols and verify. But all we need to do now, essentially, now that we have this discussion going on, is confirm that the ones we thought were false positives are indeed false positives. + +Okay, so one of the other things we tried to do is use static program analysis to find possible behaviours that we haven't seen on the blockchain so far, but which may potentially happen. For instance, someone deposits tokens to a smart contract or to some address, and for some reason, with the proposal, these tokens cannot be retrieved — someone asked this question on one of the forums used to discuss these proposals. So what we did for this is, first of all, we found all contract factories. We performed static analysis to find contract factories, because what contract factories do is create smart contracts — they use something like CREATE2, and they have certain patterns there. Then, out of these, we found the ones that do CREATE2, and we applied static analysis. And in a nutshell — this is not the real code, it's pseudocode — what it does is: if it finds a SELFDESTRUCT in a contract that was created by these factories, and the contract doesn't have, for instance, an ERC-20 transfer, or an arbitrary call (a contract.call that allows you to pass in any data), or a delegatecall, then potentially you can have funds stuck in this smart contract after the EIP goes through. So we looked at many of these examples — many of them were MEV bots, of course, and MEV bots don't have verified sources, so we had to use our decompiler. It takes a while to go through these implementations, and there's triage involved here, so it's not an exact science, but we thought that all of them were false positives for this particular pattern — we haven't found any examples of this. Okay, so what was surprising is that there's been a lot of discussion about CREATE2 and SELFDESTRUCT being used to perform upgrades, but in mainstream protocols we haven't actually found this anywhere. And if you look at the fine print in some of the libraries used to do these metamorphic contract upgrades, the UX is a problem when you do metamorphic contract upgrades using SELFDESTRUCT and CREATE2, because you cannot do this atomically — you have to do it in two separate transactions. Now imagine doing a governance proposal where in one transaction you do something, and then in the second one you have to recreate the smart contract.
And then, in the meantime, people might go in and use the protocol with the smart contract not yet recreated, and that can cause all sorts of issues. The other thing is the state setup: a mainstream protocol would have quite a bit of state that needs to be recreated when you do a SELFDESTRUCT and CREATE2. So that's not used much — it's mainly used by MEV bots. And there are obviously arguments to be made as to why MEV bots would use it, because it's more efficient to do it that way — you don't have proxies — but maybe we'll hear about this in the rest of the meeting. So, in summary, we think the impact is moderate — especially so with EIP-6780 in terms of how much it affects mainstream protocols. Metamorphic contract upgrades are rare. One thing we found is that even though there's been a lot of discussion about SELFDESTRUCT being deprecated, people are still using it just the same, but very rarely. And also, throughout the study, there was incidentally some evidence of SELFDESTRUCT being harmful as well. So that concludes the summary. I think we'll be discussing a few things throughout the call, and if you have any questions, just ask me or my colleagues on the call. Thank you. + +**Tim Beiko**: Thanks a lot for the presentation, that was very good. + +**Neville (Dedaub)**: I guess before — + +**Tim Beiko**: Oh, thanks. William, I see you have your hand up. Before we go into the SETCODE stuff, I just want to ask: does anyone on the client teams, or others, have questions about the presentation or the report in general? After that we can go into next steps for the specific proposals. Any questions, thoughts, comments on the report or presentation? Okay. If not — where we were at prior to this report was that we'd included EIP-6780, which is basically the second proposal mentioned in the presentation, and it allows SELFDESTRUCT if it's within the same transaction as the create call. William, you raised a couple of cases where this could break things — do you want to take a minute to walk through those, and we can take it from there? + +**William Morriss**: So I'm a user of the CREATE2 upgrade pattern, and I have a prepared statement. The report skims over MEV, but doesn't speculate why this upgrade pattern is common among MEV bots, who wrote their contracts in assembly. The reason is broader than MEV and applies to normal DEX trading: indeed, any actor trying to get competitive transactions included with any urgency must participate in priority gas auctions. Off-chain systems are being built to allow anyone to participate in these auctions. Such auctions are denominated in gas used, so any upgrade mechanism with any gas overhead whatsoever cannot be used competitively. That leaves code replacement as the only viable mechanism for traders. Suppose I want to be able to upgrade my code in order to trade on the newest protocol: it is much simpler now, when I'm able to replace the code. Without this ability I would need to redeploy to a new account instead. Every NFT, every token balance, every token protocol approval, every staking deposit, every registration —
Indeed, my entire presence on the chain would have to move over to this new account, and for identities, moving all your possessions to another account is painfully expensive. If your account is a smart contract, having a secure way to upgrade it can avoid such a migration. Indeed, this is the purpose of several of the mechanisms identified in the report: such systems are inventing ways to secure the upgrade process for the masses. SELFDESTRUCT removal destroys their work; SETCODE inclusion renews it. We can preserve the ability of accounts to change their code by including SETCODE in the same fork. Systems anticipating the fork can prepare by including SETCODE; if SETCODE is not included in the same fork, we could not securely anticipate what the semantics will be. It is critical for the adoption of smart contract wallets that we preserve code mutability. As an aside, I've had some discussion in the topic channel on Discord, and it seems it would be a significant improvement for static analysis if we also disabled SETCODE during DELEGATECALL; this would limit the scope and risk and make it much easier to identify potentially mutable contracts, because you would just be looking for the opcode. Another note — I've forgotten it. Anyway, I'm going to read your responses now. + +**Tim Beiko**: Thanks. And then — okay, this all happened in the past day — a set of security concerns about SETCODE was posted in the Discord; I just shared them here. The author unfortunately couldn't make the call. I think what it boiled down to in the Discord, right before I hopped on this call, was that basically, if we want to introduce SETCODE as a mitigation for some issues caused by 6780, there are some security concerns with that. The worst of them have to do with using SETCODE within a DELEGATECALL, and maybe we could enable SETCODE without enabling it in DELEGATECALL, and that would resolve those issues, I guess. + +**MariusVanDerWijden**: Yeah, I have a quick question: can we add SETCODE later on? Is it possible to add this opcode later on? + +**Tim Beiko**: Assuming we do remove SELFDESTRUCT, right? That's what you're asking. + +**William Morriss**: So, in order to allow contracts that are currently upgrading with SELFDESTRUCT to remain upgradable through the upgrade, it would be better that we don't have downtime in which code mutability is not possible. It's also important to solidify that opcode, so that at the time of the upgrade the contracts that would be frozen can be prepared for the SETCODE upgrade. So it's better that they happen in the same hard fork. + +**MariusVanDerWijden**: Is it technically impossible for them to be separated out? + +**Tim Beiko**: What do you mean by separated out? + +**MariusVanDerWijden**: Is it possible for them to be in separate hard forks? + +**Tim Beiko**: Well, I guess — if I understand correctly — this means… + +**MariusVanDerWijden**: Yes, but the contracts that are affected are basically bricked between those hard forks, right? Anyway — this, of course, looks to me like… I think code mutability is something that should never have been done in the first place. It's very much an anti-pattern, in my opinion, and people are taking advantage of it, and I don't really think we should include this opcode now.
I also don't really think we should do this opcode in the future — but we can have a debate about that, whether in order to unstick things or make it cheaper for MEV bots to extract value, maybe. I don't know; I think this opcode has a huge amount of security implications, and we shouldn't do it. + +**Tim Beiko**: Thanks. Guillaume? + +**Guillaume**: Yep. I mean, a lot of it Marius already said. Yes, I think it's a huge security issue. But the biggest problem — my question, really — would be: if any of the arguments just given in the statement were true, or at least significant enough to make us consider including SETCODE, would simply not removing SELFDESTRUCT not be the superior solution? Does SETCODE offer something that not removing SELFDESTRUCT would not solve in a better way? + +**William Morriss**: Yes. The weakness of the SELFDESTRUCT upgrade pattern is that there's downtime in the upgrade itself, because the SELFDESTRUCT takes place at the end of the transaction rather than during it. So you must SELFDESTRUCT in one transaction and then, in another transaction, recreate the contract. SETCODE would allow us to upgrade safely and securely in place. If you all want more time to analyze SETCODE, please just postpone SELFDESTRUCT removal. Thank you. + +**Tim Beiko**: Thanks. Ben? + +**Ben Adams**: If SETCODE and SELFDESTRUCT deprecation both happened, wouldn't SETCODE need to be in a fork before SELFDESTRUCT removal? Because otherwise you couldn't upgrade a contract to use SETCODE: if the SELFDESTRUCT change happened first, you could no longer upgrade the contracts, and SETCODE didn't exist beforehand, so you can't upgrade them to use it. + +**William Morriss**: Yes — I think it can be in the same hard fork, because contracts can contain invalid opcodes. + +**Tim Beiko**: Okay, so people would basically deploy the contract with an invalid opcode prior to the fork, and then the opcode would become valid after the fork — is that right? + +**Ben Adams**: Yeah, but they would be dead contracts during that time. + +**Tim Beiko**: Yes. Yeah, I guess I'd be curious to hear generally from client teams, given all of this, how people feel about 6780 and SETCODE. Basically, I think the first question is: we agreed to have 6780 in, pending the results of the impact analysis and making sure that not too many things would break. Are teams comfortable leaving it in regardless of what we do with SETCODE, and analysing that separately? Or do we feel like the continued inclusion of 6780 should be dependent on SETCODE? + +**Andrew Ashikhmin**: I think we should leave 6780 in and analyze SETCODE separately. + +**Tim Beiko**: Okay. + +**MariusVanDerWijden**: I feel the same way. We should have 6780 as soon as possible, and think about SETCODE. + +**William Morriss**: Could we perhaps finalize the opcode reserved for SETCODE, such that bricked smart contracts might eventually be unbricked in a future upgrade? + +# 2. EIP-4844 + +**Tim Beiko**: Yeah, we have a whole section about opcodes later on — Danno's put together a full list — so I assume we can at least try on this call to preserve one for SETCODE, and we can discuss that as part of that. Okay, so Erigon is on board with leaving the SELFDESTRUCT change in. Anyone else, I guess —
Does anyone disagree with that — that we leave EIP-6780 in and we keep discussing SETCODE? It may or may not be included in this upgrade; on the security side, just in the last 12 hours there was a ton of discussion, so there's still a lot of back and forth. Does that make sense to people? Okay, so let's do that. So no actual changes to the fork inclusion list. And if anyone wants to read the full impact analysis for SELFDESTRUCT, it's linked in the agenda, so people can see it from there. Okay, next up: EIP-4844. There are a ton of PRs — some potential changes to discuss for the next ten minutes or so, and then some RLP and SSZ changes that might affect our currently CFI'd EIPs. So I guess to start off: lightclient, you had PR #7062, which adds data gas used to the block header — do you want to briefly discuss this? + +**Lightclient**: Yeah, I just wanted to see if there is any interest from other EL devs. I'm not sure if Peter is on the call, but while he was implementing some things related to sync, he noticed the interaction with the sync code was a little different than he was expecting. After digging into it, we realized that excess data gas and the base fee are not as similar as we initially would have anticipated. Ultimately the issue is that the excess data gas number is going to be used in the descendant block: the value you need to compute the cost of the data transactions is actually the excess data gas from the parent header. For base fee it's slightly different: the base fee in the header is the base fee that's used during the execution of that block's transactions, and whenever you do your header validation, that's when you check that the base fee is computed correctly. This is just a proposal — I was sketching out what it would look like to bring those two things in line, so that they both represent the value for the currently executing block, and that's what the PR is. Ultimately the question is just: do we care enough about having this similarity to make this sort of minor change, or are we okay with the status quo? Personally, it's just one of those things where, if we have the formatting slightly different, it increases the overall code complexity of clients, like we saw as Peter was implementing the syncing code, whereas if we had reused a mechanism similar to the one we already have, it would have kept things a bit simpler. So my thoughts probably slightly favor the data-gas-used approach in the PR, but I'm curious what other people think about it. Oh, Peter's here too. + +**Peter Szilágyi (karalabe)**: Yeah, perhaps just a slight expansion to what Matt said. I think the complexity comes from the fact that previously the fee was defined by the base fee, and essentially we have two header fields: one of them tells us how much gas we consumed, and based on that we can validate the base fee from header to header. And as for the base fee itself,
if I want to run the transactions, I only need to look at the current block's base fee. With the blob transactions we kind of merged these — how much we used, and how much the next one costs — into a single field. And because it is a single field, it changes things basically everywhere. Previously, when we validated the header chain, we just looked at the header fields; we didn't care about the block content at all. And when we ran the block content, we just needed to look at the current header. Since this field got somewhat convoluted, it means that when I'm running the blocks I somehow need to look at both the current header and the parent header. Similarly when downloading via snap sync: there, even if I'm not running or executing the block, I still want to verify the block body. And the thing is, it doesn't really matter how we interpret the excess data gas, whether it's pre-execution or post-execution — if I convolve these two fields together, I will have this extra complexity that all validation code needs both the parent header and the current header. Whether this is worth it — one solution is to go with the current status quo and make the code around this structure a bit more complicated; the other solution is to split out the two fields so that it follows a similar pattern to base fee and gas used, and with the two fields the code route becomes a bit simpler. It's not that complicated; it was just something that was surprising to me — a new mechanism, something that was kind of unexpected. + +**Tim Beiko**: Thanks. Andrew? + +**Andrew Ashikhmin**: Yeah, in Erigon we have separate stages for headers and bodies, so I would be very much in favour of making the change so that we don't need a parent's body to verify the block. So yeah, I'm totally in favour. But just to double check: with this change, we only need the parent's header, not the parent's body, correct? + +**Peter Szilágyi (karalabe)**: No, actually — what the current scheme requires is: if you want to verify the current block's body, you need the parent header. That's the weird thing: the current block's transactions require the parent's header. With the proposed change, you could verify the headers completely separately, and then, when you want to run or execute the block, you only need to look at the current header; you don't need to look at the parent header at all. That's how everything else currently works, but it requires this field being split out. So that's the cost. For example, for us the complexity was that in our client, when we are snap syncing, we essentially have this two-phase thing where we download the headers separately, and then for every header, if the transaction hash is not empty — meaning there are transactions — we download the block body and fill in the header. Up until now we just said: okay, I want to fill this header, I download all the contents, and then I just match it — does this transaction list match what the header expects? If yes, great. And now with the blob transactions, what I need to do, after downloading the list of transactions, is check:
does the excess data gas also get computed correctly — for which I think I need the parent's excess data gas. It gets a bit funky. It's doable: the way we did it is that from now on, in our downloader, a download task is not a single header, it's actually a header and its parent. It's not especially complex, but, for example, if you interrupt synchronization and resume it, then you also need to dig up the parent of the first header. It's just these little tiny weirdnesses all over the place where up until now you just had the single header, and it makes things wonky. But again, the question is: is it worth it to add an extra integer to the header, or is it too much? I don't really want to make this decision, because the current design isn't that painful. So if people say it's not really worth stirring up the wasp's nest for it, then I can live with the current design, whatever it is. + +**lightclient**: Yeah, I mean, I think this one simple thing is really not that big a deal, but it's more the mindset. I'm worried about what will happen if, every fork, there's that one small thing that doesn't really align nicely and we could have just fixed it. If we have that mindset, in 5 or 10 years how many special cases are we really going to have? A lot, I think. + +**Tim Beiko**: Right. I guess, given Geth and Erigon's comments, does anyone think we should not do this change? + +**stokes**: So there was a point about refactoring the execution gas in a way that might make the current thing in 4844 make more sense. I don't understand what he's envisioning well enough to really defend it, but I wonder if anyone else does? + +**lightclient**: Is it written somewhere? I don't think I've seen it. + +**stokes**: Yeah, well, I guess I'm also a little confused. There are maybe two things here: one of them is just changing how it is computed, moving to nicer math to better approximate exponentials. It sounds like the change we're bringing up right now is unrelated to that, and it's just more about how we actually validate these things. Is that correct? + +**lightclient**: Yeah, I think it doesn't have anything to do with the exponentials; it's more about what the order of validation is and where you get the data for validating. And I don't think what Dankrad is looking at doing involves getting rid of the gas used field, which is kind of what allows us to make the base fee in the header the base fee for that header's block. That's what we're missing for excess data gas. + +**Stokes**: Yeah. So maybe if there was pushback, it was a miscommunication, I'm not sure. But does anyone think this is going to push back 4844 timelines too much? + +**Andrew Ashikhmin**: Yeah, I just wanted to say — I think it doesn't. But I misunderstood it in the beginning, because I thought it eliminated the need for parent bodies, and it's not actually about parent bodies. So I need more time to look at the proposal; I don't have a position on it at the moment. + +**Tim Beiko**: We have the 4844 call on Monday — does it make sense to give people the next two or three days to look it over and make a decision on the 4844 call Monday? + +**stokes**: Sounds good to me.
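To make the difference concrete, a rough sketch of the two data flows discussed above (field and function names are illustrative, not the EIP or PR text):

```python
TARGET_DATA_GAS_PER_BLOCK = 393216  # illustrative constant

def calc_excess_data_gas(parent_excess: int, parent_data_gas_used: int) -> int:
    if parent_excess + parent_data_gas_used < TARGET_DATA_GAS_PER_BLOCK:
        return 0
    return parent_excess + parent_data_gas_used - TARGET_DATA_GAS_PER_BLOCK

# Status quo (sketch): pricing the blob transactions inside block N needs the
# PARENT header, and checking the header field itself needs the block body.
def blob_fee_input_status_quo(parent_header) -> int:
    return parent_header.excess_data_gas

# Proposed (sketch): also store data_gas_used in the header, mirroring
# base_fee_per_gas / gas_used. Header-to-header validation checks the update
# rule, and executing or verifying a block needs only the CURRENT header.
def validate_header_pair(parent_header, header) -> None:
    assert header.excess_data_gas == calc_excess_data_gas(
        parent_header.excess_data_gas, parent_header.data_gas_used
    )

def blob_fee_input_proposed(header) -> int:
    return header.excess_data_gas
```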
+ +**Tim Beiko**: And if people can't make that call, just leave your comments on the PR directly, and we can consider those on the call. Okay, next up — + +**Peter Szilágyi (karalabe)**: Let me just add one more thing: George actually asked how this whole thing relates, for example, to light clients, if you want to just verify the headers but don't have the bodies, because obviously you don't download the bodies. The short answer is that a light client will not be able to verify the excess data gas in its current form. The same way that a light client cannot verify the gas used field — because you need to run the transactions — a light client cannot verify the part of the excess data gas that tracks how many blobs are included. And the other problem — this is another one of those weirdnesses — is that even though a light client can verify the base fee, it won't be able to verify the blob fee, because of this dual nature of the excess data gas. So for light clients you would just need to take it for granted that that field is correct. But again, with light clients, if we assume that the consensus network is on a semi-good chain, then a lot of validations can be omitted — you could even debate that if the consensus client tells you this is the header, why even bother validating anything, just roll with it. So it's not the end of the world; it's just a quirky thing, and we have to decide which quirk we want to live with. + +**Tim Beiko**: Got it, thanks. Okay, next up: this is an old PR that recently had some movement — refactoring the validity conditions for blocks. I believe this is the one where blocks didn't actually check whether the blob cap was exceeded, only the individual transactions. + +**Stokes**: Yeah. So the person working on this couldn't make the call, so I will answer any questions on this PR. That's basically it: some things kind of got dropped, and I think the main change at this point is exactly what you said. There's currently no way in the spec to say that there are only so many blobs per block. If you look at the current spec, I could send maybe only a few blobs per transaction, but I could send as many transactions as I can pay for, and then suddenly there's, you know, 30 MB of blobs. So it'd be nice to have this defined at the EL, and that's what this change does. + +**Tim Beiko**: Got it. Anyone opposed to this? It seems pretty straightforward. Okay, then I guess we could probably go — oh, yeah, sorry, I was trying to find the button. + +**Peter Szilágyi (karalabe)**: I think it was someone yesterday who said that this field is already validated by the consensus client. My two cents — I already wrote it on the channel — is that in my opinion it would be nice to have it validated by the execution client too, simply because, one, it's simpler and, I mean, safer, and the other is that it would be useful to have the validation somewhat self-contained. So basically, if I just give a batch of blocks to the execution client, it can do as much validation as possible — maybe it cannot do chain selection, but it should be able to verify everything else. And this extra check is needed to forbid blocks containing hundreds of blobs. + +**Stokes**: Yeah, I think we all agree. And so it seems to make sense to me to keep this in line with execution gas, where there's a limit at the EL and that's sort of the ground truth; there is a cap at the CL, but that's more of a networking thing at this point. And again, as we've discussed, you could imagine this kind of thing varying independently, subject to each layer being comfortable with that.
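A minimal sketch of the kind of EL-side block check being proposed (constants and field names are illustrative):

```python
DATA_GAS_PER_BLOB = 131072            # illustrative constants
MAX_DATA_GAS_PER_BLOCK = 786432       # e.g. room for 6 blobs

def validate_block_blob_count(block) -> None:
    """Reject a block whose transactions collectively carry too many blobs,
    independently of any per-transaction limit."""
    total_blobs = sum(
        len(tx.blob_versioned_hashes)
        for tx in block.transactions
        if tx.type == 3  # blob transaction type
    )
    assert total_blobs * DATA_GAS_PER_BLOB <= MAX_DATA_GAS_PER_BLOCK
```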
+ +**Tim Beiko**: Okay, so I guess we can go ahead and merge this whenever Stokes says it's ready. Okay, next up: devnets. You wanted to give a quick update on devnet 5, and then Gajinder, you had three PRs related to devnet 6. So let's do that — Barnabas? + +**Barnabas Busa**: Yeah, sure. To recap, in the past two weeks we had a long period of non-finalisation on devnet 5. It ended up with us draining 900 validator accounts to force them to exit, and we managed to get into a finalised state again with Lighthouse and Nethermind. At that point we had the validators running on a single node, and everyone was able to catch back up to that using checkpoint sync. I then decided to make some new deposits, so we can see if anything goes wrong now — I made a thousand deposits yesterday, and these are being processed right now. One thing I noticed is that everyone gets ejected at a balance of 31.7, which is a bit strange, because in the config I have set the ejection at 31, so I'm not quite sure why that happened. You can see the config for this, and in the interop channel we had some discussion about it — what is that? + +**Tim Beiko**: Oh, Ben says it's hysteresis. Can you expand on that? + +**Ben Edgington**: If my microphone is working — yeah, it would be based on the effective balance. When the node's balance drops below 31.75 ETH, the effective balance drops to 31. So if you set the ejection balance to 31, then when the actual balance goes below 31.75 you'll trigger the ejection, which sounds like what you're seeing. + +**Barnabas Busa**: Okay, yeah, that makes sense — I had no idea. Anyway, right now we are finalizing, and we have about 500 in the queue right now; it's looking quite nice. We also have it continuously running. Another thing is that the devnet 6 specs are being collected right now, and I'm open to including or not including anything else — we have a link for that too. There are quite a few PRs that I would like to include in devnet 6. This would still be a 4844-specific devnet, and then hopefully devnet 7 would come after, which would combine other PRs that are not related to 4844. + +**Tim Beiko**: And that is — + +**Barnabas Busa**: Devnet 7 is months from now. But first we should focus on the devnet 6 launch. + +**Tim Beiko**: Yeah, that sounds good. Okay, so I'm looking at your doc now and I see some of the PRs. Basically, a lot of the listed PRs are the same ones Gajinder had up. I think it makes sense to take most of the call on Monday to go over this list that you have — but do you want to discuss the three that you brought up on the agenda for today? So first of all, #7038. + +**Gajinder**: Yeah, hi Tim. So, 7038. What 7038 does is it basically refactors a little bit how the network payload is built. It's now first the transaction payload, then blobs, then commitments and proofs; earlier it was transaction payload, then commitments, then blobs, then proofs.
So it just feels nicer this way. The second thing is it also adds clarification that the blobs are flat: otherwise there was an interpretation — we also had a discussion in the Discord — that each blob itself is a list of field elements. So this PR also flattens out the blobs, in a big-endian way. And then it just cleans up some references and adds some validation conditions that were missing. + +**Tim Beiko**: Okay, yeah, I see there have been a couple of comments on this. Any other thoughts from anyone on the call? Okay, next up — this is following our discussion from last time about the precompile inputs being encoded as big-endian. Any thoughts? + +**Gajinder**: I think in the consensus specs the KZG big-endian change has already been merged, so — + +**Tim Beiko**: Okay, got it. + +**Gajinder**: This is quite natural, then. + +**Tim Beiko**: Sorry, someone else was trying to say something? Okay. Yeah, I mean, if there are no more issues we can probably work that out as well. And then, okay, this one was to the execution APIs, to basically add data gas used and data gas price to receipts for 4844 transactions. + +**Gajinder**: Yeah, it just adds those fields in the RPC response. + +**Tim Beiko**: Yeah. Does anyone have strong opinions about this? Okay, so it probably makes sense to include it. I think lightclient had a comment there saying that we want to wait until we effectively have the full set of changes for Cancun, so that we can merge them all at once — but that seems to be the only concern. And Roberto is saying that some clients already have it. Okay, so I think that's what we had in terms of PRs for today. Let's go over Barnabas's list for devnet 6 in more depth on Monday's call, as well as the PR that Matt opened, which I'm now forgetting what that one was about — scrolling up, sorry, I have way too many tabs open here. Oh, that — okay, yeah, the data gas used in the header. We'll discuss that more thoroughly on Monday as well. Cool. Okay, and then the last thing on 4844: we had basically included the SSZ optionals EIP in Cancun, and we'd also CFI'd 6493, the SSZ transaction signature scheme. There were some comments that we should remove those, given we've moved 4844 to RLP instead. Does anyone think we should keep any of the SSZ EIPs, either CFI'd or included, in Cancun? Okay, no objection. So I'll do this after the call: I'll take 6475 out of the included list, and I'll remove 6493 from the CFI list. Okay, anything else on 4844? Okay, next up — oh, yes. + +**Alexey (@flcl42)**: My question: can we have an empty "to" field in blob transactions? As far as I remember, we did not want that because of SSZ and so on, but now we could have it. So do we still need to forbid an empty "to"? + +**Tim Beiko**: I'm sorry, I'm not sure I quite understood — is this about contract creation? + +**Alexey (@flcl42)**: Yes. + +**Tim Beiko**: Yeah, so banning contract creation from blob transactions, right? + +**Alexey (@flcl42)**: Yeah. What is the reason — could you explain? + +**Tim Beiko**: I believe lightclient can take a stab at this.
+ +**Lightclient**: So, the original reason was partially motivated by the fact that this thing was not well specified in SSZ. But I think there's still a good reason to do it, and that's that we have these two opcodes for creating contracts, and there's not really any particular use case that I'm aware of where we have to have the ability to create a contract via the transaction itself. And through the hard forks we've seen that this is one of these frustrating things to test, because for every change to the transaction, and often for new opcodes or new EVM functionality, we have to test in the context of both normal execution and in the context of init code — both with the CREATE opcodes and in the context of the create transaction. So I would like to start moving away from having create transactions in general and simply rely on the CREATE functionality within the EVM. + +**Alexey (@flcl42)**: Ah, I see. + +**Tim Beiko**: Yeah. Anything else on 4844? If not — okay. So, Danno, you put together this document because we're proposing a bunch of opcodes for Cancun and Prague, and it's all starting to be a mess, and you have a proposal for how we can make it cleaner. I'm not sure if you're speaking, but you're on mute now. + +**Danno Ferrin**: I'm on now, yes — I was muted. + +**Tim Beiko**: And we see your screen. + +**Danno Ferrin**: Yeah. So, like Tim said, there are a lot of opcodes coming in, and a lot of space being occupied and moved around. Part of it, I have to say, was partly the responsibility of EOF, because it was occupying three key opcodes, the last of the 0x5 series. So here's a quick overview of the opcode blocks we currently have: the 0x5 block was basically filling up, having the storage, memory and control-flow opcodes, and that's where the initial EOF control-flow opcodes went in. The proposal starts with a couple of philosophies. One of them is to move all EOF-only opcodes — ones that only make sense inside EOF containers — to a separate block, the 0xE block, which moves them out of the 0x5 series, and then to move everything back to the block where it makes the most sense. That would then move TLOAD and TSTORE back to 0x5C and 0x5D, probably, and MCOPY, if it passes, into 0x5E. This does not affect BLOBHASH or the beacon root — beacon root might be out; that's a late-breaking change overnight. And then there are proposed changes in the F series — that series is filling up. There's another proposal for a new series of CALL opcodes later in this meeting, so there's space reserved for those, reserved space for PAY, and I guess we need reserved space for SETCODE as mentioned earlier in the meeting. But the purpose of this is to get a more sensible packing and grouping of the opcodes and to fill in the space left by EOF moving into its own block. So this is what's proposed: TSTORE and TLOAD are currently in, as is BLOBHASH, and everything else is speculative — to be added at some point, if it's added at all. + +**Tim Beiko**: Thank you. Anyone have thoughts, questions, comments? + +**William Morriss**: Can we assign SETCODE in this meeting? + +**Danno Ferrin**: Again, until it ships they could move it around. But my thought is it should be in the F series rather than the 4 series, and I think the last F-series slot available is 0xFC.
So we could put that in, as is the current location for it.

**Tim Beiko**: Okay. Alex?

**Alex Beregszaszi**: Yeah. Regarding the EOF side, we have been discussing these opcodes in, like, the last two or three EOF breakout calls, so I think we can do this in any case for EOF. There's no question about that; we are really in favor of it from the EOF side.

**Tim Beiko:** Sweet.

**Charles C**: There was a bit of discussion of putting MCOPY at 0x4F, which is, like, slightly before the 0x5 series. But I don't know if it really makes so much sense. It was just an idea.

**Danno Ferrin**: So the reason I wouldn't want it in 0x4F is because those are all focused on block data, stuff that might come in from the environment, you know, what's in your block headers. The only reason that PUSH0 got put into 0x5F is because of some fun math related to the PUSH series operations; otherwise I would have put it somewhere else rather than 0x5F. So putting MCOPY at 0x4F, I don't think it makes quite the same sense. I think a better home for it would be 0x5E.

**Tim Beiko**: Is this something we should try to track somewhere better than a document in your hands? I know we have, yeah, we have something similar for transaction types somewhere.

**Danno Ferrin**: So that's exactly what I'm thinking. I wasn't sure if the right place for this is the execution specs, or an informative EIP. So, to get the discussion rolling, I just did my own doc and put it into the link, and then we can move it somewhere. And if it's kept, it should be kept as a live document: as proposals become non-viable they should be removed from the list, and as they become viable, maybe some speculative opcode placings get added. But again, shipping opcodes get priority over discussed opcodes.

**Tim Beiko**: Yeah, I think. Where are, let's say, the transaction types in the execution specs? Is that where we... yeah, they're in a folder called lists/signature-types, and there's a README in there that also keeps track of tentative signature types.

**Lightclient**: So this is a pretty similar thing. I don't know if it would go in the lists folder, I mean, maybe. But yeah, it makes sense in execution-specs in general.

**Tim Beiko**: Yeah, I kind of like lists, because, I guess, it is a list of opcodes.

**Lightclient**: Yeah, I mean, I think it fits well with the other stuff, like having CFI'd EIPs and just having an idea of proposed changes for forks. This is just a different way of looking at the proposed changes for forks.

**Danno Ferrin**: Okay, I'll make a PR for execution-specs that includes all this, and I'll try to follow the signature-types format. But are there any objections if I were to open PRs to move the opcodes for Cancun? All right, I'll do that today, too.

# 4. Other EIPs

**Tim Beiko**: Cool. Yeah, thank you, this is a really great doc. Okay, a couple more things. So on the last call we briefly discussed trying to make some decisions on this call about any other EIPs we might want to include in Cancun. So, a couple: I guess 4788 has had some updates since the last discussion, and then I believe there is a new proposal for revamped call instructions, and there's a whole bunch of other proposed EIPs. So I guess quickly, maybe, Alex Stokes,
do you want to give a quick overview of the changes to 4788? Then Alex Beregszaszi can give a quick overview of the call EIPs, and then we can hear from client teams about what they feel might make sense to include in Cancun.

**Stokes**: Sure. So this kind of continues from ACDC last week. The way that 4788 was before the current updates is that it would add a new opcode, something like BEACON_ROOT, and that would basically call out to some storage in the execution state where these roots would be. This is kind of merging two different mechanisms: a precompile-like thing and an opcode-like thing. So we kind of decided it would just be cleaner to bite the bullet on a stateful precompile, and that's what the current update does. Yeah, basically it's just a precompile like we're all familiar with; it just happens to have access to execution state, and that's where this data is stored. This is a bit different from, say, how BLOCKHASH works today, where rather than reading the execution state, there's this history buffer that just happens to also be there. And the idea is that, in an ideal world, the state transition function for Ethereum would be a pure function of the state and not rely on this other little history thing. And this has implications for stateless clients and things like that. So I think, well, at least the people I've engaged with on this so far generally like the stateful precompile direction. Marius has an implementation in geth, I think, for both things, but definitely the stateful precompile. And yeah, I think that change has been merged already, so I think that's not too controversial. The other big thing, and probably the last question on this, was just discussing how we actually want to key these roots. So assume we have the stateful precompile; now we basically need some input data to figure out: okay, at this slot, or maybe this timestamp, what was the actual beacon root? Those are probably the two big contenders: either using the EL timestamp, which is sort of a proxy for slots, or introducing some way to map into the CL slots while we're writing this thing into the EL state. The catch there is that we don't want to violate the barriers of abstraction between the EL and CL, so it's a little tricky. Yeah, I guess I'll just pause there. Does anyone have any questions so far? Okay. The one thing from here, then, is maybe if I can just ask Marius directly, because he had some feedback via the prototype. Is Marius on the call?

**MariusVanDerWijden**: Yes, I'm here. So the prototype is very much a prototype, and it has both the stateful precompile and the opcode implemented. In the beginning it was specified as an opcode. But basically the idea is that this data needs to be in the state, because otherwise we would have a separate storage segment that nodes would need to maintain, which would complicate the state transition function. Basically, the state transition function would then be the state, the transactions, the header chain for the last 128 headers for the BLOCKHASH opcode, plus this additional storage segment that keeps the beacon roots. And so what we decided on, well, what Alex proposed, was to move this into the state.
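
To make the mechanism being described concrete, here is a minimal sketch, not from the call and not an agreed design, of what "write the parent beacon root into the execution state and serve it from a stateful precompile, keyed by timestamp" could look like. The system address, the buffer length, and the `state.get_storage`/`set_storage` interface are all placeholders for illustration.

```python
# Illustrative sketch of a timestamp-keyed beacon-root store in execution
# state. HISTORY_BUFFER_LENGTH and BEACON_ROOTS_ADDRESS are placeholders,
# not agreed parameters; the state interface is hypothetical.
HISTORY_BUFFER_LENGTH = 8192
BEACON_ROOTS_ADDRESS = 0x0B

def _word(value: int) -> bytes:
    return value.to_bytes(32, "big")

def store_parent_beacon_root(state, timestamp: int, root: bytes) -> None:
    # Called once per block: record (timestamp -> parent beacon root).
    idx = timestamp % HISTORY_BUFFER_LENGTH
    state.set_storage(BEACON_ROOTS_ADDRESS, idx, _word(timestamp))
    state.set_storage(BEACON_ROOTS_ADDRESS, idx + HISTORY_BUFFER_LENGTH, root)

def get_beacon_root(state, timestamp: int) -> bytes:
    # What the stateful precompile would do with a caller-supplied timestamp.
    idx = timestamp % HISTORY_BUFFER_LENGTH
    if state.get_storage(BEACON_ROOTS_ADDRESS, idx) != _word(timestamp):
        # Skipped slot or timestamp outside the buffer: nothing recorded,
        # which is exactly the caller-side ambiguity Stokes raises below.
        return b"\x00" * 32
    return state.get_storage(BEACON_ROOTS_ADDRESS, idx + HISTORY_BUFFER_LENGTH)
```
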
**MariusVanDerWijden**: It's kind of weird to put something from outside into the state, but I think it's the best we can get. It needs some getting used to, but in the end I think it's kind of fine. So why we decided against, not the prototype, but the opcode, is that this opcode would read storage slots from a specific address. But that would mean we have this address that has some storage slots yet is not really a precompile, and so it would introduce a new paradigm somehow. And so I think the best way to do it is to just add a precompile that returns this data. And now the only open question for me is: how are we going to key this data?

**Stokes**: Basically, yeah. And so in the prototype, I think you just wrote, like, the timestamp from the header, which at least to me is half of the problem. The problem with this is that if there are skipped slots, it's very unclear to the caller how to find the actual next root.

**MariusVanDerWijden**: Why?

**Stokes**: Because there would be gaps, right? So basically you're saying, you know, at timestamp T0 I write the root. Let's say there are, like, four missed slots, and now there's some timestamp, you know, much greater than just 12 seconds later. And then, if I want to find the block root for the next thing, the way that it's written, I would read back basically zero as the root, and then I would not really know what to do. I would need to jump forward some amount that I just don't know, and then have to search through the contract, basically.

**MariusVanDerWijden**: Yeah. So I'm not really sure how this would be used by contracts. So that, I mean, yeah, that's a good question. So, like, one example would be: I have a slot.

**Stokes**: And, you know, I guess where I'm coming from is, I don't know if it's missed or not. I just have some slot, because, you know, I know there are 64 bits and I can just pick one out of that type space. So I have some slot, and then I want to know what the root is. That would be the API I would like. The thing that you're kind of suggesting is more like I, as the caller, also need to know ahead of time that there was already a block there, which might be an okay relaxation.

**MariusVanDerWijden**: Oh, we could even turn it around and say that it's keyed by beacon root and the value is the timestamp. So the caller could say: I have this beacon root, what is the timestamp for it? If it's only about needing to know that a specific beacon root was there at some point.

**Stokes**: Right. So yeah, let's zoom out a bit, just to respect everyone's time here. Has anyone else looked at this, and/or does anyone feel strongly about including it? Okay.

**MariusVanDerWijden**: Oh, I don't feel strongly about including it, by the way. I think it's nice and it should be included, but I'm not sure if we should include it in Cancun.

**Stokes**: Okay, does anyone else have any input? Because I think we can resolve this key question one way or another, and then from there it's just a question of: do we want to also include it in Cancun?
It would help a lot of different applications that people want to build. For example, in the chat George has called out all sorts of things around accessing beacon state from the execution layer. And the other thing is this does kind of tee up other things in future forks, like validator exits triggered from the execution layer. So there are ways in which this tightens up the staking model that are really valuable, and this change kind of lays the groundwork for that. So in that sense, I think the sooner the better, just because it unblocks other stuff we want to do.

**Tim Beiko**: And I guess, yeah, maybe if no one has strong opinions on this: do client teams have strong opinions on whether they want to include anything else at all, or not, for now? And are there things other than 4788 that they want to include?

**Andrew Ashikhmin**: We would like to include the 5920 PAY opcode. It's a simple usability improvement.

**Tim Beiko**: Anyone else have thoughts, proposals?

**Stokes:** I mean, it might be helpful just to hear, on top of what we currently have, which is 4844, TSTORE, and the SELFDESTRUCT one: do we feel like there's room for another EIP at all? Could we discuss MCOPY and PAY?

**Tim Beiko**: And okay, 2537 as well, which is the BLS precompile one. Okay, so I guess we have four minutes. Yeah, maybe, if anyone has thoughts, we can do MCOPY and PAY, and then, Alex, if you want to do the calls, of course. But yeah, with MCOPY, do you have an update on it, or do you just want to get a feeling from people?

**Charles C**: There's no update on it. I brought it up a couple of calls ago, and I think you said you wanted to give everybody time to review it, and I think people have had time to review and think about it. So I guess we should discuss if there are any reservations. I think Marius said, you know, maybe along with transient storage it starts to get too complicated. My personal feeling is, I don't know about that, because they're affecting completely different, separate regions of the EVM. But if anybody else has similar or other concerns, I think we should discuss those.

**Tim Beiko**: Danno?

**Danno Ferrin**: So MCOPY is relatively easy. It's kind of like the return data copies; there's a lot of well-worn testing path on that. So I want to hear Marius's opinion. But for PAY, I think we need to discuss it in the context of the new CALL2 series, and as to why PAY is needed: if it's just to make things cheaper, I think CALL2 will handle it. But I don't think we have time on this call to discuss the pros and cons of PAY versus CALL2.

**Charles C**: PAY is not just for gas. There are a number of high-profile reentrancy attacks involving sending ether that would be easily prevented if people had a way to transfer ether without transferring execution context.

**MariusVanDerWijden**: So I think the PAY opcode is kind of good; it's something that we should do, but we really need to look at the implications of it. Basically, it enables a new way of one contract touching another one, and this is usually where most of the bugs are. I personally don't think we have enough time in Cancun for testing this, with all the implications it has on the other things.
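
Since PAY comes up both above and in William's point below, here is a rough sketch, not the EIP's reference implementation, of the behaviour EIP-5920 proposes: transfer ether without handing execution control to the recipient. The `frame`/`state` interface is hypothetical, and gas accounting and failure handling are simplified.

```python
# Illustrative only: PAY pops a recipient and an amount and moves the balance
# without creating a call frame, so no recipient bytecode runs.
def op_pay(frame, state) -> None:
    to = frame.stack.pop() & ((1 << 160) - 1)    # recipient address
    value = frame.stack.pop()                    # amount in wei
    sender = frame.current_address
    if state.get_balance(sender) < value:
        raise Exception("insufficient balance")  # simplified failure handling
    state.subtract_balance(sender, value)
    state.add_balance(to, value)
    # Unlike CALL with value, the recipient's code never executes, which is
    # the reentrancy point Charles raises above; the testing-surface concern
    # Marius raises is about every new way value can move between accounts.
```
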
**Tim Beiko**: William?

**William Morriss**: Regarding the PAY opcode: we already know that contracts can receive ether in other contexts, such as from mining new blocks and from SELFDESTRUCT. So the PAY opcode shouldn't introduce any new security considerations, because anything that it allows is already allowed.

**MariusVanDerWijden**: It's not about security considerations on the smart contract side. It's more about how it is implemented: basically, we need to test this opcode with every combination of everything, ever. That kind of makes it very complex.

**Tim Beiko**: Okay. And we're already at time. Alex, do you want to give a quick overview of your EIPs, just so people have the context?

**Alex Beregszaszi**: Hmm, yeah. So it is called revamped call instructions, and it was pushed to the EIP repo only today; the number is #7629. But the work on this actually started in January, when the requirement was brought up for EOF to also try to eliminate gas observability, and we designed the replacement call instructions at the time with that in mind. But then, maybe three EOF breakout calls ago, we realized that these instructions are actually not dependent on EOF and they could just be introduced in the current EVM. And there are two benefits to doing so. One: if you read the specification, it actually simplifies a number of rules regarding gas, and there are a number of cases where it would actually be much cheaper and better to use this new kind of call instruction instead of the current ones, so existing legacy contracts that choose to use them would benefit. And the second reason it would be beneficial is: if this were introduced in a different hard fork, then the EOF changes would be much smaller, because the only change there would be to reject the current call instructions, and these new proposed call instructions would already be there. And then, what these instructions actually do: basically, there is gas observability, which is one big thing we wanted to get rid of, so here we remove the gas limit input and just rely on the 63/64 rule. We also changed the way the stipend works; it is much more simplified. There is no output buffer address, because RETURNDATACOPY and RETURNDATASIZE can be used instead. There have been a number of discussions with Solidity about how it actually uses the call instructions, and some discussions with Vyper. And then the last change is the return value: it actually returns more. It returns success, revert, and failure. Back when the revert feature was introduced, there was a plan to add that status to the calls, but it couldn't be done at the time in the legacy call instructions, because contracts were depending on the behavior and introducing it would have been a breaking change. I guess I don't really have time to go through everything, but there's one more comment I wanted to make. There's this version of the EIP, which is in draft mode, but as we went through it we realized that there would be another option as well, which would mean that these call instructions would check whether the target is an actual contract. So it would do an EXTCODESIZE check, and the call would fail if there is not a contract on the other side.
Doing so would simplify the rules even further, because these call instructions would only be usable to interact with contracts. What this would mean is that a separate transfer or PAY instruction would be needed in order to transfer to EOAs, or to do non-executing transfers to code accounts. Yeah, I think that's it, in short. And, you know, ideally something like this would be included in Cancun, and then EOF would be much more simplified for a fork afterwards.

**Tim Beiko**: Thank you. Okay, we're already right past time. I don't know if people have comments on this specifically. If not, I guess what I'd suggest is: it seems like we're probably not in a spot to make the final decisions about the smaller EIPs today, and we're not even in a spot where 6780 and 4844 are fully implemented in clients. So I would suggest, I don't know if we want to CFI them, but it seems like MCOPY, PAY, these call opcodes, as well as the existing precompile one and the 4788 EIP, are sort of the ones we're considering. So should we move those three other ones, so 5920, 5656, and then the call one, to CFI'd, and sort of restrict the discussion to those? And yeah, we can see in the next couple of weeks how things progress. Anyone object to that?

**Stokes**: That sounds fine. I don't think we should add more, and if anything we should probably bias towards just freezing the current set. But right, yeah.

**Tim Beiko**: So that's what I'm saying: we freeze the current set with those things, and everything else is de facto sort of excluded. Obviously, if there's some last-minute issue we can always change things. But that gives us, like, three EIPs currently in the fork, and we'd be up to, I believe, five CFI'd ones, because we have three now, adding three and removing one.

**Stokes**: Right, and 4844 is a big one, just so we're all aware.

**Tim Beiko**: Yes. Okay, we will let them know. Sweet, let's wrap up here. We're already over time. Appreciate everyone sticking around, and talk to you all on the next one of these.

**Pooja**: Thank you.

**Guillaume:** Thanks, Tim. Bye.

**Péter Szilágyi (karalabe):** Thank you.

**Ahmad Bitar:** Bye.

# Attendees

* Guillaume
* Tim Beiko
* Péter Szilágyi (karalabe)
* Stokes
* Alex Beregszaszi
* MariusVanDerWijden
* Ben Adams
* Danno Ferrin
* Andrew Ashikhmin
* Lightclient
* Barnabas Busa
* Ben Edgington
* Gajinder
* Alexey (@flcl42)
* Ahmad Bitar
* Neville (Dedaub)

From 26671b2db215664b7304e905ef9a17c80385cb9e Mon Sep 17 00:00:00 2001
From: Darkfire_rain <67558925+darkfire-rain@users.noreply.github.com>
Date: Mon, 19 Jun 2023 18:17:44 -0400
Subject: [PATCH 4/4] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 79688ee9..b35ee516 100644
--- a/README.md
+++ b/README.md
@@ -42,6 +42,8 @@ The meetings are independent of any organization.
However, Danny Ryan & Tim Beik | № | Date | Agenda | Notes | Recording | | --- | ------------------------------------ | --------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------- | +| 162 | May 25, 2023, 14:00-15:30 UTC | [agenda](https://github.com/ethereum/pm/issues/781) | [notes](AllCoreDevs-EL-Meetings/Meeting%20162.md) \| [Twitter](https://twitter.com/TimBeiko/status/1651612895892094977) |[Video](https://youtu.be/ajLQVC3E_mk) +| 161 | April 13, 2023, 14:00-15:30 UTC | [agenda](https://github.com/ethereum/pm/issues/759) | [notes](AllCoreDevs-EL-Meetings/Meeting%20161.md) \| [Twitter](https://twitter.com/TimBeiko/status/1651612895892094977) |[Video](https://youtu.be/ajLQVC3E_mk) | 160 | April 13, 2023, 14:00-15:30 UTC | [agenda](https://github.com/ethereum/pm/issues/759) | [notes](AllCoreDevs-EL-Meetings/Meeting%20160.md) \| [Twitter](https://twitter.com/TimBeiko/status/1651612895892094977) |[Video](https://youtu.be/ajLQVC3E_mk) | 159 | April 13, 2023, 14:00-15:30 UTC | [agenda](https://github.com/ethereum/pm/issues/754) | [notes](AllCoreDevs-EL-Meetings/Meeting%20159.md) \| [Twitter]() |[Video](https://www.youtube.com/watch?v=u8Nm8AGyCQM) | 158 | Mar 30, 2023, 14:00-15:30 UTC | [agenda](https://github.com/ethereum/pm/issues/744) | [notes](AllCoreDevs-EL-Meetings/Meeting%20158.md) \| [Twitter](https://twitter.com/christine_dkim/status/1641883892939476995) |[Video](https://youtu.be/RQ2WtyevRXE)