Rework sequencer storage & add handle update #711
Conversation
if (tx.writes.length > 200) {
  throw new InvalidRequestError('Too many writes. Max: 200')
}
I assume this check lives here because you don't want it to be a hard limit in the lexicon (i.e. for all PDS implementations), is that right?
Yeah, exactly. There are cases where you may want to do more than 200 writes, and I feel we shouldn't put this at the lexicon level. There's a good chance we'll remove this check at some point, once we have a better handle on the repercussions of large mutations.
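For illustration, a minimal sketch of this kind of implementation-level guard (the constant and helper names are hypothetical; `InvalidRequestError` is the real export from `@atproto/xrpc-server`):

```ts
import { InvalidRequestError } from '@atproto/xrpc-server'

// Hypothetical helper: the ceiling is a PDS implementation detail,
// not a lexicon-level constraint, so other PDSes may pick their own.
const MAX_WRITES = 200

function assertWriteLimit(writes: unknown[]): void {
  if (writes.length > MAX_WRITES) {
    throw new InvalidRequestError(`Too many writes. Max: ${MAX_WRITES}`)
  }
}
```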
.addColumn('sequencedAt', 'varchar', (col) => col.notNull())
.execute()

await db.schema
Having trouble recalling which types of queries these indices support. I kinda like the comments we've added in some of the other migrations documenting this; I've found them useful!
yup i'll add some comments 👍
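As a sketch of what those comments might look like in a Kysely migration (the index name, table, and columns here are illustrative, not taken from this PR):

```ts
import { Kysely } from 'kysely'

export async function up(db: Kysely<unknown>): Promise<void> {
  // Supports the subscription backfill query, which scans a repo's
  // events in sequence order: WHERE did = ? ORDER BY seq ASC
  await db.schema
    .createIndex('repo_seq_did_idx')
    .on('repo_seq')
    .columns(['did', 'seq'])
    .execute()
}
```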
if (dialect === 'pg') {
  builder = builder
    .addColumn('seq', 'bigserial', (col) => col.primaryKey())
    .addColumn('invalidatedBy', 'bigint')
I dig the `invalidatedBy` idea! I wonder if we should also make it a foreign key.
ah yeah I'd be game 👍
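A rough sketch of that suggestion using Kysely's column builder, assuming the pg dialect (other `repo_seq` columns omitted for brevity):

```ts
import { Kysely } from 'kysely'

export async function up(db: Kysely<unknown>): Promise<void> {
  await db.schema
    .createTable('repo_seq')
    .addColumn('seq', 'bigserial', (col) => col.primaryKey())
    // Self-referencing foreign key: an invalidating event must itself
    // exist as a row in repo_seq.
    .addColumn('invalidatedBy', 'bigint', (col) =>
      col.references('repo_seq.seq'),
    )
    .execute()
}
```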
const justRoot = new BlockMap()
justRoot.add(commitData.blocks.get(commitData.commit))
carSlice = await blocksToCar(commitData.commit, justRoot)
💎
.innerJoin('repo_commit_history', (join) =>
  join
    .onRef('repo_commit_history.creator', '=', 'repo_seq.did')
    .onRef('repo_commit_history.commit', '=', 'repo_seq.commit'),
)
.select([
  'repo_seq.seq as seq',
  'repo_seq.did as did',
  'repo_seq.commit as commit',
  'repo_seq.sequencedAt as sequencedAt',
  'repo_commit_history.prev as prev',
])
.orderBy('seq', 'asc')
.where('invalidatedBy', 'is', null)
🤤 so nice, this thing is gonna whiz.
👏 so nice
Co-authored-by: devin ivy <devinivy@gmail.com>
This reworks our sequencer storage so that we do the work upfront instead of collating it when we serve. This does cause some duplicated data; however, I do not think it's a concern, for several reasons: it allows our `repo_seq` table to hold heterogeneous event types, and it will also speed up the subscription route.

This PR also adds in support for handle update events.

- Removed the `repo_op` table, as we weren't using it anymore.
- For `tooBig` events I chose a limit of 200 ops or 1MB of block data, matching `batchWrite`.
- Added an `invalidatedBy` column to seq, which tracks whether an event is invalidated by a later event, namely:
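A hedged sketch of how such invalidation might be applied when a new handle-update event supersedes earlier ones for the same DID; the `eventType` column name, table shape, and helper are assumptions for illustration, not the PR's actual code:

```ts
import { Kysely } from 'kysely'

// Assumed minimal shape of the repo_seq table for this sketch.
interface RepoSeqTable {
  seq: number
  did: string
  eventType: string // assumed column name
  invalidatedBy: number | null
}
interface Database {
  repo_seq: RepoSeqTable
}

// Hypothetical: when a new handle-update event is sequenced, mark earlier
// handle-update rows for the same DID as invalidated by the new row's seq.
async function invalidatePriorHandleUpdates(
  db: Kysely<Database>,
  did: string,
  newSeq: number,
): Promise<void> {
  await db
    .updateTable('repo_seq')
    .set({ invalidatedBy: newSeq })
    .where('did', '=', did)
    .where('eventType', '=', 'handle_update')
    .where('seq', '<', newSeq)
    .execute()
}
```

The subscription query quoted earlier then skips superseded rows via `.where('invalidatedBy', 'is', null)`.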