This repository has been archived by the owner on Jan 13, 2025. It is now read-only.

Add getRecentPrioritizationFees RPC endpoint (backport #27278) #29562

Merged
merged 2 commits into solana-labs:v1.14 on Jan 7, 2023

Conversation

CriesofCarrots (Contributor)

#27278 for v1.14

CriesofCarrots and others added 2 commits January 6, 2023 19:27
* Plumb priority_fee_cache into rpc

* Add PrioritizationFeeCache api

* Add getRecentPrioritizationFees rpc endpoint

* Use MAX_TX_ACCOUNT_LOCKS to limit input keys

* Remove unused cache apis

* Map fee data by slot, and make rpc account inputs optional

* Add priority_fee_cache to rpc test framework, and add test

* Add endpoint to jsonrpc docs

* Update docs/src/developing/clients/jsonrpc-api.md

* Update docs/src/developing/clients/jsonrpc-api.md
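
The commits above plumb the PrioritizationFeeCache into RPC and expose it as getRecentPrioritizationFees. For orientation, here is a minimal client-side sketch of querying the new endpoint. It is not part of this PR: it assumes a solana-client release that exposes RpcClient::get_recent_prioritization_fees and a response type with slot and prioritization_fee fields, and the cluster URL and account key are placeholders.

// Hedged sketch (not part of this PR): query the new endpoint from Rust.
use solana_client::rpc_client::RpcClient;
use solana_sdk::pubkey::Pubkey;
use std::str::FromStr;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RpcClient::new("https://api.mainnet-beta.solana.com".to_string());
    // Optional account keys narrow the query to transactions that write-locked
    // these accounts; the key below is just a placeholder (the SPL Token program).
    let account = Pubkey::from_str("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA")?;
    let fees = client.get_recent_prioritization_fees(&[account])?;
    for fee in fees {
        // One entry per recent slot, in micro-lamports per compute unit.
        println!("slot {}: {}", fee.slot, fee.prioritization_fee);
    }
    Ok(())
}

Per the commit list, the account inputs are optional (an empty slice asks for the per-slot minimum across all transactions) and at most MAX_TX_ACCOUNT_LOCKS keys are accepted.
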
@CriesofCarrots requested a review from @mvines on January 7, 2023, 05:09
@mvines (Contributor) left a comment:

🎉

@CriesofCarrots (Contributor Author):

Made it through all CI jobs first try 🕺

@CriesofCarrots merged commit fa7055e into solana-labs:v1.14 on Jan 7, 2023
Comment on lines +625 to +628
assert_eq!(
    hashmap_of(vec![(1, 1)]),
    prioritization_fee_cache.get_prioritization_fees(&[write_account_c])
);

@GCrispino commented:

Sorry to make this comment out of the blue, but I'm trying to understand the implementation of the getRecentPrioritizationFees JSON-RPC method (I also posted a question on Solana Stack Exchange about it).

The actual question is: at this point, why should the cache contain data for write_account_c, when only a transaction involving write_account_a and write_account_b has been submitted? Shouldn't it return an empty result?

@CriesofCarrots (Contributor Author) replied:

> at this point why should the cache contain data for write_account_c

The cache doesn't in fact contain any explicit data for that account. If a key isn't found in the cache, get_prioritization_fees returns the minimum fee for any transaction in that slot:

// Start from the slot-wide minimum prioritization fee...
let mut fee = prioritization_fee_read
    .get_min_transaction_fee()
    .unwrap_or_default();
// ...then take the max over any fees recorded for the requested writable accounts.
for account_key in account_keys {
    if let Some(account_fee) =
        prioritization_fee_read.get_writable_account_fee(account_key)
    {
        fee = std::cmp::max(fee, account_fee);
    }
}
Some((*slot, fee))
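
To make the fallback concrete, here is a self-contained toy version of the loop above; the helper name, string keys, and fee values are hypothetical stand-ins for the cache's real types. Any account with no recorded writable-account fee simply inherits the slot-wide minimum, which is why the test above reports (1, 1) for write_account_c.

// Toy, self-contained version of the fallback (hypothetical helper and data).
use std::collections::HashMap;

fn fee_for_accounts(
    min_transaction_fee: u64,
    writable_account_fees: &HashMap<&str, u64>,
    account_keys: &[&str],
) -> u64 {
    // Start from the slot-wide minimum, then take the max of any per-account fees.
    let mut fee = min_transaction_fee;
    for key in account_keys {
        if let Some(&account_fee) = writable_account_fees.get(key) {
            fee = fee.max(account_fee);
        }
    }
    fee
}

fn main() {
    // Slot minimum is 1; only write_account_a and write_account_b have recorded fees.
    let fees = HashMap::from([("write_account_a", 5), ("write_account_b", 9)]);
    assert_eq!(fee_for_accounts(1, &fees, &["write_account_c"]), 1); // falls back to the minimum
    assert_eq!(fee_for_accounts(1, &fees, &["write_account_b"]), 9); // recorded fee wins
}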

@GCrispino commented on Mar 9, 2023:


@CriesofCarrots thanks for answering!

I guess this also explains why I basically always get 0 values when not passing accounts as a parameter to this method: there is always a transaction in the slot with a priority fee of 0, and since the minimum value is fetched from the cache, 0 is returned. Is this explanation correct?

@CriesofCarrots (Contributor Author) replied:

Yes, correct.

@GCrispino replied:

That makes more sense now :). Don't you think the docs for this method should specify that the values returned are minimum values? They currently say the returned value for each slot is the "per-compute-unit fee paid by at least one successfully landed transaction, specified in increments of 0.000001 lamports".

Although this isn't wrong, "minimum per-compute-unit fee paid by a landed transaction" would be more representative of what this method actually does, in my opinion.
