@pyropy pyropy commented Sep 29, 2025

Overview

Introduces the FilBeam (operator) contract, used for aggregating data set egress usage and settling CDN-related payment rails inside the FWSS contract.

Core Functions

Usage Reporting (Controller Only)

recordUsageRollups(dataSetIds[], epochs[], cdnBytes[], cacheMissBytes[])
terminateCDNPaymentRails(dataSetId)

Settlement (Public)

settleCDNPaymentRails(dataSetIds[])
settleCacheMissPaymentRails(dataSetIds[])

Management (Owner Only)

setFilBeamOperatorController(address)
setCDNRatePerByte(uint256)
setCacheMissRatePerByte(uint256)

Constructor Parameters

  • fwssAddress - FWSS contract address
  • cdnRatePerByte - Rate for CDN usage
  • cacheMissRatePerByte - Rate for cache-miss usage
  • filBeamOperatorController - Authorized reporter address
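
Because the contract follows the upgradeable pattern (the commit notes below mention replacing the constructor with an initialize() function), these parameters are presumably wired up roughly like this. A minimal sketch only: the state variable names and OpenZeppelin initializer calls are assumptions, not the actual implementation.

```solidity
// Sketch: upgradeable contracts take their parameters via initialize(),
// guarded by OpenZeppelin's `initializer` modifier, instead of a constructor.
function initialize(
    address _fwssAddress,
    uint256 _cdnRatePerByte,
    uint256 _cacheMissRatePerByte,
    address _filBeamOperatorController
) external initializer {
    __Ownable_init(msg.sender);   // OZ v5 signature; the v4 variant takes no argument
    __UUPSUpgradeable_init();
    fwss = IFWSS(_fwssAddress);                            // hypothetical state variable
    cdnRatePerByte = _cdnRatePerByte;
    cacheMissRatePerByte = _cacheMissRatePerByte;
    filBeamOperatorController = _filBeamOperatorController;
}
```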

Data Tracking

Each dataset maintains:

  • Accumulated CDN and cache-miss egress usage
  • Latest reported epoch
  • Last settlement epochs for both payment rails
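
A plausible layout for this per-dataset state (dataSetUsage, cdnAmount, cacheMissAmount, and maxReportedEpoch appear elsewhere in this PR; the two settlement-epoch field names are assumptions):

```solidity
// Sketch only: per-dataset accounting state, keyed by dataSetId.
struct DataSetUsage {
    uint256 cdnAmount;                    // accumulated CDN amount owed, priced at report time
    uint256 cacheMissAmount;              // accumulated cache-miss amount owed
    uint256 maxReportedEpoch;             // latest epoch covered by a usage rollup
    uint256 lastCDNSettlementEpoch;       // hypothetical: last CDN-rail settlement
    uint256 lastCacheMissSettlementEpoch; // hypothetical: last cache-miss-rail settlement
}

mapping(uint256 => DataSetUsage) public dataSetUsage;
```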

Dependencies

Closes filbeam/roadmap#51

pyropy and others added 7 commits September 29, 2025 11:51
- Install @openzeppelin/contracts-upgradeable for upgradeable patterns
- Install @openzeppelin/contracts for proxy contracts
- Update .gitmodules and foundry.lock

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Add custom error definitions in Errors.sol
- Add IFWSS interface for payment rail integration
- Add MockFWSS contract for testing payment settlements

- Add FilBeam contract with OpenZeppelin upgradeable patterns
- Implement Ownable, Initializable, and UUPSUpgradeable
- Support usage-based payments for CDN and cache miss scenarios
- Add proxy pattern for contract upgrades
- Replace constructor with initialize() function for upgradeable pattern
- Add _authorizeUpgrade() for UUPS upgrade authorization

- Add 29 test cases covering all contract functionality
- Test upgradeable contract patterns and proxy deployment
- Test ownership, initialization, and upgrade mechanisms
- Test usage reporting and payment settlement flows
- Include fuzz testing for usage reporting
- Test error conditions and access controls

- Create deployment script with dynamic USDFC decimals detection
- Add USD per TiB to USDFC per byte conversion logic
- Support environment-based configuration
- Add comprehensive deployment logging
- Use ERC1967Proxy for upgradeable proxy pattern
- Include detailed documentation and usage examples

- Remove Counter.sol, Counter.t.sol, Counter.s.sol
- Clean up project to focus on FilBeam functionality
- Simplify codebase by removing template files

- Add SPEC.md with project requirements and design
- Update AGENTS.md with project context and instructions
- Document FilBeam usage-based payments functionality

@pyropy pyropy marked this pull request as draft September 29, 2025 09:57
@pyropy pyropy changed the title feat: implement FilBeam usage-based payments contract WIP: feat: implement FilBeam usage-based payments contract Sep 29, 2025
Add efficient batch processing for usage rollup reports with gas optimization
and atomic transaction guarantees.

- Add reportUsageRollupBatch method accepting arrays of dataset IDs, epochs, and usage data
- Refactor reportUsageRollup to use internal _reportUsageRollup for code deduplication
- Implement atomic batch processing (all succeed or all fail)
- Add comprehensive test suite with 10 new test cases covering:
  * Successful batch reporting across multiple datasets
  * Array length validation and error handling
  * Access control and input validation
  * Atomicity verification with partial failure scenarios
  * Integration with existing settlement functionality
- Update SPEC.md with detailed batch method documentation
- Maintain backward compatibility with existing single reporting method

Gas efficiency: the batch method reduces transaction costs for bulk operations, making it ideal for rollup workers reporting multiple usage periods.
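
The atomic batch pattern described above can be sketched as follows. Illustrative only: the internal helper and the error name are assumptions, though recordUsageRollups and the onlyFilBeamController modifier both appear elsewhere in this PR.

```solidity
// Sketch of atomic batch processing: any revert inside the loop
// rolls back the entire transaction (EVM atomicity), so either all
// rollups are recorded or none are.
function recordUsageRollups(
    uint256[] calldata dataSetIds,
    uint256[] calldata epochs,
    uint256[] calldata cdnBytes,
    uint256[] calldata cacheMissBytes
) external onlyFilBeamController {
    if (
        dataSetIds.length != epochs.length
            || dataSetIds.length != cdnBytes.length
            || dataSetIds.length != cacheMissBytes.length
    ) revert ArrayLengthMismatch(); // hypothetical error name

    for (uint256 i = 0; i < dataSetIds.length; i++) {
        _recordUsageRollup(dataSetIds[i], epochs[i], cdnBytes[i], cacheMissBytes[i]);
    }
}
```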
Enable flexible pricing configuration with decimal precision support
for CDN and cache-miss rates.

- Add PRICE_DECIMALS environment variable to specify decimal places
- Update calculateUsdfcPerByte to handle scaled decimal inputs
- Support pricing like $12.50/TiB (1250 with 2 decimals)
- Maintain backward compatibility with whole number pricing
- Add comprehensive test suite with 9 test cases covering:
  * Whole number pricing (backward compatibility)
  * 2-3 decimal place scenarios
  * High precision pricing
  * Different token decimal configurations
  * Edge cases and fuzz testing
- Enhanced deployment logging with actual USD price display
- Update documentation with new parameter usage examples

Examples:
- $12.50/TiB: CDN_PRICE_USD_PER_TIB=1250 PRICE_DECIMALS=2
- $9.99/TiB: CDN_PRICE_USD_PER_TIB=999 PRICE_DECIMALS=2
- $10.00/TiB: CDN_PRICE_USD_PER_TIB=10 PRICE_DECIMALS=0
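
The conversion behind these examples likely reduces to a single scaling expression. A hedged sketch (the real calculateUsdfcPerByte in the deploy script may differ in argument names and rounding order):

```solidity
uint256 constant BYTES_PER_TIB = 1 << 40; // 1 TiB = 2^40 bytes

// E.g. priceUsdPerTib = 1250 with priceDecimals = 2 encodes $12.50/TiB.
// tokenDecimals is read from the USDFC token at deployment time.
function calculateUsdfcPerByte(uint256 priceUsdPerTib, uint256 priceDecimals, uint8 tokenDecimals)
    pure
    returns (uint256)
{
    // Multiply into token base units before dividing, to limit precision loss.
    return (priceUsdPerTib * 10 ** tokenDecimals) / (10 ** priceDecimals * BYTES_PER_TIB);
}
```

With an 18-decimal token, $12.50/TiB works out to roughly 1.1e7 base units per byte.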
Add gas-efficient batch processing for CDN and cache-miss payment
rail settlements with atomic operation guarantees.

- Add settleCDNPaymentRailBatch method for bulk CDN settlements
- Add settleCacheMissPaymentRailBatch method for bulk cache-miss settlements
- Refactor single settlement methods to use internal functions
- Implement atomic batch processing (all succeed or all fail)
- Add comprehensive test suite with 9 new test cases covering:
  * Successful batch settlement across multiple datasets
  * Empty array handling
  * Error propagation and validation
  * Atomicity verification with partial failure scenarios
  * Integration with existing usage reporting
- Update SPEC.md with detailed batch settlement documentation
- Maintain backward compatibility with existing settlement methods

Gas efficiency: Batch methods significantly reduce transaction costs
for bulk settlement operations across multiple datasets.
Transform README from basic Foundry template to complete FilBeam
project documentation with deployment examples and API reference.

- Add project overview highlighting key features
- Add detailed deployment guide with environment variable configuration
- Add 3 deployment examples for different pricing scenarios:
  * Decimal pricing ($12.50 CDN, $15.75 cache miss)
  * Whole dollar pricing ($10 CDN, $15 cache miss)
  * High precision pricing ($9.995 CDN, $12.750 cache miss)
- Add pricing configuration reference table
- Add complete contract API documentation covering:
  * Usage reporting (single and batch methods)
  * Settlement operations (CDN and cache-miss rails)
  * Contract management (ownership and upgrades)
  * View functions for dataset information
- Add key concepts section explaining:
  * Batch operations and atomicity guarantees
  * Decimal pricing model and token compatibility
  * Independent settlement rail tracking

Provides developers with complete guidance for deploying and
interacting with FilBeam contracts.
Remove epochReported mapping and EpochAlreadyReported error. Epoch validation now uses only maxReportedEpoch comparison, reducing gas costs and contract complexity while maintaining duplicate epoch protection.
@pyropy pyropy requested a review from Copilot September 29, 2025 11:03
@Copilot Copilot AI left a comment

Pull Request Overview

Implements a comprehensive FilBeam smart contract system for usage-based payments in the Filecoin CDN ecosystem, featuring batch processing capabilities and flexible decimal pricing support.

  • Upgradeable UUPS proxy contract with independent CDN and cache-miss settlement rails
  • Batch operations for gas-efficient usage reporting and settlement processing
  • Decimal pricing system supporting fractional USD rates per TiB (e.g., $12.50, $9.99)

Reviewed Changes

Copilot reviewed 16 out of 17 changed files in this pull request and generated 2 comments.

Summary per file:

  • src/FilBeam.sol: Main contract implementation with batch operations and settlement logic
  • src/interfaces/IFWSS.sol: Interface definition for FWSS contract integration
  • src/mocks/MockFWSS.sol: Mock contract for testing FWSS functionality
  • src/Errors.sol: Custom error definitions for contract validation
  • script/DeployFilBeam.s.sol: Deployment script with decimal pricing calculations
  • test/FilBeam.t.sol: Comprehensive test suite covering all contract functionality
  • test/DeployFilBeamDecimalPricing.t.sol: Focused tests for decimal pricing calculations
  • README.md: Updated documentation with API reference and deployment examples
  • SPEC.md: Technical specification document


Replace onlyOwner modifier with onlyFilBeamController for usage reporting and payment rail termination functions. Contract initialization now requires filBeamController address parameter. Owner role remains for contract upgrades only.
…d functionality

- Add zero-amount check in settlement functions to skip external calls when amount is 0
- Change function parameters from int256 to uint256 for usage reporting
- Update UsageReported event to use uint256 instead of int256
- Remove redundant isInitialized flag, use maxReportedEpoch == 0 for initialization check
- Add rate update mechanism with setCDNRatePerByte and setCacheMissRatePerByte functions
- Add CDNRateUpdated and CacheMissRateUpdated events
- Update getDataSetUsage to return 5 values instead of 6
- Update README.md and SPEC.md with API changes

Breaking changes:
- reportUsageRollup and reportUsageRollupBatch now accept uint256 instead of int256
- getDataSetUsage returns 5 values (removed isInitialized)
- UsageReported event emits uint256 values

All tests pass (64/64)
@pyropy pyropy requested a review from Copilot September 30, 2025 14:38
@Copilot Copilot AI left a comment

Pull Request Overview

Copilot reviewed 16 out of 17 changed files in this pull request and generated 2 comments.



@pyropy pyropy requested a review from Copilot September 30, 2025 15:28
@Copilot Copilot AI left a comment

Pull Request Overview

Copilot reviewed 16 out of 17 changed files in this pull request and generated 3 comments.



@pyropy pyropy changed the title WIP: feat: implement FilBeam usage-based payments contract WIP: feat: implement FilBeam operator contract Oct 1, 2025
@pyropy pyropy changed the title WIP: feat: implement FilBeam operator contract WIP: feat: implement FilBeam (operator) contract Oct 1, 2025
@pyropy pyropy marked this pull request as ready for review October 1, 2025 11:11
@pyropy pyropy requested review from juliangruber and bajtos October 1, 2025 11:11
@pyropy pyropy self-assigned this Oct 1, 2025
@pyropy pyropy changed the title WIP: feat: implement FilBeam (operator) contract feat: implement FilBeam (operator) contract Oct 1, 2025
@bajtos bajtos left a comment

Great start!


// Get environment variables
address fwssAddress = vm.envAddress("FWSS_ADDRESS");
address usdfcAddress = vm.envAddress("USDFC_ADDRESS");

Can we read the USDFC address from the FWSS smart contract?

See https://github.com/FilOzone/filecoin-services/blob/d21495a96daf58c01d1a1e362a96f642822e72f4/service_contracts/src/FilecoinWarmStorageService.sol#L184-L190

    // External contract addresses
    address public immutable pdpVerifierAddress;
    address public immutable paymentsContractAddress;
    IERC20Metadata public immutable usdfcTokenAddress;
    address public immutable filBeamBeneficiaryAddress;
    ServiceProviderRegistry public immutable serviceProviderRegistry;
    SessionKeyRegistry public immutable sessionKeyRegistry;

Contributor Author

Great idea, added in e87edf9

mapping(uint256 => DataSetUsage) public dataSetUsage;

event UsageReported(
uint256 indexed dataSetId, uint256 indexed epoch, uint256 cdnBytesUsed, uint256 cacheMissBytesUsed

IIRC, we are reporting aggregated usage for the interval maxReportedEpoch+1 to endEpoch. Let's rename the event argument to make it clear what kind of epoch it is describing.

Suggested change
uint256 indexed dataSetId, uint256 indexed epoch, uint256 cdnBytesUsed, uint256 cacheMissBytesUsed
uint256 indexed dataSetId, uint256 indexed endEpoch, uint256 cdnBytesUsed, uint256 cacheMissBytesUsed

A few more alternatives to consider:

  • rollupEpoch
  • maxReportedEpoch
  • lastReportedEpoch
  • lastEpoch
  • untilEpoch
  • toEpoch (for consistency with the events below)

Contributor Author

Renamed in 69814c0

src/FilBeam.sol Outdated
_;
}

function reportUsageRollup(uint256 dataSetId, uint256 newEpoch, uint256 cdnBytesUsed, uint256 cacheMissBytesUsed)

From the perspective of the smart contract, this function is not reporting the usage, but it is recording the usage.

Suggested change
function reportUsageRollup(uint256 dataSetId, uint256 newEpoch, uint256 cdnBytesUsed, uint256 cacheMissBytesUsed)
function recordUsageRollup(uint256 dataSetId, uint256 newEpoch, uint256 cdnBytesUsed, uint256 cacheMissBytesUsed)

Contributor Author

Renamed in 69814c0

src/FilBeam.sol Outdated
_reportUsageRollup(dataSetId, newEpoch, cdnBytesUsed, cacheMissBytesUsed);
}

function reportUsageRollupBatch(

Ditto

Suggested change
function reportUsageRollupBatch(
function recordUsageRollupBatch(

Member

are we confident we need this function?

Contributor Author

@juliangruber I'd argue that we need it; otherwise we'd have to call this contract for each data set separately.

Member

otherwise we'd need to call this contract for each data set separately.

that is correct, but have we identified that calling the contract for each data set separately is an issue?

Or do we know we always want to call this for many data sets, in which case shall we only use the batch function instead?


In my understanding, this method will be called by the offchain worker to report egress usage. The worker needs to send data for each dataset with non-zero egress consumed.

I am concerned that sending one TX per dataset is not going to scale and will cost us too much on gas fees.

Contributor Author

that is correct, but have we identified that calling the contract for each data set separately is an issue?

There is no exact issue, but doing so would make our Cloudflare worker that reports usage data much more complex (due to nonce tracking). Also, having fewer transactions should make this easier to debug in case something goes wrong.

Or do we know we always want to call this for many data sets, in which case shall we only use the batch function instead?

I believe that we'll mainly use batch method. I think removing single methods might be a good idea.

Contributor Author

I've removed single methods in 1f34058

src/FilBeam.sol Outdated
_;
}

function reportUsageRollup(uint256 dataSetId, uint256 newEpoch, uint256 cdnBytesUsed, uint256 cacheMissBytesUsed)

I find the name newEpoch confusing. Let's use the same name as we will use in the emitted event, see the discussion in https://github.com/filbeam/contracts/pull/1/files#r2405388959.

For example: toEpoch.

Member

or maxReportedEpoch, to match the state field

Contributor Author

Renamed to toEpoch in 69814c0

src/FilBeam.sol Outdated
usage.cacheMissBytesUsed += cacheMissBytesUsed;
usage.maxReportedEpoch = newEpoch;

emit UsageReported(dataSetId, newEpoch, cdnBytesUsed, cacheMissBytesUsed);

Should the event also include the start epoch of the period for which the rollup was reported?

Suggested change
emit UsageReported(dataSetId, newEpoch, cdnBytesUsed, cacheMissBytesUsed);
uint256 fromEpoch = usage.maxReportedEpoch + 1;
usage.maxReportedEpoch = newEpoch;
emit UsageReported(dataSetId, fromEpoch, toEpoch, cdnBytesUsed, cacheMissBytesUsed);

But then, if we want this to be truly robust, then the function _reportUsageRollup should accept fromEpoch as well and verify that fromEpoch > usage.maxReportedEpoch.

I guess we don't need any of that right now.

Member

good idea, I also requested that above

Member

Would be great to have these safety mechanisms in place before we go on mainnet

Comment on lines +172 to +179
function setCDNRatePerByte(uint256 _cdnRatePerByte) external onlyOwner {
if (_cdnRatePerByte == 0) revert InvalidRate();

uint256 oldRate = cdnRatePerByte;
cdnRatePerByte = _cdnRatePerByte;

emit CDNRateUpdated(oldRate, _cdnRatePerByte);
}

How should we handle the price change for existing rails that have some egress consumption accounted for but not settled yet?

In your proposal, we apply the pricing change retroactively to all consumption that has not been paid for yet. That does not seem friendly to our users.

Can the price change trigger the settlement of all active rails? But that would mean we have to pay NETWORK_FEE for all SPs.

How about changing the internal data model instead:

  • Don't track cacheMissBytesUsed, but track cacheMissAmount instead.
  • When processing a new rollup, set cacheMissAmount += cacheMissBytesUsed * cacheMissRatePerByte.
  • When the rate changes, it will affect only future rollups.

(The same applies to the CDN payments.)
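
The proposed accounting change can be sketched as follows (names taken from this review thread and the later commit; assumes the DataSetUsage struct and InvalidEpoch error defined elsewhere in the PR):

```solidity
// Price the usage with the *current* rates at report time, so a later
// rate change affects only future rollups, never already-recorded usage.
function _recordUsageRollup(
    uint256 dataSetId,
    uint256 toEpoch,
    uint256 cdnBytesUsed,
    uint256 cacheMissBytesUsed
) internal {
    DataSetUsage storage usage = dataSetUsage[dataSetId];
    if (toEpoch <= usage.maxReportedEpoch) revert InvalidEpoch();

    usage.cdnAmount += cdnBytesUsed * cdnRatePerByte;
    usage.cacheMissAmount += cacheMissBytesUsed * cacheMissRatePerByte;
    usage.maxReportedEpoch = toEpoch;

    emit UsageReported(dataSetId, toEpoch, cdnBytesUsed, cacheMissBytesUsed);
}
```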

Contributor Author

@bajtos I really like that suggestion! WDYT @juliangruber ?

Member

Good question, this is really a product question. If we change the rate the user needs enough time to opt out of paying the new rate, if they disagree with it. @pyropy could you gather some product input on this? Eg are we fine increasing the rate immediately, can we only do it for new data sets, etc?

Contributor Author

@juliangruber I think that this might be a good follow-up item as it seems too complex and out of scope for this milestone. I'll make sure to get some input from @patrickwoodhead on this too.


If we change the rate the user needs enough time to opt out of paying the new rate, if they disagree with it.

I agree we should eventually implement a more user friendly process around price changes.

At the same time, I agree with Srdjan that we should focus on the minimum viable solution to ship egress-based payments soon.

I consider my proposal a good middle ground - we don't apply the price change retroactively (except for the brief period between the egress usage rollup submissions), and we keep the implementation very simple.

Contributor Author

I've applied the suggestion from @bajtos in eed86bc

Member

can we name this contract FilBeamOperator? There might be more other FilBeam contracts in the future

Contributor Author

SGTM!

Contributor Author

Done in 69814c0



src/FilBeam.sol Outdated

DataSetUsage storage usage = dataSetUsage[dataSetId];

if (newEpoch <= usage.maxReportedEpoch) revert InvalidEpoch();
Member

what about adding a safety argument fromReportedEpoch, with this:

if (fromReportedEpoch != usage.maxReportedEpoch + 1) revert InvalidEpoch()

pyropy added 11 commits October 7, 2025 11:15
  Update FWSS interface function name to better reflect that it handles
  both CDN and cache-miss settlement for FilBeam payment rails.

  - Update IFWSS interface
  - Update MockFWSS implementation
  - Update calls in FilBeamOperator contract
  - Update SPEC.md documentation
  - Remove recordUsageRollup, settleCDNPaymentRail, and settleCacheMissPaymentRail
  - Keep only batch methods: recordUsageRollupBatch, settleCDNPaymentRailBatch, settleCacheMissPaymentRailBatch
  - Update tests to use batch methods with single-element arrays
  - Add _singleUint256Array helper function for test readability
  - Update README.md and SPEC.md to document batch-only API
  - recordUsageRollupBatch → recordUsageRollups
  - settleCDNPaymentRailBatch → settleCDNPaymentRails
  - settleCacheMissPaymentRailBatch → settleCacheMissPaymentRails
  - Add usdfcTokenAddress() getter to IFWSS interface
  - Update deployment script to query FWSS for USDFC address
  - Remove USDFC_ADDRESS environment variable requirement
  - Add setUsdfcTokenAddress() to MockFWSS for testing
  - Update deployment documentation
…ent time

Addresses review feedback to ensure rate changes only affect future usage reports, not accumulated unsettled usage.

Changes:
- Rename DataSetUsage fields: cdnBytesUsed → cdnAmount, cacheMissBytesUsed → cacheMissAmount
- Calculate amounts at report time: amount += bytes * currentRate
- Store amounts instead of bytes between settlements
- Settlement functions now use stored amounts instead of calculating
  based on current rate
- Update getDataSetUsage to return amounts instead of bytes
- Update all tests to check amounts (bytes * rate) instead of raw bytes
- Update documentation to clarify rate changes only affect future reports
@Copilot Copilot AI left a comment

Pull Request Overview

Copilot reviewed 16 out of 17 changed files in this pull request and generated no new comments.



Development

Successfully merging this pull request may close these issues.

Create a verifier contract that receives and exposes per-data set egress data