fix(v2.2): metadata override #732
Conversation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.

Walkthrough

This pull request implements extensive refactoring and enhancements across the repository. Changes include updates to ignore patterns and Earthfile package versions, removal of deprecated dependencies in command and driver initialization, and comprehensive type adjustments—particularly converting numeric IDs to pointer types and adding corresponding “WithID” methods. Numerous internal API controllers, mocks, routers, and tests have been updated to use unified type references and versioned parameters. In addition, migration scripts, bucket factory interfaces, and client SDK endpoints have been revised to reflect a new database connection abstraction and improved API versioning.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Router
    participant V1Controller
    Client->>Router: GET /_info
    Router->>V1Controller: Call GetInfo(systemController, version)
    V1Controller-->>Router: Return server info
    Router-->>Client: Respond with info
```
85b929e to b1a81b7
Caution: Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently, it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 11
🧹 Nitpick comments (26)
internal/storage/bucket/migrations/18-transactions-fill-inserted-at/up.sql (1)
11-20: Great optimization for Azure Postgres compatibility and performance.

The approach to replace session-level trigger control with column-specific triggers is an excellent solution that:
- Avoids the need for superuser privileges on Azure Postgres
- Improves performance by only firing the trigger when metadata actually changes
- Creates properly scoped triggers per ledger
This is a well-thought-out change that addresses both compatibility and efficiency concerns.
For future maintenance, consider documenting this Azure Postgres compatibility requirement in your migration guidelines or infrastructure documentation, as it represents an important platform-specific constraint.
internal/storage/bucket/migrations/16-create-transaction-id-index-on-moves/up.sql (1)
1-1: Conditional Index on Moves Transactions ID with Formatting Consistency Note

The query conditionally adds the `concurrently` keyword for creating an index on the `moves` table. While functionally correct, note that the templating for `"{{ .Schema }}"` in this file contains extra whitespace compared to `"{{.Schema}}"` in the other migration files. For consistency and to avoid any potential templating issues, please review and standardize the formatting.

internal/storage/bucket/migrations/14-transaction-reference-index/up.sql (1)
1-1: Conditional "concurrently" Keyword Inclusion

This change conditionally inserts the `concurrently` keyword based on the `.Transactional` flag, which is a good strategy for handling both transactional and non-transactional migration contexts. Ensure that the `.Transactional` variable is correctly set in your deployment/migration configuration so that the intended behavior is achieved. Additionally, consider adding an inline comment to explain the purpose of the condition for future maintainers.

```diff
-create unique index {{ if not .Transactional }}concurrently{{end}} transactions_reference2 on "{{.Schema}}".transactions (ledger, reference) where reference <> '';
+-- The .Transactional flag determines if the migration should run non-transactionally and include the "concurrently" keyword.
+create unique index {{ if not .Transactional }}concurrently{{end}} transactions_reference2 on "{{.Schema}}".transactions (ledger, reference) where reference <> '';
```

internal/machine/vm/run_test.go (1)
455-509: Expand test coverage for numeric edge cases

The current tests only cover positive integers. Financial applications should test edge cases like decimal values, negative numbers, and zero values to ensure proper handling.

Consider adding these test cases:

```diff
 testCases := []testCase{
 	{
 		name: "float64 conversion",
 		inputVars: map[string]any{
 			"amount": map[string]any{
 				"asset":  "USD",
 				"amount": float64(999999999999999),
 			},
 		},
 		expected: map[string]string{
 			"amount": "USD 999999999999999",
 		},
 	},
 	{
 		name: "big int conversion",
 		inputVars: map[string]any{
 			"amount": map[string]any{
 				"asset": "USD",
 				"amount": func() string {
 					ret, _ := big.NewInt(0).SetString("9999999999999999999999999999999999999999", 10)
 					return ret.String()
 				}(),
 			},
 		},
 		expected: map[string]string{
 			"amount": "USD 9999999999999999999999999999999999999999",
 		},
 	},
+	{
+		name: "decimal number conversion",
+		inputVars: map[string]any{
+			"amount": map[string]any{
+				"asset":  "EUR",
+				"amount": float64(123.45),
+			},
+		},
+		expected: map[string]string{
+			"amount": "EUR 123.45",
+		},
+	},
+	{
+		name: "negative number conversion",
+		inputVars: map[string]any{
+			"amount": map[string]any{
+				"asset":  "JPY",
+				"amount": float64(-5000),
+			},
+		},
+		expected: map[string]string{
+			"amount": "JPY -5000",
+		},
+	},
+	{
+		name: "zero value conversion",
+		inputVars: map[string]any{
+			"amount": map[string]any{
+				"asset":  "BTC",
+				"amount": float64(0),
+			},
+		},
+		expected: map[string]string{
+			"amount": "BTC 0",
+		},
+	},
 }
```

internal/storage/system/migrations.go (1)
14-14: Good interface-based design improvement.

Changing the function parameter from `*bun.DB` to `bun.IDB` is a positive improvement. This switch to an interface-based approach allows for better abstraction, increased flexibility, and improved testability through dependency injection.

This change aligns well with the goal of improving the database abstraction mentioned in the PR objectives. Using interfaces instead of concrete types is a best practice that will make future modifications to the database layer easier, especially if different database implementations are needed.
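The testability benefit can be sketched in isolation. `Querier` below is a hypothetical local stand-in for `bun.IDB`; the stub type, the `migrate` helper, and the SQL string are illustrative only, not the repository's actual API:

```go
package main

import "fmt"

// Querier is a hypothetical stand-in for bun.IDB: the migration depends on
// this narrow interface instead of a concrete *bun.DB, so tests can inject
// a stub instead of opening a real connection.
type Querier interface {
	Exec(query string) (rowsAffected int, err error)
}

// stubDB records executed statements instead of touching a database.
type stubDB struct {
	executed []string
}

func (s *stubDB) Exec(query string) (int, error) {
	s.executed = append(s.executed, query)
	return 1, nil
}

// migrate accepts the interface, mirroring the change from *bun.DB to bun.IDB.
func migrate(db Querier) error {
	_, err := db.Exec(`alter table logs add column if not exists idempotency_hash text`)
	return err
}

func main() {
	db := &stubDB{}
	if err := migrate(db); err != nil {
		panic(err)
	}
	fmt.Println(len(db.executed)) // 1
}
```

Because `*bun.DB` and `bun.Tx` both satisfy `bun.IDB`, the same migration code can also run inside a transaction without any signature change.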
internal/storage/ledger/accounts_test.go (1)
29-30: Enhanced test setup with time adjustment and debugging context.

These changes improve the test by:
- Setting `now` to one minute in the past to create a more realistic testing timeline
- Enabling debug mode for the database operations, which will help with diagnosing test issues

These modifications support the PR objective of improving "transaction reversion handling for empty middle accounts" by creating a more controlled testing environment with better observability.

Consider adding a comment explaining why the time is set one minute in the past, as this temporal adjustment might not be immediately obvious to other developers.

```diff
-now := time.Now().Add(-time.Minute)
+// Set time one minute in the past to ensure proper ordering of test transactions
+now := time.Now().Add(-time.Minute)
```

internal/controller/ledger/controller_default.go (1)
408-414: Great enhancement for transaction reversion handling

This important addition properly handles the case where a destination account is also a source in some postings, which fixes balance inconsistencies during transaction reversions. This resolves the "transaction reversion handling for empty middle accounts" issue mentioned in the PR objectives.

Consider adding a comment explaining why this block is necessary, highlighting that it specifically handles accounts that appear as both source and destination in different postings:

```diff
 if _, ok := balances[posting.Destination]; ok {
-	// if destination is also a source in some posting, since balances should only contain posting sources
+	// Handle the special case where an account is both a destination in this posting
+	// and a source in another posting. Since balances map only tracks accounts that
+	// appear as sources, we need to explicitly update the balance when the account
+	// is acting as a destination.
 	balances[posting.Destination][posting.Asset] = balances[posting.Destination][posting.Asset].Add(
 		balances[posting.Destination][posting.Asset],
 		posting.Amount,
 	)
 }
```

internal/storage/ledger/logs_test.go (1)
26-26: Improved test naming convention

Renaming from `TestInsertLog` to `TestLogsInsert` follows a more consistent naming pattern that puts the subject (Logs) before the action (Insert), making it easier to group related tests.

internal/storage/ledger/accounts.go (1)
77-77: Added rows affected debug output

Another debugging statement that dumps the number of affected rows.

```diff
-spew.Dump(rowsAffected)
+// Consider using a structured logger instead:
+// logger.Debug("accounts metadata updated", "rows_affected", rowsAffected)
```

internal/storage/driver/driver.go (5)
30-31: Storing DB and migration config in the Driver.

The new fields, `db *bun.DB`, `migrationRetryPeriod time.Duration`, and `parallelBucketMigrations int`, promote clarity regarding how the driver manages database operations and parallel migrations. Verify that these concurrency settings align with your performance requirements.

Also applies to: 35-36
136-137: Downgrade detection for individual buckets.

Using `d.bucketFactory.GetMigrator(b, d.db)` is consistent with the new database injection pattern. Confirm that each bucket migrator is properly configured for concurrency if multiple migrations run in parallel.
178-178: Create bucket within the worker pool.

This fosters concurrency by injecting `d.db`. Just be mindful of potential DB connection pool constraints if scaling up.
220-220: Up-to-date check for the system store.

Straightforward. Keep an eye out for potential performance impacts if called frequently.
228-228: Listing distinct buckets for minimal version checks.

Reiterating the note on indexing/caching if the bucket list grows large.
internal/storage/ledger/legacy/adapters.go (6)
76-78: Panic for GetOne method on logs.

Using `panic("never used")` might cause unexpected crashes. Prefer returning an error to gracefully handle unimplemented functionality.
80-82: Panic for Count method on logs.

Same concern as above—returning an error can be safer than panicking.
157-181: Refine big.Int usage.

At line 175, calling `Add(new(big.Int), balance)` is effectively adding 0 and could be simplified by directly using `balance` or `big.NewInt(0)`.
183-185: Panic for Count method on aggregated balances.

Returning an error is often safer than panicking.
208-210: Unimplemented GetOne method for aggregated volumes.

Same note about `panic`: returning an error is safer.
212-214: Unimplemented Count method for aggregated volumes.

Consider returning an error to handle the unimplemented case more gracefully.

pkg/client/USAGE.md (1)
21-25: Consider adding version migration guidance

Since this appears to be part of a significant API version change, it would be helpful to include a comment or note in the documentation mentioning the migration from V2 to V1 for the GetInfo method, explaining the reasoning behind it and any other endpoints that might be affected by similar changes.

```diff
-	res, err := s.Ledger.V1.GetInfo(ctx)
-	if err != nil {
-		log.Fatal(err)
-	}
-	if res.ConfigInfoResponse != nil {
+	// Note: In v2.2, GetInfo was moved from Ledger to Ledger.V1 namespace
+	// as part of the compatibility layer with v2.1
+	res, err := s.Ledger.V1.GetInfo(ctx)
+	if err != nil {
+		log.Fatal(err)
+	}
+	if res.ConfigInfoResponse != nil {
```

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

21-21: Hard tabs, Column: 1 (MD010, no-hard-tabs)
22-22: Hard tabs, Column: 1 (MD010, no-hard-tabs)
23-23: Hard tabs, Column: 1 (MD010, no-hard-tabs)
24-24: Hard tabs, Column: 1 (MD010, no-hard-tabs)
25-25: Hard tabs, Column: 1 (MD010, no-hard-tabs)
internal/api/v2/controllers_transactions_revert_test.go (1)
84-84: Consider making the version configurable rather than hardcoded

The version string "develop" is hardcoded in the test, which might make it difficult to test with different API versions. Consider making this a configurable parameter or constant that could be easily adjusted for testing various version compatibility scenarios.

```diff
-router := NewRouter(systemController, auth.NewNoAuth(), "develop", os.Getenv("DEBUG") == "true")
+const testVersion = "develop" // Version used for testing
+router := NewRouter(systemController, auth.NewNoAuth(), testVersion, os.Getenv("DEBUG") == "true")
```

Or better yet, consider using a test helper function:

```diff
-router := NewRouter(systemController, auth.NewNoAuth(), "develop", os.Getenv("DEBUG") == "true")
+router := newTestRouter(t, systemController) // Uses a helper that centralizes router test config
```

internal/controller/ledger/log_process.go (1)
140-140: More explicit idempotency hash validation.

The condition for checking the idempotency hash has been improved to use an explicit length check instead of treating the string as a boolean.

This is a good change as `len(log.IdempotencyHash) > 0` more clearly expresses the intent to check if the hash has content, rather than relying on string-to-boolean conversion behaviors.

For even better readability, you could consider:

```diff
-if len(log.IdempotencyHash) > 0 {
+if log.IdempotencyHash != "" {
```

Both approaches are valid, but the empty string comparison might be slightly more idiomatic in Go for string presence checks.
pkg/client/v2.go (1)
34-35: Avoid repeating OAuth2 scope.

Notice the `OAuth2Scopes` array here adds `"ledger:read"` twice. While not harmful, it may be cleaner to deduplicate it if the code is not strictly auto-generated.

```diff
-	OAuth2Scopes: []string{"ledger:read", "ledger:read"},
+	OAuth2Scopes: []string{"ledger:read"},
```

internal/storage/driver/driver_test.go (2)
34-52: Enhance goroutine error handling with context propagation

The current implementation collects errors from goroutines, which is good, but doesn't handle timeout or cancellation scenarios. Consider adding context propagation to ensure goroutines can be properly terminated if the test needs to time out.

```diff
-go func() {
+go func(i int) {
 	defer wg.Done()
+
+	// Use a context with timeout to prevent hanging goroutines
+	timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
+	defer cancel()

 	l, err := ledger.New(fmt.Sprintf("ledger%d", i), ledger.Configuration{
 		Bucket: buckets[rand.Int31n(int32(len(buckets)))],
 	})
 	if err != nil {
 		errors <- err
 		return
 	}

-	_, err = d.CreateLedger(ctx, l)
+	_, err = d.CreateLedger(timeoutCtx, l)
 	if err != nil {
 		errors <- err
 		return
 	}
-}()
+}(i)
```
39-40: Add seed for random number generation to ensure test reproducibility

The test uses random bucket selection without setting a fixed seed, which could lead to non-deterministic test behavior.

```diff
+// Set a fixed seed for reproducible tests
+rand.Seed(42)
+
 l, err := ledger.New(fmt.Sprintf("ledger%d", i), ledger.Configuration{
 	Bucket: buckets[rand.Int31n(int32(len(buckets)))],
 })
```
🛑 Comments failed to post (11)
internal/machine/vm/run.go (1)
38-43: ⚠️ Potential issue

Fix potential precision loss and add missing default case in type switch

The current implementation converts float64 amounts to integers, which can cause precision loss for large numbers or numbers with decimal points. Additionally, there's no default case to handle other types.

Apply this change to handle amounts more robustly:

```diff
 switch amount := v["amount"].(type) {
 case string:
 	s.Script.Vars[k] = fmt.Sprintf("%s %s", v["asset"], amount)
 case float64:
-	s.Script.Vars[k] = fmt.Sprintf("%s %d", v["asset"], int(amount))
+	// Preserve original precision by using the numeric format directly
+	s.Script.Vars[k] = fmt.Sprintf("%s %v", v["asset"], amount)
+default:
+	// Handle other types gracefully
+	s.Script.Vars[k] = fmt.Sprintf("%s %v", v["asset"], amount)
 }
```
internal/storage/ledger/accounts.go (2)
60-65: 🛠️ Refactor suggestion

Remove redundant debug statements

Six identical debug print statements that will bloat logs with repetitive "rows affected" messages without providing any context about which rows or how many.

```diff
-fmt.Println("rows affected")
-fmt.Println("rows affected")
-fmt.Println("rows affected")
-fmt.Println("rows affected")
-fmt.Println("rows affected")
-fmt.Println("rows affected")
+// Consider using a single log statement with context, if needed:
+// fmt.Printf("Update accounts metadata: %d rows affected\n", rowsAffected)
```
71-71: ⚠️ Potential issue

Added table dump for debugging

This will dump the entire accounts table to logs after updating metadata, which could potentially include sensitive information and create very large log entries.

This is likely only intended for development debugging. Dumping entire database tables to logs in production would be a serious concern for both performance and security reasons.

```diff
-store.DumpTables(ctx, "accounts")
+// Only enable in development mode:
+// if isDevelopmentMode() {
+//     store.DumpTables(ctx, "accounts")
+// }
```
internal/log.go (2)
107-109: ⚠️ Potential issue

Potential nil pointer dereference

`previous != nil` does not guarantee `previous.ID != nil`. If `previous.ID` is nil, `*previous.ID` will panic.

```diff
--- a/internal/log.go
+++ b/internal/log.go
@@ -105,6 +105,10 @@ func (l Log) ChainLog(previous *Log) Log {
 	ret.ComputeHash(previous)
 	if previous != nil {
-		ret.ID = pointer.For(*previous.ID + 1)
+		if previous.ID == nil {
+			ret.ID = pointer.For(1)
+		} else {
+			ret.ID = pointer.For(*previous.ID + 1)
+		}
 	} else {
 		ret.ID = pointer.For(1)
 	}
```
356-356: ⚠️ Potential issue

Potential nil pointer dereference

Dereferencing `r.RevertedTransaction.ID` could panic if `ID` is nil. Consider handling or guarding this case.

```diff
--- a/internal/log.go
+++ b/internal/log.go
@@ -355,7 +355,13 @@ func (r RevertedTransaction) GetMemento() any {
 	{
-		RevertedTransactionID: *r.RevertedTransaction.ID,
+		RevertedTransactionID: func() int {
+			if r.RevertedTransaction.ID == nil {
+				// Decide on fallback behavior – returning 0 or an error
+				return 0
+			}
+			return *r.RevertedTransaction.ID
+		}(),
 		RevertTransaction: transactionResume{
 			Postings: r.RevertTransaction.Postings,
 			Metadata: r.RevertTransaction.Metadata,
 		},
```
internal/storage/bucket/migrations/11-make-stateless/up.sql (1)
465-465: ⚠️ Potential issue

Fixed character encoding for hash calculation.

Changed from LATIN1 to UTF-8 encoding when calculating the hash with previous hash data. This directly addresses the PR objective of fixing transactions with special characters in metadata.

This change is crucial as it ensures proper handling of non-ASCII characters in metadata, preventing corruption during hash calculation. Without this change, special characters in metadata could cause incorrect hash values and potential data integrity issues.
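For context on why the encoding matters: the same text produces different byte sequences under LATIN1 and UTF-8, so a hash over re-encoded bytes differs from a hash over the original. Go strings are UTF-8 natively, which the sketch below uses for illustration (the metadata value is made up):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// "é" (U+00E9) is two bytes in UTF-8 (0xC3 0xA9) but a single byte in
	// LATIN1 (0xE9), so the two encodings of this value hash differently.
	meta := `{"note":"café"}`

	utf8Bytes := []byte(meta) // Go string literals are already UTF-8
	fmt.Println(len(utf8Bytes), "bytes for", len([]rune(meta)), "characters")

	// Hashing the UTF-8 bytes is stable for any Unicode metadata value.
	sum := sha256.Sum256(utf8Bytes)
	fmt.Printf("%x\n", sum[:8])
}
```

This is why the application-side hash and the migration's database-side hash only agree once both operate on UTF-8 bytes.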
internal/storage/ledger/legacy/adapters.go (2)
22-39: 🛠️ Refactor suggestion

Potential panic from type assertion.

At line 25, the code does `address = value.(string)`. If `value` is not a string, this will panic. Consider validating the type or checking for errors.
103-120: 🛠️ Refactor suggestion

Potential panic from type assertion.

At line 106, `id = value.(int)` can panic if the value is not an `int`. Consider adding error handling.
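Both spots can use the comma-ok form of the type assertion so a mismatched filter value becomes an error rather than a panic. The helpers below are hypothetical illustrations, not the adapter's real signatures:

```go
package main

import "fmt"

// filterString extracts a string using the comma-ok assertion, returning an
// error instead of panicking when the dynamic type does not match.
func filterString(value any) (string, error) {
	s, ok := value.(string)
	if !ok {
		return "", fmt.Errorf("expected string filter value, got %T", value)
	}
	return s, nil
}

// filterInt does the same for int-valued filters.
func filterInt(value any) (int, error) {
	i, ok := value.(int)
	if !ok {
		return 0, fmt.Errorf("expected int filter value, got %T", value)
	}
	return i, nil
}

func main() {
	if addr, err := filterString("users:001"); err == nil {
		fmt.Println(addr) // users:001
	}
	if _, err := filterInt("not-an-int"); err != nil {
		fmt.Println(err) // expected int filter value, got string
	}
}
```

The error carries the offending dynamic type via `%T`, which makes a bad filter value in an API query diagnosable from logs.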
419-419: 💡 Verification agent
🧩 Analysis chain
Inconsistent router configuration parameters
The router initialization has been changed from using a boolean expression to a hardcoded string "develop" as the third parameter, but the fourth parameter still uses the environment variable check. This inconsistency suggests a potential issue with the router configuration.
🏁 Script executed:
#!/bin/bash # Find the NewRouter definition to understand the expected parameters rg -A 3 "func NewRouter\(" --type goLength of output: 659
Router Initialization Parameter Mismatch – Remove Extra Boolean Argument

The function signatures for `NewRouter` in all its instances (e.g., in `internal/api/router.go` and `internal/api/v2/routes.go`) expect exactly three arguments:
- `systemController system.Controller`
- `authenticator auth.Authenticator`
- `version string`

However, in `internal/api/v2/controllers_transactions_create_test.go` (line 419), `NewRouter` is being called with a fourth parameter (`os.Getenv("DEBUG") == "true"`). This extra boolean parameter is inconsistent with the current function definition and should be removed.

Action Required:
- Update the router initialization in the test file to:

```go
router := NewRouter(systemController, auth.NewNoAuth(), "develop")
```

pkg/client/USAGE.md (1)
25-25: 💡 Verification agent

🧩 Analysis chain

Response structure updated: V2ConfigInfoResponse → ConfigInfoResponse

The response check has been updated from `res.V2ConfigInfoResponse` to `res.ConfigInfoResponse`, aligning with the broader API versioning changes throughout the codebase.

🏁 Script executed:

```shell
#!/bin/bash
# Verify this change is consistent across the codebase
echo "Checking for any remaining V2ConfigInfoResponse references that might need updating:"
rg "V2ConfigInfoResponse" --type go -C 2
```

Length of output: 3582
Consistent API Naming Update Required

The documentation snippet in `pkg/client/USAGE.md` correctly reflects the new usage with `res.ConfigInfoResponse`. However, the grep output indicates that many parts of the codebase still refer to the old `V2ConfigInfoResponse` naming, such as:
- `pkg/client/models/components/v2configinforesponse.go` (definition of the `V2ConfigInfoResponse` struct)
- `pkg/client/models/operations/v2getinfo.go` (field and getter method)
- `pkg/client/v2.go` (assignment to `res.V2ConfigInfoResponse`)
- Test files in `test/e2e/` referencing `V2ConfigInfoResponse`

Please verify whether these references are intended (e.g., for backward compatibility) or if the broader API versioning update should extend to these areas as well. If the latter is the case, further updates are required to fully align the codebase with the new `ConfigInfoResponse` naming.

🧰 Tools

🪛 markdownlint-cli2 (0.17.2)

25-25: Hard tabs, Column: 1 (MD010, no-hard-tabs)
internal/storage/driver/driver_test.go (1)
22-67: ⚠️ Potential issue

Bug in concurrent ledger creation loop

There's an issue in the concurrent ledger creation test logic at line 34. The code uses `range countLedgers`, but `countLedgers` is a constant integer, not a collection. Note that ranging over an integer is valid as of Go 1.22, so this only fails to compile on older toolchains; if the module targets Go 1.22 or later, the original form is legal and this is a style preference rather than a bug.

Apply this fix to properly iterate through the desired range:

```diff
-for i := range countLedgers {
+for i := 0; i < countLedgers; i++ {
```

```go
func TestLedgersCreate(t *testing.T) {
	t.Parallel()
	ctx := logging.TestingContext()

	d := driver.New(db, ledgerstore.NewFactory(db), bucket.NewDefaultFactory())

	buckets := []string{"bucket1", "bucket2"}
	const countLedgers = 80

	wg := sync.WaitGroup{}
	wg.Add(countLedgers)
	errors := make(chan error, countLedgers)
	for i := 0; i < countLedgers; i++ {
		go func(i int) { // capture the loop variable correctly
			defer wg.Done()

			l, err := ledger.New(fmt.Sprintf("ledger%d", i), ledger.Configuration{
				Bucket: buckets[rand.Int31n(int32(len(buckets)))],
			})
			if err != nil {
				errors <- err
				return
			}

			_, err = d.CreateLedger(ctx, l)
			if err != nil {
				errors <- err
				return
			}
		}(i)
	}
	wg.Wait()
	close(errors)
	for err := range errors {
		require.NoError(t, err)
	}

	hasReachMinimalVersion, err := d.HasReachMinimalVersion(ctx)
	require.NoError(t, err)
	require.True(t, hasReachMinimalVersion)

	err = d.UpgradeAllBuckets(ctx)
	require.NoError(t, err)
}
```
No description provided.