fix(v2.2): metadata override #732

Merged
gfyrag merged 1 commit into release/v2.2 from hotfix/v2.2/metadata-override on Mar 5, 2025

Conversation

@gfyrag (Contributor) commented Mar 5, 2025

No description provided.

@gfyrag requested a review from a team as a code owner on March 5, 2025 at 16:10

coderabbitai bot commented Mar 5, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

🗂️ Base branches to auto review (1)
  • main

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This pull request implements extensive refactoring and enhancements across the repository. Changes include updates to ignore patterns and Earthfile package versions, removal of deprecated dependencies in command and driver initialization, and comprehensive type adjustments—particularly converting numeric IDs to pointer types and adding corresponding “WithID” methods. Numerous internal API controllers, mocks, routers, and tests have been updated to use unified type references and versioned parameters. In addition, migration scripts, bucket factory interfaces, and client SDK endpoints have been revised to reflect a new database connection abstraction and improved API versioning.
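As a rough sketch of the pointer-ID pattern mentioned above (the type and method below are illustrative, not copied from the repository), the change lets an unassigned ID (nil) stay distinguishable from a legitimate value such as 0:

package ledger

// Transaction is a simplified stand-in for the real model; only the ID field
// is shown to illustrate the int -> *int conversion.
type Transaction struct {
	ID *int
}

// WithID returns a copy of the transaction with its ID set, keeping the
// "unassigned" state (nil) separate from a real identifier like 0.
func (t Transaction) WithID(id int) Transaction {
	t.ID = &id
	return t
}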

Changes

  • .gitignore, Earthfile, deployments/pulumi/Earthfile: Added an ignore entry for Pulumi.*.yaml files; updated the earthly package version from v0.19.0 to v0.19.1.
  • cmd/buckets_upgrade.go, cmd/root.go: Removed the systemstore dependency and adjusted driver initialization to call bucket.NewDefaultFactory() without passing the DB instance.
  • internal/api/... (controllers, mocks, router, v1 & v2 test files): Updated method signatures to replace ledger0 with ledger/internal types; exported functions (e.g., GetInfo); added version-string parameters (e.g., "develop") to router instantiations.
  • internal/README.md, internal/log.go, internal/transaction.go, internal/machine/vm/run.go: Changed ID fields from int to *int and introduced WithID methods; improved type handling (e.g., amount type switch); refined logging and error reporting.
  • internal/storage/bucket/..., internal/storage/driver/..., internal/storage/system/...: Transitioned to passing a database connection as bun.IDB rather than storing *bun.DB; removed the systemStore field from Driver; updated factory and migrator method signatures accordingly.
  • internal/storage/bucket/migrations/*: Modified SQL migration scripts, adding new SQL functions, conditional index creation (with concurrently logic), and trigger adjustments, and changing the character encoding from LATIN1 to UTF-8.
  • pkg/client/* (README.md, USAGE.md, ledger.go, v2.go, testserver/api.go): Updated API client calls to use versioned endpoints (e.g., Ledger.V1.GetInfo and Ledger.V2.GetInfo/GetMetrics); removed deprecated methods; refreshed documentation and bumped the SDK version from 0.5.1 to 0.5.2.
  • internal/storage/ledger/legacy/adapters.go & related test files: Introduced new legacy paginated resource adapters for accounts, logs, transactions, aggregated balances, and volumes; updated tests to dereference ID pointers; added new filtering options in resource handlers.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Router
    participant V1Controller
    Client->>Router: GET /_info
    Router->>V1Controller: Call GetInfo(systemController, version)
    V1Controller-->>Router: Return server info
    Router-->>Client: Respond with info

Possibly related PRs

  • fix: import from 2.1 #688: Addresses similar pointer type adjustments for ID fields in Log and Transaction structs, ensuring consistent identifier handling across the codebase.

Suggested reviewers

  • paul-nicolas

Poem

In a field of code where bugs once played,
I hop in joy with changes arrayed.
Pointers dance like carrots in light,
Factories updated, shining so bright.
With each new test, my heart takes flight—
A bunny’s cheer for code done right! 🐇


@gfyrag changed the base branch from main to release/v2.2 on March 5, 2025 at 16:13
@gfyrag force-pushed the hotfix/v2.2/metadata-override branch from 85b929e to b1a81b7 on March 5, 2025 at 16:16

@coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 11

🧹 Nitpick comments (26)
internal/storage/bucket/migrations/18-transactions-fill-inserted-at/up.sql (1)

11-20: Great optimization for Azure Postgres compatibility and performance.

The approach to replace session-level trigger control with column-specific triggers is an excellent solution that:

  1. Avoids the need for superuser privileges on Azure Postgres
  2. Improves performance by only firing the trigger when metadata actually changes
  3. Creates properly scoped triggers per ledger

This is a well-thought-out change that addresses both compatibility and efficiency concerns.

For future maintenance, consider documenting this Azure Postgres compatibility requirement in your migration guidelines or infrastructure documentation, as it represents an important platform-specific constraint.

internal/storage/bucket/migrations/16-create-transaction-id-index-on-moves/up.sql (1)

1-1: Conditional Index on Moves Transactions ID with Formatting Consistency Note
The query conditionally adds the concurrently keyword for creating an index on the moves table. While functionally correct, note that the templating for "{{ .Schema }}" in this file contains extra whitespace compared to "{{.Schema}}" in the other migration files. For consistency and to avoid any potential templating issues, please review and standardize the formatting.

internal/storage/bucket/migrations/14-transaction-reference-index/up.sql (1)

1-1: Conditional "concurrently" Keyword Inclusion

This change conditionally inserts the concurrently keyword based on the .Transactional flag, which is a good strategy for handling both transactional and non-transactional migration contexts. Ensure that the .Transactional variable is correctly set in your deployment/migration configuration so that the intended behavior is achieved. Additionally, consider adding an inline comment to explain the purpose of the condition for future maintainers.

-create unique index {{ if not .Transactional }}concurrently{{end}} transactions_reference2 on "{{.Schema}}".transactions (ledger, reference) where reference <> '';
+-- The .Transactional flag determines if the migration should run non-transactionally and include the "concurrently" keyword.
+create unique index {{ if not .Transactional }}concurrently{{end}} transactions_reference2 on "{{.Schema}}".transactions (ledger, reference) where reference <> '';
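For reference, a minimal sketch of how such a template flag is rendered with Go's text/template; the schema name and flag value below are hypothetical, not taken from the project's actual migration runner:

package main

import (
	"os"
	"text/template"
)

func main() {
	// When .Transactional is false, the index is created with "concurrently"
	// so the build does not take a blocking lock on the table.
	tmpl := template.Must(template.New("up.sql").Parse(
		`create unique index {{ if not .Transactional }}concurrently{{end}} transactions_reference2 on "{{.Schema}}".transactions (ledger, reference) where reference <> '';`))

	_ = tmpl.Execute(os.Stdout, map[string]any{
		"Schema":        "bucket1", // hypothetical schema name
		"Transactional": false,
	})
}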
internal/machine/vm/run_test.go (1)

455-509: Expand test coverage for numeric edge cases

The current tests only cover positive integers. Financial applications should test edge cases like decimal values, negative numbers, and zero values to ensure proper handling.

Consider adding these test cases:

testCases := []testCase{
    {
        name: "float64 conversion",
        inputVars: map[string]any{
            "amount": map[string]any{
                "asset":  "USD",
                "amount": float64(999999999999999),
            },
        },
        expected: map[string]string{
            "amount": "USD 999999999999999",
        },
    },
    {
        name: "big int conversion",
        inputVars: map[string]any{
            "amount": map[string]any{
                "asset": "USD",
                "amount": func() string {
                    ret, _ := big.NewInt(0).SetString("9999999999999999999999999999999999999999", 10)
                    return ret.String()
                }(),
            },
        },
        expected: map[string]string{
            "amount": "USD 9999999999999999999999999999999999999999",
        },
    },
+   {
+       name: "decimal number conversion",
+       inputVars: map[string]any{
+           "amount": map[string]any{
+               "asset":  "EUR",
+               "amount": float64(123.45),
+           },
+       },
+       expected: map[string]string{
+           "amount": "EUR 123.45",
+       },
+   },
+   {
+       name: "negative number conversion",
+       inputVars: map[string]any{
+           "amount": map[string]any{
+               "asset":  "JPY",
+               "amount": float64(-5000),
+           },
+       },
+       expected: map[string]string{
+           "amount": "JPY -5000",
+       },
+   },
+   {
+       name: "zero value conversion",
+       inputVars: map[string]any{
+           "amount": map[string]any{
+               "asset":  "BTC",
+               "amount": float64(0),
+           },
+       },
+       expected: map[string]string{
+           "amount": "BTC 0",
+       },
+   },
}
internal/storage/system/migrations.go (1)

14-14: Good interface-based design improvement.

Changing the function parameter from *bun.DB to bun.IDB is a positive improvement. This switch to an interface-based approach allows for better abstraction, increased flexibility, and improved testability through dependency injection.

This change aligns well with the goal of improving the database abstraction mentioned in the PR objectives. Using interfaces instead of concrete types is a best practice that will make future modifications to the database layer easier, especially if different database implementations are needed.
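As a minimal sketch of what the interface-based signature enables (the function and table names are assumptions, not the repository's code), the same helper can now run against a *bun.DB or inside a bun.Tx:

package system

import (
	"context"

	"github.com/uptrace/bun"
)

// ensureSchema accepts bun.IDB, which is implemented by both *bun.DB and
// bun.Tx, so the same migration path works on a live connection or inside a
// test-managed transaction.
func ensureSchema(ctx context.Context, db bun.IDB) error {
	_, err := db.ExecContext(ctx, `create table if not exists _example (id int primary key)`)
	return err
}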

internal/storage/ledger/accounts_test.go (1)

29-30: Enhanced test setup with time adjustment and debugging context.

These changes improve the test by:

  1. Setting now to one minute in the past to create a more realistic testing timeline
  2. Enabling debug mode for the database operations, which will help with diagnosing test issues

These modifications support the PR objective of improving "transaction reversion handling for empty middle accounts" by creating a more controlled testing environment with better observability.

Consider adding a comment explaining why the time is set one minute in the past, as this temporal adjustment might not be immediately obvious to other developers.

-now := time.Now().Add(-time.Minute)
+// Set time one minute in the past to ensure proper ordering of test transactions
+now := time.Now().Add(-time.Minute)
internal/controller/ledger/controller_default.go (1)

408-414: Great enhancement for transaction reversion handling

This important addition properly handles the case where a destination account is also a source in some postings, which fixes balance inconsistencies during transaction reversions. This resolves the "transaction reversion handling for empty middle accounts" issue mentioned in the PR objectives.

Consider adding a comment explaining why this block is necessary, highlighting that it specifically handles accounts that appear as both source and destination in different postings:

 if _, ok := balances[posting.Destination]; ok {
-  // if destination is also a source in some posting, since balances should only contain posting sources
+  // Handle the special case where an account is both a destination in this posting
+  // and a source in another posting. Since balances map only tracks accounts that
+  // appear as sources, we need to explicitly update the balance when the account
+  // is acting as a destination.
   balances[posting.Destination][posting.Asset] = balances[posting.Destination][posting.Asset].Add(
     balances[posting.Destination][posting.Asset],
     posting.Amount,
   )
 }
internal/storage/ledger/logs_test.go (1)

26-26: Improved test naming convention

Renaming from TestInsertLog to TestLogsInsert follows a more consistent naming pattern that puts the subject (Logs) before the action (Insert), making it easier to group related tests.

internal/storage/ledger/accounts.go (1)

77-77: Added rows affected debug output

Another debugging statement that dumps the number of affected rows.

-spew.Dump(rowsAffected)
+// Consider using a structured logger instead:
+// logger.Debug("accounts metadata updated", "rows_affected", rowsAffected)
internal/storage/driver/driver.go (5)

30-31: Storing DB and migration config in the Driver.

The new fields, db *bun.DB, migrationRetryPeriod time.Duration, and parallelBucketMigrations int, promote clarity regarding how the driver manages database operations and parallel migrations. Verify that these concurrency settings align with your performance requirements.

Also applies to: 35-36


136-137: Downgrade detection for individual buckets.

Using d.bucketFactory.GetMigrator(b, d.db) is consistent with the new database injection pattern. Confirm that each bucket migrator is properly configured for concurrency if multiple migrations run in parallel.
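As a hedged sketch of how bounded parallelism might look for bucket migrations (the function and parameter names are illustrative, not the driver's actual API), an errgroup with a limit keeps concurrent upgrades from exhausting the connection pool:

package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// upgradeAllBuckets runs one migration per bucket, capped at `parallel`
// concurrent workers (e.g. the parallelBucketMigrations setting noted above).
func upgradeAllBuckets(ctx context.Context, buckets []string, parallel int, upgrade func(context.Context, string) error) error {
	grp, ctx := errgroup.WithContext(ctx)
	grp.SetLimit(parallel)
	for _, bucket := range buckets {
		bucket := bucket // capture loop variable for Go < 1.22
		grp.Go(func() error {
			return upgrade(ctx, bucket)
		})
	}
	return grp.Wait()
}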


178-178: Create bucket within the worker pool.

This fosters concurrency by injecting d.db. Just be mindful of potential DB connection pool constraints if scaling up.


220-220: Up-to-date check for the system store.

Straightforward. Keep an eye out for potential performance impacts if called frequently.


228-228: Listing distinct buckets for minimal version checks.

Reiterating the note on indexing/caching if the bucket list grows large.

internal/storage/ledger/legacy/adapters.go (6)

76-78: Panic for GetOne method on logs.
Using panic("never used") might cause unexpected crashes. Prefer returning an error to gracefully handle unimplemented functionality.


80-82: Panic for Count method on logs.
Same concern as above—returning an error can be safer than panicking.


157-181: Refine big.Int usage.
At line 175, calling Add(new(big.Int), balance) is effectively adding 0 and could be simplified by directly using balance or big.NewInt(0).


183-185: Panic for Count method on aggregated balances.
Returning an error is often safer than panicking.


208-210: Unimplemented GetOne method for aggregated volumes.
Same note about panic: returning an error is safer.


212-214: Unimplemented Count method for aggregated volumes.
Consider returning an error to handle the unimplemented case more gracefully.
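One possible shape for these stubs, sketched under the assumption of a simple adapter type (the receiver and signature below are illustrative):

package legacy

import (
	"context"
	"errors"
)

// ErrNotImplemented signals that a legacy adapter path is intentionally not
// supported, letting callers handle it instead of crashing on a panic.
var ErrNotImplemented = errors.New("legacy adapter: not implemented")

type aggregatedVolumesAdapter struct{}

func (a aggregatedVolumesAdapter) Count(ctx context.Context, query any) (int, error) {
	return 0, ErrNotImplemented
}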

pkg/client/USAGE.md (1)

21-25: Consider adding version migration guidance

Since this appears to be part of a significant API version change, it would be helpful to include a comment or note in the documentation mentioning the migration from V2 to V1 for the GetInfo method, explaining the reasoning behind it and any other endpoints that might be affected by similar changes.

-	res, err := s.Ledger.V1.GetInfo(ctx)
-	if err != nil {
-		log.Fatal(err)
-	}
-	if res.ConfigInfoResponse != nil {
+	// Note: In v2.2, GetInfo was moved from Ledger to Ledger.V1 namespace
+	// as part of the compatibility layer with v2.1
+	res, err := s.Ledger.V1.GetInfo(ctx)
+	if err != nil {
+		log.Fatal(err)
+	}
+	if res.ConfigInfoResponse != nil {
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

21-25: Hard tabs
Column: 1

(MD010, no-hard-tabs)

internal/api/v2/controllers_transactions_revert_test.go (1)

84-84: Consider making the version configurable rather than hardcoded

The version string "develop" is hardcoded in the test, which might make it difficult to test with different API versions. Consider making this a configurable parameter or constant that could be easily adjusted for testing various version compatibility scenarios.

- router := NewRouter(systemController, auth.NewNoAuth(), "develop", os.Getenv("DEBUG") == "true")
+ const testVersion = "develop" // Version used for testing
+ router := NewRouter(systemController, auth.NewNoAuth(), testVersion, os.Getenv("DEBUG") == "true")

Or better yet, consider using a test helper function:

- router := NewRouter(systemController, auth.NewNoAuth(), "develop", os.Getenv("DEBUG") == "true")
+ router := newTestRouter(t, systemController) // Uses a helper that centralizes router test config
internal/controller/ledger/log_process.go (1)

140-140: More explicit idempotency hash validation.

The condition for checking the idempotency hash has been improved to use an explicit length check instead of treating the string as a boolean.

This is a good change as len(log.IdempotencyHash) > 0 more clearly expresses the intent to check if the hash has content, rather than relying on string-to-boolean conversion behaviors.

For even better readability, you could consider:

-if len(log.IdempotencyHash) > 0 {
+if log.IdempotencyHash != "" {

Both approaches are valid, but the empty string comparison might be slightly more idiomatic in Go for string presence checks.

pkg/client/v2.go (1)

34-35: Avoid repeating OAuth2 scope.
Notice the OAuth2Scopes array here adds "ledger:read" twice. While not harmful, it may be cleaner to deduplicate it if the code is not strictly auto-generated.

-	OAuth2Scopes:   []string{"ledger:read", "ledger:read"},
+	OAuth2Scopes:   []string{"ledger:read"},
internal/storage/driver/driver_test.go (2)

34-52: Enhance goroutine error handling with context propagation

The current implementation collects errors from goroutines, which is good, but doesn't handle timeout or cancellation scenarios. Consider adding context propagation to ensure goroutines can be properly terminated if the test needs to time out.

-go func() {
+go func(i int) {
    defer wg.Done()
+   
+   // Use a context with timeout to prevent hanging goroutines
+   timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
+   defer cancel()

    l, err := ledger.New(fmt.Sprintf("ledger%d", i), ledger.Configuration{
        Bucket: buckets[rand.Int31n(int32(len(buckets)))],
    })
    if err != nil {
        errors <- err
        return
    }

-   _, err = d.CreateLedger(ctx, l)
+   _, err = d.CreateLedger(timeoutCtx, l)
    if err != nil {
        errors <- err
        return
    }
-}()
+}(i)

39-40: Add seed for random number generation to ensure test reproducibility

The test uses random bucket selection without setting a fixed seed, which could lead to non-deterministic test behavior.

+// Set a fixed seed for reproducible tests
+rand.Seed(42)
+
l, err := ledger.New(fmt.Sprintf("ledger%d", i), ledger.Configuration{
    Bucket: buckets[rand.Int31n(int32(len(buckets)))],
})
🛑 Comments failed to post (11)
internal/machine/vm/run.go (1)

38-43: ⚠️ Potential issue

Fix potential precision loss and add missing default case in type switch

The current implementation converts float64 amounts to integers, which can cause precision loss for large numbers or numbers with decimal points. Additionally, there's no default case to handle other types.

Apply this change to handle amounts more robustly:

switch amount := v["amount"].(type) {
case string:
    s.Script.Vars[k] = fmt.Sprintf("%s %s", v["asset"], amount)
case float64:
-   s.Script.Vars[k] = fmt.Sprintf("%s %d", v["asset"], int(amount))
+   // Preserve original precision by using the numeric format directly
+   s.Script.Vars[k] = fmt.Sprintf("%s %v", v["asset"], amount)
+default:
+   // Handle other types gracefully
+   s.Script.Vars[k] = fmt.Sprintf("%s %v", v["asset"], amount)
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

			switch amount := v["amount"].(type) {
			case string:
				s.Script.Vars[k] = fmt.Sprintf("%s %s", v["asset"], amount)
			case float64:
				// Preserve original precision by using the numeric format directly
				s.Script.Vars[k] = fmt.Sprintf("%s %v", v["asset"], amount)
			default:
				// Handle other types gracefully
				s.Script.Vars[k] = fmt.Sprintf("%s %v", v["asset"], amount)
			}
internal/storage/ledger/accounts.go (2)

60-65: 🛠️ Refactor suggestion

Remove redundant debug statements

Six identical debug print statements that will bloat logs with repetitive "rows affected" messages without providing any context about which rows or how many.

-fmt.Println("rows affected")
-fmt.Println("rows affected")
-fmt.Println("rows affected")
-fmt.Println("rows affected")
-fmt.Println("rows affected")
-fmt.Println("rows affected")
+// Consider using a single log statement with context, if needed:
+// fmt.Printf("Update accounts metadata: %d rows affected\n", rowsAffected)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

			// Consider using a single log statement with context, if needed:
			// fmt.Printf("Update accounts metadata: %d rows affected\n", rowsAffected)

71-71: ⚠️ Potential issue

Added table dump for debugging

This will dump the entire accounts table to logs after updating metadata, which could potentially include sensitive information and create very large log entries.

This is likely only intended for development debugging. Dumping entire database tables to logs in production would be a serious concern for both performance and security reasons.

-store.DumpTables(ctx, "accounts")
+// Only enable in development mode:
+// if isDevelopmentMode() {
+//   store.DumpTables(ctx, "accounts")
+// }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

			// Only enable in development mode:
			// if isDevelopmentMode() {
			//   store.DumpTables(ctx, "accounts")
			// }
internal/log.go (2)

107-109: ⚠️ Potential issue

Potential nil pointer dereference
previous != nil does not guarantee previous.ID != nil. If previous.ID is nil, *previous.ID will panic.

--- a/internal/log.go
+++ b/internal/log.go
@@ -105,6 +105,10 @@ func (l Log) ChainLog(previous *Log) Log {
        ret.ComputeHash(previous)
        if previous != nil {
-               ret.ID = pointer.For(*previous.ID + 1)
+               if previous.ID == nil {
+                       ret.ID = pointer.For(1)
+               } else {
+                       ret.ID = pointer.For(*previous.ID + 1)
+               }
        } else {
                ret.ID = pointer.For(1)
        }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		if previous != nil {
			if previous.ID == nil {
				ret.ID = pointer.For(1)
			} else {
				ret.ID = pointer.For(*previous.ID + 1)
			}
		} else {
			ret.ID = pointer.For(1)
		}

356-356: ⚠️ Potential issue

Potential nil pointer dereference
Dereferencing r.RevertedTransaction.ID could panic if ID is nil. Consider handling or guarding this case.

--- a/internal/log.go
+++ b/internal/log.go
@@ -355,7 +355,13 @@ func (r RevertedTransaction) GetMemento() any {
 {
-    RevertedTransactionID: *r.RevertedTransaction.ID,
+    RevertedTransactionID: func() int {
+        if r.RevertedTransaction.ID == nil {
+            // Decide on fallback behavior – returning 0 or an error
+            return 0
+        }
+        return *r.RevertedTransaction.ID
+    }(),
     RevertTransaction: transactionResume{
         Postings:  r.RevertTransaction.Postings,
         Metadata:  r.RevertTransaction.Metadata,
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

		RevertedTransactionID: func() int {
			if r.RevertedTransaction.ID == nil {
				// Decide on fallback behavior – returning 0 or an error
				return 0
			}
			return *r.RevertedTransaction.ID
		}(),
		RevertTransaction: transactionResume{
			Postings:  r.RevertTransaction.Postings,
			Metadata:  r.RevertTransaction.Metadata,
		},
internal/storage/bucket/migrations/11-make-stateless/up.sql (1)

465-465: ⚠️ Potential issue

Fixed character encoding for hash calculation.

Changed from LATIN1 to UTF-8 encoding when calculating the hash with previous hash data. This directly addresses the PR objective of fixing transactions with special characters in metadata.

This change is crucial as it ensures proper handling of non-ASCII characters in metadata, preventing corruption during hash calculation. Without this change, special characters in metadata could cause incorrect hash values and potential data integrity issues.
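To illustrate why the encoding matters for the hash (this is a standalone Go sketch, not the migration code), the same metadata string produces different byte sequences, and therefore different digests, depending on the encoding used:

package main

import (
	"crypto/sha256"
	"fmt"

	"golang.org/x/text/encoding/charmap"
)

func main() {
	metadata := `{"note":"café"}` // contains a non-ASCII character

	// Hash over the UTF-8 bytes, as the fixed migration effectively does.
	utf8Sum := sha256.Sum256([]byte(metadata))

	// Re-encoding to LATIN1 first changes the byte representation (and would
	// fail outright for characters with no LATIN1 mapping), so the hash differs.
	latin1, _ := charmap.ISO8859_1.NewEncoder().String(metadata)
	latin1Sum := sha256.Sum256([]byte(latin1))

	fmt.Println(utf8Sum != latin1Sum) // true: the encoding changes the digest
}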

internal/storage/ledger/legacy/adapters.go (2)

22-39: 🛠️ Refactor suggestion

Potential panic from type assertion.
At line 25, the code does address = value.(string). If value is not a string, this will panic. Consider validating the type or checking for errors.


103-120: 🛠️ Refactor suggestion

Potential panic from type assertion.
At line 106, id = value.(int) can panic if the value is not an int. Consider adding error handling.
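A sketch of the comma-ok form that avoids both panics; the helper name and error message are illustrative:

package legacy

import "fmt"

// parseIDFilter shows the comma-ok assertion: a value of the wrong type
// becomes an error the caller can handle rather than a panic.
func parseIDFilter(value any) (int, error) {
	id, ok := value.(int)
	if !ok {
		return 0, fmt.Errorf("expected int for 'id' filter, got %T", value)
	}
	return id, nil
}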

internal/api/v2/controllers_transactions_create_test.go (1)

419-419: 💡 Verification agent

🧩 Analysis chain

Inconsistent router configuration parameters

The router initialization has been changed from using a boolean expression to a hardcoded string "develop" as the third parameter, but the fourth parameter still uses the environment variable check. This inconsistency suggests a potential issue with the router configuration.


🏁 Script executed:

#!/bin/bash
# Find the NewRouter definition to understand the expected parameters
rg -A 3 "func NewRouter\(" --type go

Length of output: 659


Router Initialization Parameter Mismatch – Remove Extra Boolean Argument

The function signatures for NewRouter in all its instances (e.g., in internal/api/router.go and internal/api/v2/routes.go) expect exactly three arguments:

  • systemController system.Controller
  • authenticator auth.Authenticator
  • version string

However, in internal/api/v2/controllers_transactions_create_test.go (line 419), NewRouter is being called with a fourth parameter (os.Getenv("DEBUG") == "true"). This extra boolean parameter is inconsistent with the current function definition and should be removed.

Action Required:

  • Update the router initialization in the test file to:
    router := NewRouter(systemController, auth.NewNoAuth(), "develop")
pkg/client/USAGE.md (1)

25-25: 💡 Verification agent

🧩 Analysis chain

Response structure updated: V2ConfigInfoResponse → ConfigInfoResponse

The response check has been updated from res.V2ConfigInfoResponse to res.ConfigInfoResponse, aligning with the broader API versioning changes throughout the codebase.


🏁 Script executed:

#!/bin/bash
# Verify this change is consistent across the codebase

echo "Checking for any remaining V2ConfigInfoResponse references that might need updating:"
rg "V2ConfigInfoResponse" --type go -C 2

Length of output: 3582


Consistent API Naming Update Required

The documentation snippet in pkg/client/USAGE.md correctly reflects the new usage with res.ConfigInfoResponse. However, the grep output indicates that many parts of the codebase—such as:

  • pkg/client/models/components/v2configinforesponse.go (definition of the V2ConfigInfoResponse struct),
  • pkg/client/models/operations/v2getinfo.go (field and getter method),
  • pkg/client/v2.go (assignment to res.V2ConfigInfoResponse), and
  • Test files in test/e2e/ referencing V2ConfigInfoResponse

—still refer to the old V2ConfigInfoResponse naming.

Please verify whether these references are intended (e.g., for backward compatibility) or if the broader API versioning update should extend to these areas as well. If the latter is the case, further updates are required to fully align the codebase with the new ConfigInfoResponse naming.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

25-25: Hard tabs
Column: 1

(MD010, no-hard-tabs)

internal/storage/driver/driver_test.go (1)

22-67: ⚠️ Potential issue

Bug in concurrent ledger creation loop

There's a portability concern in the concurrent ledger creation test logic at line 34. The code uses range countLedgers; ranging over an integer constant requires Go 1.22 or later, so this will not compile on older toolchains.

If the module must build with an earlier Go version, apply this fix to iterate explicitly:

-for i := range countLedgers {
+for i := 0; i < countLedgers; i++ {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

func TestLedgersCreate(t *testing.T) {
	t.Parallel()
	ctx := logging.TestingContext()

	d := driver.New(db, ledgerstore.NewFactory(db), bucket.NewDefaultFactory())

	buckets := []string{"bucket1", "bucket2"}
	const countLedgers = 80

	wg := sync.WaitGroup{}
	wg.Add(countLedgers)
	errors := make(chan error, countLedgers)
	for i := 0; i < countLedgers; i++ {
		go func(i int) {  // capture the loop variable correctly
			defer wg.Done()

			l, err := ledger.New(fmt.Sprintf("ledger%d", i), ledger.Configuration{
				Bucket: buckets[rand.Int31n(int32(len(buckets)))],
			})
			if err != nil {
				errors <- err
				return
			}

			_, err = d.CreateLedger(ctx, l)
			if err != nil {
				errors <- err
				return
			}
		}(i)
	}
	wg.Wait()

	close(errors)

	for err := range errors {
		require.NoError(t, err)
	}

	hasReachMinimalVersion, err := d.HasReachMinimalVersion(ctx)
	require.NoError(t, err)
	require.True(t, hasReachMinimalVersion)

	err = d.UpgradeAllBuckets(ctx)
	require.NoError(t, err)
}

@gfyrag merged commit 63541a0 into release/v2.2 on Mar 5, 2025
8 checks passed
@gfyrag deleted the hotfix/v2.2/metadata-override branch on March 5, 2025 at 16:32