
feat(parser) add flag to log diff on failure #991

Merged (22 commits) on Feb 12, 2021

Conversation

@rainest (Contributor) commented Dec 18, 2020

What this PR does / why we need it:
Add a --log-failed-config flag that writes a config diff to disk when applying that config fails.

Background
Diagnosing reconciliation loop failures is currently difficult for end users. In the event of a failure, we log admin API responses, but those don't always provide a clear indication of the issue: information about conflicts or invalid configuration isn't always easy to link back to Kubernetes resources, and in some cases, the admin API just encounters a fatal error that provides no useful info at all. Tracebacks or debug messages in Kong logs may provide additional hints, but those aren't necessarily intuitive either.

Logging what the controller wants to apply can simplify that debugging process: if you can see the proposed config, and how it differs from the last successfully applied configuration, it should be easier to find the problem Kubernetes resources or to apply the config to a test Kong instance that does not have the same Kubernetes resources available. That is useful because we usually want to avoid shipping and running debug proxy code in actual user environments if possible.

Implementation considerations

  • Ideally, we'd show a diff per resource to cut down on noise (we don't care about the config that does work). However, the controller builds config into a single unit before passing it off to the DB-less config endpoint or deck (which can't report back about K8S resources because it has no knowledge of them). Diffing the complete configuration is imperfect, but it provides at least some additional information in the absence of more granular information about which K8S resources were used to derive the generated configuration.
  • The complete configuration can contain sensitive information, so this should be gated behind a flag so that config is only dumped when you know you have an issue and want to enable debug dumps. That clashes with the diff approach if you're unsure which piece of K8S config is at fault, so I probably need to have it dump the raw config as well.
  • Complete config dumps would be noisy in logs, and again, some config is sensitive, so while writing a temporary file that you'll then need to copy out of a running container is a bit cumbersome, it seems better than logging.
  • This excludes custom entities, but as those are just a user-provided YAML blob, I don't see much reason to include them--we really only care about stuff sourced from K8S config.
  • This borrows deck's JSON diff function with some minor modifications. The controller doesn't touch deck when building a DB-less state file, so we can't reuse that yet--if we're adding that to deck (as is proposed), then it'd probably make sense to instead add a report-back mechanism to the deck solver and have it provide the diff content. (A rough sketch of the dump-on-failure behavior follows this list.)
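For illustration, here is a minimal Go sketch of the flag-gated dump-on-failure behavior described above: on a failed apply, both the last-known-good and the rejected target configuration are written to a temp directory created at startup, so they can be diffed offline. The names (dumpConfig, kic-config-dump-, last_good.json, failed.json) are assumptions for this sketch, not the PR's actual identifiers, and the placeholder maps stand in for the generated deck state.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// dumpConfig marshals a generated configuration to JSON and writes it under
// the dump directory with the given name. Hypothetical helper for this sketch.
func dumpConfig(dumpDir, name string, config interface{}) error {
	b, err := json.MarshalIndent(config, "", "  ")
	if err != nil {
		return fmt.Errorf("marshaling config: %w", err)
	}
	return os.WriteFile(filepath.Join(dumpDir, name), b, 0o600)
}

func main() {
	// Create the dump directory once at startup instead of using a static
	// path under /tmp (see the later discussion about the linter warning).
	dumpDir, err := os.MkdirTemp("", "kic-config-dump-")
	if err != nil {
		panic(err)
	}

	// After a failed sync, write both configurations so they can be diffed
	// offline or loaded into a test Kong instance.
	_ = dumpConfig(dumpDir, "last_good.json", map[string]string{"example": "previously applied state"})
	_ = dumpConfig(dumpDir, "failed.json", map[string]string{"example": "rejected target state"})
	fmt.Println("config dumps written to", dumpDir)
}
```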

@hbagdi (Member) commented Dec 18, 2020

Why can't this be done at debug log level instead?
+1 to adding this feature but -1 for adding yet another configuration option.

@rainest (Contributor, author) commented Dec 18, 2020

Reworked a bit to create a temp dir at startup and use that, since the linter doesn't like static paths under /tmp, and that's a realistic concern in container-less dev runs.

The rationale for the additional config option is twofold:

  • We generate a complete state from all resources and then attempt to apply it, and that attempt may fail. We can't reliably know what part is the problem, since it's either a single blob we pass to /config (which may just respond with a "nope! something went wrong!" error, with a corresponding traceback like #884 ("An unexpected error occurred when hmac-auth credential resources created") that doesn't provide a good indication of what failed) or a single blob we pass to deck's solver (which does run individual requests, but doesn't report back details about them, and can also encounter inscrutable Lua tracebacks). The controller will diligently retry on failure, so while we could log the blob on each attempt, it'd be quite verbose and repetitive.
  • The state blob can and will contain sensitive info (credentials and such), and we want to limit where it goes. The standard logging channel will often persist elsewhere (scraping container logs and saving them to a central log collector is common practice in many K8S clusters), and we don't want state info to go into it. We can (and should) mitigate that by adding a sanitize mode to toDeckContent() (or rather, add new deck logic that strips sensitive content and a flag to toDeckContent() that enables it), but that still wouldn't make the output any less noisy.

Since we may ask users to enable debug logs for problems that don't require full state dumps, I think it's reasonable to add a separate flag for it.

My end goal here is to provide a simpler end-user diagnostic experience while intentionally gating it behind something that you wouldn't normally enable for other reasons. Currently, getting state info out of the controller requires either hooking up a Go debugger (to inspect in-memory state) or a tcpdump sidecar (to inspect state as the controller sends it to the admin API over the network). Those have the same sensitive data concerns alongside usability hurdles, because they're general-purpose tools and require knowledge of how our systems work to extract data from them effectively. "Enable flag, copy contents of temp directory as problem occurs, ship results to us" are hopefully simpler instructions than "eh, just tcpdump it!", and dumping state further gets us something we can easily use to reproduce a problem offline in a test environment (since it's the format that deck and DB-less use).

@hbagdi (Member) commented Dec 22, 2020

To be clear, I'm not against adding this feature itself. This is certainly useful for debugging, but I'm not convinced that it needs to be behind a new configuration flag. This should be logged at debug level. Debug-level output can contain sensitive info and, of course, one shouldn't use it in production.

@rainest (Contributor, author) commented Jan 6, 2021

In practice, we may need to use this in production sometimes. It's not always possible to replicate an unexplained issue in a test environment. It arguably should be, but the ease of doing so depends on tooling we don't control and is prone to user error. Furthermore, even when replicating production state in a test environment is possible, a perfect replica will still contain production's sensitive values.

Reworked further to make this part of debug-level logging instead, but redact certificate keys and credential secrets by default, showing them only if you also pass --log-sensitive-config. Still needs testing to validate.

The implementation overwrites the fields in question in the state in place after attempting to apply it. This should be fine, as we generate state from scratch on every update anyway, so the mangled data only goes to the diff log.
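As a rough sketch of the in-place redaction described above (with stand-in types, since the real code operates on KIC's generated Kong state), the sensitive fields are overwritten only after the apply attempt, so Kong still receives the real values and only the dump sees placeholders:

```go
package main

import "fmt"

// Stand-in types for this sketch; the actual implementation works on KIC's
// generated Kong state rather than these structs.
type credential struct {
	Username string
	Secret   string
}

type certificate struct {
	Cert string
	Key  string
}

type state struct {
	Credentials  []credential
	Certificates []certificate
}

// redactSensitive overwrites credential secrets and certificate keys in place.
// Because state is rebuilt from scratch on every update, mutating this copy
// only affects the diagnostic dump.
func redactSensitive(s *state) {
	for i := range s.Credentials {
		s.Credentials[i].Secret = "REDACTED"
	}
	for i := range s.Certificates {
		s.Certificates[i].Key = "REDACTED"
	}
}

func main() {
	s := state{
		Credentials:  []credential{{Username: "alice", Secret: "hunter2"}},
		Certificates: []certificate{{Cert: "-----BEGIN CERTIFICATE-----...", Key: "-----BEGIN PRIVATE KEY-----..."}},
	}
	// The apply attempt would happen here, before redaction, so Kong sees the
	// real values; only the post-apply dump sees the redacted copy.
	redactSensitive(&s)
	fmt.Printf("%+v\n", s)
}
```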

@rainest rainest force-pushed the feat/diff-dump-config branch 2 times, most recently from 59f728c to 81ac9dc, on January 14, 2021 17:12
@rainest rainest marked this pull request as ready for review January 14, 2021 17:17
@mflendrich (Contributor) left a comment

On a high level, I have several concerns:

  • I'm not a fan of making KIC's logic more complex by adding new execution paths and outputs, which is what this PR does.
  • As is, this PR leaks tempdirs at the rate of one per KIC process lifetime. In both non-containerized and containerized (e.g. crash-loop) scenarios, this can lead to uncontrolled storage exhaustion. We have to control the number of files that KIC has the potential to create over time.
  • The proposed sanitization mechanism is subtractive (and hence has the potential to miss sensitive entities added in the future, and puts an additional burden on developers: it requires them to remember that this function exists and to update it). Anonymization/sanitization is a thankless problem (ask anyone who's worked on GDPR compliance 😁) and it's better to avoid having it at all. From a security perspective, anyone with access to KIC sufficient to manage these settings and read the file is able to get the sensitive details, so the benefit of implementing sanitization here is unclear to me.

Therefore, the cost-benefit ratio of this change isn't on the undoubtedly positive side for me. I would like to propose a wholly different approach in line with the KISS principle. Not sure how much work it would be - just a thought experiment.

The 30k ft view of how KIC works today is roughly the following (stateless, except for state.State changes being incremental) pipeline triggered by the controller loop:
K8s etcd contents -1-> state.State -2-> KongState -3-> file.Contents -4-> Kong

The solution would be to refactor the appropriate steps (-1-> through -4->), unless already done, into reusable "filters". We could then implement a one-shot command-line tool (that tool might hypothetically be a cobra subcommand of KIC, or a separate binary) that would run a subset of the pipeline just once (as opposed to running in a controller loop). With such a tool, the solution to the subject problem would be:

oops, KIC is failing for a customer. Ask them to run the one-shot tool, as a k8s pod, executing steps 1, 2, 3 of the pipeline and printing/dumping the result

Having KIC structured like this would additionally help us test parts of KIC's functionality in better isolation.
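To make the thought experiment a bit more concrete, here is a hedged sketch of what a one-shot cobra subcommand running steps 1-3 of the pipeline once might look like. The step functions below (buildStoreState, buildKongState, toDeckContent) are placeholders for the reusable "filters" described above, with signatures invented for this sketch, not KIC's actual API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// Placeholder pipeline steps; in a real refactor these would be the reusable
// "filters" extracted from the controller loop.
func buildStoreState() (map[string]interface{}, error) { // step 1: K8s objects -> state.State
	return map[string]interface{}{}, nil
}

func buildKongState(in map[string]interface{}) (map[string]interface{}, error) { // step 2: state.State -> KongState
	return in, nil
}

func toDeckContent(in map[string]interface{}) map[string]interface{} { // step 3: KongState -> file.Contents
	return in
}

func dumpConfigCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "dump-config",
		Short: "Run one pass of the translation pipeline and print the result",
		RunE: func(cmd *cobra.Command, args []string) error {
			storeState, err := buildStoreState()
			if err != nil {
				return err
			}
			kongState, err := buildKongState(storeState)
			if err != nil {
				return err
			}
			return json.NewEncoder(os.Stdout).Encode(toDeckContent(kongState))
		},
	}
}

func main() {
	root := &cobra.Command{Use: "kic"}
	root.AddCommand(dumpConfigCmd())
	if err := root.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```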

@rainest (Contributor, author) commented Jan 19, 2021

I'm not a fan of making KIC's logic more complex by adding new execution paths and outputs, which is what this PR does.

This could be said of any new functionality. This augments OnUpdate() and adds a couple of simple helper functions to output additional diagnostic information without modifying translation or reconciliation. I think it's reasonable.

As is, this PR leaks tempdirs at the rate of one per KIC process lifetime.

There shouldn't be more than one process under normal circumstances--controller.Start() (which creates the temporary directory) is our container PID 1. If it dies, the container gets recycled.

I'm not really concerned with non-containerized instances since they should only be used for development work.

so the benefit of implementing sanitization here is unclear to me.

We may ask users to ship these dumps to us for review.

That aside, I do need to rework the redaction method to produce valid configuration: the basic REDACTED string will fail validation since it's not unique (credentials and certificates need some random component to not collide) and not a PEM (the certificate endpoint requires an actual certificate with a matching key). Generating better values for those should be pretty simple, however.

the cost-benefit ratio of this change isn't on the undoubtedly positive side for me.

We don't currently have any good means of diagnosing the cause of a conflict (the single generated Kong state prevents us from logging Kong errors on a per-resource basis) or incorrect translation (the only way to see admin API payloads is to use tcpdump).

These difficulties are a consequence of our current reconciliation loop, and I agree that we'd be best served by refactoring it. However, we don't have a set idea of what that'd look like or when we'd be able to complete it, and with that in mind, I think we need to add something to better diagnose problems in the existing loop. If you want practical examples of where/how this would be useful:

  • I was unable to determine the cause (ultimately two consumers with the same username) of a reconciliation failure in a fresh test environment because we don't receive detailed errors from deck (we only receive a generic "config rejected" error) and don't report what we're processing well (when syncing to an empty database, the controller seemingly reports that it's syncing a single, random piece of configuration only, and can even report that it's "syncing" things it will ultimately discard because of a class mismatch). Dumping the target configuration and attempting to apply it using deck indicated the issue instantly, since deck does report individual problem entities when used independently.
  • Support and I spent several days working with a user attempting to locate the issue with some configuration. Much of this time was spent working through incorrect hypotheses developed based on inaccurate user reports and piecemeal data (subsets of K8S configuration, PSQL backups, and tcpdumps) that did not allow us to replicate the issue. It'd be much easier to do so if we could say "dump the current DB contents and collect the controller dump", as they're in the same format and can be loaded directly into Kong.
  • My attempt to replicate #884 ("An unexpected error occurred when hmac-auth credential resources created") just failed; I don't get the same error and that configuration is loaded as expected. That sort of issue (where we have an unhandled Lua exception) often requires custom Lua code to debug effectively, and it's much easier to develop and apply that locally than to go back and forth with the user seeing the issue. With a configuration dump, we don't need to worry about determining why an attempt to reproduce K8S configuration isn't causing the same issue.

@rainest (Contributor, author) commented Jan 21, 2021

I do need to rework the redaction method to produce valid configuration: the basic REDACTED string will fail validation since it's not unique (credentials and certificates need some random component to not collide) and not a PEM (the certificate endpoint requires an actual certificate with a matching key)

Having played around with this some, it works, but purely random values always generate a diff for these resources. To avoid that, either:

  • Delete the sensitive items altogether. This is fine for credentials (nothing depends on them) but poses a minor challenge for certificates, which the Service client-cert annotation can reference (SNIs reference certificates also, but are children of go-kong Certificate objects).
  • Deterministically generate the redaction content. For certificates, seed key generation with something (the serial?); for credentials, hash them somehow: SHA-2 with a random salt (chosen at startup) if we don't care about comparisons across instances, or bcrypt if we do. (A minimal sketch of the salted-hash option follows this list.)
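For reference, a minimal sketch of the salted-hash option for credentials, assuming a per-process salt chosen at startup; the certificate case (which needs a valid, unique PEM with a matching key) is not shown, and the redactSecret name is an assumption for this sketch.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// salt is chosen once at startup; dumps from the same process redact the same
// secret to the same placeholder, so diffs stay quiet, but the placeholder is
// not comparable across controller instances.
var salt = make([]byte, 16)

func init() {
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}
}

// redactSecret maps a sensitive value to a stable, unique placeholder without
// exposing the original secret.
func redactSecret(secret string) string {
	sum := sha256.Sum256(append(salt, []byte(secret)...))
	return "redacted-" + hex.EncodeToString(sum[:8])
}

func main() {
	fmt.Println(redactSecret("my-api-key"))     // identical output for identical input...
	fmt.Println(redactSecret("my-api-key"))     // ...so repeated dumps don't generate diffs
	fmt.Println(redactSecret("other-password")) // distinct secrets stay distinct
}
```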

@mflendrich (Contributor) left a comment

As agreed in a video call (@rainest @mflendrich @shaneutt):

  • the long-term solution to this problem should be a standalone tool that performs one pass of KIC and produces intermediate results
  • as a stop-gap (because the long-term solution is more costly) we'll implement config dumping as a side effect of the reconciliation loop (which this PR does).

PTAL at the attached comments (written with a motivation to minimize the added tech debt associated with the stop-gap)

Review comments (now resolved) on internal/ingress/controller/kong.go and internal/ingress/controller/parser/kongstate/consumer.go.
@rainest rainest changed the base branch from main to next January 26, 2021 21:22
Travis Raines added 3 commits January 26, 2021 13:23
Add a new --log-failed-config flag. When enabled, if the reconciliation
loop encounters an error, perform a diff against the last known good
configuration and write it to a temporary file.

At log level debug, dump a copy of the target configuration and a diff
against the last known good configuration (if any) to disk. By default,
redact certificate keys and credential secrets from the dump. The
--log-sensitive-fields flag dumps these in the clear.

These dumps can be used for offline troubleshooting outside the original
Kubernetes environment. They are compatible with decK, and provide a way
to inspect sync failures outside the controller in cases where the
controller's error logging is ambiguous.
codecov bot commented Jan 26, 2021

Codecov Report

Merging #991 (2954cde) into next (90f818d) will decrease coverage by 0.07%.
The diff coverage is 28.57%.

@@            Coverage Diff             @@
##             next     #991      +/-   ##
==========================================
- Coverage   53.84%   53.77%   -0.08%     
==========================================
  Files          33       34       +1     
  Lines        3172     3193      +21     
==========================================
+ Hits         1708     1717       +9     
- Misses       1332     1345      +13     
+ Partials      132      131       -1     
Impacted Files Coverage Δ
cli/ingress-controller/main.go 0.22% <0.00%> (-0.01%) ⬇️
pkg/util/types.go 0.00% <0.00%> (ø)
cli/ingress-controller/flags.go 96.38% <100.00%> (+0.13%) ⬆️
pkg/parser/parser.go 85.26% <0.00%> (+1.33%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Travis Raines added 3 commits February 3, 2021 13:54
Remove some cruft from old iterations.

Update comments and docstrings.

Only create tmpdir if dumps are enabled.
Merge didn't conflict these since they were in different locations in
the file. Remove old versions in favor of the #1021 version.
Review comments (now resolved) on cli/ingress-controller/flags.go, internal/ingress/controller/controller.go, and internal/ingress/controller/kong.go.
Travis Raines added 2 commits February 4, 2021 18:33
Convert dump configuration to a defined type iota, rename dump directory
config, and move dump directory initialization to main/pass it to the
controller constructor.
Simply save config with descriptive names. We no longer diff so we don't
need to keep old good configs in memory.

Log a warning if we cannot save.
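For illustration, a minimal sketch of the "defined type iota" mentioned in the commit above; the type and constant names here are assumptions, not necessarily those used in the PR.

```go
package main

import "fmt"

// ConfigDumpMode is a hypothetical defined type for the dump setting.
type ConfigDumpMode int

const (
	DumpDisabled  ConfigDumpMode = iota // no dumps written
	DumpSanitized                       // dump with credentials and certificate keys redacted
	DumpSensitive                       // dump including sensitive values
)

func main() {
	mode := DumpSanitized
	fmt.Println(mode != DumpDisabled) // true: dumps are enabled
}
```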
Review comments on cli/ingress-controller/main.go, cli/ingress-controller/flags.go, and internal/ingress/controller/kong.go.
@rainest (Contributor, author) commented Feb 9, 2021

A variant that incorporates #1029 is available at https://github.com/Kong/kubernetes-ingress-controller/tree/feat/dump-config, pending that PR getting merged to next.

Review comments on cli/ingress-controller/flags.go, cli/ingress-controller/main.go, internal/ingress/controller/controller.go, and internal/ingress/controller/kong.go.
Travis Raines and others added 8 commits February 11, 2021 14:27
Co-authored-by: Michał Flendrich <michal@flendrich.pro>
Re-add config dump following merger of decouple refactor (PR #1029).

Auto-merge created a confusing conflict, so it was easier to merge it in
with strategy "theirs" and then place the dump path in OnUpdate and
helper back after the fact.