
Manual verification - Trace GCP Source -> GCP Broker -> Cloud Run Service #1147

Closed
Harwayne opened this issue May 27, 2020 · 7 comments
Labels: area/observability, kind/feature-request, priority/1, release/2, storypoint/2

Comments

@Harwayne
Contributor

Exit Criteria
Manual verification that a complete trace, GCP Source -> GCP Broker -> Cloud Run Service, is visible in Stackdriver.

@Harwayne Harwayne added the kind/feature-request label May 27, 2020
@Harwayne Harwayne changed the title Manual verification - Complete Trace GCP Source -> GCP Broker -> Cloud Run Service Manual verification - Trace GCP Source -> GCP Broker -> Cloud Run Service May 27, 2020
@Harwayne Harwayne added the priority/1 and release/1 labels May 27, 2020
@Harwayne Harwayne added this to the Backlog milestone May 27, 2020
@ian-mi ian-mi self-assigned this Jun 2, 2020
@grantr grantr modified the milestones: Backlog, v0.16.0-M2 Jun 10, 2020
@grantr grantr assigned Harwayne and unassigned ian-mi Jun 10, 2020
@Harwayne
Contributor Author

Harwayne commented Jun 18, 2020

Helper scripts to set up and tear down auth:

#!/bin/bash

set -e -u

export GSA_PROJECT=fill_me_in
export PROJECT=fill_me_in
export CLUSTER_NAME=fill_me_in

# Create a GSA, download a JSON key, and store it as a Kubernetes secret.
function secrets() {
  type=$1
  secret_name=$2
  gcloud iam service-accounts create "${CLUSTER_NAME}-${type}" --project "$GSA_PROJECT"
  gcloud iam service-accounts keys create "${type}.json" --project "$GSA_PROJECT" \
    --iam-account="${CLUSTER_NAME}-${type}@${GSA_PROJECT}.iam.gserviceaccount.com"
  kubectl --namespace cloud-run-events create secret generic "${secret_name}" --from-file=key.json="${type}.json"
  rm -f "${type}.json"
}

secrets "control" "google-cloud-key"
secrets "broker" "google-cloud-broker-key"
secrets "sources" "google-cloud-sources-key"

# Grant the given roles to the GSA on the data-plane project.
function iamPolicy() {
  type=$1
  shift 1
  roles=("$@")
  member="serviceAccount:${CLUSTER_NAME}-${type}@${GSA_PROJECT}.iam.gserviceaccount.com"
  for role in "${roles[@]}"; do
    gcloud projects add-iam-policy-binding "$PROJECT" \
      --role="roles/${role}" \
      --member="${member}"
  done
}

iamPolicy "control" \
  "logging.admin" \
  "pubsub.editor" \
  "cloudscheduler.admin" \
  "storage.admin"
iamPolicy "broker" \
  "pubsub.editor"
iamPolicy "sources" \
  "pubsub.editor" \
  "cloudtrace.agent"

Clean up

#!/bin/bash

set -u

export GSA_PROJECT=fill_me_in
export PROJECT=fill_me_in
export CLUSTER_NAME=fill_me_in

# Delete the GSA and the corresponding Kubernetes secret.
function deleteSecrets() {
  type=$1
  secret_name=$2
  gcloud iam service-accounts delete "${CLUSTER_NAME}-${type}@${GSA_PROJECT}.iam.gserviceaccount.com" --project "$GSA_PROJECT" --quiet
  kubectl --namespace cloud-run-events delete secret "${secret_name}"
}

deleteSecrets "control" "google-cloud-key"
deleteSecrets "broker" "google-cloud-broker-key"
deleteSecrets "sources" "google-cloud-sources-key"

# Remove the role bindings that were added for the GSA.
function deleteIamPolicy() {
  type=$1
  shift 1
  roles=("$@")
  member="serviceAccount:${CLUSTER_NAME}-${type}@${GSA_PROJECT}.iam.gserviceaccount.com"
  for role in "${roles[@]}"; do
    gcloud projects remove-iam-policy-binding "$PROJECT" \
      --role="roles/${role}" \
      --member="${member}" \
      --quiet
  done
}

deleteIamPolicy "control" \
  "logging.admin" \
  "pubsub.editor" \
  "cloudscheduler.admin" \
  "storage.admin"
deleteIamPolicy "broker" \
  "pubsub.editor"
deleteIamPolicy "sources" \
  "pubsub.editor" \
  "cloudtrace.agent"

@Harwayne
Contributor Author

Moving secrets:

kubectl --namespace cloud-run-events get secret google-cloud-sources-key -o yaml | \
sed -e 's/^.*namespace: cloud-run-events.*$//g' \
    -e 's/name: google-cloud-sources-key/name: google-cloud-key/g' | \
kubectl --namespace default apply -f -
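
To confirm the copy worked, the secret should now exist in the default namespace with its key.json entry intact (just a verification sketch):

kubectl --namespace default get secret google-cloud-key -o yaml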

@Harwayne
Contributor Author

Harwayne commented Jun 18, 2020

Updating the tracing configuration:

kubectl --namespace cloud-run-events get cm config-tracing -o yaml | \
  sed 's/^  sample-rate: "0.1"/  sample-rate: "1.0"/g' | \
  kubectl apply -f -
kubectl --namespace knative-serving get cm config-tracing -o yaml | \
  sed -e 's/addonmanager.kubernetes.io\/mode: Reconcile/addonmanager.kubernetes.io\/mode: EnsureExists/g' \
      -e 's/^data:/data:\n  backend: "stackdriver"\n  sample-rate: "0.0"/g' | \
  kubectl apply -f -
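
To double-check that both edits took effect (a verification sketch):

# cloud-run-events should now sample at 1.0; knative-serving should have backend "stackdriver".
kubectl --namespace cloud-run-events get cm config-tracing -o yaml
kubectl --namespace knative-serving get cm config-tracing -o yaml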

Creating the Topic:

gcloud pubsub topics create testing --project $PROJECT
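
The topic can be confirmed before wiring anything to it (sketch):

gcloud pubsub topics describe testing --project $PROJECT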

Creating all the resources:

cat << EOF | kubectl --namespace default apply -f -
apiVersion: eventing.knative.dev/v1beta1
kind: Broker
metadata:
  name: default
  namespace: default
  annotations:
    "eventing.knative.dev/broker.class": "googlecloud"
---
apiVersion: eventing.knative.dev/v1beta1
kind: Trigger
metadata:
  name: t-display
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
  namespace: default
spec:
  template:
    spec:
      containers:
        - # This corresponds to
          # https://github.com/knative/eventing-contrib/tree/master/cmd/event_display/main.go
          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
apiVersion: events.cloud.google.com/v1alpha1
kind: CloudPubSubSource
metadata:
  name: ps-source
spec:
  topic: testing
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1beta1
      kind: Broker
      name: default
EOF
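
Before publishing, it helps to wait for everything to become ready (a hedged check; the plural resource names assume the relevant CRDs are installed):

# All of these should eventually report READY=True.
kubectl --namespace default get brokers,triggers,cloudpubsubsources
kubectl --namespace default get ksvc event-display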

Publish message:

gcloud pubsub topics publish testing --message "Trace me" --project $PROJECT
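
The delivered event can be checked in the event-display logs (a sketch; the label selector and container name are the Knative Serving defaults):

kubectl --namespace default logs -l serving.knative.dev/service=event-display -c user-container --tail=50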

I can see two distinct traces:

Eventing trace in Stackdriver:
(screenshot: trace)

Serving trace in Stackdriver:
(screenshot: servingtrace)

Using Wireshark, I can see that the request from the Broker fanout to event-display carries the Traceparent: 00-016b4e0ef1df7e3b046a9aa892986fb0-f0096d65ef62e37b-01 header, which is how the trace context is passed around inside eventing (there is also a Ce-Traceparent header with the same value). My guess is that serving does not pick up that trace ID and instead creates a new one.
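
For reference, that Traceparent value follows the W3C Trace Context format, so it can be split into its fields like this (illustration only):

# traceparent format: <version>-<trace-id>-<parent-span-id>-<trace-flags>
header="00-016b4e0ef1df7e3b046a9aa892986fb0-f0096d65ef62e37b-01"
IFS=- read -r version trace_id span_id flags <<< "$header"
echo "trace-id=$trace_id span-id=$span_id sampled=$flags"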

@Harwayne
Copy link
Contributor Author

Harwayne commented Jun 19, 2020

From what I can see in the serving code, serving uses B3 headers.

https://github.com/knative/serving/blob/a77fe5e1b0850202f8f28a82c862a252e2999659/pkg/activator/handler/handler.go#L58 creates an ochttp.Transport without providing a Propagation, so it defaults to b3.HTTPFormat, which uses the following headers:

  • "X-B3-TraceId"
  • "X-B3-SpanId"
  • "X-B3-Sampled"

Whereas the trigger is sending Traceparent.
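
For comparison, this is roughly what the two propagation styles look like on the wire (an illustrative sketch reusing the trace ID from the capture above; the in-cluster URL is a placeholder):

# W3C Trace Context, as sent by the Broker/Trigger data plane:
curl -H 'Traceparent: 00-016b4e0ef1df7e3b046a9aa892986fb0-f0096d65ef62e37b-01' http://event-display.default.svc.cluster.local
# B3 multi-header format, which serving's default ochttp propagation uses:
curl -H 'X-B3-TraceId: 016b4e0ef1df7e3b046a9aa892986fb0' \
     -H 'X-B3-SpanId: f0096d65ef62e37b' \
     -H 'X-B3-Sampled: 1' \
     http://event-display.default.svc.cluster.local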

@ian-mi
Contributor

ian-mi commented Jun 22, 2020

I've verified that serving is still propagating the tracecontext headers even if it is not using them for its own spans. I tested this by setting up a ksvc with a tracestate ochttp handler and verifying that the receiver span was not disconnected from the broker span despite the presence of disconnected activator and proxy spans.
(screenshot: event-trace)
(screenshot: serving-trace)

@Harwayne
Contributor Author

Harwayne commented Aug 3, 2020

/close

The trace now runs all the way from the CloudPubSubSource through the googlecloud Broker to the Knative Service, even without anything being done by the user container inside the Knative Service.

(screenshot: 2020-08-03-Trace)

@knative-prow-robot
Contributor

@Harwayne: Closing this issue.

