
Example on Envoy-xray integration #10442

Closed
BhagathSeelam opened this issue Mar 18, 2020 · 23 comments
Labels
area/docs question Questions that are neither investigations, bugs, nor enhancements stale stalebot believes this issue/PR has not been touched recently

Comments

@BhagathSeelam

BhagathSeelam commented Mar 18, 2020

Can someone provide a configuration example of Envoy with AWS X-Ray?

Thanks

@mattklein123 mattklein123 added question Questions that are neither investigations, bugs, nor enhancements area/docs labels Mar 18, 2020
@mattklein123
Member

cc @marcomagdy would be nice to get a doc update on this if needed.

@marcomagdy
Contributor

tracing:
  http:
    name: envoy.tracers.xray
    config:
      sampling_rule_manifest:
        filename: /tmp/rules.json
      daemon_endpoint:
        protocol: UDP
        address: 127.0.0.1
        port_value: 2000

For more information on the contents of the rules.json file, see the X-Ray documentation.
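
For reference, a minimal local sampling manifest might look like this. This is a sketch based on the version-2 local sampling format as I recall it (field names such as `host` and the illustrative `/api/*` path should be verified against the X-Ray docs):

```json
{
  "version": 2,
  "rules": [
    {
      "description": "Trace all requests under /api (illustrative)",
      "host": "*",
      "http_method": "*",
      "url_path": "/api/*",
      "fixed_target": 10,
      "rate": 1.0
    }
  ],
  "default": {
    "fixed_target": 1,
    "rate": 0.05
  }
}
```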

I'll update the documentation in Envoy with a complete example as soon as I can.
Let me know if you still have questions.

@BhagathSeelam
Author

Hi @marcomagdy

Thanks for the quick response. I added the following configuration in envoy.yaml, but I still don't see segments being pushed to the xray_daemon.
Please let me know if any more configuration needs to be added.

tracing:
  http:
    name: envoy.tracers.xray
    config:
      segment_name: envoy-xray
      sampling_rule_manifest:
        filename: /xray_daemon/rules.json
      daemon_endpoint:
        protocol: UDP
        address: 127.0.0.1
        port_value: 2000


debug logs from xray_daemon
2020-03-24T15:43:50Z [Info] Initializing AWS X-Ray daemon 3.2.0
2020-03-24T15:43:50Z [Debug] Listening on UDP 127.0.0.1:2000
2020-03-24T15:43:50Z [Info] Using buffer memory limit of 9 MB
2020-03-24T15:43:50Z [Info] 144 segment buffers allocated
2020-03-24T15:43:50Z [Debug] Using proxy address:
2020-03-24T15:43:50Z [Debug] Fetch region us-east-1 from commandline/config file
2020-03-24T15:43:50Z [Info] Using region: us-east-1
2020-03-24T15:43:50Z [Debug] ARN of the AWS resource running the daemon:
2020-03-24T15:43:50Z [Debug] No Metadata set for telemetry records
2020-03-24T15:43:50Z [Debug] Using Endpoint: https://xray.us-east-1.amazonaws.com
2020-03-24T15:43:50Z [Debug] Telemetry initiated
2020-03-24T15:43:50Z [Info] HTTP Proxy server using X-Ray Endpoint : https://xray.us-east-1.amazonaws.com
2020-03-24T15:43:50Z [Debug] Using Endpoint: https://xray.us-east-1.amazonaws.com
2020-03-24T15:43:50Z [Debug] Batch size: 50
2020-03-24T15:43:50Z [Info] Starting proxy http server on 127.0.0.1:2000
2020-03-24T15:44:50Z [Debug] Skipped telemetry data as no segments found
2020-03-24T15:45:50Z [Debug] Skipped telemetry data as no segments found
2020-03-24T15:46:50Z [Debug] Skipped telemetry data as no segments found
2020-03-24T15:47:50Z [Debug] Skipped telemetry data as no segments found
2020-03-24T15:48:50Z [Debug] Skipped telemetry data as no segments found

@marcomagdy
Contributor

There is no more configuration needed.
I don't have visibility into the rules you are supplying to the tracer (/xray_daemon/rules.json).

Either you have a low sampling rate, which is why you are not seeing anything sent to the daemon (in that case, send more requests), or the rules you set do not match any of the requests you are making.
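
To illustrate why a low rate can hide light traffic: X-Ray-style local sampling admits up to `fixed_target` requests per second via a reservoir and then samples the overflow at `rate`. A rough sketch of that decision logic (not Envoy's actual implementation):

```python
import random
import time

class LocalSamplingRule:
    """Sketch of X-Ray-style local sampling (illustrative, not Envoy's code):
    admit up to `fixed_target` requests per one-second window, then sample
    the overflow probabilistically at `rate`."""

    def __init__(self, fixed_target, rate):
        self.fixed_target = fixed_target
        self.rate = rate
        self._window = -1   # current one-second window
        self._taken = 0     # requests admitted via the reservoir this window

    def should_sample(self, now=None):
        second = int(now if now is not None else time.time())
        if second != self._window:       # new window: reset the reservoir
            self._window = second
            self._taken = 0
        if self._taken < self.fixed_target:
            self._taken += 1             # reservoir slot still available
            return True
        return random.random() < self.rate  # overflow: probabilistic choice
```

With a default of `fixed_target: 1` and `rate: 0.1`, only about one request per second plus 10% of the overflow gets traced, which under light load can look like nothing is being sent to the daemon at all.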

@BhagathSeelam
Author

@marcomagdy

I am allowing all paths with *

Here is my rules.json file:

{
  "version": 2,
  "rules": [
    {
      "priority": 1,
      "description": "Envoy XRay Sampling rules",
      "service_name": "",
      "http_method": "*",
      "url_path": "*",
      "fixed_target": 100,
      "rate": 1
    }
  ],
  "default": {
    "fixed_target": 1,
    "rate": 0.1
  }
}

@marcomagdy
Contributor

marcomagdy commented Mar 24, 2020

Verify two things:

  1. You have tracing: {} in the connection manager settings (see below)
  2. Use the (non-deprecated) typed_config in the tracer (my bad for suggesting those earlier) and set the segment_name (see XRay Tracer: Segfault in finishSpan #10142)
static_resources:
   listeners:
   - name: listener_0
     address:
       socket_address: { address: 0.0.0.0, port_value: 10000 }
     filter_chains:
     - filters:
       - name: envoy.http_connection_manager
         typed_config:
           "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
           stat_prefix: ingress_http
           tracing: {}  # <---- this MUST exist for the tracer to work
           codec_type: AUTO
           route_config:

Use the non-deprecated configuration:

tracing:
  http:
    name: envoy.tracers.xray
    typed_config:
      "@type": type.googleapis.com/envoy.config.trace.v3.XRayConfig
      segment_name: "Envoy" # You must set this until v1.14 
      daemon_endpoint:
        protocol: UDP
        address: 127.0.0.1
        port_value: 2000
      sampling_rule_manifest:
        filename: /xray_daemon/rules.json

@BhagathSeelam
Author

BhagathSeelam commented Mar 25, 2020

Hi @marcomagdy,

Thanks for the quick help. After adding tracing: {}, I can now see segment buffers being pushed by the xray_daemon to the X-Ray API, and I am able to view the trace in the console.

However, I can only see the client -> envoy trace in the console. What configuration is needed to trace http filters like envoy.filters.http.jwt_authn and envoy.config.filter.http.ext_authz.v2.ExtAuthz, and downstream routes?

@marcomagdy
Contributor

None of the tracers collect traces inside of Envoy AFAIK (definitely not X-Ray).
Ideally, Envoy should be transparent to your upstream/downstream.

@BhagathSeelam
Author

@marcomagdy If I am understanding correctly, the xray trace id cannot be passed to upstream clusters to see an end-to-end distributed trace view map in the AWS console?

@marcomagdy
Contributor

xray-trace-id does get passed to upstream clusters.
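
For reference, propagation happens via the X-Amzn-Trace-Id header. On a request reaching the upstream it looks roughly like this (Root and Parent ids here are illustrative, taken from the trace data in this thread):

```
X-Amzn-Trace-Id: Root=1-5e7b6991-33813f25583be063089c35dd;Parent=00000156c7d1af31;Sampled=1
```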

@BhagathSeelam
Author

@marcomagdy, without xray-trace-id being passed to upstream, how can we implement distributed tracing using Envoy?

Can we expect this feature anytime soon?

@marcomagdy
Contributor

it DOES get passed to upstream

@BhagathSeelam
Author

@marcomagdy Thank you, sorry, I overlooked your previous comment.

@BhagathSeelam
Author

BhagathSeelam commented Mar 25, 2020

@marcomagdy

One last question: xray-trace-id is being passed to the http_filters and the upstream cluster.

However, parent_id is being passed to the http_filters but not to the upstream cluster, which results in two separate map views in the AWS X-Ray console.


XRAY RAW DATA

{
  "Duration": 0.331,
  "Id": "1-5e7b6991-33813f25583be063089c35dd",
  "Segments": [
    {
      "Document": {
        "id": "00000156c7d1af31", <-- parent_id
        "name": "Envoy",
        "start_time": 1585146257.647294,
        "trace_id": "1-5e7b6991-33813f25583be063089c35dd",
        "end_time": 1585146257.97831,
        "http": {
          "request": {
            "url": "http://example.com/api/v1/test",
            "method": "GET",
            "user_agent": "PostmanRuntime/7.17.1"
          },
          "response": {}
        },
        "annotations": {
          "response_flags": "-",
          "component": "proxy",
          "upstream_cluster": "app",
          "request_size": "0",
          "downstream_cluster": "-"
        }
      },
      "Id": "00000156c7d1af31"
    },
    {
      "Document": {
        "id": "00000156db369a50",
        "name": "async ext_authz egress",
        "start_time": 1585146257.972886,
        "trace_id": "1-5e7b6991-33813f25583be063089c35dd",
        "end_time": 1585146257.975567,
        "parent_id": "00000156c7d1af31", <-- parent_id is passed to 1st http_filter
        "annotations": {
          "ext_authz_status": "ext_authz_ok",
          "component": "proxy",
          "upstream_cluster": "ext_authz"
        }
      },
      "Id": "00000156db369a50"
    },
    {
      "Document": {
        "id": "00000156c7d3a2d4",
        "name": "JWT Remote PubKey Fetch",
        "start_time": 1585146257.647633,
        "trace_id": "1-5e7b6991-33813f25583be063089c35dd",
        "end_time": 1585146257.972484,
        "parent_id": "00000156c7d1af31", <-- parent_id is passed to 2nd http_filter
        "http": {
          "response": {}
        },
        "annotations": {
          "response_flags": "-",
          "component": "proxy",
          "upstream_cluster": "jkws_cluster",
          "upstream_address": "xxxxxxxx"
        }
      },
      "Id": "00000156c7d3a2d4"
    },
    {
      "Document": {
        "id": "0656e874fa33f59d",
        "name": "webservice",
        "start_time": 1585146257.976,
        "trace_id": "1-5e7b6991-33813f25583be063089c35dd",
        "end_time": 1585146257.978, <-- parent_id is missing for the upstream_cluster segment
        "http": {
          "request": {
            "url": "http://example.com/api/v1/test",
            "method": "GET",
            "user_agent": "PostmanRuntime/7.17.1",
            "client_ip": "xxxxxxx",
            "x_forwarded_for": true
          },
          "response": {
            "status": 200,
            "content_length": 50
          }
        },
        "aws": {
          "xray": {
            "sdk_version": "2.3.0",
            "sdk": "X-Ray for Java"
          }
        },
        "service": {
          "runtime": "OpenJDK 64-Bit Server VM",
          "runtime_version": "1.8.0_242"
        }
      },
      "Id": "0656e874fa33f59d"
    }
  ]
}

@marcomagdy
Contributor

Maybe the filter is stripping out the trace header. That's not something the tracer is doing.

@BhagathSeelam
Author

BhagathSeelam commented Apr 14, 2020

@marcomagdy after upgrading Envoy to the latest 1.14.1, I can now see parent_id being passed to upstream clusters. However, I am seeing the same document name (defaulted to the route cluster name) for all http_filters (envoy.filters.http.jwt_authn and envoy.ext_authz).

Is this expected behaviour?
Below is the raw data from the AWS X-Ray console:

{
    "Duration": 0.254,
    "Id": "1-5e959e6f-b7b2649af04414a63b1c9fcc",
    "Segments": [
        {
            "Document": {
                "id": "00001e8e513c491b",
                "name": "app",
                "start_time": 1586863728.045335,
                "trace_id": "1-5e959e6f-b7b2649af04414a63b1c9fcc",
                "end_time": 1586863728.048488,
                "parent_id": "00001e8e4272f98b",
                "annotations": {
                    "ext_authz_status": "ext_authz_ok",
                    "component": "proxy",
                    "upstream_cluster": "ext_authz"
                }
            },
            "Id": "00001e8e513c491b"
        },
        {
            "Document": {
                "id": "2b871b497b07c104",
                "name": "webservice",
                "start_time": 1586863728.049,
                "trace_id": "1-5e959e6f-b7b2649af04414a63b1c9fcc",
                "end_time": 1586863728.05,
                "parent_id": "00001e8e4272f98b",
                "http": {
                    "request": {
                        "url": "http://example.com/api/v1/test",
                        "method": "GET",
                        "user_agent": "PostmanRuntime/7.17.1",
                        "client_ip": "xxxxxx",
                        "x_forwarded_for": true
                    },
                    "response": {
                        "status": 200,
                        "content_length": 50
                    }
                },
                "aws": {
                    "xray": {
                        "sdk_version": "2.3.0",
                        "sdk": "X-Ray for Java"
                    }
                },
                "service": {
                    "runtime": "OpenJDK 64-Bit Server VM",
                    "runtime_version": "1.8.0_242"
                }
            },
            "Id": "2b871b497b07c104"
        },
        {
            "Document": {
                "id": "00001e8e4272f98b",
                "name": "app",
                "start_time": 1586863727.797046,
                "trace_id": "1-5e959e6f-b7b2649af04414a63b1c9fcc",
                "end_time": 1586863728.050865,
                "http": {
                    "request": {
                        "url": "http://example.com/api/v1/test",
                        "method": "GET",
                        "user_agent": "PostmanRuntime/7.17.1",
                        "client_ip": "xxxxxxx"
                    },
                    "response": {}
                },
                "annotations": {
                    "response_flags": "-",
                    "component": "proxy",
                    "upstream_cluster": "app",
                    "request_size": "0",
                    "downstream_cluster": "-"
                }
            },
            "Id": "00001e8e4272f98b"
        },
        {
            "Document": {
                "id": "00001e8e427512a1",
                "name": "app",
                "start_time": 1586863727.797399,
                "trace_id": "1-5e959e6f-b7b2649af04414a63b1c9fcc",
                "end_time": 1586863728.044906,
                "parent_id": "00001e8e4272f98b",
                "http": {
                    "response": {}
                },
                "annotations": {
                    "response_flags": "-",
                    "component": "proxy",
                    "upstream_cluster": "jkws_cluster",
                    "upstream_address": "xxxxxxxxxxxx"
                }
            },
            "Id": "00001e8e427512a1"
        }
    ]
}

@marcomagdy
Contributor

The segment name is the same for all traces of a given Envoy.

@BhagathSeelam
Author

@marcomagdy In my current configuration segment_name is the same for all traces, which is "app". Is there a way to set a unique segment_name for each upstream_cluster?

@marcomagdy
Contributor

Not that I know of. Envoy uses a single tracer for all clusters.

@stale

stale bot commented May 16, 2020

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

@stale stale bot added the stale stalebot believes this issue/PR has not been touched recently label May 16, 2020
@stale

stale bot commented May 24, 2020

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted". Thank you for your contributions.

@stale stale bot closed this as completed May 24, 2020
@nageshmusini

I am trying the config below in Envoy, but it is not working. Any suggestions?

[screenshot of the Envoy configuration, not transcribed]

@SPopenko

Hello @marcomagdy! Could you please suggest how one should modify the config to use a DNS name (resolvable inside a k8s cluster with Istio) for the address instead of an IP?
