This repository has been archived by the owner on Jun 13, 2023. It is now read-only.

Config for spans and opentracing_propagate_context #3

Open
s1rc opened this issue Mar 17, 2020 · 22 comments

Comments

@s1rc

s1rc commented Mar 17, 2020

Does this package support spans of traces as they pass through the services?

i.e.

  1. nginx-ingress
  2. nginx
  3. PHP app

Currently I'm seeing 1 & 2 as spans of the same trace, but all subsequent requests in the actual app are showing up as individual traces.

@svensp
Collaborator

svensp commented Mar 17, 2020

Let's say it is supposed to do that.

I could not find a standard for HTTP headers at the time of writing the package, so it propagates (and reads) context through the X-TRACE header.

There seems to be some movement towards a standard header here: https://github.com/opentracing/specification/blob/master/rfc/trace_identifiers.md, but I would appreciate an example trace header from your setup to see where nginx places the trace context.

@s1rc
Author

s1rc commented Mar 17, 2020

I'm still relatively new to tracing and Jaeger; is there a way to find this out somewhere in the Jaeger UI?

I had followed this doc by NGINX: https://www.nginx.com/blog/opentracing-with-nginx-ingress-controller-for-kubernetes/ (the only difference is that my nginx-ingress controller wasn't self-compiled, it already supports OpenTracing/Jaeger).

Edit: I should note that I did do a curl -I of the URL and there was no X-TRACE header present.

@svensp
Collaborator

svensp commented Mar 17, 2020

No, you would check this in your PHP application.
Simply dump the information from \Request::header() and post the result here; remove anything that might be confidential, of course. The tracing context should be somewhere in there.
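A quick throwaway route is enough for that (a minimal sketch assuming a standard Laravel setup; the /debug-headers path is only illustrative and should be removed again afterwards):

// routes/web.php - temporary route that dumps every incoming request header
Route::get('/debug-headers', function (\Illuminate\Http\Request $request) {
    // header() without arguments returns all request headers, so any tracing
    // context forwarded by the proxy (X-TRACE, uber-trace-id, ...) shows up here
    return response()->json($request->header());
});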

As for no X-TRACE header being present:

  • You're looking at the response to the browser. There's no reason to include the trace there as far as I know; tracing is done in server-to-server requests.
  • This package only reads X-TRACE. ipunkt/laravel-jaeger-guzzle adds a middleware to guzzle to read the current span and send the request with a matching X-TRACE header. It is not widely used though, as most of our traced server-to-server traffic currently goes through RabbitMQ. A rough sketch of that idea follows below.
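For reference, the guzzle side of that idea looks roughly like the following (a sketch only, not the actual ipunkt/laravel-jaeger-guzzle code; $traceContext is a placeholder for however the current span context gets serialized):

use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Middleware;
use Psr\Http\Message\RequestInterface;

// placeholder for the serialized current span context
$traceContext = 'trace-context-goes-here';

// push a request middleware that copies the current trace context
// into the outgoing X-TRACE header of every guzzle request
$stack = HandlerStack::create();
$stack->push(Middleware::mapRequest(function (RequestInterface $request) use ($traceContext) {
    return $request->withHeader('X-TRACE', $traceContext);
}));

$client = new Client(['handler' => $stack]);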

@s1rc
Author

s1rc commented Mar 17, 2020

I've attached a sanitized set of headers as JSON. There doesn't look to be any X-TRACE header present. Perhaps my nginx.conf for Laravel doesn't forward that on to php-fpm/the app.

Headers.txt

@svensp
Collaborator

svensp commented Mar 17, 2020

Indeed there is nothing there. Did you add the location-snippet as mentioned in the tutorial?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: opentracing-ingress
  annotations:
    nginx.org/location-snippets: |
      opentracing_propagate_context;
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1-svc
          servicePort: 80

According to the tutorial that line should pass the tracing context to the application (well, it's nginx, but it should just pass it through).

@s1rc
Author

s1rc commented Mar 17, 2020

Yeah that is there in my ingress yml.

I found this reference to OpenTracing directives: https://github.com/opentracing-contrib/nginx-opentracing/blob/master/doc/Reference.md

I see that there is an opentracing_fastcgi_propagate_context directive and tried replacing opentracing_propagate_context with it in my location ~ \.php$ {} nginx.conf block. Now there is a request header with the key uber-trace-id; I've attached another JSON txt of the sanitized output.

Headers-with-opentracing_fastcgi_propagate_context.txt

location ~ \.php$ {
      # Opentracing
      opentracing_operation_name $uri;
      opentracing_tag app app-nginx-php-fpm;
      opentracing_fastcgi_propagate_context;
      try_files $uri /index.php =404;
      fastcgi_split_path_info ^(.+\.php)(/.+)$;
      #fastcgi_pass unix:/run/php/php7.2-fpm.sock;
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_buffers 16 16k;
      fastcgi_index index.php;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include fastcgi_params;
}

@svensp
Collaborator

svensp commented Mar 17, 2020

That should be what we're looking for.

I'll have to do some digging to find out how to pass only that ID to my jaeger lib; the context there requires some other parameters to be set.

@s1rc
Author

s1rc commented Mar 18, 2020

If you need any more info or to test I'm happy to help where I can.

@svensp
Collaborator

svensp commented Mar 18, 2020

I have created a branch uber_trace_id where I'm trying to get this to work, but as of right now it is completely untested, pretty much just 'this might work that way'.

I'll try to set up a minikube test env sometime tomorrow.

@s1rc
Author

s1rc commented Mar 19, 2020

Testing with that branch I'm getting this exception:
7#7: *24 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Call to a member function addLog() on null in /var/www/vendor/ipunkt/laravel-jaeger/src/Context/SpanContext.php:215

@svensp
Collaborator

svensp commented Mar 20, 2020

I'm trying to build a test setup in minikube that works similarly to yours. So far I have the standalone setup working and sending traces from Laravel, but not the part about sending traces from the nginx ingress.
Still, the error about addLog on null should be fixed now.

@s1rc
Author

s1rc commented Mar 20, 2020

Thanks, the addLog error is fixed.

Jaeger is still showing the nginx/ingress and Laravel spans as separate traces.

@svensp
Collaborator

svensp commented Mar 20, 2020

I finally got it working on my end.

It looks to me like the tutorial you posted above is using a somewhat out-of-date version of the nginx-ingress.

The current config file directives to start opentracing with jaeger are here:

The current annotation to add a snippet to the location is here:

For reference here is the ConfigMap which enabled jaeger traces from the nginx-ingress for me:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-load-balancer-conf
  namespace: kube-system
data:
  hsts: "false"
  map-hash-bucket-size: "128"
  enable-opentracing: "True"
  jaeger-collector-host: "agent.spacebattle.svc.cluster.local"
  jaeger-service-name: "nginx-ingress"
  opentracing: "True"

And the ingress:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spacebattle
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      opentracing_propagate_context;
spec:
  rules:
  - host: spacebattle.cluster.test
    http:
      paths:
      - path: /
        backend:
          serviceName: spacebattle-asterkeks-de-http
          servicePort: web

The UBER-TRACE-ID sent by the ingress looks like this: 950798db9aa1e246:20ea53ef7306ae4a:950798db9aa1e246:1 - that is the full information required by the jaeger lib I use in the background.
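For reference, that header follows Jaeger's uber-trace-id propagation format trace-id:span-id:parent-span-id:flags, so splitting it up looks roughly like this (a sketch only, not the package's actual parsing code):

// the value the ingress sent, as quoted above
$header = '950798db9aa1e246:20ea53ef7306ae4a:950798db9aa1e246:1';

// trace id, span id created by the ingress, parent span id ("0" for a root span),
// and the sampling flags (1 = sampled)
[$traceId, $spanId, $parentSpanId, $flags] = explode(':', $header);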

I see the nginx-ingress and laravel spans as part of a single trace now.

@svensp
Collaborator

svensp commented Mar 20, 2020

Looking at the location entry you posted earlier, I think you configured your Laravel application's nginx to post the trace instead of the ingress controller (I'm seeing 2 spans from the ingress controller alone as well), and for some reason, probably a version difference, your application nginx sends a shortened version of the uber-trace-id which my backend lib can't deal with.

@s1rc
Author

s1rc commented Mar 20, 2020

I've used your annotations for the actual ingress and now it seems to work. Before, I was enabling opentracing on just the ingress instead of globally, so now with it globally enabled I need to figure out how to disable tracing on certain ingresses (e.g. I don't need to trace Prometheus).

Now I'm able to trace through the call; however, when spanning the trace from nginx-ingress > nginx > Laravel, it lists only nginx-ingress: / <Initial Span ID>, so it's really hard to find which trace to diagnose (they all look the same at the root). Before opentracing_propagate_context, the Laravel-only trace would show the app name and full URL. I'm not sure if that's an nginx-ingress/nginx config issue or not.

Also, as a suggestion, would it be possible to override the service name used? It looks like it uses the app name, but when we have multiple apps (dev/staging/prod) with the same app name in their env, it would be good to be able to differentiate them in the Jaeger UI.

[screenshot attached]

@svensp
Collaborator

svensp commented Mar 20, 2020

If you've been able to enable tracing on certain ingresses only, then it should also be possible to set it up so it sends the full uber-trace-id from there. I'll try to find out some more, but it might be faster to just dig around the nginx-ingress and see what it does with the values given in the ConfigMap.

I don't have an idea right now how to make the ingress span more informative, but I have some advice that might make it unnecessary:

  • The called URL in Laravel is available as Operation in the search mask. If you select the URL you want there, then you'll only get relevant traces.
  • Also in the search mask is Tags, which lets you filter for traces with specific tags only. One notable use case for this is filtering for environment=production, environment=staging, etc.
    The environment tag got lost somewhere in the uber-trace-id branch changes, but it should be back now.

That said, I don't think different service names are useful, but they are certainly easy to implement.

@svensp
Collaborator

svensp commented Mar 21, 2020

Disabling Tracing

I did some digging and the opentracing directive is allowed all the way down to the location level, so disabling tracing on certain ingresses should be possible with the following snippet. However, I agree that turning it off for some ingresses feels wrong and it should instead be explicitly turned on for those that should be traced.

annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    opentracing off;

Since it can be turned off here, it can of course also be turned on. I have not been able to do this properly though, because the opentracing_load_tracer directive wants a path to a config JSON inside the nginx pod, and I don't see a way to create that file from the ingress yet.
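For reference, that directive takes the tracer plugin and the config file, roughly like this (a sketch only; the plugin path and config file location are assumptions for illustration, not taken from the nginx-ingress image):

# nginx.conf (http block)
opentracing_load_tracer /usr/local/lib/libjaegertracing_plugin.so /etc/nginx/jaeger-config.json;

# /etc/nginx/jaeger-config.json
{
  "service_name": "nginx-ingress",
  "sampler": { "type": "const", "param": 1 },
  "reporter": { "localAgentHostPort": "agent.jaeger.svc.cluster.local:6831" }
}

6831 is the standard jaeger agent UDP port for the compact thrift protocol.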

More information in the nginx span

https://github.com/opentracing-contrib/nginx-opentracing/blob/master/doc/Reference.md lists some more directives which can be used in the ingress to customize the span created by nginx. Notably opentracing_operation_name to customize the operation displayed after nginx-ingress:

This example will show the location and query parameters but not the hostname:

  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      opentracing_operation_name $request_uri;

@svensp
Collaborator

svensp commented Mar 21, 2020

Update: you can enable tracing on a per-ingress basis if your nginx-ingress version is v0.27.0+.
The version is notable because minikube 1.8.2 uses v0.26.2 with minikube addons enable ingress.

nginx-ingress config:

config:
  enable-opentracing: "False"
  jaeger-collector-host: "agent.jaeger.svc.cluster.local"
  jaeger-service-name: "nginx-ingress"

ingress annotations:

annotations:
  nginx.ingress.kubernetes.io/enable-opentracing: "true"
  nginx.ingress.kubernetes.io/configuration-snippet: |
    opentracing_propagate_context;
    # optional:
    opentracing_operation_name $request_uri;

@s1rc
Author

s1rc commented Mar 23, 2020

Thank you, I was able to set up the tracing on only one ingress.

I had found the opentracing_operation_name annotation, which is definitely helpful.

The tagging of environment will be helpful for sure; still, it would be awesome to be able to configure the service name manually so we can add tracing data to other services by service name. I have yet to figure out how to specify tag filtering there.

@s1rc
Author

s1rc commented Nov 2, 2020

@svensp any chance this can be merged into master/tagged version?

@svensp
Collaborator

svensp commented Nov 3, 2020

I've been meaning to for a while, but the branch currently only reads the standard header, no longer the custom header my packages used until then, thus making it a breaking change.
I've tried making the code compatible with both versions, but I no longer have as much time to work on it.

I could slap a 2.0.0 tag on it though, making it a harder upgrade path, but you'll have a tagged version to work with.

@s1rc
Author

s1rc commented Nov 18, 2020

@svensp thanks, no need to worry for now. We've managed to set something else up. Thank you for your work!
