Tracing Tutorial with Jaeger
This walkthrough shows how to create a local cluster and deploy a number of supporting components, including an ingress-nginx ingress controller and Jaeger to store, query and visualise traces.
On the prepared cluster we will deploy Kyverno with tracing enabled and a couple of policies.
Finally, we will exercise the Kyverno webhooks by creating a Pod, then use Jaeger to find and examine the corresponding trace.
Please note that this walkthrough uses kind to create a local cluster with a specific label on the control-plane node. This is necessary because we are using an ingress-nginx deployment specifically crafted to work with kind. The setup of the other components is not kind specific, but it may require different configuration depending on the target cluster.
Cluster Setup
In this first step we are going to create a local cluster using kind.
The created cluster will have two nodes: one control-plane node and one worker node.
Note that the control-plane node maps host ports 80 and 443 to the container node. If those ports are already in use, they can be changed by editing the `hostPort` stanza in the config manifest below.
To create the local cluster, run the following command:
```sh
kind create cluster --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |-
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
EOF
```
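To confirm the cluster came up as expected, you can check that both nodes are Ready and that the control-plane node carries the label expected by the ingress-nginx deployment:

```sh
# both nodes should eventually report a Ready status
kubectl get nodes

# the control-plane node should be listed here thanks to the ingress-ready=true label
kubectl get nodes -l ingress-ready=true
```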
Ingress NGINX Setup
In order to access the Jaeger UI from our browser, we need to deploy an ingress controller.
We are going to install ingress-nginx with the following command:
```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# give the controller pods time to be created before waiting on their readiness
sleep 15
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
```
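As a quick sanity check that the controller is reachable through the mapped host ports, you can send a request to localhost; with no Ingress resources defined yet, ingress-nginx should answer with a 404 from its default backend:

```sh
# expect an HTTP 404 status line while no Ingress rules exist
curl -si http://localhost | head -n 1
```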
Jaeger Setup
Jaeger will allow us to store, search and visualise traces.
Jaeger is made of multiple components and can use multiple storage backends, such as Elasticsearch or Cassandra. In this tutorial, we will deploy the all-in-one version of Jaeger with in-memory storage.
We can deploy Jaeger using Helm with the following command:
```sh
helm install jaeger --namespace monitoring --create-namespace --wait \
  --repo https://jaegertracing.github.io/helm-charts jaeger \
  --values - <<EOF
# in-memory storage, no external backend to provision
storage:
  type: none
provisionDataStore:
  cassandra: false
# the all-in-one deployment replaces the separate agent, collector and query components
agent:
  enabled: false
collector:
  enabled: false
query:
  enabled: false
allInOne:
  enabled: true
  ingress:
    enabled: true
    hosts:
    - localhost
EOF
```
At this point, the Jaeger UI should be available at http://localhost.
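You can also verify this from the command line. The Jaeger UI serves an HTTP API under /api that works as a smoke test here; note that this is an internal Jaeger API, so its shape may change between versions:

```sh
# should return a JSON document whose "data" array lists the known service names
curl -s http://localhost/api/services
```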
Kyverno Setup
We now need to install Kyverno with tracing enabled and pointing to our Jaeger collector.
We can deploy Kyverno using Helm with the following command:
```sh
helm install kyverno --namespace kyverno --create-namespace --wait \
  --repo https://kyverno.github.io/kyverno kyverno \
  --values - <<EOF
admissionController:
  tracing:
    # enable tracing
    enabled: true
    # jaeger backend url
    address: jaeger-collector.monitoring
    # jaeger backend port for opentelemetry traces
    port: 4317

backgroundController:
  tracing:
    # enable tracing
    enabled: true
    # jaeger backend url
    address: jaeger-collector.monitoring
    # jaeger backend port for opentelemetry traces
    port: 4317

cleanupController:
  tracing:
    # enable tracing
    enabled: true
    # jaeger backend url
    address: jaeger-collector.monitoring
    # jaeger backend port for opentelemetry traces
    port: 4317

reportsController:
  tracing:
    # enable tracing
    enabled: true
    # jaeger backend url
    address: jaeger-collector.monitoring
    # jaeger backend port for opentelemetry traces
    port: 4317
EOF
```
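Once the Helm release is installed, a quick way to confirm that all the Kyverno controllers came up is to list the pods in the kyverno namespace:

```sh
# expect one running pod each for the admission, background, cleanup and reports controllers
kubectl get pods --namespace kyverno
```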
Kyverno Policies Setup
Finally, we need to deploy some policies in the cluster so that Kyverno configures its admission webhooks accordingly.
We are going to deploy the `kyverno-policies` Helm chart (with the Baseline profile of the Pod Security Standards) using the following command:
```sh
helm install kyverno-policies --namespace kyverno --create-namespace --wait \
  --repo https://kyverno.github.io/kyverno kyverno-policies \
  --values - <<EOF
validationFailureAction: Enforce
EOF
```
Note that we are setting `validationFailureAction` to `Enforce` because `Audit`-mode policies are processed asynchronously and produce a separate trace from the main one (the two traces are linked together, but not with a parent/child relationship).
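You can verify that the Baseline policies were installed and are in Enforce mode by listing the cluster-wide policies (the exact columns shown vary by Kyverno version):

```sh
# each Baseline policy should be listed with its validation failure action set to Enforce
kubectl get clusterpolicies
```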
Create a Pod and observe the corresponding trace
With everything in place we can exercise the Kyverno admission webhooks by creating a Pod and locating the corresponding trace in Jaeger.
Run the following command to create a Pod:
```sh
kubectl run nginx --image=nginx
```
After that, navigate to the Jaeger UI and search for traces with the following criteria:
- Service: `kyverno`. Every trace defines a service name, and all traces coming from Kyverno use the `kyverno` service name.
- Operation: `ADMISSION POST /validate/fail`. Every span defines a span name, and root spans created by Kyverno when receiving an admission request have their name computed from the HTTP operation and path (`ADMISSION <HTTP OPERATION> <HTTP PATH>`). The `/validate/fail` path indicates that it is a validating webhook that was configured to fail the admission request in case of error (Fail mode is the default). A command-line equivalent of this search is sketched below.
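The same search can be issued against Jaeger's HTTP query API instead of the UI. This is a minimal sketch; /api/traces is an internal Jaeger endpoint rather than a stable public API, and the operation name must be URL-encoded:

```sh
# fetch the most recent kyverno trace for the validating webhook operation
curl -s "http://localhost/api/traces?service=kyverno&operation=ADMISSION%20POST%20%2Fvalidate%2Ffail&limit=1"
```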
In the Jaeger UI, the search results should show the trace for the previous Pod creation request.
Clicking on the trace will take you to the trace details, showing all spans covered by the Pod admission request.
The trace contains an individual span for each of the policies that were just installed, with child spans for every rule that was checked (but not necessarily evaluated). The sum of all spans equals the total trace time, that is, the entire time Kyverno spent processing the Pod admission request.
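To see what a denied request looks like in Jaeger, you can attempt a Pod that violates the Baseline profile, for example a privileged container; with `validationFailureAction` set to `Enforce`, the admission request should be rejected (the exact policy name in the error message may vary with the chart version):

```sh
# this Pod should be denied by the Baseline policy disallowing privileged containers
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true
EOF
```

The rejected request produces its own trace under the `kyverno` service, which can be found and inspected in the same way.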