Validate Rules
Validation rules are probably the most common and practical type of rule you will work with, and they are the main use case for admission controllers such as Kyverno. In a typical validation rule, one defines the mandatory properties with which a given resource should be created. When a new resource is created by a user or process, Kyverno checks its properties against the validate rule. If those properties are validated, meaning there is agreement, the resource is allowed to be created. If those properties are different, the creation is blocked. How Kyverno responds to a failed validation check is determined by the `validationFailureAction` field: the request can either be blocked (`Enforce`) or allowed yet recorded in a policy report (`Audit`). Validation rules in `Audit` mode can also be used to get a report on matching resources which violate the rule(s), both upon initial creation and when Kyverno initiates periodic scans of Kubernetes resources. Resources in violation of an existing rule placed in `Audit` mode will also surface in an event on the resource in question.
To validate resource data, define a pattern in the validation rule. For more advanced processing using tripartite expressions (key-operator-value), define a deny element in the validation rule along with a set of conditions that control when to allow or deny the request.
Basic Validations
As a basic example, consider the below `ClusterPolicy` which validates that any new Namespace that is created has the label `purpose` with the value of `production`.
apiVersion: kyverno.io/v1
# The `ClusterPolicy` kind applies to the entire cluster.
kind: ClusterPolicy
metadata:
  name: require-ns-purpose-label
# The `spec` defines properties of the policy.
spec:
  # The `validationFailureAction` tells Kyverno if the resource being validated should be allowed but reported (`Audit`) or blocked (`Enforce`).
  validationFailureAction: Enforce
  # The `rules` is one or more rules which must be true.
  rules:
  - name: require-ns-purpose-label
    # The `match` statement sets the scope of what will be checked. In this case, it is any `Namespace` resource.
    match:
      any:
      - resources:
          kinds:
          - Namespace
    # The `validate` statement tries to positively check what is defined. If the statement, when compared with the requested resource, is true, it is allowed. If false, it is blocked.
    validate:
      # The `message` is what gets displayed to a user if this rule fails validation.
      message: "You must have label `purpose` with value `production` set on all new namespaces."
      # The `pattern` object defines what pattern will be checked in the resource. In this case, it is looking for `metadata.labels` with `purpose=production`.
      pattern:
        metadata:
          labels:
            purpose: production
If a new Namespace with the following definition is submitted to Kyverno, given the `ClusterPolicy` above, it will be allowed (validated). This is because it contains the label `purpose=production`, which is the only pattern being validated in the rule.
apiVersion: v1
kind: Namespace
metadata:
  name: prod-bus-app1
  labels:
    purpose: production
By contrast, if a new Namespace with the below definition is submitted, given the `ClusterPolicy` above, it will be blocked (invalidated). As you can see, the value of its `purpose` label differs from that required in the policy. But this isn't the only way a validation can fail: if, for example, the same Namespace is requested with no labels defined whatsoever, it too will be blocked because the required label is absent.
apiVersion: v1
kind: Namespace
metadata:
  name: prod-bus-app1
  labels:
    purpose: development
Save the above manifest as `ns.yaml` and try to create it with your sample `ClusterPolicy` in place.
$ kubectl create -f ns.yaml
Error from server: error when creating "ns.yaml": admission webhook "validate.kyverno.svc" denied the request:

resource Namespace//prod-bus-app1 was blocked due to the following policies

require-ns-purpose-label:
  require-ns-purpose-label: 'Validation error: You must have label `purpose` with value `production` set on all new namespaces.; Validation rule require-ns-purpose-label failed at path /metadata/labels/purpose/'
Change the `development` value to `production` and try again. Kyverno permits creation of your new Namespace resource.
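With the compliant label in place, the output should look something like the standard kubectl confirmation:

$ kubectl create -f ns.yaml
namespace/prod-bus-app1 created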
Validation Failure Action
The `validationFailureAction` attribute controls admission control behaviors for resources that are not compliant with a policy. If the value is set to `Enforce`, resource creation or updates are blocked when the resource does not comply. When the value is set to `Audit`, a policy violation is logged in a `PolicyReport` or `ClusterPolicyReport` but the resource creation or update is allowed. For preexisting resources which violate a newly-created policy set to `Enforce` mode, Kyverno will allow subsequent updates to those resources which continue to violate the policy as a way to ensure no existing resources are impacted. However, should a subsequent update to the violating resource(s) make them compliant, any further updates which would produce a violation are blocked.
Validation Failure Action Overrides
Using `validationFailureActionOverrides`, you can specify which actions to apply per Namespace. This attribute is only available for ClusterPolicies.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-label-app
spec:
  validationFailureAction: Audit
  validationFailureActionOverrides:
  - action: Enforce    # Action to apply
    namespaces:        # List of affected namespaces
    - default
  - action: Audit
    namespaces:
    - test
  rules:
  - name: check-label-app
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "The label `app` is required."
      pattern:
        metadata:
          labels:
            app: "?*"
In the above policy, for Namespace `default`, `validationFailureAction` is set to `Enforce`, and for Namespace `test`, it's set to `Audit`. For all other Namespaces, the action defaults to the `validationFailureAction` field.
Patterns
A validation rule which checks resource data is defined as an overlay pattern that provides the desired configuration. Resource configurations must match fields and expressions defined in the pattern to pass the validation rule. The following rules apply when processing the overlay pattern:
- Validation will fail if a field is defined in the pattern and the field does not exist in the configuration.
- Undefined fields are treated as wildcards.
- A validation pattern field with the wildcard value '*' will match zero or more alphanumeric characters. Empty values are matched. Missing fields are not matched.
- A validation pattern field with the wildcard value '?' will match any single alphanumeric character. Empty or missing fields are not matched.
- A validation pattern field with the wildcard value '?*' will match any alphanumeric characters and requires the field to be present with a non-empty value.
- A validation pattern field with the value `null` or "" (empty string) requires that the field not be defined or have no value.
- The validation of siblings is performed only when one of the field values matches the value defined in the pattern. You can use the conditional anchor to explicitly specify a field value that must be matched. This allows writing rules like "if fieldA equals X, then fieldB must equal Y" (see the sketch following this list).
- Validation of child values is only performed if the parent matches the pattern.
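As a sketch of that sibling behavior using the conditional anchor (covered in detail in the Anchors section below), the following hypothetical pattern fragment requires that any Service of type `NodePort` also set `externalTrafficPolicy` to `Local`:

pattern:
  spec:
    # "if" clause: evaluated only when the type field equals NodePort
    (type): NodePort
    # "then" clause: this peer field must then hold
    externalTrafficPolicy: Local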
Wildcards
- `*` - matches zero or more alphanumeric characters
- `?` - matches a single alphanumeric character
For a couple of examples on how wildcards work in rules, see the following.
This policy requires that all containers in all Pods have resource requests and limits defined (CPU limits intentionally omitted):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: all-containers-need-requests-and-limits
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-container-resources
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "All containers must have CPU and memory resource requests and limits defined."
      pattern:
        spec:
          containers:
          # Select all containers in the pod. The `name` field here is not specifically required but serves
          # as a visual aid for instructional purposes.
          - name: "*"
            resources:
              limits:
                # '?' requires 1 alphanumeric character and '*' means that
                # there can be 0 or more characters. Using them together
                # e.g. '?*' requires at least one character.
                memory: "?*"
              requests:
                memory: "?*"
                cpu: "?*"
The following validation rule checks for a label in Deployment, StatefulSet, and DaemonSet resources:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-label-app
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-label-app
    match:
      any:
      - resources:
          kinds:
          - Deployment
          - StatefulSet
          - DaemonSet
    validate:
      message: "The label `app` is required."
      pattern:
        spec:
          template:
            metadata:
              labels:
                app: "?*"
In order to treat special characters like wildcards as literals, see this section in the JMESPath page.
Operators
The following operators support list values in addition to scalar values. Many of these operators also support checking of durations (e.g., 12h) and semver (e.g., 1.4.1).

| Operator | Meaning |
|---|---|
| > | greater than |
| < | less than |
| >= | greater than or equal to |
| <= | less than or equal to |
| ! | not equal |
| \| | logical or |
| & | logical and |
| - | within a range |
| !- | outside a range |
Note
The `-` operator provides an easier way of validating that the value in question falls within a closed interval [a,b]. Thus, constructing the condition a-b is equivalent to writing value >= a & value <= b. Likewise, the `!-` operator can be used to negate a range: constructing the condition a!-b is equivalent to writing value < a | value > b.
Note
There is no operator for equals, as providing a field value in the pattern requires equality to the value.
An example of using an operator in a pattern-style validation rule is shown below.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate
spec:
  validationFailureAction: Enforce
  rules:
  - name: validate-replica-count
    match:
      any:
      - resources:
          kinds:
          - Deployment
    validate:
      message: "Replica count for a Deployment must be greater than or equal to 2."
      pattern:
        spec:
          replicas: ">=2"
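Duration and semver checks follow the same pattern style. The following is a minimal sketch; the `corp.org/ttl` and `corp.org/chart-version` annotations are hypothetical names used purely for illustration:

validate:
  message: "TTL must be under one day and the chart version at least 1.4.1."
  pattern:
    metadata:
      annotations:
        corp.org/ttl: "<24h"              # duration comparison (hypothetical annotation)
        corp.org/chart-version: ">=1.4.1" # semver comparison (hypothetical annotation)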
Anchors
Anchors allow conditional processing (i.e. “if-then-else”) and other logical checks in validation patterns. The following types of anchors are supported:
| Anchor | Tag | Behavior |
|---|---|---|
| Conditional | () | If the tag with the given value (including child elements) is specified, then peer elements will be processed. e.g. If image has tag latest then imagePullPolicy cannot be IfNotPresent. (image): "*:latest" imagePullPolicy: "!IfNotPresent" |
| Equality | =() | If the tag is specified, then processing continues. For tags with scalar values, the value must match. For tags with child elements, the child element is further evaluated as a validation pattern. e.g. If hostPath is defined then the path cannot be /var/lib. =(hostPath): path: "!/var/lib" |
| Existence | ^() | Works on the list/array type only. Passes if at least one element in the list satisfies the pattern. In contrast, a conditional anchor would validate that all elements in the list match the pattern. e.g. At least one container with image nginx:latest must exist. ^(containers): - image: nginx:latest |
| Negation | X() | The tag cannot be specified. The value of the tag is not evaluated (use an exclamation point to negate a value). The value should ideally be set to "null" (quotes surrounding null). e.g. A hostPath tag cannot be defined. X(hostPath): "null" |
| Global | <() | The content of this condition, if false, will cause the entire rule to be skipped. Valid for both validate and strategic merge patches. |
Anchors and child elements: Conditional and Equality
Child elements are handled differently for conditional and equality anchors.
For conditional anchors, the child element is considered to be part of the "if" clause, and all peer elements are considered to be part of the "then" clause. For example, consider the following `ClusterPolicy` pattern statement:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: conditional-anchor-dockersock
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: conditional-anchor-dockersock
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "If a hostPath volume exists and is set to `/var/run/docker.sock`, the label `allow-docker` must equal `true`."
      pattern:
        metadata:
          labels:
            allow-docker: "true"
        (spec):
          (volumes):
          - (hostPath):
              path: "/var/run/docker.sock"
This reads as "If a hostPath volume exists and the path equals /var/run/docker.sock, then a label `allow-docker` must be specified with a value of `true`." In this case, the conditional checks the `spec.volumes` and `spec.volumes.hostPath` maps. The child element of `spec.volumes.hostPath` is the `path` key and so the check ends the "if" evaluation at `path`. The entire `metadata` object is a peer element to the `spec` object because these reside at the same hierarchy within a Pod definition. Therefore, conditional anchors can compare peers not only when they are simple key/value pairs, but also when the peers are objects or YAML maps.
For equality anchors, a child element is considered to be part of the "then" clause. Now, consider the same `ClusterPolicy` as above but using equality anchors:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: equality-anchor-no-dockersock
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: equality-anchor-no-dockersock
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "If a hostPath volume exists, it must not be set to `/var/run/docker.sock`."
      pattern:
        =(spec):
          =(volumes):
          - =(hostPath):
              path: "!/var/run/docker.sock"
This is read as "If a hostPath volume exists, then the path must not be equal to /var/run/docker.sock." In this sample, the object `spec.volumes.hostPath` is being checked, which is where the "if" evaluation ends. Similar to the conditional example above, the `path` key is a child of `hostPath` and is therefore the one evaluated under the "then" check.
Note
In both of these examples, the validation rule merely checks for the existence of a `hostPath` volume definition. It does not validate whether a container is actually consuming the volume.
Existence anchor: At Least One
The existence anchor is used to check that, in a list of elements, at least one element exists that matches the pattern. This is done by using the `^()` notation for the field. The existence anchor only works on array/list type data.
For example, this pattern will check that at least one container is using an image named `nginx:latest`:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: existence-anchor-at-least-one-nginx
spec:
  validationFailureAction: Enforce
  rules:
  - name: existence-anchor-at-least-one-nginx
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "At least one container must use the image `nginx:latest`."
      pattern:
        spec:
          ^(containers):
          - image: nginx:latest
Contrast this existence anchor, which checks for at least one instance, with a wildcard which checks for every instance.
pattern:
  spec:
    containers:
    - name: "*"
      image: nginx:latest
This snippet instead states that every entry in the array of containers, regardless of name, must have the `image` set to `nginx:latest`.
Global Anchor
The global anchor is a way to use a condition anywhere in a resource as the basis for a decision. If the condition enclosed in the global anchor is true, the rest of the rule must apply. If the condition enclosed in the global anchor is false, the rule is skipped.
In this example, any Pod with a container image coming from a registry called `corp.reg.com` is required to reference an imagePullSecret called `my-registry-secret`.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sample
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-container-image
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: Images coming from corp.reg.com must use the correct imagePullSecret.
      pattern:
        spec:
          containers:
          - name: "*"
            <(image): "corp.reg.com/*"
          imagePullSecrets:
          - name: my-registry-secret
The below Pod has a single container which meets the global anchor’s specifications, but the rest of the pattern does not match. The Pod is therefore blocked.
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: web
    image: corp.reg.com/nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
  imagePullSecrets:
  - name: other-secret
anyPattern
In some cases, content can be defined at different levels. For example, a security context can be defined at the Pod or Container level. The validation rule should pass if either one of the conditions is met.
The `anyPattern` tag is a logical OR across multiple validation patterns and can be used to check if any one of the patterns in the list match.
Note
Either one of `pattern` or `anyPattern` is allowed in a rule; they can't both be declared in the same rule.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  background: true
  validationFailureAction: Enforce
  rules:
  - name: check-containers
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: >-
        Running as root is not allowed. The fields spec.securityContext.runAsNonRoot,
        spec.containers[*].securityContext.runAsNonRoot, and
        spec.initContainers[*].securityContext.runAsNonRoot must be `true`.
      anyPattern:
      # spec.securityContext.runAsNonRoot must be set to true. If containers and/or initContainers exist which declare a securityContext field, those must have runAsNonRoot also set to true.
      - spec:
          securityContext:
            runAsNonRoot: true
          containers:
          - =(securityContext):
              =(runAsNonRoot): true
          =(initContainers):
          - =(securityContext):
              =(runAsNonRoot): true
      # All containers and initContainers must define (not optional) runAsNonRoot=true.
      - spec:
          containers:
          - securityContext:
              runAsNonRoot: true
          =(initContainers):
          - securityContext:
              runAsNonRoot: true
The `anyPattern` method is best suited for validation cases which do not use a negated condition. In the above example, only one of the `spec` contents must be valid. The same is true of negated conditions; however, as the below example shows, this is slightly more difficult to reason about: when negated, the `anyPattern` option allows any resource to pass so long as it doesn't match at least one of the patterns.
validate:
  message: Cannot use Flux v1 annotation.
  anyPattern:
  - metadata:
      =(annotations):
        X(fluxcd.io/*): "*?"
  - metadata:
      =(annotations):
        X(flux.weave.works/*): "*?"
If the desire is to state "neither an annotation named `fluxcd.io/` nor `flux.weave.works/` may be present", then this would need two separate rules to express, as including either one would mean the other is valid and therefore the resource is allowed.
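A sketch of how those two separate rules might look, assuming Deployments are the matched kind (each rule independently forbids one annotation family, so neither may be present):

rules:
- name: no-fluxcd-annotations
  match:
    any:
    - resources:
        kinds:
        - Deployment
  validate:
    message: Cannot use the fluxcd.io annotation.
    pattern:
      metadata:
        =(annotations):
          X(fluxcd.io/*): "*?"
- name: no-flux-weave-annotations
  match:
    any:
    - resources:
        kinds:
        - Deployment
  validate:
    message: Cannot use the flux.weave.works annotation.
    pattern:
      metadata:
        =(annotations):
          X(flux.weave.works/*): "*?"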
Note
Due to a bug in Kubernetes v1.23 which was fixed in v1.23.3, use of `anyPattern` in the v1.23 release requires v1.23.3 at a minimum.
Deny rules
In addition to applying patterns to check resources, a validate rule can deny a request based on a set of conditions written as expressions. A `deny` condition is an expression constructed of a key, an operator, a value, and an optional message field. Unlike a pattern, when a `deny` condition evaluates to `true` it blocks a resource. Pattern expressions, by contrast, allow a resource when true.
Deny rules are more powerful and expressive than simple patterns but are also more complex to write. Use deny rules when:
- You need advanced selection logic with multiple “if” conditions.
- You need access to the full contents of the AdmissionReview.
- You need access to more built-in variables.
- You need access to the complete JMESPath filtering system.
An example of a deny rule is shown below. In deny rules, you write expressions similar to those in Kubernetes resources such as selectors. Deny rules, or "conditions", must be nested under an `any`, an `all`, or potentially both in order to control the decision-making logic. In this snippet, a resource will be denied if ANY of the following expressions are true.
- `{{ request.object.data.team }}` Equals eng
- `{{ request.object.data.unit }}` Equals green
validate:
  message: Main message is here.
  deny:
    conditions:
      any:
      - key: "{{ request.object.data.team }}"
        operator: Equals
        value: eng
        message: The expression team = eng failed.
      - key: "{{ request.object.data.unit }}"
        operator: Equals
        value: green
        message: The expression unit = green failed.
Placing these two conditions under an `all` block instead would require that both of them be true to produce the deny behavior, as shown in the sketch below.
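A minimal sketch of the same two conditions under `all`:

validate:
  message: Main message is here.
  deny:
    conditions:
      all:
      - key: "{{ request.object.data.team }}"
        operator: Equals
        value: eng
      - key: "{{ request.object.data.unit }}"
        operator: Equals
        value: green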
Kyverno performs short-circuiting on deny conditions to abort processing when a decision can be reached. The first expression to evaluate to `true` in an `any` block discontinues further evaluation; the first expression to evaluate to `false` in an `all` block does the same.
If the optional `message` field is included, it will be printed for a condition which evaluates to `false`, keeping in mind how short-circuiting works.
Deny rules are incapable of producing a `pass` result in a Policy Report because the desired action is to deny; the results will therefore be either `skip` or `fail`.
See also Preconditions.
Deny DELETE requests based on labels
This policy denies `delete` requests for objects with the label `app.kubernetes.io/managed-by: kyverno` and for all users who do not have the `cluster-admin` role.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-deletes
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: block-deletes-for-kyverno-resources
    match:
      any:
      - resources:
          selector:
            matchLabels:
              app.kubernetes.io/managed-by: kyverno
    exclude:
      any:
      - clusterRoles:
        - cluster-admin
    validate:
      message: "Deleting {{request.oldObject.kind}}/{{request.oldObject.metadata.name}} is not allowed"
      deny:
        conditions:
          any:
          - key: "{{request.operation}}"
            operator: Equals
            value: DELETE
Block changes to a custom resource
This policy denies admission review requests for updates or deletes to a custom resource, unless the request is from a specific ServiceAccount or matches specified Roles.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-updates-to-custom-resource
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: block-updates-to-custom-resource
    match:
      any:
      - resources:
          kinds:
          - SomeCustomResource
    exclude:
      any:
      - subjects:
        - kind: ServiceAccount
          name: custom-controller
      - clusterRoles:
        - custom-controller:*
        - cluster-admin
    validate:
      message: "Modifying or deleting this custom resource is forbidden."
      deny: {}
Prevent changing NetworkPolicy resources
This policy prevents users from changing NetworkPolicy resources with names that end with `-default`.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-netpol-changes
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: deny-netpol-changes
    match:
      any:
      - resources:
          kinds:
          - NetworkPolicy
          names:
          - "*-default"
    exclude:
      any:
      - clusterRoles:
        - cluster-admin
    validate:
      message: "Changing default network policies is not allowed."
      deny: {}
foreach
The `foreach` declaration simplifies validation of sub-elements in resource declarations, for example containers in a Pod.
A `foreach` declaration can contain multiple entries to process different sub-elements, e.g. one to process a list of containers and another to process the list of initContainers in a Pod.
Each `foreach` entry must contain a `list` attribute, written as a JMESPath expression without braces, that defines the sub-elements it processes. For example, iterating over the list of containers in a Pod is performed using this `list` declaration:
list: request.object.spec.containers
When a `foreach` is processed, the Kyverno engine will evaluate `list` as a JMESPath expression to retrieve zero or more sub-elements for further processing. The value of the `list` field may also resolve to a simple array of strings, for example as defined in a context variable. The value of the `list` field should not be enclosed in braces even though it is a JMESPath expression.
A variable `element` is added to the processing context on each iteration. This allows referencing data in the element using `element.<name>`, where name is the attribute name. For example, using the list `request.object.spec.containers` when `request.object` is a Pod allows referencing the container image as `element.image` within a `foreach`.
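As a short sketch of referencing `element` inside a loop, the hypothetical entry below denies any container image using the `latest` tag (the `Equals` operator performs wildcard matching on strings, as in the nested `foreach` example later in this section):

foreach:
- list: request.object.spec.containers
  deny:
    conditions:
      all:
      - key: "{{ element.image }}"
        operator: Equals
        value: "*:latest"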
The following child declarations are permitted in a `foreach`:
- Patterns
- Deny
In addition, each `foreach` declaration can contain the following declarations:
- Context: to add additional external data only available per loop iteration.
- Preconditions: to control when a loop iteration is skipped.
- `elementScope`: controls whether to use the current list element as the scope for validation. Defaults to "true" if not specified.
Here is a complete example to enforce that all container images are from a trusted registry:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-images
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: check-registry
    match:
      any:
      - resources:
          kinds:
          - Pod
    preconditions:
      any:
      - key: "{{request.operation}}"
        operator: NotEquals
        value: DELETE
    validate:
      message: "unknown registry"
      foreach:
      - list: "request.object.spec.initContainers"
        pattern:
          image: "trusted-registry.io/*"
      - list: "request.object.spec.containers"
        pattern:
          image: "trusted-registry.io/*"
Note that the `pattern` is applied to the `element` and hence does not need to specify `spec.containers`; it can directly reference the attributes of the `element`, which is a container in the example above.
Nested foreach
The `foreach` object also supports nesting multiple `foreach` declarations to form loops within loops. When using nested loops, the special variable `{{elementIndex}}` requires a loop number to identify which element to process. Preconditions are supported only at the top-level loop and not per inner loop.
This sample illustrates using nested `foreach` loops to validate that no hostname ends with `new.com`.
apiVersion: kyverno.io/v2beta1
kind: ClusterPolicy
metadata:
  name: check-ingress
spec:
  validationFailureAction: Enforce
  background: false
  rules:
  - name: check-tls-secret-host
    match:
      any:
      - resources:
          kinds:
          - Ingress
    validate:
      message: "All TLS hosts must use a domain of old.com."
      foreach:
      - list: request.object.spec.tls[]
        foreach:
        - list: "element.hosts"
          deny:
            conditions:
              all:
              - key: "{{element}}"
                operator: Equals
                value: "*.new.com"
A sample Ingress which would be blocked by this policy looks like the below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  rules:
  - host: kuard.old.com
    http:
      paths:
      - backend:
          service:
            name: kuard
            port:
              number: 8080
        path: /
        pathType: ImplementationSpecific
  - host: hr.old.com
    http:
      paths:
      - backend:
          service:
            name: kuard
            port:
              number: 8090
        path: /myhr
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - kuard.old.com
    - kuard-foo.new.com
    secretName: foosecret.old.com
  - hosts:
    - hr.old.com
    secretName: hr.old.com
Nested foreach statements are also supported in mutate rules. See the documentation here for further details.
Manifest Validation
Kyverno can verify signed Kubernetes YAML manifests created with the Sigstore k8s-manifest-sigstore project. With this capability, a Kubernetes YAML manifest is signed using one or multiple methods (with support for both keyed and keyless signing, as in image verification), and through a policy definition Kyverno can validate these signatures prior to creation. This capability also includes support for field exclusions, multiple signatures, and dry-run mode.
Generate a key pair used to sign a manifest by using the `cosign` command-line tool.
cosign generate-key-pair
Install the `kubectl-sigstore` command-line tool using one of the provided methods.
Sign the YAML manifest using the private key generated in the first step.
$ kubectl-sigstore sign -f secret.yaml -k cosign.key --tarball no -o secret-signed.yaml
Enter password for private key:
Using payload from: /tmp/kubectl-sigstore-temp-dir1572288324/tmp-blob-file
0D 7ѫO2�Ď��D)�I��!@t�0���X� Xmj���7���+u
 ���_ڑ)ۆ�d�0�qHINFO[0004] signed manifest generated at secret-signed.yaml
The `secret.yaml` manifest provided as an input has been signed using your private key, and the signed version is output as `secret-signed.yaml`.
apiVersion: v1
data:
  api_token: MDEyMzQ1Njc4OWFiY2RlZg==
kind: Secret
metadata:
  annotations:
    cosign.sigstore.dev/message: H4sIAAAAAAAA/zTMPQrCQBBA4X5OMVeIWA2kU7sYVFC0kXEzyJr9c3cirKcXlXSv+R4ne5RcbAyErwYGViZA5GSvGkcJhN1qXbv3rtk+zLI/bex5sXeXe9vCaMNAeBCTRcGL8owd38SVbyG6aFh/d5lyTAKIgb0Q+lr+UmsSwj7xcxL4BAAA//+dVuynjwAAAA==
    cosign.sigstore.dev/signature: MEQCIDfRq08y5MSOFo3iiEQUKdRJw9YhQHTjMAXwgO0eWO+hAiBYbR5qpa3wBjfN+d4rdQy5iNFf2pEp24aHZJgwyHEaSA==
  labels:
    location: europe
  name: mysecret
type: Opaque
Create the Kyverno policy which matches on Secrets and will be used to validate the signatures.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-secrets
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: validate-secrets
    match:
      any:
      - resources:
          kinds:
          - Secret
    validate:
      manifests:
        attestors:
        - count: 1
          entries:
          - keys:
              publicKeys: |-
                -----BEGIN PUBLIC KEY-----
                MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEStoX3dPCFYFD2uPgTjZOf1I5UFTa
                1tIu7uoGoyTxJqqEq7K2aqU+vy+aK76uQ5mcllc+TymVtcLk10kcKvb3FQ==
                -----END PUBLIC KEY-----
To test the operation of this rule, modify the signed Secret to change some aspect of the manifest. For example, changing even the value of the `location` label from `europe` to `asia` will cause the signed manifest to be invalid. Kyverno will reject the altered manifest because the signature was only valid for the original Secret manifest.
$ kubectl apply -f secret-signed.yaml
Error from server: error when creating "secret-signed.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

policy Secret/default/mysecret for resource violation:

validate-secrets:
  validate-secrets: 'manifest verification failed; verifiedCount 0; requiredCount
    1; message .attestors[0].entries[0].keys: failed to verify signature. diff found;
    {"items":[{"key":"metadata.labels.location","values":{"after":"asia","before":"europe"}}]}'
The difference between the signed manifest and supplied manifest is shown as part of the failure message.
Change the value of the `location` label back to `europe` and attempt to create the manifest once again.
$ kubectl apply -f secret-signed.yaml
secret/mysecret created
The creation is allowed since the signature was validated according to the original contents.
In many cases, you may wish to secure a portion of a manifest while allowing alterations to other portions. For example, you may wish to sign Deployment manifests while preventing tampering with any fields other than the replica count. Use the `ignoreFields` portion to define the object type and the allowed fields which can differ from the signed original.
The below policy example shows how to match on Deployments and verify signed manifests while allowing changes to the `spec.replicas` field.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-deployment
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: validate-deployment
    match:
      any:
      - resources:
          kinds:
          - Deployment
    validate:
      manifests:
        attestors:
        - count: 1
          entries:
          - keys:
              publicKeys: |-
                -----BEGIN PUBLIC KEY-----
                MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEStoX3dPCFYFD2uPgTjZOf1I5UFTa
                1tIu7uoGoyTxJqqEq7K2aqU+vy+aK76uQ5mcllc+TymVtcLk10kcKvb3FQ==
                -----END PUBLIC KEY-----
        ignoreFields:
        - objects:
          - kind: Deployment
          fields:
          - spec.replicas
Kyverno will permit the creation of a signed Deployment as long as the only difference between the signed original and the submitted manifest is the `spec.replicas` field. Modifications to any other field(s) will trigger a failure, for example if the `spec.template.spec.containers[0].image` field is changed from the default of `busybox:1.28` to `evilimage:1.28`.
Rather than using `ignoreFields` to handle expected controller mutations, the `dryRun` object can be used to eliminate default changes made by these and other admission controllers. Set `enable` to `true` under the `dryRun` object as shown below and specify a Namespace in which the dry run will occur. Using other Namespaces or dry running with cluster-scoped or custom resources may entail giving additional privileges to the Kyverno ServiceAccount.
validate:
  manifests:
    dryRun:
      enable: true
      namespace: my-dryrun-ns
The manifest validation feature shares many of the same abilities as the verify images rule type. For a more thorough explanation of the available fields, use the `kubectl explain clusterpolicy.spec.rules.validate.manifests` command.
Pod Security
Starting in Kyverno 1.8, a new subrule type called `podSecurity` is available. This subrule type dramatically simplifies the process of writing and applying Pod Security Standards profiles and controls. By integrating the same libraries used in Kubernetes' Pod Security Admission (enabled by default in 1.23 and stable in 1.25), Kyverno is able to apply all or some of the controls and profiles in a single rule while providing capabilities not possible with Pod Security Admission. Standard `match` and `exclude` processing is available just like with other rules. This subrule type is enabled when a `validate` rule is written with a `podSecurity` object, detailed below.
The podSecurity feature has the following advantages over the Kubernetes built-in Pod Security Admission feature:
- Cluster-wide application of Pod Security Standards does not require an AdmissionConfiguration file nor any modifications to any control plane components.
- Namespace application of Pod Security Standards does not require assignment of a label.
- Specific controls may be exempted from a given profile.
- Container images may be exempted along with a control exemption.
- Enforcement of Pod controllers is automatic.
- Auditing of Pods in violation may be viewed in-cluster by examining a Policy Report Custom Resource.
- Testing of Pods and Pod controller manifests in a CI/CD pipeline is enabled via the Kyverno CLI.
For example, this policy enforces the latest version of the Pod Security Standards baseline profile in a single rule across the entire cluster.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: psa
spec:
  background: true
  validationFailureAction: Enforce
  rules:
  - name: baseline
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      podSecurity:
        level: baseline
        version: latest
The `podSecurity.level` field indicates the profile to be applied. Applying the `baseline` profile automatically includes all the controls outlined in the baseline profile.
The `podSecurity.version` field indicates which version of the Pod Security Standards should be applied, with `latest` referring to the most recent version. This field also accepts prior versions, for example `v1.24`, to support pinning to a specific version of the Pod Security Standards.
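For example, pinning the baseline profile to the v1.24 version of the standards looks like this:

validate:
  podSecurity:
    level: baseline
    version: v1.24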
Attempting to apply a Pod which does not meet all of the controls included in the baseline profile will result in a blocking action.
apiVersion: v1
kind: Pod
metadata:
  name: badpod01
spec:
  hostIPC: true
  containers:
  - name: container01
    image: dummyimagename
The failure message returned indicates which level, version, and specific control(s) were responsible for the failure.
Error from server: error when creating "bad.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

policy Pod/default/badpod01 for resource violation:

psa:
  baseline: |
    Validation rule 'baseline' failed. It violates PodSecurity "baseline:latest": ({Allowed:false ForbiddenReason:host namespaces ForbiddenDetail:hostIPC=true})
Similarly, the restricted profile may be applied by changing the `level` field.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: psa
spec:
  background: true
  validationFailureAction: Enforce
  rules:
  - name: restricted
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      podSecurity:
        level: restricted
        version: latest
Applying the same Pod as above will now return additional information in the message about the cumulative violations.
Error from server: error when creating "bad.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

policy Pod/default/badpod01 for resource violation:

psa:
  restricted: |
    Validation rule 'restricted' failed. It violates PodSecurity "restricted:latest": ({Allowed:false ForbiddenReason:allowPrivilegeEscalation != false ForbiddenDetail:container "container01" must set securityContext.allowPrivilegeEscalation=false})
    ({Allowed:false ForbiddenReason:unrestricted capabilities ForbiddenDetail:container "container01" must set securityContext.capabilities.drop=["ALL"]})
    ({Allowed:false ForbiddenReason:host namespaces ForbiddenDetail:hostIPC=true})
    ({Allowed:false ForbiddenReason:runAsNonRoot != true ForbiddenDetail:pod or container "container01" must set securityContext.runAsNonRoot=true})
    ({Allowed:false ForbiddenReason:seccompProfile ForbiddenDetail:pod or container "container01" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost"})
Note
The `restricted` profile is inclusive of the `baseline` profile. Therefore, any Pod in violation of `baseline` is implicitly in violation of `restricted`.
Exemptions
When it is necessary to exempt specific controls within a profile while applying all others, the `podSecurity.exclude[]` object may be used. Controls which have restricted fields at the Pod `spec` level need only specify the `controlName` field, the value of which must be a valid name of a Pod Security Standards control. Controls which have restricted fields at the Pod `containers[]` level must additionally specify the `images[]` list. Wildcards are supported in the value of `images[]`, allowing for flexible exemption. And controls which have restricted fields at both the `spec` and `containers[]` levels must specify two objects in the `exclude[]` field: one with `controlName` only and the other with both `controlName` and `images[]`.
For example, the below policy applies the baseline profile across the entire cluster while exempting any Pod that violates the Host Namespaces control.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: psa
spec:
  background: true
  validationFailureAction: Enforce
  rules:
  - name: baseline
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      podSecurity:
        level: baseline
        version: latest
        exclude:
        - controlName: Host Namespaces
The following Pod violates the Host Namespaces control because it sets `spec.hostIPC: true`, yet it is allowed due to the exclusion.
apiVersion: v1
kind: Pod
metadata:
  name: badpod01
spec:
  hostIPC: true
  containers:
  - name: container01
    image: dummyimagename
When a control exemption is requested where the control defines only container-level fields, the `images[]` list must be present with at least one entry. Wildcards (`*`) are supported both as the sole value and as a component of an image name.
For example, the below policy enforces the restricted profile but exempts containers running either the `nginx` or `redis` image from following the Capabilities control.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: psa
spec:
  background: true
  validationFailureAction: Enforce
  rules:
  - name: restricted
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      podSecurity:
        level: restricted
        version: latest
        exclude:
        - controlName: Capabilities
          images:
          - nginx*
          - redis*
The following Pod, running the `nginx:1.1.9` image, will be allowed although it violates the Capabilities control by adding a forbidden capability.
apiVersion: v1
kind: Pod
metadata:
  name: goodpod01
spec:
  containers:
  - name: container01
    image: nginx:1.1.9
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        add:
        - SYS_ADMIN
        drop:
        - ALL
The same policy would result in blocking a Pod in which a container running the `busybox:1.28` image attempted the same thing.
Error from server: error when creating "temp.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

policy Pod/default/badpod01 for resource violation:

psa:
  restricted: |
    Validation rule 'restricted' failed. It violates PodSecurity "restricted:latest": ({Allowed:false ForbiddenReason:non-default capabilities ForbiddenDetail:container "container01" must not include "SYS_ADMIN" in securityContext.capabilities.add})
    ({Allowed:false ForbiddenReason:unrestricted capabilities ForbiddenDetail:container "container01" must not include "SYS_ADMIN" in securityContext.capabilities.add})
Note
In the above error message, the Pod is in violation of the Capabilities control at both the baseline and restricted profiles, hence the multiple entries.
When a control which contains fields at both the `spec` and `containers[]` levels is to be excluded, it must have exclusions for both in order to be fully excluded. The `controlName` field alone assumes the `spec` level, while `controlName` plus `images[]` assumes the `containers[]` level.
For example, the Seccomp control in the restricted profile mandates that the `securityContext.seccompProfile.type` field be set to either `RuntimeDefault` or `Localhost`. The `securityContext` object may be defined at the `spec` level, the `containers[]` level, or both. The `containers[]` fields may be undefined/nil if the Pod-level field is set appropriately; conversely, the Pod-level field may be undefined/nil if all container-level fields are set. In order to completely exclude this control, two entries must exist in the `podSecurity.exclude[]` object. The below policy enforces the restricted profile across the entire cluster while fully exempting the Seccomp control for all images.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: psa
spec:
  background: true
  validationFailureAction: Enforce
  rules:
  - name: restricted
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      podSecurity:
        level: restricted
        version: latest
        exclude:
        - controlName: Seccomp
        - controlName: Seccomp
          images:
          - '*'
An example Pod which satisfies all controls in the restricted profile except the Seccomp control is therefore allowed.
apiVersion: v1
kind: Pod
metadata:
  name: goodpod01
spec:
  securityContext:
    seccompProfile:
      type: Unconfined
  containers:
  - name: container01
    image: busybox:1.28
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      # seccompProfile:
      #   type: Unconfined
      capabilities:
        drop:
        - ALL
Regardless of where the disallowed `type: Unconfined` field is specified, Kyverno allows the Pod.
Multiple control names may be excluded by listing them individually, keeping in mind the previously described rules; see the sketch below. Refer to the Pod Security Standards documentation for a listing of all present controls, restricted fields, and allowed values.
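A minimal sketch of excluding multiple controls in a single rule (the chosen controls and images are illustrative):

validate:
  podSecurity:
    level: restricted
    version: latest
    exclude:
    # spec-level control, excluded by name alone
    - controlName: Host Namespaces
    # container-level control, excluded only for matching images
    - controlName: Capabilities
      images:
      - nginx*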
PSA Interoperability
Kyverno’s podSecurity validate subrule type and Kubernetes’ Pod Security Admission (PSA) are compatible and may be used together in a single cluster with an understanding of where each begins and ends. These are a few of the most common strategies when employing both technologies.
Note
Pods which are blocked by PSA in enforce mode do not result in an AdmissionReview request being sent to admission controllers. Therefore, if a Pod is blocked by PSA, Kyverno cannot apply policies to it.
Use PSA to enforce the baseline profile cluster-wide and use the Kyverno podSecurity subrule to enforce or audit the restricted profile with more granularity.
Advantage: Reduces some of the processing on Kyverno by blocking non-compliant Pods at the source while allowing more flexible control on exclusions not possible with PSA.
Use PSA to enforce either baseline or restricted on a per-Namespace basis and use Kyverno podSecurity cluster-wide or on different Namespaces.
Advantage: Does not require configuring an AdmissionConfiguration file for PSA. An example of the per-Namespace PSA label is shown below.
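For reference, per-Namespace enforcement by PSA itself is driven by a standard Kubernetes label such as the one below (this is Pod Security Admission configuration, not Kyverno):

apiVersion: v1
kind: Namespace
metadata:
  name: apps
  labels:
    # PSA enforces the baseline profile for Pods in this Namespace
    pod-security.kubernetes.io/enforce: baseline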
Use PSA to enforce the baseline profile cluster-wide, relax certain Namespaces to the privileged profile, and use Kyverno podSecurity at the baseline or restricted profile.
Advantage: Combines both AdmissionConfiguration with Namespace labeling while layering in Kyverno for granular control over baseline and restricted. A Kyverno mutate rule may also be separately employed here to handle the Namespace labeling as desired.
Use both PSA and Kyverno to enforce the same profile at the same scope.
Advantage: Provides a safety net in case either technology is inadvertently/maliciously disabled or becomes unavailable.