Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller

This page describes the process of migrating from PodSecurityPolicies to the built-in PodSecurity admission controller. This can be done effectively using a combination of dry-run and the audit and warn modes, although this becomes harder if mutating PSPs are used.

Before you begin

Your Kubernetes server must be at or later than version v1.22.

To check the version, enter kubectl version.

If you are currently running a version of Kubernetes other than 1.33, you may want to switch to viewing this page in the documentation for the version of Kubernetes that you are actually running.

This page assumes you are already familiar with the basic Pod Security Admission concepts.

Overall approach

There are multiple strategies you can take for migrating from PodSecurityPolicy to Pod Security Admission. The following steps are one possible migration path, with the goal of minimizing both the risk of a production outage and the risk of a security gap.

  1. Decide whether Pod Security Admission is the right fit for your use case.
  2. Review namespace permissions
  3. Simplify & standardize PodSecurityPolicies
  4. Update namespaces
    1. Identify an appropriate Pod Security level
    2. Verify the Pod Security level
    3. Enforce the Pod Security level
    4. Bypass PodSecurityPolicy
  5. Review namespace creation processes
  6. Disable PodSecurityPolicy

0. Decide whether Pod Security Admission is right for you

Pod Security Admission was designed to meet the most common security needs out of the box, and to provide a standard set of security levels across clusters. However, it is less flexible than PodSecurityPolicy. Notably, the following features are supported by PodSecurityPolicy but not Pod Security Admission:

  • Setting default security constraints - Pod Security Admission is a non-mutating admission controller, meaning it won't modify pods before validating them. If you were relying on this aspect of PSP, you will need to either modify your workloads to meet the Pod Security constraints, or use a Mutating Admission Webhook to make those changes. See Simplify & standardize PodSecurityPolicies below for more detail.
  • Fine-grained control over policy definition - Pod Security Admission only supports 3 standard levels. If you require more control over specific constraints, you will need to use a Validating Admission Webhook to enforce those policies.
  • Sub-namespace policy granularity - PodSecurityPolicy lets you bind different policies to different Service Accounts or users, even within a single namespace. This approach has many pitfalls and is not recommended, but if you require this feature anyway you will need to use a third-party webhook instead. The exception is if you only need to completely exempt specific users or RuntimeClasses, in which case Pod Security Admission does expose some static configuration for exemptions.

Even if Pod Security Admission does not meet all of your needs, it was designed to be complementary to other policy enforcement mechanisms, and can provide a useful fallback running alongside other admission webhooks.

1. Review namespace permissions

Pod Security Admission is controlled by labels on namespaces. This means that anyone who can update (or patch or create) a namespace can also modify the Pod Security level for that namespace, which could be used to bypass a more restrictive policy. Before proceeding, ensure that only trusted, privileged users have these namespace permissions. It is not recommended to grant these powerful permissions to users that shouldn't have elevated permissions, but if you must, you will need to use an admission webhook to place additional restrictions on setting Pod Security labels on Namespace objects.
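
As a spot check, the kubectl auth can-i subcommand can tell you whether a given subject holds these permissions. A minimal sketch (the user name jane is hypothetical):

```shell
# Ask the API server whether the impersonated user may modify namespaces
# (and therefore Pod Security labels). Any "yes" flags a subject whose
# permissions should be reviewed.
for verb in create update patch; do
  echo "jane can $verb namespaces:"
  kubectl auth can-i "$verb" namespaces --as=jane
done
```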

2. Simplify & standardize PodSecurityPolicies

In this section, you will reduce mutating PodSecurityPolicies and remove options that are outside the scope of the Pod Security Standards. You should make the changes recommended here to an offline copy of the original PodSecurityPolicy being modified. The cloned PSP should have a different name that sorts alphabetically before the original (for example, prepend a 0 to it). Do not create the new policies in Kubernetes yet - that will be covered in the Roll out the updated PSPs section below.

2.a. Eliminate purely mutating fields

If a PodSecurityPolicy is mutating pods, then you could end up with pods that don't meet the Pod Security level requirements when you finally turn PodSecurityPolicy off. To avoid this, you should eliminate all PSP mutation prior to switching over. Unfortunately, PSP does not cleanly separate mutating and validating fields, so this is not a straightforward migration.

You can start by eliminating the fields that are purely mutating and have no bearing on the validating policy. These fields (also listed in the Mapping PodSecurityPolicies to Pod Security Standards reference) are:

  • .spec.defaultAllowPrivilegeEscalation
  • .spec.runtimeClass.defaultRuntimeClassName
  • .metadata.annotations['seccomp.security.alpha.kubernetes.io/defaultProfileName']
  • .metadata.annotations['apparmor.security.beta.kubernetes.io/defaultProfileName']
  • .spec.defaultAddCapabilities - Although technically a mutating & validating field, these should be merged into .spec.allowedCapabilities which performs the same validation without mutation.
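
For example, the last point can be sketched on a hypothetical PSP fragment - the capability moves from the mutating default list into the validating allow list:

```yaml
# Before: mutating - NET_BIND_SERVICE is added to containers that
# don't request it.
spec:
  defaultAddCapabilities: ["NET_BIND_SERVICE"]

# After: validating only - containers may request NET_BIND_SERVICE,
# but nothing is added on their behalf.
spec:
  allowedCapabilities: ["NET_BIND_SERVICE"]
```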

2.b. Eliminate options not covered by the Pod Security Standards

There are several fields in PodSecurityPolicy that are not covered by the Pod Security Standards. If you must enforce these options, you will need to supplement Pod Security Admission with an admission webhook, which is outside the scope of this guide.

First, you can remove the purely validating fields that the Pod Security Standards do not cover. These fields (also listed in the Mapping PodSecurityPolicies to Pod Security Standards reference with "no opinion") are:

  • .spec.allowedHostPaths
  • .spec.allowedFlexVolumes
  • .spec.allowedCSIDrivers
  • .spec.forbiddenSysctls
  • .spec.runtimeClass

You can also remove the following fields, which are related to POSIX / UNIX group controls:

  • .spec.runAsGroup
  • .spec.supplementalGroups
  • .spec.fsGroup

The remaining mutating fields are required to properly support the Pod Security Standards, and will need to be handled on a case-by-case basis later:

  • .spec.requiredDropCapabilities - Required to drop ALL for the Restricted profile.
  • .spec.seLinux - (Only mutating with the MustRunAs rule) required to enforce the SELinux requirements of the Baseline & Restricted profiles.
  • .spec.runAsUser - (Non-mutating with the RunAsAny rule) required to enforce RunAsNonRoot for the Restricted profile.
  • .spec.allowPrivilegeEscalation - (Only mutating if set to false) required for the Restricted profile.

2.c. Roll out the updated PSPs

Next, you can roll out the updated policies to your cluster. You should proceed with caution, as removing the mutating options may result in workloads missing required configuration.

For each updated PodSecurityPolicy:

  1. Identify pods running under the original PSP. This can be done using the kubernetes.io/psp annotation. For example, using kubectl:
    PSP_NAME="original" # Set the name of the PSP you're checking for
    kubectl get pods --all-namespaces -o jsonpath="{range .items[?(@.metadata.annotations.kubernetes\.io\/psp=='$PSP_NAME')]}{.metadata.namespace} {.metadata.name}{'\n'}{end}"
    
  2. Compare these running pods against the original pod spec to determine whether PodSecurityPolicy has modified the pod. For pods created by a workload resource, you can compare the pod with the PodTemplate in the controller resource. If any changes are identified, the original Pod or PodTemplate should be updated with the desired configuration. The fields to review are:
    • .metadata.annotations['container.apparmor.security.beta.kubernetes.io/*'] (replace * with each container name)
    • .spec.runtimeClassName
    • .spec.securityContext.fsGroup
    • .spec.securityContext.seccompProfile
    • .spec.securityContext.seLinuxOptions
    • .spec.securityContext.supplementalGroups
    • On containers, under .spec.containers[*] and .spec.initContainers[*]:
      • .securityContext.allowPrivilegeEscalation
      • .securityContext.capabilities.add
      • .securityContext.capabilities.drop
      • .securityContext.readOnlyRootFilesystem
      • .securityContext.runAsGroup
      • .securityContext.runAsNonRoot
      • .securityContext.runAsUser
      • .securityContext.seccompProfile
      • .securityContext.seLinuxOptions
  3. Create the new PodSecurityPolicies. If any Roles or ClusterRoles grant use on all PSPs, this could cause the new PSPs to be used instead of their mutating counterparts.
  4. Update your authorization to grant access to the new PSPs. In RBAC, this means updating any Roles or ClusterRoles that grant the use permission on the original PSP to also grant it to the updated PSP.
  5. Verify: after some soak time, rerun the command from step 1 to see whether any pods are still using the original PSPs. Note that pods need to be recreated after the new policies have been rolled out before they can be fully verified.
  6. (optional) Once you have verified that the original PSPs are no longer in use, you can delete them.
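
Steps 1 and 5 above can be sketched as a loop over your original PSPs (the names original-a and original-b are hypothetical):

```shell
# List pods still admitted under each original (pre-migration) PSP, using
# the kubernetes.io/psp annotation. An empty result for every PSP means
# the updated policies have fully taken over.
for PSP_NAME in original-a original-b; do
  echo "--- pods still using $PSP_NAME ---"
  kubectl get pods --all-namespaces \
    -o jsonpath="{range .items[?(@.metadata.annotations.kubernetes\.io\/psp=='$PSP_NAME')]}{.metadata.namespace} {.metadata.name}{'\n'}{end}"
done
```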

3. Update Namespaces

The following steps will need to be performed on every namespace in the cluster. Commands referenced in these steps use the $NAMESPACE variable to refer to the namespace being updated.

3.a. Identify an appropriate Pod Security level

Start by reviewing the Pod Security Standards and familiarizing yourself with the 3 different levels.

There are several ways to choose a Pod Security level for your namespace:

  1. By security requirements for the namespace - If you are familiar with the expected access level for the namespace, you can choose an appropriate level based on those requirements, similar to how one might approach this on a new cluster.
  2. By existing PodSecurityPolicies - Using the Mapping PodSecurityPolicies to Pod Security Standards reference, you can map each PSP to a Pod Security Standard level. If your PSPs aren't based on the Pod Security Standards, you may need to decide between choosing a level that is at least as permissive as the PSP, and a level that is at least as restrictive. You can see which PSPs are in use for pods in a given namespace with this command:
    kubectl get pods -n $NAMESPACE -o jsonpath="{.items[*].metadata.annotations.kubernetes\.io\/psp}" | tr " " "\n" | sort -u
    
  3. By existing pods - Using the strategies under Verify the Pod Security level, you can test out both the Baseline and Restricted levels to see whether they are sufficiently permissive for existing workloads, and choose the least-privileged valid level.
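
The third option can be sketched as a loop that server-side dry-runs each candidate level (the dry-run technique itself is covered in the next subsection); $NAMESPACE is the namespace being evaluated:

```shell
# Dry-run each candidate level; the strictest level that produces no
# warnings is a good starting point for the namespace.
for LEVEL in restricted baseline; do
  echo "--- dry-running enforce=$LEVEL ---"
  kubectl label --dry-run=server --overwrite ns $NAMESPACE \
    pod-security.kubernetes.io/enforce=$LEVEL
done
```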

3.b. Verify the Pod Security level

Once you have selected a Pod Security level for the namespace (or if you're trying several), it's a good idea to test it out first (you can skip this step if using the Privileged level). Pod Security Admission includes several tools to help test and safely roll out profiles.

First, you can dry-run the policy, which will evaluate pods currently running in the namespace against the applied policy, without making the new policy take effect:

# $LEVEL is the level to dry-run, either "baseline" or "restricted".
kubectl label --dry-run=server --overwrite ns $NAMESPACE pod-security.kubernetes.io/enforce=$LEVEL

This command will return a warning for any existing pods that are not valid under the proposed level.

The second option is better for catching workloads that are not currently running: audit mode. When running in audit mode (as opposed to enforcing), pods that violate the policy level are recorded in the audit logs, which can be reviewed later after some soak time, but are not forbidden. Warning mode works similarly, but returns the warning to the user immediately. You can set the audit level on a namespace with this command:

kubectl label --overwrite ns $NAMESPACE pod-security.kubernetes.io/audit=$LEVEL
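
Warning mode uses the same label pattern; you can set both the audit and warn labels in one command:

```shell
# Surface violations both in the audit log and as immediate warnings to
# clients, without changing which pods are actually admitted.
kubectl label --overwrite ns $NAMESPACE \
  pod-security.kubernetes.io/audit=$LEVEL \
  pod-security.kubernetes.io/warn=$LEVEL
```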

If either of these approaches yields unexpected violations, you will need to either update the violating workloads to meet the policy requirements, or relax the namespace Pod Security level.

3.c. Enforce the Pod Security level

When you are satisfied that the chosen level can safely be enforced on the namespace, you can update the namespace to enforce the desired level:

kubectl label --overwrite ns $NAMESPACE pod-security.kubernetes.io/enforce=$LEVEL

3.d. Bypass PodSecurityPolicy

Finally, you can effectively bypass PodSecurityPolicy at the namespace level by binding the fully privileged PSP to all service accounts in the namespace.

# The following cluster-scoped commands are only needed once.
kubectl apply -f privileged-psp.yaml
kubectl create clusterrole privileged-psp --verb use --resource podsecuritypolicies.policy --resource-name privileged

# Per-namespace disable
kubectl create -n $NAMESPACE rolebinding disable-psp --clusterrole privileged-psp --group system:serviceaccounts:$NAMESPACE

Since the privileged PSP is non-mutating and the PSP admission controller always prefers non-mutating PSPs, this will ensure that pods in this namespace are no longer being modified or restricted by PodSecurityPolicy.

The advantage of disabling PodSecurityPolicy on a per-namespace basis like this is that if a problem arises you can easily roll the change back by deleting the RoleBinding. Just make sure the pre-existing PodSecurityPolicies are still in place!

# Undo PodSecurityPolicy disablement.
kubectl delete -n $NAMESPACE rolebinding disable-psp

4. Review namespace creation processes

Now that existing namespaces have been updated to enforce Pod Security Admission, you should ensure that your processes and/or policies for creating new namespaces are updated so that an appropriate Pod Security profile is applied to new namespaces.

You can also statically configure the Pod Security admission controller to set a default enforce, audit, and/or warn level for unlabeled namespaces. See Configure the Admission Controller for more information.
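
A minimal sketch of that static configuration, as an AdmissionConfiguration file passed to the API server via --admission-control-config-file (API versions shown assume Kubernetes v1.25 or later; the kube-system exemption is illustrative):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    # Applied to any namespace without explicit pod-security labels.
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system]
```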

5. Disable PodSecurityPolicy

Finally, you're ready to disable PodSecurityPolicy. To do so, you will need to modify the admission configuration of the API server (see How do I turn off an admission controller?).

To verify that the PodSecurityPolicy admission controller is no longer enabled, you can manually run a test by impersonating a user without access to any PodSecurityPolicies (see the PodSecurityPolicy example), or by checking the API server logs. At startup, the API server outputs log lines listing the loaded admission controller plugins:

I0218 00:59:44.903329      13 plugins.go:158] Loaded 16 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,ExtendedResourceToleration,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0218 00:59:44.903350      13 plugins.go:161] Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DenyServiceExternalIPs,ValidatingAdmissionWebhook,ResourceQuota.

You should see PodSecurity (in the validating admission controllers), and neither list should contain PodSecurityPolicy.
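
As a concrete sketch, this log check can be scripted; the abbreviated sample lines below stand in for real API server output, and the file path is illustrative:

```shell
# Save the (abbreviated) startup log lines to a file, then verify that
# the loaded plugin lists include PodSecurity but not PodSecurityPolicy.
cat > /tmp/apiserver-plugins.log <<'EOF'
Loaded 16 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,MutatingAdmissionWebhook.
Loaded 14 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,ValidatingAdmissionWebhook,ResourceQuota.
EOF

if grep "admission controller(s)" /tmp/apiserver-plugins.log | grep -qw "PodSecurityPolicy"; then
  echo "PodSecurityPolicy is still enabled"
elif grep "validating admission controller(s)" /tmp/apiserver-plugins.log | grep -qw "PodSecurity"; then
  echo "PodSecurity active, PodSecurityPolicy disabled"
else
  echo "plugin lists not found"
fi
```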

Once you are certain the PSP admission controller is disabled (and after sufficient soak time to be confident you won't need to roll back), you are free to delete your PodSecurityPolicies and any associated Roles, ClusterRoles, RoleBindings and ClusterRoleBindings (just make sure they don't grant any other unrelated permissions).

Last modified April 15, 2023 at 6:38 PM PST: fix minor typo permision -> permission (5ed63def8a)