AHYBRID072 Enforcing Policy with Anthos Config Management Policy Controller

Lab · 1 hour · 5 Credits · Intermediate
Note: This lab may incorporate AI tools to support your learning.

Overview

In this lab, you learn how to use Policy Controller, a Kubernetes dynamic admission controller that checks, audits, and enforces your clusters' compliance with policies related to security, regulations, or business rules.

Policy Controller enforces your clusters' compliance with policies called constraints. In this lab, you use the constraint library, a set of templates that can be easily configured to enforce and audit security and compliance policies. For example, you can require that namespaces have at least one label so that you can use GKE usage metering to allocate costs to different teams. Or you might want to enforce the same requirements as PodSecurityPolicies, but with the ability to audit them before enforcement disrupts a deployment.

In addition to working with the provided constraint templates, you learn how to create your own. With constraint templates, a centralized team can define how a constraint works and delegate defining the specifics of the constraint to an individual or group with subject-matter expertise.

Policy Controller is built from the Gatekeeper open source project and is integrated into Anthos Config Management v1.1+.

Objectives

In this lab, you learn how to perform the following tasks:

  • Install Anthos Policy Controller
  • Create and enforce constraints using the Template Library provided by Google
  • Audit constraint violations
  • Create your own constraint templates to build the custom compliance policies that your company needs

Setup and requirements

In this task, you use Qwiklabs and perform initialization steps for your lab.

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

After you complete the initial sign-in steps, the project dashboard appears.

  1. Click Select a project, highlight your GCP Project ID, and click OPEN to select your project.

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In the Cloud console, on the top right toolbar, click the Open Cloud Shell button.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:

    gcloud auth list

    Output:

    Credentialed accounts:
     - @.com (active)

    Example output:

    Credentialed accounts:
     - google1623327_student@qwiklabs.net

  • You can list the project ID with this command:

    gcloud config list project

    Output:

    [core]
    project =

    Example output:

    [core]
    project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Task 1. Complete and verify the lab setup

Note: The lab environment has already been partially configured: A GKE cluster named gke has been created and registered.
  1. Set the Zone environment variable:

    ZONE={{{ project_0.default_zone| "Zone added at lab start" }}}
  2. Set up the Cloud Shell environment for command-line access to your cluster:

    export CLUSTER_NAME=gke
    export PROJECT_ID=$(gcloud config get-value project)
    export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} \
        --format="value(projectNumber)")
    gcloud container clusters get-credentials gke --zone $ZONE --project $PROJECT_ID
  3. Enable the Config Management fleet feature:

    gcloud beta container fleet config-management enable

Task 2. Install the Anthos Policy Controller

  1. Create the Anthos Policy Controller config file:

    cat <<EOF > config-management.yaml
    applySpecVersion: 1
    spec:
      policyController:
        # Set to true to install and enable Policy Controller
        enabled: true
        # Uncomment to prevent the template library from being installed
        # templateLibraryInstalled: false
        # Uncomment to enable support for referential constraints
        # referentialRulesEnabled: true
        # Uncomment to disable audit, adjust value to set audit interval
        # auditIntervalSeconds: 0
        # Uncomment to log all denies and dryrun failures
        # logDeniesEnabled: true
        # Uncomment to enable mutation
        # mutationEnabled: true
        # Uncomment to exempt namespaces
        # exemptableNamespaces: ["namespace-name"]
        # Uncomment to change the monitoring backends
        # monitoring:
        #   backends:
        #   - cloudmonitoring
        #   - prometheus
      # ...other fields...
    EOF
  2. Install the Anthos Policy Controller:

    gcloud beta container fleet config-management apply \
        --membership=${CLUSTER_NAME}-connect \
        --config=config-management.yaml \
        --project=$PROJECT_ID
  3. Poll the Config Management service to see if the Anthos Policy Controller has been installed:

    watch gcloud beta container fleet config-management status \
        --project=$PROJECT_ID

    When the output updates to match what is shown below, type CTRL+C to exit the polling and continue. It may take 4-5 minutes for the controller to be installed. Sometimes the Status and/or the Hierarchy_Controller stays in PENDING; ignore that and continue with the next steps.

    Output:

    Name: global/gke-connect
    Status: PENDING
    Last_Synced_Token: NA
    Sync_Branch: NA
    Last_Synced_Time:
    Policy_Controller: INSTALLED
    Hierarchy_Controller: PENDING
  4. Verify the constraint template library installation:

    kubectl get constrainttemplates

    Desired output:

    NAME                                      AGE
    k8sallowedrepos                           84s
    k8scontainerlimits                        84s
    k8spspallowprivilegeescalationcontainer   84s
    ...[OUTPUT TRUNCATED]...
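
    To see which parameters a library template accepts before you use it, you can inspect the template object with standard kubectl. For example, this optional command (not a required lab step) prints the k8srequiredlabels template that you instantiate in the next task:

    kubectl get constrainttemplate k8srequiredlabels -o yaml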

Task 3. Create a constraint from the templates in the library

Let's try one of the constraints from the library.

  1. Specify that new namespaces must have the label geo before they can be created:

    cat <<EOF > ns-must-have-geo.yaml
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sRequiredLabels
    metadata:
      name: ns-must-have-geo
    spec:
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Namespace"]
      parameters:
        labels:
          - key: "geo"
    EOF
  2. Apply the constraint using kubectl:

    kubectl apply -f ns-must-have-geo.yaml
  3. Try creating a namespace without the geo label, and confirm that the request is denied:

    kubectl create namespace test

    You should see results that look like this:

    Error from server ([denied by ns-must-have-geo] you must provide labels: {"geo"}): admission webhook "validation.gatekeeper.sh" denied the request: [denied by ns-must-have-geo] you must provide labels: {"geo"}

    If instead you see an error that looks like this:

    Error from server (InternalError): Internal error occurred: failed calling webhook "check-ignore-label.gatekeeper.sh": Post https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/admitlabel?timeout=3s: no endpoints available for service "gatekeeper-webhook-service"

    wait for a minute or two and try again.
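
    You can also check whether the Policy Controller webhook pods are up yet; Policy Controller runs in the gatekeeper-system namespace, so this standard kubectl command shows their status:

    kubectl get pods -n gatekeeper-system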

  4. Try creating the same namespace with the geo label:

    cat <<EOF > test-namespace.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: test
      labels:
        geo: eu-west1
    EOF
    kubectl apply -f test-namespace.yaml
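
    Optionally, confirm that the namespace was created and carries its label (a quick sanity check, not a required lab step):

    kubectl get namespace test --show-labels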

Congratulations! You successfully configured a template from the constraint library, deployed it, and verified that it prevents users from creating noncompliant resources.

Task 4. Audit the constraint policies

Policy Controller constraint objects enable you to enforce policies for your Kubernetes clusters. To help test your policies, you can add an enforcement action to your constraints. You can then view violations in constraint objects and logs. There are two enforcement actions: deny and dryrun.

  • deny is the default enforcement action. It's automatically enabled, even if you don't add an enforcement action in your constraint. Use deny to prevent a given cluster operation from occurring when there's a violation.

  • dryrun lets you monitor violations of your rules without actively blocking requests. You can use it to test whether your constraints are working as intended before enabling active enforcement with the deny action, as you do in the steps below. Testing constraints this way can prevent disruptions caused by an incorrectly configured constraint.

  1. Audit the constraint violations using the following kubectl command:

    kubectl get K8sRequiredLabels ns-must-have-geo -o yaml

    Notice that the enforcementAction is deny, as you have not changed the default behavior. Notice as well that there are a total of 9 existing violations: these are namespaces that don't meet the policy but were created before the policy was applied.
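
    The relevant part of the status looks similar to the following excerpt (an illustrative sketch; the exact namespaces, listed entries, and timestamps depend on your cluster):

    status:
      auditTimestamp: "..."
      totalViolations: 9
      violations:
      - enforcementAction: deny
        kind: Namespace
        message: 'you must provide labels: {"geo"}'
        name: default
      ...[OUTPUT TRUNCATED]...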

  2. Open the ns-must-have-geo.yaml constraint file in the Cloud Shell editor, set enforcementAction to dryrun, and save the file. That way you can validate policy constraints before enforcing them. Make sure that the yaml file looks as follows:

    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sRequiredLabels
    metadata:
      name: ns-must-have-geo
    spec:
      enforcementAction: dryrun
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Namespace"]
      parameters:
        labels:
          - key: "geo"
  3. Run kubectl apply to deploy the changes into the cluster:

    kubectl apply -f ns-must-have-geo.yaml
  4. Try creating another namespace without the geo label and observe that the command completes this time:

    kubectl create namespace ns-with-violations
  5. Check the violations again, to see a new violation added of type dryrun:

    kubectl get K8sRequiredLabels ns-must-have-geo -o yaml
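
    To list only the violations instead of the whole object, you can use a standard JSONPath query (optional, not a required lab step):

    kubectl get K8sRequiredLabels ns-must-have-geo \
        -o jsonpath='{.status.violations}'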

Task 5. Writing a constraint template

In this task, you write a custom constraint template and use it to extend Policy Controller's capabilities. This is useful when you cannot find a pre-written constraint template that suits your needs.

Policy Controller policies are described by using the OPA Constraint Framework and are written in Rego. A policy can evaluate any field of a Kubernetes object.

Writing policies using Rego is a specialized skill. For this reason, a library of common constraint templates is installed by default. Most users can leverage these constraint templates when creating constraints. If you have specialized needs, you can create your own constraint templates.

Constraint templates let you separate a policy's logic from its specific requirements, and this enables reuse and delegation. You can create constraints by using constraint templates developed by third parties, such as open source projects, software vendors, or regulatory experts.

  1. Create the policy template that denies all resources whose name matches a value provided by the creator of the constraint:

    cat <<EOF > k8sdenyname-constraint-template.yaml
    apiVersion: templates.gatekeeper.sh/v1beta1
    kind: ConstraintTemplate
    metadata:
      name: k8sdenyname
    spec:
      crd:
        spec:
          names:
            kind: K8sDenyName
          validation:
            openAPIV3Schema:
              properties:
                invalidName:
                  type: string
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8sdenynames
            violation[{"msg": msg}] {
              input.review.object.metadata.name == input.parameters.invalidName
              msg := sprintf("The name %v is not allowed", [input.parameters.invalidName])
            }
    EOF
  2. Deploy the template in the kubernetes cluster:

    kubectl apply -f k8sdenyname-constraint-template.yaml

    Note: If you are using a structured repo with ACM Config Sync, Google recommends that you create your constraints in the cluster/ directory.

    Once the template is deployed, a team can reference it the same way it would reference a template from Google's library to create a constraint:

    cat <<EOF > k8sdenyname-constraint.yaml
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sDenyName
    metadata:
      name: no-policy-violation
    spec:
      parameters:
        invalidName: "policy-violation"
    EOF
  3. Deploy the constraint in the kubernetes cluster:

    kubectl apply -f k8sdenyname-constraint.yaml
  4. Try creating any resource with the name policy-violation to verify that the request is denied:

    kubectl create namespace policy-violation
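
    Based on the message defined in the template's Rego, you should see a denial similar to the following (the exact wrapping can vary by Policy Controller version):

    Error from server ([denied by no-policy-violation] The name policy-violation is not allowed): admission webhook "validation.gatekeeper.sh" denied the request: [denied by no-policy-violation] The name policy-violation is not allowed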

Congratulations! You successfully created and deployed your own template. Then you created a constraint and verified that it prevents users from creating noncompliant resources.

Review

In this lab, you reviewed ACM's Policy Controller and explored some of its useful features. You created a constraint from the provided templates and also created a template of your own. You verified that constraints can be used to enforce policies and to audit policy violations before enforcing them in a production environment.

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.

