
Using a NAT Gateway with Kubernetes Engine


Lab · 1 hour · 5 Credits · Introductory

GSP162

Google Cloud Self-Paced Labs

Overview

This lab uses the Modular NAT Gateway on Compute Engine for Terraform to automate creation of a NAT gateway managed instance group. You direct traffic from the instances by using tag-based routing, so that only instances with matching tags use the NAT gateway route.

Under normal circumstances, Kubernetes Engine nodes route all egress traffic through the internet gateway associated with their node cluster. The internet gateway connection, in turn, is defined by the Compute Engine network associated with the node cluster. Each node in the cluster has an ephemeral external IP address. When nodes are created and destroyed during autoscaling, new node IP addresses are allocated automatically.

The default gateway behavior works well under normal circumstances. However, you might want to modify how ephemeral external IP addresses are allocated in order to:

  • Provide a third-party service with a consistent external IP address.
  • Monitor and filter egress traffic out of the Kubernetes Engine cluster.

In this lab, you will learn how to:

  • Create a NAT gateway instance and configure its routing details for an existing Kubernetes Engine cluster.
  • Create a custom routing rule for the NAT gateway instance.

The following diagram shows an overview of the architecture:

[Architecture diagram]

Objectives

  • Use Terraform to create a NAT gateway instance.
  • Create an egress network routing rule for an existing Kubernetes Engine cluster.
  • Verify that outbound traffic from a pod is routed through the NAT gateway.

Setup

Before you click the Start Lab button

Note: Read these instructions.

Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This Qwiklabs hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

What you need

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
  • Time to complete the lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab.
Note: If you are using a Pixelbook, open an Incognito window to run this lab.

How to start your lab and sign in to the Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is a panel populated with the temporary credentials that you must use for this lab.

    Credentials panel

  2. Copy the username, and then click Open Google Console. The lab spins up resources, and then opens another tab that shows the Choose an account page.

    Note: Open the tabs in separate windows, side-by-side.
  3. On the Choose an account page, click Use Another Account. The Sign in page opens.

    Choose an account dialog box with Use Another Account option highlighted

  4. Paste the username that you copied from the Connection Details panel. Then copy and paste the password.

Note: You must use the credentials from the Connection Details panel. Do not use your Google Cloud Skills Boost credentials. If you have your own Google Cloud account, do not use it for this lab (to avoid incurring charges).
  5. Click through the subsequent pages:
  • Accept the terms and conditions.
  • Do not add recovery options or two-factor authentication (because this is a temporary account).
  • Do not sign up for free trials.

After a few moments, the Cloud console opens in this tab.

Note: You can view the menu with a list of Google Cloud products and services by clicking the Navigation menu at the top-left.

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In Cloud console, on the top right toolbar, click the Open Cloud Shell button.

    Highlighted Cloud Shell icon

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:

Project ID highlighted in the Cloud Shell Terminal

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:
gcloud auth list

Output:

Credentialed accounts: - @.com (active)

Example output:

Credentialed accounts: - google1623327_student@qwiklabs.net
  • You can list the project ID with this command:
gcloud config list project

Output:

[core] project =

Example output:

[core] project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Creating the NAT gateway with Terraform

Compute Engine routes have a default priority of 1000, where lower numbers indicate higher priority. The Terraform module creates a Compute Engine route with priority 800 that redirects all outbound traffic from the Kubernetes Engine nodes to the NAT gateway instance instead of the default internet gateway. The example code in the module also creates a static route with priority 700 that directs traffic from the Kubernetes Engine nodes to the Kubernetes Engine master, which preserves normal cluster operation by splitting the egress traffic.

After the NAT gateway instance is up and running, the startup script configures IP forwarding and adds the firewall rules needed to perform address translation.
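To make the tag-based routing concrete, here is a sketch of what such a route could look like in Terraform. This is illustrative only: the resource name, tag, and instance name below are hypothetical assumptions, not resources created by this lab or the module.

```
# Illustrative sketch only: a tag-based egress route like the one the
# NAT gateway module creates. All names and tags here are hypothetical.
resource "google_compute_route" "through_nat" {
  name                   = "nat-egress-route"      # hypothetical route name
  project                = var.project_id
  network                = var.network
  dest_range             = "0.0.0.0/0"             # all egress traffic
  priority               = 800                     # beats the default route (priority 1000)
  tags                   = ["use-nat"]             # only nodes carrying this tag use the route
  next_hop_instance      = "nat-gateway-instance"  # hypothetical NAT gateway VM
  next_hop_instance_zone = "us-east4-c"
}
```

Because untagged instances never match this route, they continue to egress through the default internet gateway.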

Setting up the GKE Example

  1. In Cloud Shell, create a variables.tf configuration file.
touch variables.tf

You will use this file to define variables that are reused throughout the lab.

  2. On the Cloud Shell toolbar, click Open Editor. To switch between Cloud Shell and the code editor, click Open Editor or Open Terminal as required.

  3. Paste the following into your variables.tf file:

variable "project_id" {
  description = "The project ID to deploy to"
  default     = "{{{ project_0.project_id | PROJECT_ID }}}"
}

variable "network" {
  description = "The VPC network self link"
  default     = "default"
}

variable "subnet" {
  description = "The subnet self link"
  default     = "default"
}

  4. In Cloud Shell, click Open Terminal and create a main.tf file.
touch main.tf

This file will contain the configuration for your resources.

Create a VPC Network and Subnet

  1. In Cloud Shell, click Open Editor and paste the following code into your main.tf file:
module "test-vpc-module" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 3.3.0"
  project_id   = var.project_id # Replace this with your project ID in quotes
  network_name = "custom-network1"
  mtu          = 1460

  subnets = [
    {
      subnet_name   = "subnet-us-east-192"
      subnet_ip     = "192.168.1.0/24"
      subnet_region = "us-east4"
    }
  ]
}

  2. Still in your Cloud Shell Editor, click Terminal > New Terminal.

  3. In your terminal, ensure your account is activated:

gcloud auth login

  4. Open the link in a new tab and follow the instructions, signing in with your student account. You will receive a verification code to use for authentication.

  5. In your terminal, initialize Terraform:

terraform init

  6. Create your network and subnet resources:
terraform apply

Authorize Cloud Shell if prompted.

If running Cloud Shell Editor in a separate tab, you may have to switch to your first, Cloud Console tab to view the authorization prompt and then re-run the apply command if it failed or timed-out.

Enter yes when prompted to confirm the Terraform deployment.

Click Check my progress to verify the objective. Create a VPC Network and Subnet

Create a Private Cluster

  1. Paste the following code at the bottom of your main.tf file:
resource "google_container_cluster" "primary" {
  project            = var.project_id
  name               = "nat-test-cluster"
  location           = "us-east4-c"
  initial_node_count = 3
  network            = var.network # Replace with a reference or self link to your network, in quotes
  subnetwork         = var.subnet  # Replace with a reference or self link to your subnet, in quotes

  private_cluster_config {
    master_ipv4_cidr_block  = "172.16.0.0/28"
    enable_private_endpoint = true
    enable_private_nodes    = true
  }

  ip_allocation_policy {
  }

  master_authorized_networks_config {
  }
}

  2. In your terminal, apply your Terraform config to create the cluster:
terraform apply

Enter yes when prompted.

Click Check my progress to verify the objective. Create a private cluster
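A side note on the master_ipv4_cidr_block value: GKE requires a /28 range for the control plane of a private cluster, which is the smallest block it accepts here. A quick sanity check of the range size:

```shell
# A /28 CIDR leaves 32 - 28 = 4 host bits, so the master range
# 172.16.0.0/28 spans 2^4 = 16 addresses.
echo $(( 2 ** (32 - 28) ))
```

This range must not overlap any subnet in the cluster's VPC, which is why it sits outside the 192.168.1.0/24 subnet created earlier.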

Create a Firewall Rule that Allows SSH Connections

  1. Paste the following at the end of your main.tf file:
resource "google_compute_firewall" "rules" {
  project = var.project_id
  name    = "allow-ssh"
  network = var.network

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["35.235.240.0/20"]
}

  2. In your terminal, apply your Terraform config to create the firewall rule:
terraform apply

Enter yes when prompted.

Click Check my progress to verify the objective. Create a firewall rule that allows SSH connections
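The 35.235.240.0/20 source range in the rule above is the address block Google uses for IAP TCP forwarding, so SSH is admitted only from the IAP service rather than from the open internet. As a quick illustration of how large that block is:

```shell
# A /20 CIDR leaves 32 - 20 = 12 host bits, so the IAP forwarding
# range 35.235.240.0/20 covers 2^12 = 4096 addresses.
echo $(( 2 ** (32 - 20) ))
```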

Create IAP SSH permissions for one of your nodes

  1. In the Cloud Console, navigate to IAM & Admin > Identity-Aware Proxy from the navigation bar.

  2. Click Enable API.

  3. Click Go to Identity-Aware Proxy.

  4. Select the SSH and TCP Resources tab.

  5. Select the checkbox next to the first node in the list under All Tunnel Resources > us-east4-c. Its name will be similar to gke-nat-test-cluster-default-pool-b50db58d-075t.

  6. In the right pane, click Add principal.

  7. In the New principals field, enter the student email address that you used to sign in to the lab.

  8. To grant yourself access to the resources through Cloud IAP's TCP forwarding feature, in the Role drop-down list, select Cloud IAP > IAP-secured Tunnel User.

  9. Click Save.

Click Check my progress to verify the objective. Create IAP SSH permissions for your node

Log in to the node and confirm that it cannot reach the internet

  1. Copy the name of the node you gave SSH permissions to from the IAP panel.

  2. Back in your terminal, assign the name to a NODE_NAME variable:

export NODE_NAME=PASTE_YOUR_NODE_NAME_HERE

You will reference this node in subsequent commands.

  3. Set the Project ID value for your terminal:
gcloud config set project {{{ project_0.project_id | PROJECT_ID }}}
  4. In your terminal, run this command to connect to the node:

gcloud compute ssh $NODE_NAME \
    --zone us-east4-c \
    --tunnel-through-iap

When asked to continue, press y.

Then, press Enter twice to use an empty passphrase.

If you get a 'not authorized' error at this point, wait a minute and try to connect again. It can take 1-2 minutes for the new permission settings to propagate.
  5. From the node prompt, attempt to connect to the internet:
curl example.com

You should get no result. To end the command, you might have to press Ctrl+C.

Create a NAT configuration using Cloud Router

You must create the Cloud Router in the same region as the instances that use Cloud NAT. The Cloud Router is only used to place NAT information onto the VMs; it is not used as part of the actual NAT gateway.

This configuration allows all instances in the region to use Cloud NAT for all primary and alias IP ranges. It also automatically allocates the external IP addresses for the NAT gateway. For more options, see the gcloud command-line interface documentation.
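For reference, the same configuration could be expressed with the gcloud CLI instead of the Terraform module used in the steps below. This is a sketch only, not part of the lab; the resource names mirror those the Terraform steps create, so do not run both.

```
# Reference sketch: gcloud equivalent of the Cloud Router + Cloud NAT
# configuration that the Terraform steps below create.
gcloud compute routers create nat-router \
    --network default \
    --region us-east4

gcloud compute routers nats create nat-config \
    --router nat-router \
    --region us-east4 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```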

  1. In your terminal, run the exit command to leave your SSH session:
exit
  2. Add the following at the end of your main.tf file to create a Cloud Router:

resource "google_compute_router" "router" {
  project = var.project_id
  name    = "nat-router"
  network = var.network
  region  = "us-east4"
}

  3. Add the following at the end of your main.tf file to create a NAT configuration:

module "cloud-nat" {
  source  = "terraform-google-modules/cloud-nat/google"
  version = "~> 2.0.0"

  project_id = var.project_id
  region     = "us-east4"
  router     = google_compute_router.router.name
  name       = "nat-config"

  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

  4. In your terminal, run terraform init to install the cloud-nat module:
terraform init
  5. Run terraform apply to create your newly added resources:
terraform apply

Enter yes when prompted.

Click Check my progress to verify the objective. Create a NAT configuration using Cloud Router

Attempt to connect to the internet again

It might take up to three minutes for the NAT configuration to propagate, so wait at least a minute before trying to access the internet again.

  1. Reconnect to your node:
gcloud compute ssh $NODE_NAME \
    --zone us-east4-c \
    --tunnel-through-iap

  2. Rerun the curl command:
curl example.com

It should now successfully display the output of the example domain page!

Congratulations


Finish Your Quest

This self-paced lab is part of the Qwiklabs Kubernetes Solutions and Managing Cloud Infrastructure with Terraform quests. A Quest is a series of related labs that form a learning path. Completing this Quest earns you the badge above, to recognize your achievement. You can make your badge (or badges) public and link to them in your online resume or social media account. Enroll in a Quest and get immediate completion credit if you've taken this lab. See other available Qwiklabs Quests.

Take Your Next Lab

Continue your Quest with Custom Providers with Terraform, or check out these suggestions:

Next Steps

Read more about Terraform for Google in the documentation.

Manual Last Updated November 24, 2023
Lab Last Tested November 24, 2023

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.