
Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin
In this lab, you explore the basics of using deployment manifests. Manifests are files that contain the configuration required for a deployment and that can be reused across different Pods. Manifests are easy to change.
In this lab, you learn how to perform the following tasks:
- Create and deploy a manifest for an nginx deployment
- Scale Pods up and down
- Update the version of nginx in the deployment and roll the change back
- Deploy a manifest file for a LoadBalancer service type
- Create a canary deployment
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning.
When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges.
Accept the terms and skip the recovery resource page.
After you complete the initial sign-in steps, the project dashboard appears.
Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5-GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In the Cloud Console, on the top-right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
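For example, you can confirm the active account and the project that Cloud Shell is configured to use with the following commands (a minimal sketch; the lab's own instructions may word these steps differently):

```
# List the active (authenticated) account:
gcloud auth list

# List the project ID currently configured in Cloud Shell:
gcloud config list project
```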
In this task, you create a deployment manifest for a Pod inside the cluster.
You will create a deployment using a sample deployment manifest called nginx-deployment.yaml that has been provided for you. This deployment is configured to run three Pod replicas with a single nginx container in each Pod listening on TCP port 80.
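As a rough sketch only, assuming the deployment and container are named nginx-deployment and nginx, that the Pods carry the app: nginx label used by the service later in this lab, and that the image is nginx:1.7.9 (the version the rollback step later returns to), the provided manifest might resemble the following, deployed and verified with kubectl:

```
# Hypothetical reconstruction of nginx-deployment.yaml based on the description
# above: three replicas, one nginx container per Pod, listening on TCP port 80.
# The names, label, and image tag are assumptions, not the lab's literal file.
cat <<EOF > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

# Deploy the manifest to the cluster and confirm the deployment exists:
kubectl apply -f ./nginx-deployment.yaml
kubectl get deployments
```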
The output should show that the deployment was created. The final output should show the deployment with all three replicas available.
Click Check my progress to verify the objective.
Sometimes, you want to shut down a Pod instance. Other times, you want ten Pods running. In Kubernetes, you can scale a deployment to the desired number of Pod replicas. To shut the Pods down, you scale to zero.
In this task, you scale Pods up and down in the Google Cloud Console and Cloud Shell.
This action scales down your deployment. You should see the Pod status being updated under Managed Pods. You might have to click Refresh.
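From Cloud Shell, a sketch of the equivalent scaling commands might look like this (the deployment name nginx-deployment is assumed from the earlier sketch; use the replica count the lab asks for):

```
# Check the current replica count:
kubectl get deployments

# Scale the deployment to the desired number of Pod replicas (3 is only an
# example), then confirm the new count:
kubectl scale --replicas=3 deployment nginx-deployment
kubectl get deployments
```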
A deployment's rollout is triggered if and only if the deployment's Pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the deployment, do not trigger a rollout.
In this task, you trigger a deployment rollout, and then you trigger a deployment rollback.
This updates the container image in your Deployment to nginx v1.9.1.
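The lab provides the exact command to run; a sketch of an equivalent kubectl command, assuming the deployment and container names from the earlier sketch, is:

```
# Update the container image to trigger a rollout:
kubectl set image deployment nginx-deployment nginx=nginx:1.9.1

# Watch the rollout complete, then verify the deployment:
kubectl rollout status deployment nginx-deployment
kubectl get deployments
```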
In each case, the output should look like the example.
Click Check my progress to verify the objective.
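To view the rollout history of the deployment, you can use a command like the following (a sketch; the deployment name is assumed from earlier):

```
# Show the revision history recorded for the deployment:
kubectl rollout history deployment nginx-deployment
```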
The output should look like the example. Your output might not be an exact match.
To roll back an object's rollout, you can use the kubectl rollout undo command.
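For example (a sketch, assuming the same deployment name):

```
# Roll the deployment back to the previous revision:
kubectl rollout undo deployments nginx-deployment
```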
The output should look like the example. Your output might not be an exact match.
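You can then view the details of the latest revision to confirm the rollback (again a sketch; the revision number reported by your history command may differ from the value used here):

```
# Inspect the newest revision; its Pod template should show the original image:
kubectl rollout history deployment nginx-deployment --revision=3
```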
The output should look like the example. Your output might not be an exact match, but it will show that the current revision has rolled back to nginx:1.7.9.
In this task, you create and verify a service that controls inbound traffic to an application. Services can be configured as ClusterIP, NodePort or LoadBalancer types. In this lab, you configure a LoadBalancer.
A manifest file called service-nginx.yaml that deploys a LoadBalancer service type has been provided for you. This service is configured to distribute inbound traffic on TCP port 60000 to port 80 on any containers that have the label app: nginx.

This manifest defines a service and applies it to Pods that correspond to the selector. In this case, the manifest is applied to the nginx container that you deployed in task 1. This service also applies to any other Pods with the app: nginx label, including any that are created after the service.
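A hypothetical reconstruction that matches this description (a LoadBalancer service named nginx, selector app: nginx, TCP port 60000 forwarded to container port 80), together with commands to deploy and inspect it, might look like this:

```
# Hypothetical reconstruction of service-nginx.yaml based on the description above.
cat <<EOF > service-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
EOF

# Deploy the service, then check it until an external IP address is assigned:
kubectl apply -f ./service-nginx.yaml
kubectl get service nginx
```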
The output should look like the example.
Open http://[EXTERNAL_IP]:60000/ in a new browser tab to see the server being served through network load balancing. If the external IP address has not yet been assigned, re-run the kubectl get services nginx command every few seconds until the field is populated.
Click Check my progress to verify the objective.
A canary deployment is a separate deployment used to test a new version of your application. A single service targets both the canary and the normal deployments, and it can direct a subset of users to the canary version to mitigate the risk of new releases.
The manifest file nginx-canary.yaml that is provided for you deploys a single Pod running a newer version of nginx than your main deployment. In this task, you create a canary deployment using this new deployment file:
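A sketch of what this manifest might contain, assuming nginx:1.9.1 as the newer image (the version used in the rollout task) and an additional track: canary label to distinguish the canary Pods:

```
# Hypothetical reconstruction of nginx-canary.yaml: one replica, a newer nginx
# image, and the same app: nginx label so the existing Service also selects it.
cat <<EOF > nginx-canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
EOF

# Create the canary deployment and verify that both deployments exist:
kubectl apply -f ./nginx-canary.yaml
kubectl get deployments
```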
The manifest for the nginx Service you deployed in the previous task uses a label selector to target the Pods with the app: nginx label. Both the normal deployment and this new canary deployment have the app: nginx label. Inbound connections will be distributed by the service to both the normal and canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment.
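To see that the Service now fronts Pods from both deployments, you can list the Pods that carry the shared label (a sketch):

```
# List all Pods with the app: nginx label; the output should include the Pods
# from the main deployment plus the single canary Pod.
kubectl get pods -l app=nginx
```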
Open the service's external IP on port 60000 in a browser tab; you should see the Welcome to nginx page. Refresh the page several times. You continue to see the Welcome to nginx page, showing that the Service is automatically balancing traffic to the canary deployment as well.

Click Check my progress to verify the objective.
The service configuration used in the lab does not ensure that all requests from a single client will always connect to the same Pod. Each request is treated separately and can connect to either the normal nginx deployment or to the nginx-canary deployment.
This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this, you can set the sessionAffinity field to ClientIP in the specification of the service if you need a client's first request to determine which Pod will be used for all subsequent connections.
For example:
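A sketch of such a service specification, reusing the hypothetical service-nginx.yaml values from the earlier task with sessionAffinity added, might look like this:

```
# Hypothetical example: the same LoadBalancer service with client-IP session
# affinity, so a given client keeps connecting to the Pod that served its
# first request.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
EOF
```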
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
- 1 star = Very dissatisfied
- 2 stars = Dissatisfied
- 3 stars = Neutral
- 4 stars = Satisfied
- 5 stars = Very satisfied
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.