Using Kubernetes Engine to Deploy Apps with Regional Persistent Disks

1 hour 5 Credits

GSP200

Overview

In this lab you will learn how to configure a highly available application by deploying WordPress using regional persistent disks on Kubernetes Engine. Regional persistent disks provide synchronous replication between two zones, which keeps your application up and running in case there is an outage or failure in a single zone. Deploying a Kubernetes Engine Cluster with regional persistent disks will make your application more stable, secure, and reliable.

The persistent volume connecting a master and node in each of two Kubernetes Engine zones, us-central1-b and us-central1-f
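For context, the same kind of disk can also be created outside of Kubernetes with gcloud. A minimal sketch, where the disk name repd-demo and the replica zones are illustrative only and not part of the lab steps:

gcloud compute disks create repd-demo \
  --region=us-central1 \
  --replica-zones=us-central1-b,us-central1-f \
  --size=200GB \
  --type=pd-standard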

What you'll do

  • Create a regional Kubernetes Engine cluster.
  • Create a Kubernetes StorageClass resource that is configured for replicated zones.
  • Deploy WordPress with a regional disk that uses the StorageClass.
  • Simulate a zone failure by deleting a node.
  • Verify that the WordPress app and data migrate successfully to another replicated zone.

Prerequisites

This is an advanced lab. Before taking it, you should be familiar with at least the basics of Kubernetes and WordPress. Here are some Qwiklabs that can get you up to speed:

Once you're prepared, scroll down to get your lab environment set up.

Setup

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents conflicts between your personal account and the Student account, which could cause extra charges to be billed to your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.

  4. Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.

    Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  5. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Note: You can view the menu with a list of Google Cloud Products and Services by clicking the Navigation menu at the top-left.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:

Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
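For example, you can confirm which gcloud version is installed in your Cloud Shell session:

gcloud version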

  1. (Optional) You can list the active account name with this command:

gcloud auth list

Output:

ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net

To set the active account, run:
$ gcloud config set account `ACCOUNT`
  2. (Optional) You can list the project ID with this command:

gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.

Task 1. Creating the Regional Kubernetes Engine Cluster

  1. Open a new Cloud Shell session.

You will first create a regional Kubernetes Engine cluster that spans three zones in the us-central1 region.

  2. Now create a standard Kubernetes Engine cluster (this will take a little while; ignore any warnings about node auto repairs):

export CLOUDSDK_CONTAINER_USE_V1_API_CLIENT=false
gcloud container clusters create repd \
  --machine-type=n1-standard-1 \
  --region=us-central1 \
  --num-nodes=1

Example output:

Creating cluster repd...done.
Created [https://container.googleapis.com/v1beta1/projects/qwiklabs-gcp-e8f5f22705c770ab/zones/us-central1/clusters/repd].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1/repd?project=qwiklabs-gcp-e8f5f22705c770ab
kubeconfig entry generated for repd.
NAME  LOCATION     MASTER_VERSION   MASTER_IP      MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
repd  us-central1  1.23.6-gke.1700  35.247.50.133  n1-standard-1  1.23.6-gke.1700  3          RUNNING

You just created a regional cluster (located in us-central1) with one node in each of three zones.

  3. Navigate to Compute Engine from the left-hand menu to view your instances:

The Details tabbed page displaying the Cluster basics information

The gcloud command has also automatically configured the kubectl command to connect to the cluster.
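Optionally, you can confirm that each node landed in a different zone. This check assumes your GKE version still applies the legacy zone label used later in this lab (failure-domain.beta.kubernetes.io/zone); newer versions may use topology.kubernetes.io/zone instead:

kubectl get nodes --label-columns failure-domain.beta.kubernetes.io/zone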

Click Check my progress to verify the objective: Creating the Regional Kubernetes Engine Cluster.

Task 2. Deploying the App with a Regional Disk

Now that you have your Kubernetes cluster running, you'll do the following three things:

  • Add the stable chart repository to Helm (a toolset for managing Kubernetes packages)
  • Create the Kubernetes StorageClass that is used by the regional persistent disk
  • Deploy WordPress

Add the stable chart repo to Helm

The chart package, which is installed with Helm, contains everything you need to run WordPress.

Helm is pre-installed in Cloud Shell. You only need to add the stable chart repo, which holds the WordPress chart.

  1. Add the stable chart repository:

helm repo add stable https://charts.helm.sh/stable
  2. Update the repo:

helm repo update
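Optionally, verify that the repo was added and the WordPress chart is discoverable (the chart version in your output may differ):

helm search repo stable/wordpress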

Create the StorageClass

Next you'll create the StorageClass used by the chart to provision the regional disk. Because no zones are specified in the StorageClass, the provisioner chooses the replica zones from the zones of the Kubernetes Engine cluster.

  1. Create a StorageClass for the regional disk by running:

kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: repd-central1-a-b-c
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd
EOF

Example output:

storageclass.storage.k8s.io/repd-central1-a-b-c created

You now have a StorageClass that is capable of provisioning PersistentVolumes that are replicated across two of the cluster's zones (chosen from us-central1-a, us-central1-b, and us-central1-c).
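Optionally, you can inspect the parameters the provisioner will use for this class:

kubectl describe storageclass repd-central1-a-b-c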

  2. List the available StorageClasses with:

kubectl get storageclass

Example output:

NAME                  PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
premium-rwo           pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   85s
repd-central1-a-b-c   kubernetes.io/gce-pd    Delete          Immediate              false                  7s
standard (default)    kubernetes.io/gce-pd    Delete          Immediate              true                   67s
standard-rwo          pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   84s

Click Check my progress to verify the objective: Create StorageClass.

Task 3. Create Persistent Volume Claims

In this section, you will create the PersistentVolumeClaims for your application.

  1. Create the data-wp-repd-mariadb-0 PVC with the standard StorageClass:

kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-wp-repd-mariadb-0
  namespace: default
  labels:
    app: mariadb
    component: master
    release: wp-repd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: standard
EOF
  2. Create the wp-repd-wordpress PVC with the repd-central1-a-b-c StorageClass:

kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wp-repd-wordpress
  namespace: default
  labels:
    app: wp-repd-wordpress
    chart: wordpress-5.7.1
    heritage: Tiller
    release: wp-repd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  storageClassName: repd-central1-a-b-c
EOF
  3. List the available PersistentVolumeClaims with:

kubectl get persistentvolumeclaims

Example output:

NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
data-wp-repd-mariadb-0   Bound    pvc-8a10ed04-56ca-11e9-a020-42010a8a003d   8Gi        RWO            standard              21m
wp-repd-wordpress        Bound    pvc-ad5ddb0b-56ca-11e9-9af5-42010a8a0047   200Gi      RWO            repd-central1-a-b-c   20m
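If either claim stays in a Pending state, describing it shows the provisioning events and any errors; for example:

kubectl describe pvc wp-repd-wordpress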

Click Check my progress to verify the objective: Create Persistent Volume Claims.

Task 4. Deploy WordPress

Now that you have your StorageClass and claims configured, Kubernetes automatically attaches the persistent disk to an appropriate node in one of the disk's zones.

  1. Deploy the WordPress chart that is configured to use the StorageClass that you created earlier:

helm install wp-repd \
  --set smtpHost= --set smtpPort= --set smtpUser= \
  --set smtpPassword= --set smtpUsername= --set smtpProtocol= \
  --set persistence.storageClass=repd-central1-a-b-c \
  --set persistence.existingClaim=wp-repd-wordpress \
  --set persistence.accessMode=ReadOnlyMany \
  stable/wordpress
  2. List the available WordPress pods:

kubectl get pods

Example output:

NAME                                 READY   STATUS    RESTARTS   AGE
wp-repd-mariadb-79444cd49b-lx8jq     1/1     Running   0          35m
wp-repd-wordpress-7654c85b66-gz6nd   1/1     Running   0          35m

Click Check my progress to verify the objective: Deploy WordPress.

  3. Run the following command, which waits for the service load balancer's external IP address to be created:

while [[ -z $SERVICE_IP ]]; do
  SERVICE_IP=$(kubectl get svc wp-repd-wordpress -o jsonpath='{.status.loadBalancer.ingress[].ip}')
  echo "Waiting for service external IP..."
  sleep 2
done
echo http://$SERVICE_IP/admin
  4. Verify that the persistent disk was created:

while [[ -z $PV ]]; do
  PV=$(kubectl get pvc wp-repd-wordpress -o jsonpath='{.spec.volumeName}')
  echo "Waiting for PV..."
  sleep 2
done

kubectl describe pv $PV
  5. Get the URL for the WordPress admin page:

echo http://$SERVICE_IP/admin
  6. Click on the link to open WordPress in a new tab in your browser.

  7. Back in Cloud Shell, get a username and password so you can log in to the app:

cat - <<EOF
Username: user
Password: $(kubectl get secret --namespace default wp-repd-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)
EOF
  8. Go to the WordPress tab and log in with the username and password that were returned.

You now have a working deployment of WordPress that is backed by regional persistent disks in three zones.
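Optionally, you can confirm from the Compute Engine side that the WordPress volume is a regional disk. A sketch, assuming the provisioned disk name contains the cluster name; if the filter matches nothing, run the command without it and look for a disk whose location scope is the region:

gcloud compute disks list --filter="name~gke-repd"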

Task 5. Simulating a zone failure

Next you will simulate a zone failure and watch Kubernetes move your workload to the other zone and attach the regional disk to the new node.

  1. Obtain the current node of the WordPress pod:

NODE=$(kubectl get pods -l app.kubernetes.io/instance=wp-repd -o jsonpath='{.items..spec.nodeName}')
ZONE=$(kubectl get node $NODE -o jsonpath="{.metadata.labels['failure-domain\.beta\.kubernetes\.io/zone']}")
IG=$(gcloud compute instance-groups list --filter="name~gke-repd-default-pool zone:(${ZONE})" --format='value(name)')
echo "Pod is currently on node ${NODE}"
echo "Instance group to delete: ${IG} for zone: ${ZONE}"

Example output:

Pod is currently on node gke-repd-default-pool-b8cf37cd-bc5q
Instance group to delete: gke-repd-default-pool-b8cf37cd-grp for zone: us-central1-c

Note: Your zone may differ.
  • You can also verify it with:

kubectl get pods -l app.kubernetes.io/instance=wp-repd -o wide

Example output:

NAME                                 READY   STATUS    RESTARTS   AGE   IP           NODE
wp-repd-wordpress-7654c85b66-gz6nd   1/1     Running   0          1h    10.20.0.11   gke-repd-default-pool-b8cf37cd-bc5q

Take note of the Node column. You are going to delete this node's instance group to simulate the zone failure.

  2. Now run the following to delete the instance group for the node where the WordPress pod is running. Enter Y when prompted to confirm the deletion:

gcloud compute instance-groups managed delete ${IG} --zone ${ZONE}

Kubernetes now detects the failure and migrates the pod to a node in another zone.
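If you want to watch the rescheduling as it happens, you can stream pod updates (press Ctrl+C to stop watching):

kubectl get pods -l app.kubernetes.io/instance=wp-repd -o wide -w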

  3. Verify that both the WordPress pod and the persistent volume migrated to a node in another zone:

kubectl get pods -l app.kubernetes.io/instance=wp-repd -o wide

Example output:

NAME                                 READY   STATUS    RESTARTS   AGE   IP           NODE
wp-repd-wordpress-7654c85b66-xqb78   1/1     Running   0          1m    10.20.1.14   gke-repd-default-pool-9da1b683-h70h

Make sure the node that is displayed is different from the node in the previous step.

Note: This whole process can take about 10 to 15 minutes to complete.
  4. Once the new pod has a Running status, open the WordPress admin page in your browser from the link displayed in the command output:

echo http://$SERVICE_IP/admin

You have attached a regional persistent disk to a node that is in a different zone.
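Optionally, confirm that the rescheduled pod is backed by the same underlying volume it used before the failure:

kubectl get pvc wp-repd-wordpress -o jsonpath='{.spec.volumeName}'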

Congratulations!

This concludes this hands-on lab on deploying applications with regional persistent disks on a regional Kubernetes Engine cluster.

Finish your quest

This self-paced lab is part of the Kubernetes Solutions quest. A quest is a series of related labs that form a learning path. Completing this quest earns you a badge to recognize your achievement. You can make your badge or badges public and link to them in your online resume or social media account. Enroll in this quest and get immediate completion credit. Refer to the Google Cloud Skills Boost catalog for all available quests.

Take your next lab

Continue your quest with NGINX Ingress Controller on Google Kubernetes Engine.

Next steps / Learn more

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated September 12, 2022

Lab Last Tested June 14, 2022

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.