Debugging Apps on Google Kubernetes Engine

1 hour 15 minutes 7 Credits

GSP736

Google Cloud Self-Paced Labs

Introduction

Cloud Logging and its companion tool, Cloud Monitoring, are full-featured products that are deeply integrated into Google Kubernetes Engine. This lab teaches you how Cloud Logging works with GKE clusters and applications, along with some best practices for log collection, through common logging use cases.

What you learn

  • How to use Cloud Monitoring to detect issues

  • How to use Cloud Logging to troubleshoot an application running on GKE

The demo application used in the lab

To work through a concrete example, you will troubleshoot a sample microservices demo app deployed to a GKE cluster. The demo app is composed of many microservices with dependencies among them. You will generate traffic with a load generator, use Monitoring to notice the error through alerts and metrics, identify the root cause with Logging, and then fix and confirm the issue with Logging and Monitoring.

(Architecture diagram of the microservices demo app)

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

What you need

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
  • Time to complete the lab.

Note: If you already have your own personal Google Cloud account or project, do not use it for this lab.

Note: If you are using a Chrome OS device, open an Incognito window to run this lab.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is a panel populated with the temporary credentials that you must use for this lab.


  2. Copy the username, and then click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.


    Tip: Open the tabs in separate windows, side-by-side.

  3. In the Sign in page, paste the username that you copied from the left panel. Then copy and paste the password.

    Important: You must use the credentials from the left panel. Do not use your Google Cloud Training credentials. If you have your own Google Cloud account, do not use it for this lab, to avoid incurring charges.

  4. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Activate Cloud Shell

Cloud Shell is a virtual machine loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

In the Cloud Console, in the top right toolbar, click the Activate Cloud Shell button.


Click Continue.


It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.


gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

You can list the active account name with this command:

gcloud auth list

(Output)

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

(Example output)

Credentialed accounts:
 - google1623327_student@qwiklabs.net

You can list the project ID with this command:

gcloud config list project

(Output)

[core]
project = <project_ID>

(Example output)

[core]
project = qwiklabs-gcp-44776a13dea667a6

Infrastructure setup

Connect to a Google Kubernetes Engine cluster and validate that it's been created correctly.

In Cloud Shell, set the zone in gcloud:

gcloud config set compute/zone us-central1-b

Set the Project ID variable:

export PROJECT_ID=$(gcloud info --format='value(config.project)')

Use the following command to see the cluster's status:

gcloud container clusters list

The cluster status will say PROVISIONING. Wait a moment and run the above command again until the status is RUNNING. This could take several minutes. Verify that the cluster named central has been created.

You can also monitor the progress in the Cloud Console by navigating to Navigation menu > Kubernetes Engine > Clusters.
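If you would rather wait from Cloud Shell than rerun the command by hand, a small loop like the following works too. This is an optional sketch, not a graded lab step; it assumes the cluster is named central as in this lab:

while [[ "$(gcloud container clusters list --filter="name=central" --format="value(status)")" != "RUNNING" ]]; do echo "Cluster is still provisioning..."; sleep 30; done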

Once your cluster has RUNNING status, get the cluster credentials:

gcloud container clusters get-credentials central --zone us-central1-b

(Output)

Fetching cluster endpoint and auth data.
kubeconfig entry generated for central.

Verify that the nodes have been created:

kubectl get nodes

Your output should look like this:

NAME                                     STATUS   ROLES    AGE   VERSION
gke-central-default-pool-5ff4130f-qz8v   Ready    <none>   24d   v1.16.13-gke.1
gke-central-default-pool-5ff4130f-ssd2   Ready    <none>   24d   v1.16.13-gke.1
gke-central-default-pool-5ff4130f-tz63   Ready    <none>   24d   v1.16.13-gke.1
gke-central-default-pool-5ff4130f-zfmn   Ready    <none>   24d   v1.16.13-gke.1

Deploy application

Next, deploy a microservices application called Hipster Shop to your cluster to create a workload you can monitor.

Run the following to clone the repo:

git clone https://github.com/xiangshen-dk/microservices-demo.git

Change to the microservices-demo directory:

cd microservices-demo

Install the app using kubectl:

kubectl apply -f release/kubernetes-manifests.yaml

Confirm everything is running correctly:

kubectl get pods

The output should look similar to the output below. Rerun the command until all pods are reporting a Running status before moving to the next step.

NAME                                     READY   STATUS    RESTARTS   AGE
adservice-55f94cfd9c-4lvml               1/1     Running   0          20m
cartservice-6f4946f9b8-6wtff             1/1     Running   2          20m
checkoutservice-5688779d8c-l6crl         1/1     Running   0          20m
currencyservice-665d6f4569-b4sbm         1/1     Running   0          20m
emailservice-684c89bcb8-h48sq            1/1     Running   0          20m
frontend-67c8475b7d-vktsn                1/1     Running   0          20m
loadgenerator-6d646566db-p422w           1/1     Running   0          20m
paymentservice-858d89d64c-hmpkg          1/1     Running   0          20m
productcatalogservice-bcd85cb5-d6xp4     1/1     Running   0          20m
recommendationservice-685d7d6cd9-pxd9g   1/1     Running   0          20m
redis-cart-9b864d47f-c9xc6               1/1     Running   0          20m
shippingservice-5948f9fb5c-vndcp         1/1     Running   0          20m
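If you prefer not to rerun the command manually, you can optionally ask kubectl to block until all pods are ready. This is a convenience, not a required lab step:

kubectl wait --for=condition=Ready pods --all --timeout=300s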

Click Check my progress to verify the objective. Deploy Application

Run the following to get the external IP of the application. This command will only return an IP address once the service has been deployed, so you may need to repeat the command until there's an external IP address assigned:

export EXTERNAL_IP=$(kubectl get service frontend-external | awk 'BEGIN { cnt=0; } { cnt+=1; if (cnt > 1) print $4; }')
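As an optional equivalent, the same address can usually be read with kubectl's jsonpath output, assuming the service exposes a load-balancer IP (this is a sketch, not part of the graded steps):

export EXTERNAL_IP=$(kubectl get service frontend-external -o jsonpath='{.status.loadBalancer.ingress[0].ip}')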

Finally, confirm that the app is up and running:

curl -o /dev/null -s -w "%{http_code}\n" http://$EXTERNAL_IP

Your confirmation will look like this:

200

After the application is deployed, you can also go to the Cloud Console and view the status.

In the Kubernetes Engine > Workloads page you'll see that all of the pods are OK.


Now click on Services & Ingress, verify all services are OK. Stay on this screen to set up monitoring for the application.

Open the application

Scroll down to frontend-external and click the Endpoints IP of the service.


The application opens and you will see the Hipster Shop home page.


Create a logs-based metric

Now you will configure Cloud Logging to create a logs-based metric, which is a custom metric in Cloud Monitoring made from log entries. Logs-based metrics are good for counting the number of log entries and tracking the distribution of a value in your logs. In this case, you will use the logs-based metric to count the number of errors in your frontend service. You can then use the metric in both dashboards and alerting.

Return to the Cloud Console, and from the Navigation menu open Logging, then click Logs Explorer.


In the Query builder box, add the following query:

resource.type="k8s_container"
severity=ERROR
labels."k8s-pod/app": "recommendationservice"


Click Run Query.

The query you are using lets you find all errors from the recommendationservice pods. However, you shouldn't see any results now because there are no errors yet.

To create the logs-based metric, in the Query results section, click the Actions button, then select Create Metric:


Name the metric Error_Rate_SLI, and click Create Metric to save the logs-based metric.


You now see the metric listed under User-defined Metrics on the Logs-based Metrics page.
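As an optional reference (not a graded step, and not needed if you created the metric in the console above), an equivalent logs-based metric could be created from Cloud Shell with the gcloud CLI, reusing the same filter:

gcloud logging metrics create Error_Rate_SLI --description="Errors from the recommendationservice pods" --log-filter='resource.type="k8s_container" severity=ERROR labels."k8s-pod/app": "recommendationservice"'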

Click Check my progress to verify the objective. Create a logs-based metric

Create an alerting policy

Alerting gives timely awareness to problems in your cloud applications so you can resolve the problems quickly. Now you will use Cloud Monitoring to monitor your frontend service availability by creating an alerting policy based on the frontend errors logs-based metric that you created previously. When the condition of the alerting policy is met, Cloud Monitoring creates and displays an incident in the Cloud Console.

In the Navigation menu, open Monitoring, then click Alerting.

After the workspace is created, click Create Policy at the top.

Click Add Condition to specify the metric and condition that will be used to trigger the Alerting Policy.

In the Find resource type and metric field, search for "Error_Rate" and select logging/user/Error_Rate_SLI.

Click the Show Advanced Options link, then set the Aligner to rate.

In Configuration, set the Threshold to 0.5 for the most recent value.

Click Add then click Next.

As expected, there are no failures, and your application is meeting its availability Service Level Objective (SLO).

Click Next again. Provide an alert name such as Error Rate SLI, then click Save.

Note: You will not create a notification channel for this lab, but you should for applications running in production. Notification channels let you send notifications by email, mobile app, SMS, Pub/Sub, webhooks, and more.
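For production environments where monitoring configuration is scripted rather than clicked through, an alerting policy like this one can also be defined from a JSON file with the Cloud Monitoring API or gcloud. The following is only a rough sketch under assumed values (the metric name, the 0.5 threshold, and the rate aligner configured above); it is not part of the lab steps:

cat > error-rate-sli-policy.json <<'EOF'
{
  "displayName": "Error Rate SLI",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Error rate above threshold",
      "conditionThreshold": {
        "filter": "metric.type=\"logging.googleapis.com/user/Error_Rate_SLI\" AND resource.type=\"k8s_container\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0.5,
        "duration": "0s",
        "aggregations": [
          { "alignmentPeriod": "60s", "perSeriesAligner": "ALIGN_RATE" }
        ]
      }
    }
  ]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=error-rate-sli-policy.json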

Click Check my progress to verify the objective. Create an alerting policy

Trigger an application error

Now you will use a load generator to create some traffic for your web application. Since there is a bug that has been intentionally introduced into this version of the application, a certain amount of traffic volume will trigger errors. You will work through the steps to identify and fix the bug.

From the Navigation menu, select Kubernetes Engine, then Services & Ingress.

Find the loadgenerator-external service, then click on the endpoints link.


Alternatively, you can open a new browser tab or window and paste the IP into the URL field, for example: http://[loadgenerator-external-ip]

You should now be on the Locust load generator page:


Locust is an open-source load generator, which allows you to load test a web app. It can simulate a number of users simultaneously hitting your application endpoints at a certain rate.

Simulate 300 users hitting the app with a hatch rate of 30. Locust will add 30 users per second until it reaches 300 users.

For the Host field, you will use the frontend-external URL. Copy it from the Services & Ingress page, and be sure to exclude the port.

Click the Start swarming button. Within a few seconds, roughly 300 simulated users will be hitting the predefined URLs.


Click the Failures tab to see that failures are starting to occur: there are a large number of 500 errors.


Meanwhile, if you click any product on the home page, it is either noticeably slow or returns errors such as HTTP 500.

Confirming the alert and application errors

In the Cloud Console, from the Navigation menu, click Monitoring, then Alerting. You should see an incident soon regarding logging/user/Error_Rate_SLI. If you don't see an incident right away, wait a minute or two and refresh your page. It can take up to 5 minutes for the alert to fire.

Click the link of the incident:


It brings you to the details page. Click the VIEW LOGS link to view the logs for the pod.


You can also click the Error label in the Logs field explorer panel to only query the errors.

Alternatively, you can click into the Query preview field to show the query builder, then click the Severity dropdown and add Error to the query. Click the Add button, then click Run Query. The dropdown menu lets you add multiple severity values.

The result either way is adding severity=ERROR to your query. Once you do that, you should have all the errors for the recommendationservice pod.
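Either way, the resulting query will look something like this (the exact resource labels depend on how you arrived at the page):

resource.type="k8s_container"
labels."k8s-pod/app": "recommendationservice"
severity=ERROR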


View the error details by expanding an error event.

Expand the textPayload. Click the error message and select Add field to summary line so the error messages appear as a summary field.


From there, you can confirm there are indeed many errors for the RecommendationService service. Based on the error messages, it appears the RecommendationService couldn't connect to some downstream services to either get products or recommendations. However, it's still not clear what the root cause is for the errors.


If you revisit the architecture diagram, the RecommendationService provides a list of recommendations to the Frontend service. However, both the Frontend service and the RecommendationService invoke the ProductCatalogService for a list of products.


For the next step, you will look at the metrics of the main suspect, the ProductCatalogService, for any anomalies. Either way, you can drill down into the logs to get more insight.

Troubleshooting using the Kubernetes dashboard & logs

One of the first places to look at the metrics is the Kubernetes Engine section of the Monitoring console (Navigation menu > Monitoring > Dashboards > GKE).

View the Workloads section.

Navigate to Kubernetes Engine > Workloads > productcatalogservice. You can see that the pod for the service is constantly crashing and restarting.

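You can optionally confirm the same crash-looping behavior from Cloud Shell; this assumes the pods carry the app=productcatalogservice label used in the demo manifests:

kubectl get pods -l app=productcatalogservice

A climbing RESTARTS count or a CrashLoopBackOff status points to the same problem.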

Next, see if there is anything interesting in the logs.

There are 2 ways to easily get to your container logs:

  1. Click on the Logs tab to get a quick view of the most recent logs. Next, click the external link button in the upper right corner of the logs panel to go back to the Logs Explorer.


  2. Click the Container logs link on the Deployment Details page.


You are on the Logs Explorer page again, now with a predefined query specifically filtered for the logs from the container you were viewing in GKE.

In the Logs Explorer, both the log messages and the histogram show that the container is repeatedly parsing product catalogs within a short period of time, which seems very inefficient.


At the bottom of the query results, there might also be a runtime error like the following one:

panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation

This could actually be causing the pod to crash.

To better understand the reason, search the log message in the code.

In the Cloud Shell, run the following command:

grep -nri 'successfully parsed product catalog json' src

Your output should look like the following, which has the source file name with a line number:

src/productcatalogservice/server.go:237: log.Info("successfully parsed product catalog json")

To view the source file, click the Open Editor button in the Cloud Shell menu, then Open in New Window (if you see the error Unable to load code editor because third-party cookies are disabled, click the eye icon at the top of the Chrome page).


Click the file microservices-demo/src/productcatalogservice/server.go and scroll down to line 237, where you will find the readCatalogFile method logging this message.

With a little more effort, you can see that if the boolean variable reloadCatalog is true, the service reloads and parses the product catalog each time it's invoked, which seems unnecessary.

If you search for the reloadCatalog variable in the code, you can see it's controlled by the environment variable ENABLE_RELOAD and that it writes a log message for its state.
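For example, the same kind of search you ran earlier shows where the variable is set and read:

grep -nri 'reloadCatalog' src/productcatalogservice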


Check the logs again by adding this message to your query and determine whether any matching entries exist.

Return to the tab where Logs Explorer is open and add the following line to the query:

jsonPayload.message:"catalog reloading"

So the full query in your query builder is:

resource.type="k8s_container"
resource.labels.location="us-central1-b"
resource.labels.cluster_name="central"
resource.labels.namespace_name="default"
labels.k8s-pod/app="productcatalogservice"
jsonPayload.message:"catalog reloading"

Click Run Query again and find an "Enable catalog reloading" message in the container log. This confirms that the catalog reloading feature is enabled.


At this point you can be confident that the frontend errors are caused by the overhead of loading the catalog for every request. When you increased the load, the overhead caused the service to fail and generate errors.

Fix the issue and verify the result

Based on the code and what you're seeing in the logs, you can try to fix the issue by disabling catalog reloading. Now you will remove the ENABLE_RELOAD environment variable for the product catalog service. After you make the variable change, you can redeploy the application and verify that it addresses the observed issue.

Click the Open Terminal button to return to the Cloud Shell terminal if it has closed.

Run the following command:

grep -A1 -ni ENABLE_RELOAD release/kubernetes-manifests.yaml

The output will show the line number of the environment variable in the manifest file:

373:        - name: ENABLE_RELOAD
374-          value: "1"

Delete those two lines to disable the reloading by running:

sed -i -e '373,374d' release/kubernetes-manifests.yaml
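To confirm the lines were removed before redeploying, you can optionally rerun the earlier search; it should now return nothing:

grep -A1 -ni ENABLE_RELOAD release/kubernetes-manifests.yaml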

Then reapply the manifest file:

kubectl apply -f release/kubernetes-manifests.yaml

You will notice only the productcatalogservice is configured. The other services are unchanged.

Return to the Deployment details page (Navigation menu > Kubernetes Engine > Workloads > productcatalogservice) and wait until the pod is running successfully. Wait 2-3 minutes or until you can confirm it has stopped crashing.
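You can also watch the rollout from Cloud Shell instead of the console; this optional command assumes the Deployment name productcatalogservice from the manifests:

kubectl rollout status deployment/productcatalogservice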


If you click the Container logs link again, you will see that the repeating "successfully parsed product catalog json" messages are gone.

If you go back to the web app URL and click products on the home page, it is also much more responsive, and you shouldn't encounter any HTTP errors.
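You can also rerun the earlier curl check from Cloud Shell; adding curl's time_total write-out variable gives a rough feel for the improved latency (an optional spot check):

curl -o /dev/null -s -w "%{http_code} %{time_total}\n" http://$EXTERNAL_IP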

Go back to the load generator and click the Reset Stats button in the top right. The failure percentage is reset and you should not see it increasing anymore.


All of the above checks indicate that the issue is fixed. If you are still seeing 500 errors, wait another couple of minutes and try clicking a product again.

Congratulations

You used Cloud Logging and Cloud Monitoring to find an error in an intentionally misconfigured version of the microservices demo app. This is similar to the troubleshooting process you would use to narrow down issues for your GKE apps in a production environment.

First, you deployed the app to GKE and then set up a metric and alert for frontend errors. Next, you generated load and then noticed that the alert was triggered. From the alert, you narrowed down the issue to particular services using Cloud Logging. Then, you used Cloud Monitoring and the GKE UI to look at the metrics for the GKE services. To fix the issue, you then deployed an updated configuration to GKE and confirmed that the fix addressed the errors in the logs.

(Google Cloud's Operations Suite on GKE quest badge)

Finish Your Quest

This self-paced lab is part of the Qwiklabs Quest Google Cloud's Operations Suite on GKE. A Quest is a series of related labs that form a learning path. Completing this Quest earns you the badge above to recognize your achievement. You can make your badge (or badges) public and link to them in your online resume or social media account. Enroll in this Quest and get immediate completion credit if you've taken this lab. See other available Qwiklabs Quests.

Take Your Next Lab

Continue your Quest with Cloud Logging on Kubernetes Engine, or check out these suggestions:

Next Steps / Learn More

  • This lab is based on this blog post on using Logging for your apps running on GKE.
  • The follow-up post on how DevOps teams can use Cloud Monitoring and Logging to find issues quickly might also be interesting to read.

Google Cloud Training & Certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated: August 09, 2021
Lab Last Tested: August 09, 2021

Copyright 2021 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.