
Log Analysis



Lab · 1 hour 30 minutes · 5 credits · Intermediate
Note: This lab may incorporate AI tools to support your learning.

Overview

In this exercise, you generate log entries from an application, filter and analyze logs in Cloud Logging, and export logs to a BigQuery log sink.

Objectives

In this lab, you learn how to perform the following tasks:

  • Set up and deploy a test application.
  • Explore the log entries generated by the test application.
  • Create and use a logs-based metric.
  • Export application logs to BigQuery.

Setup and requirements

In this task, you use Qwiklabs and perform initialization steps for your lab.

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

After you complete the initial sign-in steps, the project dashboard appears.

The Project Dashboard, which includes tiles such as Project Info, APIs, Resources, and Billing

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In Cloud console, on the top right toolbar, click the Open Cloud Shell button.

    Highlighted Cloud Shell icon

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:

Project ID highlighted in the Cloud Shell Terminal

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:
gcloud auth list

Output:

Credentialed accounts:
 - @.com (active)

Example output:

Credentialed accounts:
 - google1623327_student@qwiklabs.net

  • You can list the project ID with this command:
gcloud config list project

Output:

[core]
project =

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.
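
If the project ever shows as unset in your Cloud Shell session, you can set it yourself. A minimal example using the DEVSHELL_PROJECT_ID variable that Cloud Shell provides (also used later in this lab):

gcloud config set project $DEVSHELL_PROJECT_ID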

Task 1. Set up a test application, deploy it, and set up a load test VM

To experiment with logging, log exporting, and Error Reporting, you deploy the HelloLoggingNodeJS application used in a previous exercise to Cloud Run.

  1. Make sure the Cloud Build, Compute Engine, Cloud Run, and Cloud Profiler APIs are enabled, as they are needed in later steps:
gcloud services enable cloudbuild.googleapis.com \
    run.googleapis.com \
    compute.googleapis.com \
    cloudprofiler.googleapis.com
  2. Once the APIs are enabled, clone the https://github.com/haggman/HelloLoggingNodeJS.git repository:
git clone https://github.com/haggman/HelloLoggingNodeJS.git

This repository contains a basic Node.js application used for testing.

  3. Change into the HelloLoggingNodeJS folder and open index.js in the Cloud Shell editor:
cd HelloLoggingNodeJS
edit index.js
  4. If an error indicates that the code editor could not be loaded because third-party cookies are disabled, click Open in New Window and switch to the new tab.

  5. Take a few minutes to peruse the code. You may recognize this code from some of the examples in the lecture module.

  • Notice that it loads the Google Error Reporting library as well as the debugging library.
  • Scroll down; there are several routes that generate logs, errors, and error logs.
  • Pay special attention to /random-error. It generates an error approximately once every 1,000 requests. In the next step, you change this to once every 20 requests to increase the frequency of errors.
  6. To increase the uncaught exception rate of /random-error from 1 in 1,000 to 1 in 20, open index.js in the Cloud Shell editor.

  7. In the index.js file, replace lines 125-135 with the following:

//Generates an uncaught exception every 20 requests
app.get('/random-error', (req, res) => {
  error_rate = parseInt(req.query.error_rate) || 20
  let errorNum = (Math.floor(Math.random() * error_rate) + 1);
  if (errorNum==1) {
    console.log("Called /random-error, and it's about to error");
    doesNotExist();
  }
  console.log("Called /random-error, and it worked");
  res.send("Worked this time.");
});
  8. In the editor, also take a look at the rebuildService.sh file. This file uses Cloud Build to create a Docker container, then creates (or updates) a Cloud Run service using that container. The Cloud Run service supports anonymous access, has a couple of labels on it for stage and department, and only allows a maximum of 5 concurrent connections to any one execution instance (a sketch of such a script appears after the build step below).

  9. Return to the Cloud Shell window. If Cloud Shell is not visible, click Open Terminal.

  10. Build the container and start the Cloud Run application by executing the file:

sh rebuildService.sh

It takes a minute or two for this to complete.
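
While the build runs, here is a minimal sketch of what a build-and-deploy script like rebuildService.sh typically does. The region and label values below are assumptions for illustration; the file in the cloned repository is the authoritative version:

# Build the container image with Cloud Build (illustrative sketch; see the
# repository's rebuildService.sh for the real commands and values).
gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/hello-logging

# Deploy (or update) the Cloud Run service with anonymous access,
# example labels, and a concurrency limit of 5.
gcloud run deploy hello-logging \
    --image gcr.io/$DEVSHELL_PROJECT_ID/hello-logging \
    --region us-central1 \
    --platform managed \
    --allow-unauthenticated \
    --concurrency 5 \
    --labels stage=test,department=engineering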

  11. In Cloud Shell, the final message contains the URL to your new service. Click the link to test the application in a new tab. It returns a simple Hello World! response.

  12. In the Cloud Shell terminal window, create a new URL environment variable and set its value to the URL of your application:

URL=$(gcloud run services list --platform managed --format="value(URL)" | grep hello-logging)
  13. Test that the variable was created correctly by echoing its value:
echo $URL
  14. Use a bash while loop to generate load on the application's /random-error route. Make sure the (mostly) "Worked this time" messages start to appear. If they don't, double-check that your application is running and that the URL variable is properly set:
while true; \
do curl -s $URL/random-error \
-w '\n' ;sleep .1s;done
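
As an optional aside, the code you pasted also reads an error_rate query parameter, so a single request can use a different error rate. For example:

# Roughly a 1-in-5 chance of an error for this one request
curl -s "$URL/random-error?error_rate=5" -w '\n'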

Task 2. Explore the log files for a test application

You now have an application deployed to Cloud Run which, depending on the URL, can simply log, log through Winston, or generate error reports. Take a few minutes to explore those logs.

  1. In the Google Cloud Console, use the Navigation menu (Navigation menu icon) to navigate to Logging > Logs Explorer.

  2. Enable Show query and erase any query that is there by default.

  3. Use the Resource drop-down menu to select the Cloud Run Revision > hello-logging Cloud Run service. Don't forget to Apply your selection, or it won't actually filter by that resource.

  4. Click Run query.

Most likely you see nothing but successful 200 messages. How can you find the errors? One way is to exclude all of the 200 responses and see what's left.

  5. To do that, click one of the 200 status codes and select Hide matching entries.

While not perfect, this lets you find the log messages tied to the errors, and the stack traces those errors generate.

  6. Use the Log fields pane to filter the display to only messages with a Severity of Error.

Now you see the responses when an error was thrown.

  7. Expand and explore one of the entries. Note which log file contains the 500 errors, and when you are finished, Clear the Error severity filter.

  8. To see the stack traces, which are what a developer might be most interested in, use the Log fields pane to filter on the run.googleapis.com/stderr standard error log.

That does show the exceptions, but where does each one start?

  9. It is helpful to see both the stderr log with the stack traces and the requests log with the 500 status errors (an equivalent command-line query is sketched after these steps). To do so:
  • Click the Query builder and select Log name.
  • Select both the Cloud Run requests and stderr logs, and then Apply them to the query.
  • Select Run query.
Note: You may need to wait 10-12 minutes for the 500 status errors to appear.
  10. Take a moment to explore the displayed logs. Now you see the response returning the error to the client, and the subsequent stack trace sent to stderr.
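
The same combined filter can also be run from Cloud Shell with gcloud logging read. A minimal sketch, assuming the default hello-logging service name (log names must be URL-encoded):

# Read recent requests and stderr entries for the hello-logging service.
gcloud logging read \
  "resource.type=\"cloud_run_revision\"
   AND resource.labels.service_name=\"hello-logging\"
   AND (logName=\"projects/$DEVSHELL_PROJECT_ID/logs/run.googleapis.com%2Frequests\"
        OR logName=\"projects/$DEVSHELL_PROJECT_ID/logs/run.googleapis.com%2Fstderr\")" \
  --limit 20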

Task 3. Create and use a logs-based metric

You just looked at the log file for the Cloud Run service and examined an error entry, but what if you were doing some custom logging, not necessarily error related, and wanted to create a custom metric based on it? In this portion of the exercise, you change the loop to call the /score route, and then create a metric from the scores that are coming through.

In this task, you:

  • Generate some load on the /score endpoint.
  • Explore the logs generated by the load.
  • Tweak the code to put the message in a more usable format.
  • Create a new logs-based metric.

Generate some load on the /score endpoint

  1. Switch to or reopen Cloud Shell.

  2. Use CTRL+C to break your test loop.

  3. Modify the while loop to call the /score route, and restart the loop. Verify that the new messages are displaying random scores:

while true; \
do curl -s $URL/score \
-w '\n' ;sleep .1s;done

Explore the logs generated by the load

  1. Switch to your Google Cloud Console and reopen the Logging > Logs Explorer page.

  2. Clear any existing query and use the Resource menu to display your Cloud Run Revision > hello-logging logs.

  3. Click Apply.

  4. Select Run Query.

All of the 200 status code entries should be from this latest test run. If not, refresh your logs view by clicking Jump to Now.

  5. Expand one of the 200 status code entries. How can you tell if it's from a /score request? Why isn't the score displayed in the log entry?

  6. Use the Log fields pane to filter on the run.googleapis.com/stdout log. Now you should see all of the messages printed by the code itself. Why would it be difficult to build a metric displaying the scores from these entries?

It would be better if the messages were generated as structured JSON, rather than unstructured text. That way you could easily access and extract just the scores in your logs-based metric.

  7. In the Cloud Shell terminal window, use CTRL+C to break the test while loop.

Tweak the log format to make the score easier to access

  1. Open the index.js file in the Cloud Shell editor and locate the code for the /score route (around line 90).

  2. Replace the /score route with the following code:

//Basic NodeJS app built with the express server
app.get('/score', (req, res) => {
  //Random score; the containerID is a UUID unique to each
  //runtime container (testing was done in Cloud Run).
  //funFactor is a random number 1-100
  let score = Math.floor(Math.random() * 100) + 1;
  let output = {
    message: '/score called',
    score: score,
    containerID: containerID,
    funFactor: funFactor
  };
  console.log(JSON.stringify(output));
  //Basic message back to browser
  res.send(`Your score is a ${score}. Happy?`);
});

Notice how the message contents are now properties of the output object, and how the printed message is the JSON object stringified.

Since you modified the code, make sure you didn't introduce any errors by starting the application locally in Cloud Shell.

  3. In the Cloud Shell terminal window, install the dependencies and start the application. Make sure you are in the HelloLoggingNodeJS folder:
export GCLOUD_PROJECT=$DEVSHELL_PROJECT_ID
cd ~/HelloLoggingNodeJS/
npm i
npm start
  4. If you see a "Hello world listening on port 8080" message, use CTRL+C to stop the application and move on to the next step. If you see any errors, fix them and make sure the application starts before continuing.

  5. Rebuild and redeploy the application by rerunning rebuildService.sh:

sh rebuildService.sh
  6. Wait for the application to finish rebuilding and redeploying, then restart the test loop:
while true; \
do curl -s $URL/score \
-w '\n' ;sleep .1s;done

Make sure you see the score messages appearing once again.

  7. Switch to your browser tab showing the Logs Explorer and display the latest logs by clicking Jump to Now.

You should still have the entries filtered to display the Cloud Run stdout logs for hello-logging.

  8. Expand one of the entries and examine the new format.

Create a score logs-based metric

  1. Click Create Metric and set the following fields to the values shown:
  • Metric Type: Distribution
  • Log metric name: score_averages
  • Units: 1
  • Field name: jsonPayload.score
  2. Click the ADVANCED link and set the following fields to the values shown:
  • Type: Linear
  • Start value: 0
  • Number of buckets: 20
  • Bucket width: 5

Twenty linear buckets of width 5 starting at 0 cover the full 1-100 score range. (A command-line equivalent for creating this metric is sketched at the end of this task.)
  3. Click Create Metric.

  4. Use the Navigation menu (Navigation menu icon) to switch to Monitoring > Dashboards. You may have to wait while the workspace is created.

  5. Click +Create Dashboard.

  6. For New Dashboard Name, type Score Fun.

  7. Click Line chart.

  8. Set the Chart Title to Score Info.

  9. Under the Resource & Metric section, select Cloud Run Revision > Logs-Based Metric > logging/user/score_averages.

  10. Click Apply.

  11. The aligner is set by default to 50th percentile; if not, set it to 50th percentile (median).

The Score Fun dashboard
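
For reference, the same distribution metric can also be created from the command line. This is a minimal sketch, assuming a filter that matches the hello-logging entries carrying a jsonPayload.score field and an illustrative file name; the console steps above are what the lab expects:

# Write a LogMetric configuration, then create the metric from it.
# The filter and file name here are illustrative assumptions.
cat > score_metric.yaml <<'EOF'
filter: >-
  resource.type="cloud_run_revision"
  AND resource.labels.service_name="hello-logging"
  AND jsonPayload.score:*
valueExtractor: EXTRACT(jsonPayload.score)
metricDescriptor:
  metricKind: DELTA
  valueType: DISTRIBUTION
  unit: "1"
bucketOptions:
  linearBuckets:
    numFiniteBuckets: 20
    width: 5
    offset: 0
EOF

gcloud logging metrics create score_averages --config-from-file=score_metric.yaml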

Task 4. Export application logs to BigQuery

Exporting logs to BigQuery not only allows them to be stored longer, it also lets you analyze them with SQL and the full power of BigQuery. In this task, you configure an export of the Cloud Run logs to BigQuery and use it to find the error messages generated by the /random-error route.

In this task, you:

  • Configure a log export to BigQuery.

Configure an Export Sink to BigQuery

  1. Switch to the Cloud Shell window with the while loop sending load to your application, and use CTRL+C to break the loop.

  2. Modify the while loop to once again send load to the /random-error route, and re-execute it:

while true; \
do curl -s $URL/random-error \
-w '\n' ;sleep .1s;done
  3. Use the Google Cloud Console Navigation menu to navigate to Logging > Logs Explorer.

  4. Delete any existing query and execute a query filtering the Resource to Cloud Run Revision > hello-logging.

  5. Click Apply.

  6. Expand an entry or two and verify that they relate to the /random-error requests.

  7. Create a sink using More actions > Create Sink.

  8. Set the Sink name to hello-logging-sink.

  9. Click Next.

  10. Set the sink service to BigQuery dataset.

  11. Select Create new BigQuery dataset for the BigQuery dataset.

  12. Set the Dataset ID to hello_logging_logs.

  13. Click CREATE DATASET.

  14. Click Next.

  15. For Choose logs to include in sink, click Next.

  16. Click CREATE SINK. Notice that creating the sink also creates a service account, which is used to write to the new BigQuery dataset. A rough command-line equivalent is sketched after this step.
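
For reference, a roughly equivalent sink could be created from Cloud Shell. This is a sketch under the same dataset and filter assumptions as the console steps; note that with the CLI you must also grant the sink's writer identity the BigQuery Data Editor role on the dataset, something the console does for you:

# Create the destination dataset, then a sink routing matching entries to it.
bq mk --dataset $DEVSHELL_PROJECT_ID:hello_logging_logs

gcloud logging sinks create hello-logging-sink \
    bigquery.googleapis.com/projects/$DEVSHELL_PROJECT_ID/datasets/hello_logging_logs \
    --log-filter='resource.type="cloud_run_revision" AND resource.labels.service_name="hello-logging"'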

  17. Use the Navigation menu to switch to BigQuery.

  18. In the Explorer section, expand your project node and the hello_logging_logs dataset.

  19. Click the requests table.

  20. In the tabbed dialog below the query window, click the Preview tab and explore the information in this log. It contains general request-oriented information: what URL was requested, when, and from where.

  21. Click the Schema tab and take a moment to investigate the schema of the generated table. Notice that nested and repeated data structures are used for columns like resource and httpRequest.

  22. Select the stderr table. Standard out and standard error are commonly used by applications to differentiate between simple log messages and error-related messages. Node.js applications dump any uncaught exception or error information to standard error.

Note: If the stderr table doesn't appear, wait 5-10 minutes.
  23. Once again, Preview the data in the table. Now you see the random errors as they impact different parts of the request call stack. The entries with a textPayload containing ReferenceError are the actual errors.

  24. Click the Schema tab and investigate the structure of the table. Once again, you see a mix of standard fields and nested, repeated fields.

  25. Create and execute a query to pull just the textPayload values that start with ReferenceError:

  • Start the query by clicking the Query button.
  • Modify the SELECT so it pulls the textPayload.
  • Add the WHERE clause.
  • Replace [project-id] with your GCP project ID and [date] with the date specified in the table name:
SELECT textPayload
FROM `[project-id].hello_logging_logs.run_googleapis_com_stderr_[date]`
WHERE textPayload LIKE 'ReferenceError%'
  26. To do a count, modify the query to count these entries:
  • Execute it again and check the number you receive.
  • Replace [project-id] with your GCP project ID and [date] with the date specified in the table name:
SELECT count(textPayload)
FROM `[project-id].hello_logging_logs.run_googleapis_com_stderr_[date]`
WHERE textPayload LIKE 'ReferenceError%'
  27. To check the error rate, build a query that compares the total requests to the ReferenceError requests.
  • Replace [project-id] with your GCP project ID and [date] with the date specified in the table name.
  • Is the error rate about 1 in 20?
SELECT
  errors / total_requests
FROM (
  SELECT
    (SELECT COUNT(*)
     FROM `[project-id].hello_logging_logs.run_googleapis_com_requests_[date]`) AS total_requests,
    (SELECT COUNT(textPayload)
     FROM `[project-id].hello_logging_logs.run_googleapis_com_stderr_[date]`
     WHERE textPayload LIKE 'ReferenceError%') AS errors )

Congratulations!

In this exercise, you used a test application to generate logs, created logs-based metrics using data from those logs, and exported logs to BigQuery. Nice job.

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.

