
Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin
In this exercise, you generate log entries from an application, filter and analyze logs in Cloud Logging, and export logs to a BigQuery log sink.
In this lab, you learn how to perform the following tasks:
In this task, you use Qwiklabs and perform initialization steps for your lab.
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning.
When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges.
Accept the terms and skip the recovery resource page.
After you complete the initial sign-in steps, the project dashboard appears.
Google Cloud Shell is a virtual machine loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In the Cloud Console, on the top right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
To experiment with logging, log exporting, and Error Reporting, you deploy the HelloLoggingNodeJS application used in a previous exercise to Cloud Run.
Clone the https://github.com/haggman/HelloLoggingNodeJS.git repo. This repository contains a basic Node.js application used for testing.
Change into the HelloLoggingNodeJS folder and open index.js in the Cloud Shell editor. If an error indicates that the code editor could not be loaded because third-party cookies are disabled, click Open in New Window and switch to the new tab.
Take a few minutes to peruse the code. You may recognize this code from some of the examples in the lecture module.
The /random-error route generates an error approximately once every 1,000 requests. In the next step, you change this to once every 20 requests to increase the frequency of seeing an error.
To increase the uncaught exception rate on /random-error from 1 in 1000 to 1 in 20, open index.js in the Cloud Shell editor.
In the index.js file, replace lines 125 - 135 with the following:
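The exact replacement code is supplied in the lab console. As an illustrative sketch only (the handler name and response text here are assumptions, not the lab's actual code), the idea is to throw an uncaught ReferenceError on roughly 1 in 20 requests:

```javascript
// Illustrative sketch, not the exact lab code: an Express-style handler for
// the /random-error route that fails on roughly 1 in 20 requests.
function randomErrorHandler(req, res) {
  // Math.floor(Math.random() * 20) returns 0 about once every 20 calls.
  if (Math.floor(Math.random() * 20) === 0) {
    // Referencing an undefined identifier raises a ReferenceError, which
    // surfaces in the stderr log and in Error Reporting.
    doesNotExist();
  }
  res.send("Worked this time.");
}
```
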
In the editor, also take a look at the rebuildService.sh
file. This file uses Cloud Build to create a Docker container, then it creates (or updates) a Cloud Run application using the container. The Cloud Run service supports anonymous access, has a couple of labels on it for stage and department, and only allows a maximum of 5 concurrent connections to any one execution instance.
Return to the Cloud Shell window. If the Cloud Shell is not visible, click Open Terminal.
Build the container and start the Cloud Run application by executing the file:
It takes a minute or two for this to complete.
In Cloud Shell, the final message contains the URL to your new service. Click the link to test the application in a new tab. It returns a simple Hello World! response.
In the Cloud Shell terminal window, create a new URL environment variable and set its value to the URL of your application:
In Cloud Shell, run a loop that repeatedly sends requests to the random-error route. Make sure you see mostly "Worked this time" messages start to appear. If they don't, double-check that your application is running and that the URL variable is properly set.
You now have an application deployed to Cloud Run which, depending on the URL, can simply log, log through Winston, or generate error reports. Take a few minutes to explore those logs.
In the Google Cloud Console, use the Navigation menu () to navigate to Logging > Logs Explorer.
Enable Show query and clear any default query that appears.
Use the Resource drop-down menu to select the Cloud Run Revision > hello-logging Cloud Run service. Don't forget to Apply your selection, or it won't actually filter by that resource.
Click Run query.
Most likely you see nothing but successful 200 messages. How can you find the errors? One way is to exclude all of the 200 responses and see what's left.
While not perfect, this approach lets you find the log messages tied to the error, along with the stack traces the error generates.
Use the Log fields panel to filter the display to only messages with a Severity of Error.
Now you see the responses when an error was thrown.
Expand and explore one of the entries. Note which log file contains the 500 errors, and when you are finished, Clear the Error severity filter.
To see the stack traces, which is what a developer might be most interested in, use the Log fields panel to filter on the run.googleapis.com/stderr standard error log.
That does show the exceptions, but where does each one start?
In the Query builder, select Log name.
You just looked at the log file for the Cloud Run service and examined an error entry, but what if you were doing some custom logging, not necessarily error related, and wanted to create a custom metric based on it? In this portion of the exercise, you change the loop to call the /score route and then create a metric from the scores that are coming through.
In this task, you:
Switch to or reopen Cloud Shell.
Use CTRL+C to break your test loop.
Modify the while loop to call the /score route, and restart the loop. Verify that the new messages are displaying random scores:
Switch to your Google Cloud Console and reopen the Logging > Logs Explorer page.
Clear any existing query and use the Resource to display your Cloud Run Revision > hello-logging logs.
Click Apply.
Select Run Query.
All of the 200 status code entries should be from this latest test run. If not, refresh your logs view by clicking Jump to Now.
Expand one of the 200 status code entries. How can you tell if it's from a /score request? Why isn't the score displaying in the log entry?
Use the Logs field explorer to filter on the run.googleapis.com/stdout log.
Now you should see all of the messages printed by the code itself. Why would it be difficult to build a metric displaying the scores for this entry?
It would be better if the messages were generated as structured JSON rather than unstructured text. That way you could easily access and extract just the scores in your logs-based metric.
Switch to Cloud Shell and use CTRL+C to break the while loop.
Open the index.js file in the Cloud Shell editor and locate the code for the /score route (around line 90).
Replace the /score route with the below code:
Notice how the message contents are now properties of the output object, and how the printed message is the JSON object stringified.
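The exact replacement is provided in the lab console. As a rough sketch (the handler name, object property names, and message text here are assumptions), the structured version builds a single object and logs it as stringified JSON:

```javascript
// Illustrative sketch, not the exact lab code: a /score handler that emits
// structured JSON instead of plain text.
function scoreHandler(req, res) {
  const score = Math.floor(Math.random() * 100);
  // Message contents become properties of a single output object...
  const output = {
    message: `Your score is ${score}`,
    score: score,
  };
  // ...and the stringified object is printed, so Cloud Logging stores it as
  // a parsed jsonPayload rather than an opaque textPayload.
  console.log(JSON.stringify(output));
  res.send(output.message);
}
```

With a jsonPayload, a logs-based distribution metric can read the score field directly instead of parsing free text.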
Since you modified the code, make sure you didn't introduce any errors by starting the application locally in Cloud Shell.
If you see a "Hello world listening on port 8080" message, use CTRL+C to stop the application and move on to the next step. If you see any errors, fix them and make sure the application starts before continuing.
Rebuild and redeploy the application by rerunning rebuildService.sh:
Make sure you see the score messages appearing once again.
You should still have the entries filtered to display the Cloud Run stdout logs for hello-logging.
Click Create Metric.
Use the Navigation menu () to switch to Monitoring > Dashboards. You may have to wait while the workspace is created.
Click +Create Dashboard.
For New Dashboard Name, type Score Fun.
Click Line chart.
Set the Chart Title to Score Info.
Under the Resource & Metric section, select Cloud Run Revision > Logs-Based Metric > logging/user/score_averages.
Click Apply.
The aligner should be set by default to 50th percentile; if not, set it to 50th percentile (median).
Exporting logs to BigQuery not only allows them to be stored for longer, but also makes it possible to analyze them with SQL and the sheer power of BigQuery. In this task, you configure an export of the Cloud Run logs to BigQuery and use it to find the error messages generated by the random-error application route.
In this task, you:
Switch to the Cloud Shell window with the while loop sending load to the application, and use CTRL+C to break the loop.
Modify the while loop to once again send load to the random-error route, and re-execute the loop:
Use the Google Cloud Console Navigation menu to navigate to Logging > Logs Explorer.
Delete any existing query and execute a query filtering the Resource to Cloud Run Revision > hello-logging.
Click Apply.
Expand an entry or two and verify that they relate to the /random-error requests.
Create a sink using More actions > Create Sink.
Configure the Sink name as hello-logging-sink.
Click Next.
Set the sink service to BigQuery dataset.
For BigQuery dataset, select Create new BigQuery dataset.
Name the Dataset ID hello_logging_logs.
Click CREATE DATASET.
Click Next.
For Choose logs to include in sink, click Next.
Click CREATE SINK. Notice that creating the sink also creates a service account that is used to write to the new BigQuery dataset.
Use the Navigation menu to switch to BigQuery.
In the Explorer section, expand your project's node and the hello_logging_logs dataset.
Click the requests table.
In the tabbed dialog below the query window, click the Preview tab and explore the information in this log. It contains general request-oriented information: what URL was requested, when, and from where.
Click the Schema tab and take a moment to investigate the schema of the generated table. Notice that nested and repeated data structures are used for columns like resource and httpRequest.
Select the stderr table. Standard out and Standard error are commonly used by applications to differentiate between simple log messages and messages that are related to an error. Node.js applications will dump any uncaught exception/error related information to standard error.
Once again, Preview the data in the table. Now you see the random errors as they impact different parts of the request call stack. The entries with a textPayload that contains ReferenceError are the actual errors.
Click the Schema tab and investigate the actual structure of the table. Once again, you see the mix of standard fields and nested, repeated fields.
Create and execute a query to pull just the textPayload values that start with ReferenceError. Modify the SELECT so it pulls the textPayload, and add a where clause so only the ReferenceError% entries are returned. Replace [project-id] with your GCP project ID and [date] with the date specified in the table name.
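The lab supplies the exact query. As an illustrative sketch only (the date-sharded table name pattern here is an assumption based on how BigQuery log sinks typically name their tables), it would look something like:

```sql
-- Sketch only: substitute [project-id] and [date] as described above.
SELECT textPayload
FROM `[project-id].hello_logging_logs.run_googleapis_com_stderr_[date]`
WHERE textPayload LIKE 'ReferenceError%'
```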
In this exercise, you used a test application to generate logs, created logs-based metrics using data from those logs, and exported logs to BigQuery. Nice job.
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.