Serverless Data Processing with Dataflow: Develop Pipelines


Serverless Data Processing with Dataflow - Using Dataflow SQL for Batch Analytics (Java)

Lab · 2 hours · 5 Credits · Advanced
Note: This lab may provide AI tools to support your learning.

Overview

In this lab, you:

  • Write a pipeline that uses SQL to aggregate site traffic by user.

Setup and requirements

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Check project permissions

Before you begin your work on Google Cloud, you need to ensure that your project has the correct permissions within Identity and Access Management (IAM).

  1. In the Google Cloud console, on the Navigation menu, select IAM & Admin > IAM.

  2. Confirm that the default compute Service Account {project-number}-compute@developer.gserviceaccount.com is present and has the editor role assigned. The account prefix is the project number, which you can find on Navigation menu > Cloud Overview > Dashboard.

Note: If the account is not present in IAM or does not have the editor role, follow the steps below to assign the required role.
  1. In the Google Cloud console, on the Navigation menu, click Cloud Overview > Dashboard.
  2. Copy the project number (e.g. 729328892908).
  3. On the Navigation menu, select IAM & Admin > IAM.
  4. At the top of the roles table, below View by Principals, click Grant Access.
  5. For New principals, type:
{project-number}-compute@developer.gserviceaccount.com
  6. Replace {project-number} with your project number.
  7. For Role, select Project (or Basic) > Editor.
  8. Click Save.

Setting up your IDE

For the purposes of this lab, you will mainly be using a Theia Web IDE hosted on Google Compute Engine. It has the lab repo pre-cloned, Java language server support, and a terminal for programmatic access to Google Cloud APIs via the gcloud command-line tool, similar to Cloud Shell.

  1. To access your Theia IDE, copy and paste the link shown in Google Cloud Skills Boost to a new tab.
Note: You may need to wait 3-5 minutes for the environment to be fully provisioned, even after the URL appears. Until then, you will see an error in the browser.

The lab repo has been cloned to your environment. Each lab is divided into a labs folder, with code for you to complete, and a solution folder, with a fully working example to reference if you get stuck.

  2. Click the File Explorer button to view the cloned repo.

You can also create multiple terminals in this environment, just as you would with Cloud Shell.

By running gcloud auth list in the terminal, you can see that you are logged in as a provided service account, which has the exact same permissions as your lab user account.

If at any point your environment stops working, you can try resetting the VM hosting your IDE from the GCE console.

Aggregating site traffic by user with SQL

In this lab, you rewrite your previous BatchUserTraffic pipeline so that it performs the following (a rough sketch of the overall pipeline shape follows the list):

  1. Reads the day’s traffic from a file in Cloud Storage.
  2. Converts each event into a CommonLog object.
  3. Uses SQL instead of Java transforms to sum the number of hits for each unique user ID and perform additional aggregations.
  4. Writes the resulting data to BigQuery.
  5. Has an additional branch that writes the raw data to BigQuery for later analysis.
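Below is a rough sketch of that pipeline shape, not the solution code. It assumes the CommonLog class and JSON-parsing DoFn from the previous BatchUserTraffic lab, and the option getter names (getInputPath, getAggregateTableName, getRawTableName) are illustrative; your own Options interface may differ.

// Read and parse the raw events once, then branch.
PCollection<CommonLog> logs =
    pipeline
        .apply("ReadFromGCS", TextIO.read().from(options.getInputPath()))
        .apply("ParseJson", ParDo.of(new JsonToCommonLog()));

// Branch 1: write the raw CommonLog records to BigQuery for later analysis.
logs.apply("WriteRawToBQ",
    BigQueryIO.<CommonLog>write()
        .to(options.getRawTableName())
        .useBeamSchema());

// Branch 2: aggregate with SQL instead of Java transforms, then write the results.
logs.apply("AggregateBySQL",
        SqlTransform.query(
            "SELECT user_id, COUNT(*) AS pageviews FROM PCOLLECTION GROUP BY user_id"))
    .apply("WriteAggregateToBQ",
        BigQueryIO.<Row>write()
            .to(options.getAggregateTableName())
            .useBeamSchema());

The tasks below walk through adding the SQL dependency, writing the aggregation query, and adding the raw-data branch.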

Task 1. Prepare environment

As in the prior labs, the first step is to generate data for the pipeline to process. You will open the lab environment and generate the data as before:

Open the appropriate lab

  1. Create a new terminal in your IDE environment, if you haven't already, and copy and paste the following command:
# Change directory into the lab
cd 4_SQL_Batch_Analytics/labs
# Download dependencies
mvn clean dependency:resolve
export BASE_DIR=$(pwd)
  2. Set up the data environment:
# Create GCS buckets and BQ dataset
cd $BASE_DIR/../..
source create_batch_sinks.sh
# Generate event dataflow
source generate_batch_events.sh
# Change to the directory containing the practice version of the code
cd $BASE_DIR
  3. Ensure that the Data Catalog API is enabled:
gcloud services enable datacatalog.googleapis.com

Click Check my progress to verify the objective. Set up the data environment

Task 2. Add SQL dependencies to the pipeline

  1. Open up BatchUserTrafficSQLPipeline.java in your IDE, which can be found in 4_SQL_Batch_Analytics/labs/src/main/java/com/mypackage/pipeline.

This pipeline already contains the necessary code to accept command-line options for the input path and one output table name, as well as code to read in events from Google Cloud Storage, parse those events, and write results to BigQuery. However, some important parts are missing.

As in the previous lab, the next step in the pipeline is to aggregate the events by each unique user_id and count pageviews for each. This time, however, you will perform the aggregation using SQL using SqlTransform instead of Java-based transforms.

Before you implement this, you will need to add a SQL dependency to your pipeline.

  2. To complete this task, open up the pom.xml file for this pipeline, located at 4_SQL_Batch_Analytics/labs/, and add the following dependency:
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-extensions-sql</artifactId>
  <version>${beam.version}</version>
</dependency>
  3. Run the following command in the terminal of your IDE to download the dependencies through Maven:
mvn clean dependency:resolve
  4. In BatchUserTrafficSQLPipeline.java, ensure that the following imports are at the top of the file:
import org.apache.beam.sdk.extensions.sql.impl.BeamSqlPipelineOptions;
import org.apache.beam.sdk.extensions.sql.SqlTransform;

Task 3. Write a SQL statement to aggregate site traffic by user

Beam SQL queries can be written in the Apache Calcite SQL dialect.

The SqlTransform.query(queryString) method is the only API to create a PTransform from a string representation of the SQL query. You can apply this PTransform either to a single PCollection or a PCollectionTuple that holds multiple PCollections (more on PCollectionTuples later).

When it is applied to a single PCollection, that collection can be referenced via the table name PCOLLECTION in the query:

PCollection<Row> filteredNames = testApps.apply(
    SqlTransform.query(
        "SELECT appId, description, rowtime "
            + "FROM PCOLLECTION "
            + "WHERE id=1"));

The resulting output is a PCollection of Row objects with an associated schema, which can be further mutated in SQL transforms or other Java PTransforms, or written to a sink.
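For example, here is a minimal sketch, building on the filteredNames example above, of reading a field from each Row in plain Java; MapElements and TypeDescriptors are standard Beam SDK classes:

// Pull the description field out of each Row; field names come from the query's SELECT list.
PCollection<String> descriptions = filteredNames.apply(
    MapElements.into(TypeDescriptors.strings())
        .via((Row row) -> row.getString("description")));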

  • To complete this task, add a SqlTransform to your pipeline, supplying a SQL string that aggregates the number of hits by user_id, naming the resulting count 'pageviews'.

You can also perform additional aggregations if you wish. For reference, this was the PTransform previously used:

aggregateField("user_id", Count.combineFn(), "pageviews") .aggregateField("num_bytes", Sum.ofIntegers(), "total_bytes") .aggregateField("num_bytes", Max.ofIntegers(), "max_num_bytes") .aggregateField("num_bytes", Min.ofIntegers(), "min_num_bytes"))

Some helpful SQL aggregate functions:

COUNT(*)
MAX(field_to_find_max_of)
SUM(field_to_sum)
MIN(field_to_find_min_of)

If you get stuck, reference the solution for hints.
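If helpful, here is one possible shape of the transform (a sketch, not necessarily the solution). It assumes, as in the previous Java transform, that the CommonLog schema exposes user_id and num_bytes fields, and that logs is your PCollection of parsed CommonLog elements:

// Mirror the previous Java aggregations with a single SQL query.
PCollection<Row> aggregated = logs.apply("AggregateBySQL",
    SqlTransform.query(
        "SELECT user_id, "
            + "COUNT(*) AS pageviews, "
            + "SUM(num_bytes) AS total_bytes, "
            + "MAX(num_bytes) AS max_num_bytes, "
            + "MIN(num_bytes) AS min_num_bytes "
            + "FROM PCOLLECTION "
            + "GROUP BY user_id"));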

Implement a branch to store raw data

You want to store all the raw results in BigQuery for further SQL aggregations later in the UI.

  • To complete this task, reconfigure your pipeline with a branch that writes out the raw CommonLog objects directly to BigQuery, to a table name referenced by a command line option rawTableName.

You can do this, as before, by terminating the first branch of the pipeline with a semicolon and starting each new branch from logs.apply(). Don't forget to add this new command-line option to the pipeline options, along with inputPath and aggregateTableName. Remember also to change the type hint on BigQueryIO.<Object>write().
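Here is a minimal sketch of the raw branch, assuming CommonLog has a registered Beam schema (as in the previous labs) and that you have added a rawTableName option; the getter name getRawTableName() is illustrative:

// Write the raw CommonLog records to their own BigQuery table.
logs.apply("WriteRawToBQ",
    BigQueryIO.<CommonLog>write()
        .to(options.getRawTableName())
        .useBeamSchema()
        // Dispositions shown for illustration; choose what suits your pipeline.
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));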

Task 4. Execute the pipeline

  1. Return to the terminal and execute your pipeline:
# Set up environment variables
export PROJECT_ID=$(gcloud config get-value project)
export REGION='{{{project_0.default_region|Region}}}'
export BUCKET=gs://${PROJECT_ID}
export PIPELINE_FOLDER=${BUCKET}
export MAIN_CLASS_NAME=com.mypackage.pipeline.BatchUserTrafficSQLPipeline
export RUNNER=DataflowRunner
export INPUT_PATH=${PIPELINE_FOLDER}/events.json
export AGGREGATE_TABLE_NAME=${PROJECT_ID}:logs.user_traffic
export RAW_TABLE_NAME=${PROJECT_ID}:logs.raw
cd $BASE_DIR
mvn compile exec:java \
-Dexec.mainClass=${MAIN_CLASS_NAME} \
-Dexec.cleanupDaemonThreads=false \
-Dexec.args=" \
--project=${PROJECT_ID} \
--region=${REGION} \
--stagingLocation=${PIPELINE_FOLDER}/staging \
--tempLocation=${PIPELINE_FOLDER}/temp \
--runner=${RUNNER} \
--inputPath=${INPUT_PATH} \
--aggregateTableName=${AGGREGATE_TABLE_NAME} \
--rawTableName=${RAW_TABLE_NAME}"
  2. Navigate to Navigation menu > Dataflow to see the status of your pipeline.

  3. Once your pipeline has finished, go to the BigQuery UI to query the two resulting tables. Make sure that logs.raw exists and has data populated, as you will need that later in the lab.

Click Check my progress to verify the objective. Aggregating site traffic by user with SQL

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
