
Building Batch Data Pipelines on Google Cloud


Serverless Data Analysis with Dataflow: Side Inputs (Python)

Lab · 1 hour 30 minutes · 5 Credits · Advanced
Note: This lab may incorporate AI tools to support your learning.

Overview

In this lab, you learn how to run complex queries against data in BigQuery. You then execute a Dataflow pipeline that carries out Map and Reduce operations, uses side inputs, and writes its results to Cloud Storage.

Objective

In this lab, you learn how to use BigQuery as a data source for Dataflow, and how to use the results of one pipeline as a side input to another pipeline.

  • Read data from BigQuery into Dataflow
  • Use the output of a pipeline as a side input to another pipeline
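
To give a concrete (if simplified) picture of the first objective, here is a minimal Apache Beam sketch. It is not the lab's pipeline; the project ID, bucket, and step names are placeholders you would substitute with your own values:

# Minimal sketch (not the lab pipeline): read rows from BigQuery in Beam.
# 'your-project-id' and 'your-bucket' are placeholders, not lab values.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(project='your-project-id',
                          temp_location='gs://your-bucket/tmp')

with beam.Pipeline(options=options) as p:
    contents = (
        p
        | 'ReadFromBQ' >> beam.io.ReadFromBigQuery(
              query='SELECT content FROM `cloud-training-demos.github_repos.contents_java` LIMIT 10',
              use_standard_sql=True)
        | 'GetContent' >> beam.Map(lambda row: row['content'])
    )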

Setup

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Check project permissions

Before you begin your work on Google Cloud, you need to ensure that your project has the correct permissions within Identity and Access Management (IAM).

  1. In the Google Cloud console, on the Navigation menu, select IAM & Admin > IAM.

  2. Confirm that the default compute Service Account {project-number}-compute@developer.gserviceaccount.com is present and has the editor role assigned. The account prefix is the project number, which you can find on Navigation menu > Cloud Overview > Dashboard.

Note: If the account is not present in IAM or does not have the editor role, follow the steps below to assign the required role.
  1. In the Google Cloud console, on the Navigation menu, click Cloud Overview > Dashboard.
  2. Copy the project number (e.g. 729328892908).
  3. On the Navigation menu, select IAM & Admin > IAM.
  4. At the top of the roles table, below View by Principals, click Grant Access.
  5. For New principals, type:
{project-number}-compute@developer.gserviceaccount.com
  6. Replace {project-number} with your project number.
  7. For Role, select Project (or Basic) > Editor.
  8. Click Save.

Task 1. Preparation

Assign the Dataflow Developer role

If the account does not have the Dataflow Developer role, follow the steps below to assign the required role.

  1. On the Navigation menu, click IAM & Admin > IAM.

  2. Select the default compute Service Account {project-number}-compute@developer.gserviceaccount.com.

  3. Select the Edit option (the pencil on the far right).

  4. Click Add Another Role.

  5. Click inside the box for Select a Role. In the Type to filter selector, type and choose Dataflow Developer.

  6. Click Save.

Ensure that the Dataflow API is successfully enabled

  1. On the Google Cloud Console title bar, click Activate Cloud Shell. If prompted, click Continue.

  2. Run the following commands to ensure that the Dataflow API is enabled cleanly in your project. If prompted, click Authorize:

gcloud services disable dataflow.googleapis.com
gcloud services enable dataflow.googleapis.com

Open the SSH terminal and connect to the training VM

You will be running all code from a curated training VM.

  1. In the Console, on the Navigation menu, click Compute Engine > VM instances.

  2. Locate the line with the instance called training-vm.

  3. On the far right, under Connect, click on SSH to open a terminal window. If prompted, click Authorize.

  4. In this lab, you will enter CLI commands on the training-vm.

Download Code Repository

  • Next you will download a code repository for use in this lab. In the training-vm SSH terminal enter the following:
git clone https://github.com/GoogleCloudPlatform/training-data-analyst

Create a Cloud Storage bucket

Follow these instructions to create a bucket.

  1. In the Console, on the Navigation menu, click Cloud Storage > Buckets.
  2. Click + Create.
  3. Specify the following, and leave the remaining settings as their defaults:

     Property          Value (type value or select option as specified)
     Name
     Location type     Region

  4. Click Create.

  5. If you get the Public access will be prevented prompt, select Enforce public access prevention on this bucket and click Confirm.

  6. In the training-vm SSH terminal, enter the following to create three environment variables named BUCKET, PROJECT, and REGION, and verify that each exists with the echo command:

BUCKET="{{{project_0.project_id|project_place_holder_text}}}"
echo $BUCKET
PROJECT="{{{project_0.project_id|project_place_holder_text}}}"
echo $PROJECT
REGION="{{{project_0.startup_script.gcp_region|region_place_holder_text}}}"
echo $REGION

Task 2. Try using BigQuery query

  1. In the console, on the Navigation menu, click BigQuery.
  2. If prompted click Done.
  3. Click "+" (SQL Query) and type the following query:
SELECT content FROM `cloud-training-demos.github_repos.contents_java` LIMIT 10
  4. Click on Run.

What is being returned?

The BigQuery table cloud-training-demos.github_repos.contents_java contains the content (and some metadata) of all the Java files present in GitHub in 2016.

  1. To find out how many Java files this table has, type the following query and click Run:
SELECT COUNT(*) FROM `cloud-training-demos.github_repos.contents_java`

How many files are there in this dataset?

Is this a dataset you want to process locally or on the cloud?
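
As an optional aside, you can also run the same count query programmatically instead of in the console. The sketch below uses the google-cloud-bigquery client library; it assumes the library is installed and that your environment (for example, Cloud Shell) has default credentials for a project with BigQuery access:

# Optional sketch: run the row-count query with the BigQuery client library.
# Assumes `pip install google-cloud-bigquery` and application default credentials.
from google.cloud import bigquery

client = bigquery.Client()  # uses the active project from your credentials
query = """
    SELECT COUNT(*) AS num_files
    FROM `cloud-training-demos.github_repos.contents_java`
"""
for row in client.query(query).result():
    print(f"Java files in the table: {row.num_files}")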

Task 3. Explore the pipeline code

  1. Return to the training-vm SSH terminal, navigate to the directory ~/training-data-analyst/courses/data_analysis/lab2/python, and view the file JavaProjectsThatNeedHelp.py.

View the file with Nano. Do not make any changes to the code. Press Ctrl+X to exit Nano.

cd ~/training-data-analyst/courses/data_analysis/lab2/python
nano JavaProjectsThatNeedHelp.py

Refer to the pipeline diagram as you read the code.

  2. Answer the following questions:
  • Looking at the class documentation at the very top, what is the purpose of this pipeline?
  • Where does the content come from?
  • What does the left side of the pipeline do?
  • What does the right side of the pipeline do?
  • What does ToLines do? (Hint: look at the content field of the BigQuery result)
  • Why is the result of ReadFromBQ stored in a named PCollection instead of being directly passed to another step?
  • What are the two actions carried out on the PCollection generated from ReadFromBQ?
  • If a file has 3 FIXMEs and 2 TODOs in its content (on different lines), how many calls for help are associated with it?
  • If a file is in the package com.google.devtools.build, what are the packages that it is associated with?
  • popular_packages and help_packages are both named PCollections and both used in the Scores (side inputs) step of the pipeline. Which one is the main input and which is the side input?
  • What is the method used in the Scores step?
  • What Python data type is the side input converted into in the Scores step?
Note: The Java version of this program is slightly different from the Python version. The Java SDK supports AsMap and the Python SDK doesn't. It supports AsDict instead. In Java, the PCollection is converted into a View as a preparatory step before it is used. In Python, the PCollection conversion occurs in the step where it is used.
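
To make the side-input pattern behind these questions concrete, here is a small, self-contained sketch. The names are purely illustrative and this is not the lab's code; it shows a PCollection being converted with beam.pvalue.AsDict at the point of use and passed as a side input to a Map over the main input:

# Illustrative side-input sketch (not the lab's pipeline; names are made up).
# The second PCollection is converted to a dict side input with AsDict in the
# step where it is used, mirroring the Python SDK behavior described above.
import apache_beam as beam

def apply_weight(item, weights):
    key, count = item
    # 'weights' arrives as an ordinary Python dict built from the side input.
    return key, count * weights.get(key, 1)

with beam.Pipeline() as p:
    main_counts = p | 'MainCounts' >> beam.Create([('a', 3), ('b', 2)])
    weight_kv = p | 'Weights' >> beam.Create([('a', 10), ('b', 5)])

    (main_counts  # main input
     | 'ApplyWeights' >> beam.Map(apply_weight,
                                  weights=beam.pvalue.AsDict(weight_kv))
     | 'Print' >> beam.Map(print))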

Task 4. Execute the pipeline

  1. The program requires the BUCKET, PROJECT, and REGION values, plus a flag indicating whether to run the pipeline locally using --DirectRunner or on the cloud using --DataFlowRunner (a sketch of how such a flag typically maps onto Beam's runner option appears after the last step of this task).

  2. Execute the pipeline locally by typing the following into the training-vm SSH terminal:

python3 JavaProjectsThatNeedHelp.py --bucket $BUCKET --project $PROJECT --region $REGION --DirectRunner

Note: Ignore any warnings, such as 'BeamDeprecationWarning', and move forward.
  3. Once the pipeline has finished executing, on the Navigation menu, click Cloud Storage > Buckets and click on your bucket. You will find the results in the javahelp folder. Click on the Result object to examine the output.

  4. Execute the pipeline on the cloud by typing the following into the training-vm SSH terminal:

python3 JavaProjectsThatNeedHelp.py --bucket $BUCKET --project $PROJECT --region $REGION --DataFlowRunner

Note: Ignore any warnings, such as 'BeamDeprecationWarning', and move forward.
  5. Return to the browser tab for the Console. On the Navigation menu, click View All Products, and select Dataflow from the Analytics section.

  6. Click on your job to monitor progress.

  7. Once the pipeline has finished executing, on the Navigation menu, click Cloud Storage > Buckets and click on your bucket. You will find the results in the javahelp folder. Click on the Result object to examine the output. The file name will be the same, but you will notice that the file creation time is more recent.
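
The lab script parses its own --DirectRunner/--DataFlowRunner flags. As a rough, hypothetical sketch of how such a flag can be mapped onto Beam's standard runner option (the real script's argument handling may differ; all names below are assumptions):

# Hypothetical sketch of mapping a custom runner flag onto Beam pipeline options.
# Not the lab script's actual code; flag and option names are assumptions.
import argparse
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

parser = argparse.ArgumentParser()
parser.add_argument('--bucket', required=True)
parser.add_argument('--project', required=True)
parser.add_argument('--region', required=True)
parser.add_argument('--DirectRunner', action='store_true')
parser.add_argument('--DataFlowRunner', action='store_true')
args = parser.parse_args()

# Beam's runner is named 'DataflowRunner'; the custom flag just selects it.
runner = 'DataflowRunner' if args.DataFlowRunner else 'DirectRunner'
options = PipelineOptions(
    runner=runner,
    project=args.project,
    region=args.region,
    temp_location=f'gs://{args.bucket}/staging',
)

with beam.Pipeline(options=options) as p:
    pass  # the pipeline's transforms would be built here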

Click Check my progress to verify the objective: Execute the pipeline.

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.


