Creating Reusable Pipelines in Cloud Data Fusion


1 hour 30 minutes 7 Credits

GSP810


Overview

In this lab you will learn how to build a reusable pipeline that reads data from Cloud Storage, performs data quality checks, and writes to Cloud Storage.

Objectives

  • Use the Argument Setter plugin to allow the pipeline to read different input in every run.

  • Use the Argument Setter plugin to allow the pipeline to perform different quality checks in every run.

  • Write the output data of each run to Cloud Storage.

Setup

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 02:00:00), and make sure you can finish within that time. There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste the credentials for this lab into the prompts. If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Log in to Google Cloud Console

  1. In the browser tab or window you are using for this lab session, copy the Username from the Connection Details panel and click the Open Google Console button.
Note: If you are asked to choose an account, click Use another account.
  2. Paste in the Username, and then the Password, as prompted.
  3. Click Next.
  4. Accept the terms and conditions.

Since this is a temporary account, which will last only as long as this lab:

  • Do not add recovery options

  • Do not sign up for free trials

  5. Once the console opens, view the list of services by clicking the Navigation menu (Navigation menu icon) at the top-left.

Navigation menu

Activate Cloud Shell

Cloud Shell is a virtual machine that contains development tools. It offers a persistent 5-GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources. gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.

  1. Click the Activate Cloud Shell button (Activate Cloud Shell icon) at the top right of the console.

  2. Click Continue. It takes a few moments to provision and connect to the environment. When you are connected, you are also authenticated, and the project is set to your PROJECT_ID.

Sample commands

  • List the active account name:

gcloud auth list

(Output)

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

(Example output)

Credentialed accounts:
 - google1623327_student@qwiklabs.net

  • List the project ID:

gcloud config list project

(Output)

[core]
project = <project_ID>

(Example output)

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Check project permissions

Before you begin working on Google Cloud, you must ensure that your project has the correct permissions within Identity and Access Management (IAM).

  1. In the Google Cloud console, on the Navigation menu (Navigation menu icon), click IAM & Admin > IAM.

  2. Confirm that the default compute Service Account {project-number}-compute@developer.gserviceaccount.com is present and has the editor role assigned. The account prefix is the project number, which you can find on Navigation menu > Cloud overview.

Default compute service account

If the account is not present in IAM or does not have the editor role, follow the steps below to assign the required role.

  1. In the Google Cloud console, on the Navigation menu, click Cloud overview.

  2. From the Project info card, copy the Project number.

  3. On the Navigation menu, click IAM & Admin > IAM.

  4. At the top of the IAM page, click Add.

  5. For New principals, type:

{project-number}-compute@developer.gserviceaccount.com

Replace {project-number} with your project number.

  6. For Select a role, select Basic (or Project) > Editor.

  7. Click Save.
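
If you prefer the command line, the following Cloud Shell commands are a roughly equivalent sketch of the check and grant above (the console steps are what the lab describes; the PROJECT_NUMBER variable is only illustrative):

# Look up the project number, then list the roles bound to the default compute service account
PROJECT_NUMBER=$(gcloud projects describe $GOOGLE_CLOUD_PROJECT --format="value(projectNumber)")
gcloud projects get-iam-policy $GOOGLE_CLOUD_PROJECT \
  --flatten="bindings[].members" \
  --filter="bindings.members:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --format="table(bindings.role)"

# Grant the Editor role if it is missing
gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
  --member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --role="roles/editor"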

Add necessary permissions for your Cloud Data Fusion instance

  1. In the Cloud Console, from the Navigation menu, select Data Fusion > Instances. You should see a Cloud Data Fusion instance already set up and ready for use.

Next, you will grant permissions to the service account associated with the instance, using the following steps.

  2. Click on the instance name. On the Instance details page, copy the Service Account to your clipboard.


  3. In the Cloud Console, navigate to IAM & admin > IAM.

  4. On the IAM Permissions page, click Add.

  5. In the New Principals field, paste the service account.

  6. Click into the Select a role field and start typing Cloud Data Fusion API Service Agent, then select it.


  7. Click Save.

Click Check my progress to verify the objective. Add Cloud Data Fusion API Service Agent role to service account
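
As an optional command-line alternative to the console steps above, a sketch like the following can add the same binding from Cloud Shell. It assumes the role ID roles/datafusion.serviceAgent corresponds to Cloud Data Fusion API Service Agent and uses a placeholder for the service account you copied from the Instance details page:

# Replace <instance-service-account> with the Service Account copied from the Instance details page
export DATAFUSION_SA=<instance-service-account>
gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
  --member="serviceAccount:${DATAFUSION_SA}" \
  --role="roles/datafusion.serviceAgent"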

Grant service account user permission

  1. In the console, on the Navigation menu, click IAM & admin > IAM.

  2. Select the Include Google-provided role grants checkbox.

  3. Scroll down the list to find the Google-managed Cloud Data Fusion service account that looks like service-{project-number}@gcp-sa-datafusion.iam.gserviceaccount.com and then copy the service account name to your clipboard.

Google-managed Cloud Data Fusion service account listing

  4. Next, navigate to IAM & admin > Service Accounts.

  5. Click on the default Compute Engine account that looks like {project-number}-compute@developer.gserviceaccount.com, and select the Permissions tab on the top navigation.

  6. Click on the Grant Access button.

  7. In the New Principals field, paste the service account you copied earlier.

  8. In the Role dropdown menu, select Service Account User.

  9. Click Save.
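
If you would rather grant this from Cloud Shell, the following is a roughly equivalent sketch, assuming the project number and the Google-managed Data Fusion service account follow the naming patterns shown above:

# Look up the project number, then grant the Data Fusion service account the
# Service Account User role on the default compute service account
PROJECT_NUMBER=$(gcloud projects describe $GOOGLE_CLOUD_PROJECT --format="value(projectNumber)")
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
  --member="serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-datafusion.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"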

Set up a Cloud Storage bucket

Next, you will create a Cloud Storage bucket in your project to be used later to store results when your pipeline runs.

  1. In Cloud Shell, execute the following commands to create a new bucket:

export BUCKET=$GOOGLE_CLOUD_PROJECT
gsutil mb gs://$BUCKET

The created bucket has the same name as your Project ID.
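
As an optional sanity check, you can confirm the bucket exists before moving on:

gsutil ls -b gs://$BUCKET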

Click Check my progress to verify the objective. Setup Cloud Storage bucket

Navigate the Cloud Data Fusion UI

  1. In the Cloud Console, from the Navigation Menu, click on Data Fusion, then click the View Instance link next to your Data Fusion instance. Select your lab credentials to sign in, if required.

  2. If prompted to take a tour of the service, click No, Thanks. You should now be in the Cloud Data Fusion UI.

Deploy the Argument Setter plugin

  1. In the Cloud Data Fusion web UI, click Hub in the upper right.


  2. Click the Argument setter action plugin and click Deploy.

  3. In the Deploy window that opens, click Finish.

  4. Click Create a pipeline. The Pipeline Studio page opens.

Read from Cloud Storage

  1. In the left panel of the Pipeline Studio page, using the Source drop-down menu, select Google Cloud Storage.

  2. Hover over the Cloud Storage node and click the Properties button that appears.


  3. In the Reference name field, enter a name.

  4. In the Path field, enter ${input.path}. This macro controls what the Cloud Storage input path will be in the different pipeline runs.

  5. In the Format field, select text.

  6. In the right Output Schema panel, remove the offset field from the output schema by clicking the trash icon in the offset field row.


  7. Click the X button to exit the Properties dialog box.

Transform your data

  1. In the left panel of the Pipeline Studio page, using the Transform drop-down menu, select Wrangler.

  2. In the Pipeline Studio canvas, drag an arrow from the Cloud Storage node to the Wrangler node.


  3. Hover over the Wrangler node and click the Properties button that appears.

  4. In the Input field name field, type body.

  5. In the Recipe field, enter ${directives}. This macro controls what the transform logic will be in the different pipeline runs.


  6. Click the X button to exit the Properties dialog box.

Write to Cloud Storage

  1. In the left panel of the Pipeline Studio page, using the Sink drop-down menu, select Cloud Storage.

  2. In the Pipeline Studio canvas, drag an arrow from the Wrangler node to the Cloud Storage node you just added.


  3. Hover over the Cloud Storage sink node and click the Properties button that appears.

  4. In the Reference name field, enter a name.

  5. In the Path field, enter the path of the Cloud Storage bucket you created earlier (see the note after these steps for one way to print it).


  6. In the Format field, select json.

  7. Click the X button to exit the Properties menu.
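
Since the bucket you created earlier shares its name with your Project ID, the path to enter is gs://<your-project-id>. If it helps, you can print it from Cloud Shell:

echo "gs://${GOOGLE_CLOUD_PROJECT}"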

Set the macro arguments

  1. In the left panel of the Pipeline Studio page, using the Conditions and Actions drop-down menu, select the Argument Setter plugin.

  2. In the Pipeline Studio canvas, drag an arrow from the Argument Setter node to the Cloud Storage source node.


  3. Hover over the Argument Setter node and click the Properties button that appears.

  4. In the URL field, add the following:

https://storage.googleapis.com/reusable-pipeline-tutorial/args.json


The URL corresponds to a publicly accessible object in Cloud Storage that contains the following content:

{ "arguments" : [ { "name": "input.path", "value": "gs://reusable-pipeline-tutorial/user-emails.txt" }, { "name": "directives", "value": "send-to-error !dq:isEmail(body)" } ] }

The first of the two arguments is the value for input.path. The path gs://reusable-pipeline-tutorial/user-emails.txt is a publicly accessible object in Cloud Storage that contains the following test data:

alice@example.com
bob@example.com
craig@invalid@example.com

The second argument is the value for directives. The value send-to-error !dq:isEmail(body) configures Wrangler to filter out any lines that are not a valid email address. For example, craig@invalid@example.com is filtered out.
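
Both objects are publicly readable, so if you want to inspect them yourself before running the pipeline, you can do so from Cloud Shell (optional):

curl -s https://storage.googleapis.com/reusable-pipeline-tutorial/args.json
gsutil cat gs://reusable-pipeline-tutorial/user-emails.txt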

  5. Click the X button to exit the Properties menu.
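
Nothing in the pipeline itself is tied to these values. To reuse the deployed pipeline for other data, you could host your own args.json with the same structure at a URL the Argument Setter plugin can reach and enter that URL instead; the bucket and file names below are only placeholders:

{
  "arguments" : [
    { "name": "input.path", "value": "gs://<your-bucket>/<your-input-file>.txt" },
    { "name": "directives", "value": "send-to-error !dq:isEmail(body)" }
  ]
}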

Deploy and run your pipeline

  1. In the top bar of the Pipeline Studio page, click Name your pipeline. Give your pipeline a name (like Reusable-Pipeline), and then click Save.


  2. Click Deploy on the top right of the Pipeline Studio page. This deploys your pipeline.

  3. Once deployed, click the drop-down menu on the Run button. Notice the boxes for the input.path and directives arguments. These indicate that Cloud Data Fusion will populate the required arguments at runtime with the values provided through the Argument Setter plugin. Click Run.

  4. Wait for the pipeline run to complete and the status to change to Succeeded.


Click Check my progress to verify the objective. Build and execute your pipeline
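
Optionally, you can list the output the sink wrote to your bucket from Cloud Shell. The exact object names depend on how the sink lays out its output (typically a time-suffixed directory of part files), so treat this as a quick sanity check rather than an exact path:

gsutil ls -r gs://$BUCKET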

Congratulations!

In this lab, you learned how to use the Argument Setter plugin to create a reusable pipeline, which can take in different input arguments with every run.

Building Advanced Codeless Pipelines on Cloud Data Fusion quest badge

Continue Your Quest

This self-paced lab is part of the Qwiklabs Building Advanced Codeless Pipelines on Cloud Data Fusion Quest. A Quest is a series of related labs that form a learning path. Completing this Quest earns you the badge above, to recognize your achievement. You can make your badge (or badges) public and link to them in your online resume or social media account. Enroll in this Quest and get immediate completion credit if you've taken this lab. See other available Qwiklabs Quests.

Take Your Next Lab

Continue your Quest with Redacting Confidential Data within your Pipelines in Cloud Data Fusion

Manual Last Updated October 21, 2021
Lab Last Tested October 20, 2021

©2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.