Configuring MongoDB Atlas with BigQuery Dataflow Templates


Lab · 1 hour 30 minutes · 5 Credits · Intermediate

This lab was developed with our partner, MongoDB. Your personal information may be shared with MongoDB, the lab sponsor, if you have opted in to receive product updates, announcements, and offers in your Account Profile.

Note: This lab requires a partner account. Please follow the lab instructions to create your account before starting the lab.

GSP1105

Google Cloud self-paced labs logo

Overview

In this lab, you configure MongoDB Atlas, create an M0 cluster, make the cluster accessible from the outside world, and create an admin user. You use the sample_mflix database and its movies collection on MongoDB Atlas, which contains details about movies such as cast and release date. You then create a Dataflow pipeline to load the data from the MongoDB cluster into Google BigQuery. Finally, you query the data in BigQuery using the Google Cloud console and create an ML model from the data.

Objectives

In this lab, you learn how to:

  • Create a BigQuery dataset
  • Create a Dataflow pipeline using Dataflow UI
  • Create an ML model from the BigQuery dataset

Prerequisites

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which could cause extra charges to be incurred on your personal account.
  • Time to complete the lab. Remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Cloud console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).

    The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username below and paste it into the Sign in dialog.

    {{{user_0.username | "Username"}}}

    You can also find the Username in the Lab Details panel.

  4. Click Next.

  5. Copy the Password below and paste it into the Welcome dialog.

    {{{user_0.password | "Password"}}}

    You can also find the Password in the Lab Details panel.

  6. Click Next.

    Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  7. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.

Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left. Navigation menu icon

Task 1. Create Atlas cluster

To get access to the sample data, you need to create a MongoDB Atlas cluster. For this lab, you deploy a free cluster. Note that this cluster can only be used for PoVs.

Create MongoDB Atlas free cluster

  1. On the Deploy your cloud database page, select Shared.

The Deploy a cloud database page, which includes the Create button in the Free tile.

  2. Select Google Cloud as the Provider, then select the region nearest to you.

  3. Name the cluster Sandbox and click Create Deployment.

MongoDB Cluster

The Atlas tabbed page, which includes one cluster in the Sandbox.

  4. If you are presented with a "Connect to Sandbox" page, dismiss it by clicking Cancel.

Load sample data

  1. Navigate to your Clusters view. In the left navigation pane in Atlas, click Database under Deployment.

  2. Your cluster should be automatically populated with sample data. Click the Connect button.

  3. View your sample data by clicking your cluster's Browse Collections button. You will see the following databases in your cluster (databases that start with sample_):

The Collections tabbed page, which displays the sample_mflix.movies data.

Create Database user

To have authenticated access to the MongoDB Sandbox cluster from the Google Cloud console, you need to create a database user.

  1. Click Database Access from the left pane on the Atlas dashboard, under Security.

  2. Choose to Add New Database User using the green button. Leave the authentication method as Password and enter the username appUser and password appUser123.

  3. Under Database User Privileges, click Add Built-in Role and add the role Read and write to any database. Finally, click the green Add User button at the bottom of the page to create the user.

    The Add New Database dialog, which includes the aforementioned fields.

Grant network access

  1. Go to Network Access on the left panel. Click the + ADD IP ADDRESS button. Then click ALLOW ACCESS FROM ANYWHERE and click Confirm.
Note: In this lab, you allow access from 0.0.0.0/0. This is not recommended for a production setup, where the recommendation is to use VPC peering and private IPs.

The Network Access page, which displays one IP address along with its Active status and two possible actions; Edit and Delete.

Task 2. Create BigQuery table

In this lab, you need a BigQuery dataset before you run the Dataflow pipeline. To create one, navigate to BigQuery in the Google Cloud console, either by searching for BigQuery in the search bar at the top or by selecting BigQuery under the Analytics section of the Navigation menu.

Create BigQuery dataset

  1. Open BigQuery in the Google Cloud console.

  2. In the Explorer panel, select the project where you want to create the dataset.

  3. Click the three vertical dots next to your project ID in the Explorer section, click View actions, and then click Create dataset.

    Create Dataset displayed in the Actions menu.

  4. On the Create dataset page:

  • For Dataset ID, enter sample_mflix

  • For Location type, select Region, and then select the region for the dataset. After a dataset is created, the location can't be changed.

  • For Default table expiration, leave the selection empty.

  • Click Create dataset.

  • Verify the dataset is listed under your project name on BigQuery explorer.

  5. Expand the dataset options and click Create table.
  • Name the table movies and click Create table.
Create BigQuery dataset
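
If you prefer SQL to the console steps above, the same dataset and table can be created with BigQuery DDL from the query editor. This is a minimal sketch, not a required lab step: the us-central1 location is a placeholder for the region you selected, and the three columns are an assumption based on what the MongoDB to BigQuery template writes when User option is NONE.

-- Hedged DDL alternative to the console steps above (run in the BigQuery query editor).
-- Replace us-central1 with the region you selected for the dataset.
CREATE SCHEMA IF NOT EXISTS sample_mflix
  OPTIONS (location = 'us-central1');

-- Assumed layout of the destination table when the template's User option is NONE:
-- each MongoDB document lands as a JSON string in source_data, with a load timestamp.
CREATE TABLE IF NOT EXISTS sample_mflix.movies (
  id STRING,
  source_data STRING,
  timestamp TIMESTAMP
);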

Task 3. Dataflow Pipeline creation

Now, create a Dataflow pipeline from the Dataflow UI.

Run Dataflow pipeline for MongoDB to BigQuery

  1. To create a Dataflow pipeline, go to the Dataflow Create job from template page.

  2. In the Job name field, enter a unique job name: mongodb-to-bigquery-batch.

  3. For Regional endpoint, select the region assigned to your lab.

The dataflow template, which includes the aforementioned fields.

  4. From the Dataflow template drop-down menu, select the MongoDB to BigQuery template under Process Data in Bulk (batch).
Note: Do not select MongoDB to BigQuery (CDC) under Process Data Continuously (stream).
  5. Under Required parameters, enter the following:
  • MongoDB Connection URI

    • Click the Connect button on your Atlas Sandbox cluster.

    Connect Button

    • Under Connect your application, click Drivers

    The Connect your application displayed in the Connect to Sandbox dialog box.

    Note: If you see a slightly different UI, click Connect > MongoDB Drivers.
    • Copy the connection string and click Close.

    The Copy button in the connection string text box, and the Close button.

    • Paste the URI into the MongoDB Connection URI field in your Dataflow UI. Replace <password> with appUser123.
  • Mongo Database: sample_mflix

    You are using the sample_mflix database from the sample dataset for this lab.

  • Mongo Collection: movies

  • Leave User option set to NONE.

  • BigQuery Output table:

    • Click the BROWSE button and select movies. Your output table should look like {{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies.
  6. Expand Optional Parameters and de-select Use default machine type. Choose e2-standard-2 from the list of options.

  7. Click Run Job.

View the Dataflow job running on the Dataflow page in the Cloud console.
The Job Graph tabbed page, which includes the Dataflow job status Running, along with the job logs.

Once the job run is completed, all the job graph steps turn green.
Dataflow job graphs, all of which have turned green to indicate completion.

For more details on the values passed refer to the MongoDB Dataflow Templates.
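
After the job completes, you can sanity-check the load directly from the BigQuery query editor before moving on. This is an optional verification query, not a lab requirement; a non-zero count indicates that the pipeline wrote documents into the destination table.

# Optional sanity check: count the documents loaded by the Dataflow job
SELECT COUNT(*) AS document_count
FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies`;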

Note: This job takes 5-10 minutes. Check your progress once your job has fully completed.

Create Dataflow Pipeline from MongoDB to BigQuery

Task 4. Build Model using BQML

  1. Once the pipeline runs successfully, open the BigQuery Explorer.

  2. Expand the project, click the dataset sample_mflix, and then double-click the movies table.

    The Preview tabbed page, which lists the movies and their specs.

  3. Run the below queries one at a time.

    a. Query the first 1000 rows from the table:

SELECT * FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies` LIMIT 1000;

b. Create a movies-unpacked table from parsed imported data:

# Parse source_data and create movies-unpacked
CREATE OR REPLACE TABLE `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies-unpacked` AS (
  SELECT
    timestamp,
    JSON_EXTRACT(source_data, "$.year") AS year,
    JSON_EXTRACT(source_data, "$.cast") AS movie_cast,
    SAFE_CAST(JSON_EXTRACT(source_data, "$.imdb.rating") AS FLOAT64) AS imdb_rating,
    JSON_EXTRACT(source_data, "$.title") AS title,
    JSON_EXTRACT(source_data, "$.imdb.votes") AS imdb_votes,
    JSON_EXTRACT(source_data, "$.type") AS type,
    JSON_EXTRACT(source_data, "$.rated") AS rated,
    SAFE_CAST(JSON_EXTRACT(source_data, "$.tomatoes.viewer.rating") AS FLOAT64) AS tomato_rating,
    JSON_EXTRACT(source_data, "$.tomatoes.viewer.numReviews") AS tomato_numReviews,
    JSON_EXTRACT(source_data, "$.tomatoes.viewer.meter") AS tomato_meter
  FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies`
);

c. Query and visualize the table movies-unpacked:

# Query newly created table
SELECT * FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies-unpacked` LIMIT 1000;

d. Create the model sample_mflix.ratings using the KMEANS clustering algorithm:

# Clustering based on imdb rating and tomato rating
CREATE OR REPLACE MODEL `sample_mflix.ratings`
OPTIONS (
  MODEL_TYPE = 'KMEANS',
  NUM_CLUSTERS = 5,
  KMEANS_INIT_METHOD = 'KMEANS++',
  DISTANCE_TYPE = 'EUCLIDEAN'
) AS
SELECT
  AVG(imdb_rating) AS avg_imdb_rating,
  AVG(tomato_rating) AS avg_tomato_rating,
  rated
FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies-unpacked`
GROUP BY rated;
  4. Finally, navigate to the model created under your Model section as shown below.

    The highlighted Selected features section, which includes five rows of feature data.
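
Optionally, you can also inspect the clusters with SQL. The queries below are a hedged sketch assuming the sample_mflix.ratings model created above: ML.CENTROIDS lists the learned centroid values per feature, and ML.PREDICT assigns each rating category to its nearest cluster using the same aggregated features the model was trained on.

# Inspect the centroids learned by the k-means model
SELECT * FROM ML.CENTROIDS(MODEL `sample_mflix.ratings`);

# Assign each rating category to a cluster
SELECT centroid_id, rated, avg_imdb_rating, avg_tomato_rating
FROM ML.PREDICT(
  MODEL `sample_mflix.ratings`,
  (
    SELECT
      AVG(imdb_rating) AS avg_imdb_rating,
      AVG(tomato_rating) AS avg_tomato_rating,
      rated
    FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies-unpacked`
    GROUP BY rated
  )
);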

Create the BQML model

Congratulations!

In this lab, you provisioned your first MongoDB Atlas cluster, loaded a sample dataset, and set up a BigQuery dataset. You also used the Dataflow UI to build a MongoDB to BigQuery pipeline and performed clustering on the data using the k-means clustering algorithm.

You can deploy this model to Vertex AI to manage and scale your data models, which is out of scope for this lab.

Next steps/Learn more

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated September 20, 2023

Lab Last Tested September 20, 2023

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.