Migrate for Compute Engine

1 hour 30 minutes 5 Credits


In this lab you use Migrate for Compute Engine to migrate a VM instance (EC2) that exists on AWS to Google Cloud. This will be a "lift and shift" operation. When completed, the VM instance that was running on AWS will be running on Google Cloud.

Building a Virtual Private Network between AWS and Google Cloud

Migrate for Compute Engine requires a Virtual Private Network (VPN) between the Google Cloud environment and the environment from which you are sourcing the VM. Creating a VPN between AWS and Google Cloud can involve many steps, including the exchange of IP addresses, keys, and other definitions. Rather than configuring these by hand and introducing the associated opportunities for error, this lab provides you with a Terraform script. Terraform is an infrastructure-as-code tool used to provision environments. It supports both Google Cloud and AWS and can configure each environment against the other. When you run the Terraform script, it performs the following tasks:

  • Create an AWS EC2 instance

  • Create a Google Cloud Compute Engine instance

  • Create an AWS network

  • Create a custom VPC in Google Cloud

  • Create the AWS side of the VPN connection

  • Create the Google Cloud side of the VPN connection

  • Enable appropriate firewall rules for the Google Cloud VPC network


Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents conflicts between your personal account and the Student account, which could cause extra charges to be incurred on your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.

  4. Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.

    Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  5. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Note: You can view the menu with a list of Google Cloud Products and Services by clicking the Navigation menu at the top-left. Navigation menu icon

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell Activate Cloud Shell icon at the top of the Google Cloud console.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:

Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  1. (Optional) You can list the active account name with this command:

gcloud auth list


ACTIVE: *
ACCOUNT:

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  2. (Optional) You can list the project ID with this command:

gcloud config list project


[core] project = <project_ID>

Example output:

[core] project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.

Make sure you are logged out of your personal or corporate Gmail account.

Task 1. Prepare your working environment

  1. Set your Project ID:

gcloud config set project $DEVSHELL_PROJECT_ID
  2. In Cloud Shell, retrieve the Terraform scripts by copying and extracting the archive:

gsutil cp gs://spls/gsp854/autonetdeploy-multicloudvpn2.tar .
tar -xvf autonetdeploy-multicloudvpn2.tar
  3. Change into the directory with the scripts:

cd autonetdeploy-multicloudvpn

Create Google Cloud Access credentials

  1. In a new tab, open Create Service Account key page.

  2. In the Service Accounts page, select your Project ID.

  3. Click the three dots under the Actions column for the Qwiklabs User Service Account, and then select Manage keys.

  4. Click ADD KEY and select Create new key.

  5. Select JSON as the Key type and click CREATE.

This will automatically download a JSON file of the key onto your local machine.

  6. Close the key window.

  7. Upload this file from your local machine to Cloud Shell: click the More icon (three vertical dots) in the Cloud Shell ribbon, and then select Upload.

The More menu with the Upload option highlighted.

  8. Navigate to the JSON file you downloaded and click Open. The file is placed in the home (~) directory.

  9. Next, use the ./ script provided to create the ~/.config/gcloud/credentials_autonetdeploy.json file. This script also creates terraform/terraform.tfvars with a reference to the new credentials.

  10. Run the following, replacing [YOUR-CREDENTIALS] with the name of the JSON file you just downloaded:


Set AWS access credentials

For this lab, you will use the AWS access credentials that are created for the lab environment. These credentials are split into an Access Key and a Secret Access Key which are both displayed in the resources panel for this lab.

  1. Run this command, replacing [YOUR_ACCESS_KEY] with your AWS Access Key from your resources panel, to store your Access Key in an environment variable:

  2. Run this command, replacing [YOUR_SECRET_KEY] with your AWS Secret Key from your resources panel, to store your Secret Key in an environment variable:

  3. To set your credentials, run the following in Cloud Shell:

  4. Run the following script to update the project value in your configuration files for Deployment Manager and Terraform:
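The exact commands for these credential steps are not reproduced in this copy of the lab. As a minimal sketch only (not the lab's actual script), the Terraform AWS provider reads the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables when they are set, so exporting them could look like this:

```shell
# Sketch only: the lab's actual commands are omitted here. Terraform's AWS
# provider picks up these standard environment variables automatically.
export AWS_ACCESS_KEY_ID="[YOUR_ACCESS_KEY]"          # value from the resources panel
export AWS_SECRET_ACCESS_KEY="[YOUR_SECRET_KEY]"      # value from the resources panel

# Sanity check: confirm both variables are set and non-empty.
if [ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ]; then
  echo "AWS credentials exported"
fi
```

Note that environment variables only persist for the current Cloud Shell session; re-export them if your session restarts.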


Generate key-pairs

  1. Next, use ssh-keygen to generate a new key pair with your current account. For this lab, it's okay to use an empty passphrase:

ssh-keygen -t rsa -f ~/.ssh/vm-ssh-key -C $(whoami)
  2. When asked for a passphrase, press Enter twice to leave it blank.

  3. Restrict access to your private key; this is a best practice:

chmod 400 ~/.ssh/vm-ssh-key
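As a sketch of what these two steps produce, you can run the same commands against a throwaway temp directory; the -N "" flag supplies the empty passphrase non-interactively instead of pressing Enter twice:

```shell
# Sketch: generate a disposable RSA key pair in a temp directory to see what
# files ssh-keygen produces, then verify the restricted permissions.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -f "$KEYDIR/vm-ssh-key" -C "$(whoami)" -N "" -q   # -N "" = empty passphrase
chmod 400 "$KEYDIR/vm-ssh-key"

ls "$KEYDIR"                          # vm-ssh-key (private) and vm-ssh-key.pub (public)
stat -c '%a' "$KEYDIR/vm-ssh-key"     # prints 400: read-only for the owner
```

Mode 400 matters because ssh refuses to use a private key that other users can read.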

Import key to Google Cloud

  • In Cloud Shell, register your public key with Google Cloud:

gcloud compute config-ssh --ssh-key-file=~/.ssh/vm-ssh-key

You will see the following warning, which is safe to ignore:

WARNING: No host aliases were added to your SSH configs because you do not have any running instances. Try running this command again after running some instances.

Download key

  1. Get the filepath to your public key:

readlink -f ~/.ssh/vm-ssh-key.pub
  2. Copy the file path from the output.

  3. To download the public key file from Cloud Shell, click the More icon, and then click Download.

  4. Paste in the file path to the key copied from the readlink command.

  5. Click Download.

This will download the key file to your local machine.

Import key to AWS

  1. Click the Open AWS Console button listed in the panel of resources for this lab. This will take you to the AWS console login page:

AWS console login page

  2. At the login page, enter your AWS Username value for the IAM user name and your AWS Password value for the Password. Both can be found in the resources panel at the top left of this page.

  3. Click Sign In.

  4. In the AWS console, look in the upper-right corner next to your user name. If you are not signed in to the N. Virginia region, use the dropdown menu to select it.

  5. In the AWS Management Console, click All Services > EC2.

  6. Click Key pairs in the Resources panel.

  7. In the top right corner, click Actions > Import Key Pair.

  8. Name the key pair vm-ssh-key.

  9. Click Browse and navigate to the downloaded key pair file.

  10. Select it and click Open. The key pair is added to your Import Settings page.

  11. Click Import key pair.

Deploy with Terraform

  1. Back in Cloud Shell, navigate to the autonetdeploy-multicloudvpn directory:

cd ~/autonetdeploy-multicloudvpn/
  2. Run the one-time terraform init command to install the Terraform providers for this deployment:

pushd ./terraform && terraform init && popd > /dev/null
  3. Run the terraform plan command to verify your credentials:

pushd ./terraform && terraform plan && popd > /dev/null

If you don't see red error text, your authentication is working properly.
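Rather than scanning for red error text, you can also gate on the command's exit status. A generic sketch, where check is a hypothetical helper and not part of the lab:

```shell
# Hypothetical helper: run any command and report success or failure based on
# its exit status instead of eyeballing the output.
check() {
  if "$@" > /dev/null 2>&1; then
    echo "authentication OK"
  else
    echo "plan failed: check your credentials" >&2
    return 1
  fi
}

# In the lab you would run it as:
#   (cd terraform && check terraform plan)
```

This works because terraform plan exits non-zero when provider authentication fails.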

  4. Now, navigate to the terraform directory:

pushd terraform
  5. Use the terraform validate command to validate the syntax of your configuration files:

terraform validate

This validation check is simpler than those performed as part of the plan and apply commands in subsequent steps. The validate command does not authenticate with any providers.

  6. Use the terraform apply command to create your deployment:

terraform apply -auto-approve

This will take about 10 minutes to complete.

What is happening: you are creating a Virtual Private Network (VPN) between Google Cloud and AWS. This requires creating resource definitions on both AWS and Google Cloud that refer to each other, including IP addresses, routing information, shared keys, and more.

Once complete, you have both an AWS environment and a Google Cloud environment configured. An EC2 (VM) instance has also been created on AWS. This will be the VM instance that you will be migrating to Google Cloud.

Next, customize the EC2 instance so that you can easily see that, once migrated, it is the same VM on Google Cloud that was on AWS.

Click Check my progress to verify the objective. Build a Virtual Private Network between AWS and Google Cloud

Task 2. Configure AWS EC2 instance for Migration

  1. On the AWS tab, view the running EC2 instance by selecting Instances in the left-hand navigation menu.

  2. Select the instance and find the public IP address:

The Instances page with the Public IPv4 address highlighted

You will use your public key to ssh into your AWS instance.

  3. Run the following, replacing [AWS_INSTANCE_EXTERNAL_IP] with your instance's public IP:

ssh -i ~/.ssh/vm-ssh-key ubuntu@[AWS_INSTANCE_EXTERNAL_IP]
  4. You will get a message asking you to confirm the authenticity of the host; type yes.

  5. Once logged in, run the following commands:

sudo bash -c "apt-get update"
sudo bash -c "apt-get install apache2 -y"
echo "Hello World" > MyText.txt

When the Linux image runs on Google Cloud, it expects to find kernel drivers for the Migrate for Compute Engine mapped disks. These must be downloaded and installed before the migration to Google Cloud.

  6. The driver installation must be performed on the EC2 machine:

curl -LO
sudo dpkg -i velostrata-prep-0.9-3.deb
sudo apt-get update && sudo apt-get install -f -y

You have now completed setting up the EC2 instance.

  7. Log out of the AWS VM instance by typing exit.

Task 3. Set up Migration Service Accounts

  • In Cloud Shell, go back to the home directory and run this script to create the service accounts you will assign to Migrate for Compute Engine:

cd ~/autonetdeploy-multicloudvpn/
sh

Click Check my progress to verify the objective. Set up Migration Service Accounts

Task 4. Set up Migration Manager

At this point you would normally go to the Console and install Migrate for Compute Engine from the Marketplace. This lab uses a special version of Migrate for Compute Engine which has lower resource needs and runs faster.

Note: The above is what would normally happen, but because this lab runs in a Qwiklabs environment, a different approach is needed.

A script has been provided that creates a Compute Engine instance that works in Qwiklabs.

  1. Run the script:


You can ignore the warning:

WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see:

This will take a few minutes to run.

  2. Once finished, take note of the External IP field in the details of the velo-mgr VM. You will use it in the following command.

Click Check my progress to verify the objective. Set up Migration Manager

Task 5. Migrate

Verify that your velo-mgr VM has completed its startup and can accept connections. After creation, the Velostrata Manager often takes a minute or two to launch and become reachable.

  1. In Cloud Shell, replacing [VELO-EXTERNAL-IP] with your velo-mgr VM's External IP, run this command until you successfully get a response:

curl -k https://[VELO-EXTERNAL-IP]
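Instead of re-running the probe by hand, you can wrap it in a small retry loop. This is a hypothetical helper, not part of the lab; substitute your velo-mgr External IP as above:

```shell
# Hypothetical helper: retry a command up to N times, pausing between attempts,
# until it succeeds (exit status 0).
wait_until_up() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" > /dev/null 2>&1; then
      echo "up after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Usage in the lab (replace [VELO-EXTERNAL-IP] with your VM's External IP):
#   wait_until_up 30 curl -k --silent https://[VELO-EXTERNAL-IP]
```

The -k flag is still needed because the manager serves a self-signed certificate.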
  2. Access the Migrate for Compute Engine manager by copying the velo-mgr External IP and pasting it into a new browser tab.
Note: Migrate for Compute Engine uses a self-signed SSL certificate, and your browser will most likely block the manager page. On Chrome, you can get past this by clicking Advanced > Proceed to on the warning page that appears when your VM finishes loading.

Warning: Your connection is not private

  3. Log in with the following information:

  • username: apiuser

  • password: velo1234

  4. In the initial setup screen, enable Stackdriver for both Logging and Metrics, and click OK.

The Velostrata Manager Setup dialog box with the Enable Stackdriver logging and Enable Stackdriver metrics options enabled.

First, tell Migrate for Compute Engine about the AWS VM, which is your source.

  1. Click the Source Cloud icon:

The Source Cloud icon highlighted

  2. Click the Cloud Credentials tab:

The Cloud Credentials tab highlighted

  3. Click the Create button:

The Create button highlighted

  4. Fill in the form with the following information:

  • Cloud Provider: AWS

  • Credentials Name: a name of your choosing (eg. aws-credentials)

  • Region: US East (N. Virginia)

  • Access Key: The AWS Access key (listed on the resources panel on the left side of this page)

  • Secret Key: The AWS Secret key (listed on the resources panel on the left side of this page)

  5. Click Ok to complete.

  6. Now click the Cloud Details tab:

The Cloud Details tab highlighted

  7. Click the Create button:

The Create button highlighted

  8. Fill in the form with the following information:

  • Cloud Provider: AWS

  • Name: AWS

  • Credentials: Select the credentials created previously from dropdown menu

  • Region: US East (N. Virginia)

  • VPC: Select aws-vpc | vpc-xxx from the dropdown menu

  • Security Group: default

  • Worker subnet for availability zone: | subnet-xxx

  9. Click Ok.

  10. Click the Home button:

The Home button highlighted

Now tell Migrate for Compute Engine about Google Cloud, which is your target.

  1. Click the Target Cloud icon:

The Target Cloud icon

  2. On the Cloud Extensions tab, click the Create button:

The Create button highlighted

  • Project: Select your qwiklabs-gcp-xxx

  • Region: us-central1

  • VPC: gcp-network

  • Default Destination Project for Workloads: qwiklabs-gcp-xxx

  • Default Service Account for Workloads: migration-cloud-extension

  3. Now expand the next sections to complete the form:

Cloud Extension

  • Cloud Extension Name: ext1
  • Service Account for Edge Nodes: migration-cloud-extension
  • Cloud Extension Size: Small


  • Node A Zone: us-central1-a

  • Node B Zone: us-central1-b

  • Node A Subnet:

  • Node B Subnet:

  • Default Workload Subnet:

  4. Click the Ok button.

You should now see ext1 in the Creating state.

Wait until the creation is complete and it's in an Active state to continue. This will take a couple of minutes.

  5. Click the Home button:

The Home button highlighted

  6. Click the Migration Waves icon:

The  Migration Waves icon

  7. Click Generate Runbook:

The Generate Runbook button

  • Source: AWS

  • Source Cloud Details: AWS

  • Filter by Source Tags: Name: Name, Value: *

  • Target Cloud Extension: ext1

  • Target Network: checked

  8. Click Create.

A .csv file will be downloaded. You will edit this file to configure the migration settings.

  1. In a new tab, open Google Sheets. Make sure you're logged in with your lab credentials.

  2. On the Google Sheets page, click Blank to start a new spreadsheet.

  3. Next, click File > Import.

  4. Click on the Upload tab and then drag your Velostrata Runbook.csv file into the window.

  5. In the dialog that follows, select Comma as the Separator type.

  6. Click Import data.

The Import file window displaying the populated Import location and Separator type dropdown menu fields, and Import data button

Edit the runbook

The Velostrata_Runbook spreadsheet displaying several columns and one row of data

  1. Change the RunGroup column value from -1 to 1.

  2. Set the TargetInstanceType column to be n1-standard-1.

  3. Now, save the file back to your local file system: File > Download > Comma-separated values (.csv).
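If you would rather edit the runbook from Cloud Shell than Google Sheets, a small awk pass can make the same two changes. This is a sketch using a hypothetical miniature runbook; the real Velostrata runbook has many more columns, so adjust the field positions to match your file:

```shell
# Hypothetical miniature runbook with only the columns we change here;
# the real generated runbook contains many additional columns.
cat > Velostrata_Runbook.csv <<'EOF'
InstanceName,RunGroup,TargetInstanceType
aws-vm-us-east-1,-1,
EOF

# Set RunGroup to 1 and TargetInstanceType to n1-standard-1 by field position.
awk -F, 'BEGIN { OFS = "," }
  NR == 1 { print; next }                     # keep the header row unchanged
  { $2 = 1; $3 = "n1-standard-1"; print }' \
  Velostrata_Runbook.csv > Velostrata_Runbook_edited.csv

cat Velostrata_Runbook_edited.csv
```

Upload the edited file when creating the wave, exactly as you would the Sheets-edited copy.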

Create a new wave

  1. Back in your Migrate for Compute Engine Manager (Velostrata Migration) tab, click the New Wave button.

The New Wave button highlighted

  • Wave Name: wave1
  • Runbook CSV: Your edited CSV file
Note: Take care to select the CSV file that contains the edited changes and not the original one that was generated (assuming that you now have two files).

The New Wave dialog box with the Wave name field and Runbook CSV file uploaded

  2. Click Save.

Validate the Wave

  1. Click the wave1 line to select it.

  2. Click the Action dropdown menu and select Validate from within the menu.

  3. Click Yes in the Run Validation dialog box.

The Run Validation dialog box with Yes and No buttons

The status will change to Validating and then, after a few seconds, it will change to Passed.

The Runbook Validation Status displays as Passed on the Waves page

  4. With the wave1 line still selected, click the Action pulldown menu and select the New Job entry.

  5. In the New Job dialog, select the Full Migration operation and click Start.

The status will now change to Full Migration (Running).

The migration is now progressing and you must wait for the corresponding Compute Engine to become available.

The migration proceeds in two major phases:

  • The first is the start up of the Compute Engine by bringing in enough of the original VM to start.

  • From there, the remainder of the VM disk will be streamed in the background.

  1. Click on the Monitor icon:

The Monitor icon

You now see a record line for the VM being migrated. This should take about 10 minutes.

  2. While you're waiting, look at the AWS instance and confirm that the machine you're migrating has been stopped.

  3. Use the refresh button to see the most up-to-date information.

  4. Also look in the Cloud Console under Compute Engine; eventually you'll see the aws-vm-us-east-1 machine from AWS appear.

When the status changes from empty or Moving To Target Cloud to any of the following:

  • Cache on demand
  • Migrating
  • Preparing to Detach

then the VM is available on Google Cloud and is ready to be used.

When the VM has been 100% migrated, the status will change to Fully Migrated.

You don't need to wait for this status; continue to the next section.

Click Check my progress to verify the objective. Migrate

Task 6. Test the new Google Cloud machine

  1. In the Cloud Console, select Compute Engine > VM instances from the Navigation menu.

You should see the aws-vm-us-east-1 VM. Now you need to assign it a public IP address to enable SSH connections.

  1. To attach an external IP address, follow these steps:
  • Click on the machine name to open the VM instance details page.

  • Click the Edit button.

  • Scroll down to Network Interfaces, then click the dropdown icon.

  • Under External IP select Ephemeral then click Done.

  • At the bottom of the page click Save.

  • Click VM instances again and confirm that a public IP has been assigned.

  2. Now, in your Cloud Shell, run this command, replacing [PUBLIC_IP] with the public IP of your migrated VM:

ssh -i ~/.ssh/vm-ssh-key ubuntu@[PUBLIC_IP]
  3. Type yes.

This will log you into the migrated VM running on Google Cloud.

  4. Run the ls command to see the file that was created earlier on the AWS EC2 instance.

Click Check my progress to verify the objective. Test the new Google Cloud machine


You have successfully migrated the AWS VM instance to a corresponding Google Cloud Compute Engine.

Finish your quest

This self-paced lab is part of the VM Migration quest. A quest is a series of related labs that form a learning path. Completing this quest earns you a badge to recognize your achievement. You can make your badge or badges public and link to them in your online resume or social media account. Enroll in a quest and get immediate completion credit. Refer to the Google Cloud Skills Boost catalog for all available quests.

Take your next lab

Continue your quest with VM Migration: Planning, or check out:

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated: September 2, 2022

Lab Last Tested: June 6, 2022

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.