
Migrate to Containers: Windows

1 hour 30 minutes 1 Credit

GSP816


Overview

Anthos is an application platform that enables you to modernize your existing applications in your hybrid or multi-cloud environment. It is built on open source technologies pioneered by Google, including Kubernetes, Istio, and Knative, and enables consistency between on-premises and cloud environments.

When workloads are migrated to containers, IT departments can eliminate OS-level maintenance and security patching for VMs and automate policy and security updates at scale. Monitoring across on-premises and cloud environments is done through a single interface in the Cloud Console.

Migrate to Containers is a tool to containerize existing VM-based applications to run on Google Kubernetes Engine (GKE) or Anthos. By leveraging the GKE ecosystem, Migrate to Containers provides a fast and simple way to move to modernized orchestration and application management without requiring access to source code, rewriting, or re-architecting applications. Migrate to Containers supports migrating VMs to containers on Google Kubernetes Engine for Windows workloads that meet the following requirements:

  • Microsoft Windows Server 2008 R2 or higher.
  • Microsoft IIS 7 or higher web applications.
  • ASP.NET and .NET Framework version 3.5 or higher.

In this lab you will use Migrate to Containers to migrate a Compute Engine VM running a Windows image into a Kubernetes cluster, using a processing cluster.

Setup

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which may cause extra charges to be incurred on your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.

  4. Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.

    Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  5. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Note: You can view the menu with a list of Google Cloud Products and Services by clicking the Navigation menu at the top-left.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:

Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  1. (Optional) You can list the active account name with this command:

gcloud auth list

Output:

ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  2. (Optional) You can list the project ID with this command:

gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.

Prepare your working environment

Set your Project ID:

gcloud config set project $DEVSHELL_PROJECT_ID

Click Authorize when prompted.

Set your Project ID to an environment variable:

export PROJECT_ID=$DEVSHELL_PROJECT_ID

A Compute Engine VM named source-vm has been created for you with a simple website. This will serve as the source VM to be migrated. Next you will create a firewall rule for the instance and verify that you can access the website.
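
If you'd like to confirm the VM from Cloud Shell first, the following optional check (not part of the graded steps) lists the pre-created instance and shows its zone and status:

gcloud compute instances list --filter="name=source-vm"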

Create a firewall rule to allow HTTP traffic:

gcloud compute firewall-rules create default-allow-http \
  --direction=INGRESS \
  --priority=1000 \
  --network=default \
  --action=ALLOW \
  --rules=tcp:80 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=http-server

In the Cloud Console, navigate to Compute Engine > VM instances, locate the row for source-vm, and copy the External IP address.

Click Check my progress to verify the objective. Create a firewall rule

Paste the instance's IP address into your browser address bar, prefixed with http://.

You should now see the "Hello World" page. If you get an error message, wait a minute or two, then refresh the page.
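
If you prefer to test from Cloud Shell, this optional sketch fetches the external IP with gcloud and requests the page with curl; replace <ZONE> with the zone shown for source-vm in the VM instances list:

EXTERNAL_IP=$(gcloud compute instances describe source-vm --zone=<ZONE> \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)')
curl -s http://$EXTERNAL_IP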

To migrate the VM, first stop it from running.

In the Cloud Console, navigate to Compute Engine > VM instances, check the checkbox to the left of source-vm, then click STOP at the top.


Confirm the shutdown by clicking Stop in the pop-up window. You can continue to the next section while the VM is shutting down.

Note: The VM must be stopped for you to migrate it. You can start it again from the UI after you create the migration below.
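
If you prefer the command line, this optional equivalent stops the VM from Cloud Shell; replace <ZONE> with the zone shown for source-vm:

gcloud compute instances stop source-vm --zone=<ZONE>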

Create a processing cluster

In the following steps you'll create a GKE cluster in the Cloud that you'll use as a processing cluster. This is where you'll install Migrate to Containers and execute the migration.

In Cloud Shell use the following command to create a new Kubernetes cluster to use as a processing cluster:

gcloud container clusters create migration-processing --zone={{{project_0.default_zone | (zone)}}} --enable-ip-alias --num-nodes 1 --machine-type e2-standard-4 --enable-stackdriver-kubernetes

This will take a couple of minutes to complete.

Sample output:

Created [https://container.googleapis.com/v1/projects/qwiklabs-gcp-01-86dba53b18ec/zones/{{{project_0.default_zone | (zone)}}}/clusters/migration-processing].
kubeconfig entry generated for migration-processing.
NAME: migration-processing
LOCATION: {{{project_0.default_zone | (zone)}}}
MASTER_VERSION: 1.22.8-gke.201
MASTER_IP: 34.132.74.119
MACHINE_TYPE: e2-standard-4
NODE_VERSION: 1.22.8-gke.201
NUM_NODES: 1
STATUS: RUNNING

Create the Windows node pool used to process the migration:

gcloud container node-pools create node-pool-processing \
  --cluster=migration-processing \
  --zone={{{project_0.default_zone | (zone)}}} \
  --image-type=WINDOWS_LTSC \
  --num-nodes=1 \
  --scopes "cloud-platform" \
  --machine-type=e2-standard-4

This will take about 5 minutes to build. In Compute Engine > VM instances you can see the resources being added.

Go to Navigation menu > Kubernetes Engine > Clusters to see the cluster getting added.
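
You can also check the cluster from Cloud Shell; this optional command lists your clusters and their status:

gcloud container clusters list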

Click Check my progress to verify the objective. Create a processing cluster

Now connect to the cluster:

gcloud container clusters get-credentials migration-processing \
  --zone={{{project_0.default_zone | (zone)}}}
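
As an optional sanity check, you can list the cluster's nodes and confirm that both the Linux node and the Windows node are present; the kubernetes.io/os label column shows each node's operating system:

kubectl get nodes -L kubernetes.io/os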

Install Migrate to Containers

To allow Migrate to Containers to access Container Registry and Cloud Storage, you need to create a service account with the storage.admin role.

In Cloud Shell create the m4a-install service account:

gcloud iam service-accounts create m4a-install \
  --project=$PROJECT_ID

Grant the storage.admin role to the service account:

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:m4a-install@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

Download the key file for the service account:

gcloud iam service-accounts keys create m4a-install.json \
  --iam-account=m4a-install@$PROJECT_ID.iam.gserviceaccount.com \
  --project=$PROJECT_ID

Set up Migrate to Containers components on your processing cluster by using the migctl command-line tool included with Migrate to Containers:

migctl setup install --json-key=m4a-install.json --gcp-project $PROJECT_ID --gcp-region {{{project_0.startup_script.region | region}}}

Validate the Migrate to Containers installation. Use the migctl doctor command to confirm a successful deployment:

migctl doctor

It may take more than a minute before the command returns the following success result.

Sample Output:

[✓] Deployment

Re-run the command until you see the successful deployment.
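
If you'd rather not re-run the command by hand, this optional convenience loop polls migctl doctor every 30 seconds until the "[✓] Deployment" line from the sample output above appears (adjust the pattern if your output differs):

until migctl doctor | grep -q '\[✓\] Deployment'; do
  sleep 30
done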

Click Check my progress to verify the objective. Install Migrate to Containers

Migrate the VM

Now you'll create a migration plan with migration details, then use it to migrate the VM.

Create service account for the migration

To use Compute Engine as a migration source, you must first create a service account with the compute.viewer and compute.storageAdmin roles:

In Cloud Shell create the m4a-ce-src service account:

gcloud iam service-accounts create m4a-ce-src \
  --project=$PROJECT_ID

Grant the compute.viewer role to the service account:

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:m4a-ce-src@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/compute.viewer"

Grant the compute.storageAdmin role to the service account:

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:m4a-ce-src@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/compute.storageAdmin"

Download the key file for the service account:

gcloud iam service-accounts keys create m4a-ce-src.json \
  --iam-account=m4a-ce-src@$PROJECT_ID.iam.gserviceaccount.com \
  --project=$PROJECT_ID

Click Check my progress to verify the objective. Creating service account

Create the migration source

The migration source specifies the platform hosting the VM to migrate, such as AWS, Azure, VMware, or Compute Engine. In this example, you pass the ce option to the command to create a source for a VM running on Compute Engine.

Run the following to create the migration source:

migctl source create ce my-ce-src --project $PROJECT_ID --json-key=m4a-ce-src.json

After you add the source, check the status of your source named my-ce-src:

migctl source status my-ce-src

Expected output:

State: READY

Create a Migration

You begin migrating VMs by creating a migration. This results in a migration plan object.

A migration is the central object with which you perform migration actions, monitor migration activities and status with the migctl tool or in the Cloud Console. The migration object is implemented as a Kubernetes Custom Resource Definition (CRD).

Next you will create a migration by running the migctl tool.

These examples use the Image value for the intent flag, which is the only option supported for Windows. This flag configures Migrate to Containers to create the Dockerfile and other buildable artifacts required for a Windows migration.

  1. Create the migration, where --vm-id specifies the name of the Compute Engine instance as shown in the Cloud Console:

migctl migration create my-migration --source my-ce-src --vm-id source-vm --type windows-iis-container
  2. Monitor the progress:

migctl migration status my-migration

It should take about 3 minutes for the migration to be created. Rerun the command until the migration has Completed.

Optional: Review the migration plan

Download the migration plan to review it:

migctl migration get my-migration

Open the my-migration.yaml file in your preferred text editor or the Cloud Shell code editor to review. If you make changes, upload the new plan with:

migctl migration update my-migration --file my-migration.yaml

Execute the Migration

Use the migctl migration generate-artifacts command to generate target container artifacts as part of processing a VM for migration:

migctl migration generate-artifacts my-migration

When you generate artifacts for Windows workloads, Migrate to Containers writes the artifacts to a zip file and then uploads the zip file to a Cloud Storage bucket. This zip file contains a Dockerfile and several directories and files that are extracted from the source and used by the Dockerfile.

Run the following to check on the migration's progress:

migctl migration status my-migration

When the migration is complete you will see Completed as the Status.

Click Check my progress to verify the objective. Execute the Migration

Build a Windows container image

When you generate artifacts for Windows workloads, the artifacts are copied into a Cloud Storage bucket as an intermediate location that you can download. This folder contains a Dockerfile and several directories and files that are extracted from the source that you then use to build the Windows container.

After the migration has completed, use migctl migration get-artifacts to download the generated artifacts:

migctl migration get-artifacts my-migration --output-directory artifacts
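
Optionally, list the downloaded artifacts to see the Dockerfile and the extracted content; the exact directory name under artifacts will differ from migration to migration:

ls -R artifacts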

Prepare the environment for building Windows Server multi-arch images

To build the migrated container image you'll use gke-windows-builder. This is a Cloud Build image that builds Windows multi-arch container images.

This flow requires some setup steps:

Enable the Compute Engine, Cloud Build and Artifact Registry APIs for your project. The gke-windows-builder is invoked using Cloud Build, and the resulting multi-arch container images are pushed to Artifact Registry. Compute Engine is required for the builder to create and manage Windows Server VMs.

gcloud services enable compute.googleapis.com \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com
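
As an optional check, confirm the three APIs are enabled before continuing:

gcloud services list --enabled | grep -E 'compute.googleapis|cloudbuild.googleapis|artifactregistry.googleapis'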

Assign roles. These roles are required for the builder to create the Windows Server VMs, copy the workspace to a Cloud Storage bucket, configure the networks used to build the Docker image, and push the resulting image to Artifact Registry:

export MEMBER=$(gcloud projects describe $PROJECT_ID --format 'value(projectNumber)')@cloudbuild.gserviceaccount.com

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$MEMBER --role='roles/compute.instanceAdmin'

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$MEMBER --role='roles/iam.serviceAccountUser'

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$MEMBER --role='roles/compute.networkViewer'

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$MEMBER --role='roles/storage.admin'

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$MEMBER --role='roles/artifactregistry.writer'
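
Optionally, verify that the bindings were applied by listing the roles granted to the Cloud Build service account:

gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:$MEMBER" \
  --format="value(bindings.role)"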

Add a firewall rule named allow-winrm-ingress to allow WinRM to connect to Windows Server VMs to run a Docker build:

gcloud compute firewall-rules create allow-winrm-ingress --allow=tcp:5986 --direction=INGRESS

Build the container image

Navigate to the artifact folder:

cd artifacts/migrated-image-yei8n

Create a file named cloudbuild.yaml.

nano cloudbuild.yaml

Enter the following contents into cloudbuild.yaml:

timeout: 3600s
steps:
- name: 'us-docker.pkg.dev/gke-windows-tools/docker-repo/gke-windows-builder:latest'
  args:
  - --container-image-name
  - 'gcr.io/$PROJECT_ID/myimagename:v1.0.0'
  - --versions
  - 'ltsc2019'

This file is your build config file. At build time, Cloud Build automatically replaces $PROJECT_ID with your project ID. Start the build with the following command:

gcloud builds submit --config cloudbuild.yaml .

You'll see logs like the following example. The last line in the log shows that the build succeeded:

...
2020/09/14 11:34:25 Instance: 35.184.178.49 shut down successfully
PUSH
DONE
-----------------------------------------------------------------------------------
ID: ec333452-1301-47e8-90e2-716aeb2f5650
CREATE_TIME: 2020-09-14T11:21:43+00:00
DURATION: 12M43S
SOURCE: gs://PROJECT_ID_cloudbuild/source/1600082502.509759-b949721a922d462c94a75da9be9f1181.tgz
IMAGES: -
STATUS: SUCCESS

You've just built the image using the build config file and pushed the image to Artifact Registry at gcr.io/myproject/myimagename:v1.0.0.
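
As an optional check, you can confirm the image and its tag exist in the registry before deploying it; myimagename matches the name used in cloudbuild.yaml above:

gcloud container images list --repository=gcr.io/$PROJECT_ID
gcloud container images list-tags gcr.io/$PROJECT_ID/myimagename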

Deploy the container image

With your container image in Container Registry, you can deploy the image to a GKE cluster. See Deploying a Windows Server application for more. As part of deploying the container, you have to create a deployment.yaml file in Cloud Shell.

Using nano in Cloud Shell or the Cloud Shell code editor, create a file called deployment.yaml.

nano deployment.yaml

Paste the content below into the file, replacing myproject with your Project ID:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myworkload
  labels:
    app: myworkload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myworkload
  template:
    metadata:
      labels:
        app: myworkload
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: myworkload-container
        image: gcr.io/myproject/myimagename:v1.0.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myworkload
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Save the file.

Now apply the deployment file:

kubectl apply -f deployment.yaml

It may take a few minutes for the deployment to finish.
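
Optionally, wait for the Deployment to become available instead of polling by hand; myworkload matches the name in deployment.yaml above:

kubectl rollout status deployment/myworkload --timeout=10m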

Now check for an external IP address:

kubectl get service

Sample output:

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.32.0.1     <none>          443/TCP        57m
my-service   LoadBalancer   10.32.13.63   34.133.99.103   80:30070/TCP   6m1s

It will take about 5 minutes for the pod to start running. Run the following to monitor its status:

kubectl get pods

When the status is Running, you'll see an external IP address for the my-service you added.

Test the migration

Test the migration by opening a browser tab and visiting the web page at the external IP address of my-service (be sure to use HTTP rather than HTTPS).

For example:

http://<my-service-external-IP>
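
If you'd rather test from Cloud Shell, this optional sketch reads the service's external IP with kubectl and requests the page with curl:

EXTERNAL_IP=$(kubectl get service my-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s http://$EXTERNAL_IP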

Click Check my progress to verify the objective. Deploying container image and testing migration

Congratulations!

You migrated a Compute Engine instance with a Windows image to a Kubernetes pod using Migrate to Containers. The same technique can be used for other sources of VMs, including VMware vSphere, AWS, and Azure.

Next Steps / Learn More

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated: June 17, 2022
Lab Last Tested: June 17, 2022

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.