
Deploying a Multi-Cluster Gateway Across GKE Clusters


Lab · 1 hour 30 minutes · 7 Credits · Advanced

GSP1245

Google Cloud Self-Paced Labs

Overview

Multi-cluster Gateways (MCG) make managing application networking across many clusters and teams easy and secure, and they scale as your fleet grows. A GatewayClass is a cluster-scoped resource that acts as a template for creating load balancers in a cluster. In GKE, the gke-l7-gxlb-mc and gke-l7-rilb-mc GatewayClasses deploy multi-cluster Gateways that provide HTTP routing, traffic splitting, traffic mirroring, health-based failover, and more across different GKE clusters, Kubernetes Namespaces, and regions.
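
For example, a Gateway resource selects one of these classes by name. The following is a minimal sketch of that shape; you deploy the full version of this manifest later in this lab:

kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-http
  namespace: store
spec:
  gatewayClassName: gke-l7-gxlb-mc  # multi-cluster GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80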

Multi-cluster Services (MCS) is an API standard for Services that span clusters, and its GKE controller provides service discovery across GKE clusters. The multi-cluster Gateway controller uses MCS API resources to group Pods into a Service that is addressable across, and spans, multiple clusters.
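
The pattern is declarative: creating a ServiceExport in one cluster tells the MCS controller to create a matching ServiceImport in the other fleet clusters. A minimal sketch (you apply real versions of these resources in Task 5):

# Exporting cluster: a ServiceExport with the same name and namespace
# as an existing Service marks that Service for fleet-wide export.
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: store

The controller then makes a ServiceImport named store appear in the store namespace of every other cluster in the fleet.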

In this lab, you learn how to enable, configure, and use the multi-cluster Google Kubernetes Engine (GKE) Gateway controller. This Google-hosted controller provisions external and internal load balancers that balance traffic across multiple Kubernetes clusters.

Diagram depicting the roles for Gateway and HTTPRoute

Objectives

In this lab, you learn how to perform the following tasks:

  • Enable GKE Enterprise
  • Create and Register GKE clusters to the fleet
  • Enable and configure Multi-cluster Services (MCS)
  • Enable and configure Multi-cluster Gateways (MCG)
  • Deploy a distributed application and balance traffic across clusters

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This Qwiklabs hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

What you need

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
  • Time to complete the lab.

Note: If you already have your own personal Google Cloud account or project, do not use it for this lab.

Note: If you are using a Pixelbook, open an Incognito window to run this lab.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is a panel populated with the temporary credentials that you must use for this lab.

    Open Google Console

  2. Copy the username, and then click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Sign in

    Tip: Open the tabs in separate windows, side-by-side.

  3. In the Sign in page, paste the username that you copied from the Connection Details panel. Then copy and paste the password.

Important: You must use the credentials from the Connection Details panel. Do not use your Qwiklabs credentials. If you have your own Google Cloud account, do not use it for this lab (to avoid incurring charges).

  4. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

In the Cloud Console, in the top right toolbar, click the Activate Cloud Shell button.

Cloud Shell icon

Click Continue.


It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:

Cloud Shell Terminal

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

You can list the active account name with this command:

gcloud auth list

(Output)

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

(Example output)

Credentialed accounts:
 - google1623327_student@qwiklabs.net

You can list the project ID with this command:

gcloud config list project

(Output)

[core]
project = <project_ID>

(Example output)

[core]
project = qwiklabs-gcp-44776a13dea667a6

Enable GKE Enterprise

Begin by enabling GKE Enterprise.

  1. In the Google Cloud console, select Navigation Menu (Navigation menu) > Kubernetes Engine > Overview.

  2. Click the Fleet Dashboard tab.

  3. Click the Learn about GKE Enterprise button. From here, you can see a description of the various features available in GKE Enterprise.

To learn more about the features included with GKE Enterprise, view the details in the Features and Benefits tab.

For a side-by-side comparison of the GKE Standard edition and GKE Enterprise features, click the Compare Plans tab. You can also view a hypothetical monthly cost scenario, should you enable GKE Enterprise.

  1. Click the Enable GKE Enterprise button.

  2. Click Edit Fleet Name or Cluster List.

  3. On the Fleet registration page, enter gke-enterprise-fleet in the Fleet name field. The fleet name cannot be changed after initial creation.

  4. Click Save to save the fleet name.

  5. Click Confirm to enable GKE Enterprise.

Note: As an alternative to the console-based method, you can enable GKE Enterprise by enabling the Anthos API and creating an empty fleet with gcloud:

gcloud services enable anthos.googleapis.com
gcloud container fleet create --display-name=gke-enterprise-fleet

  6. Click Close to complete the operation.

Task 1. Deploy clusters

In this task, you deploy three GKE clusters across two different regions in your project. All clusters are registered to the same fleet, allowing multi-cluster Gateways and Services to operate across them.

GKE Clusters architecture diagram, depicting the different regions within the project

  1. In Cloud Shell, create two GKE clusters named cluster1 and cluster2 in the first zone. Use the --async flag so you don't have to wait for each cluster to provision:

gcloud container clusters create cluster1 \
  --zone={{{primary_project.default_zone_1|ZONE1}}} \
  --enable-ip-alias \
  --machine-type=e2-standard-4 \
  --num-nodes=1 \
  --workload-pool={{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog \
  --release-channel=regular \
  --project={{{primary_project.project_id|PROJECT_ID}}} --async

gcloud container clusters create cluster2 \
  --zone={{{primary_project.default_zone_1|ZONE}}} \
  --enable-ip-alias \
  --machine-type=e2-standard-4 \
  --num-nodes=1 \
  --workload-pool={{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog \
  --release-channel=regular \
  --project={{{primary_project.project_id|PROJECT_ID}}} --async
  2. Create a GKE cluster named cluster3 in the second zone:

gcloud container clusters create cluster3 \
  --zone={{{primary_project.default_zone_2|ZONE2}}} \
  --enable-ip-alias \
  --machine-type=e2-standard-4 \
  --num-nodes=1 \
  --workload-pool={{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog \
  --release-channel=regular \
  --project={{{primary_project.project_id|PROJECT_ID}}}

Note: It can take up to 8 minutes to provision the GKE clusters.
  3. Ensure all clusters are running:
gcloud container clusters list
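
Provisioning is asynchronous, so cluster1 and cluster2 may still show PROVISIONING. If you prefer not to re-run the command manually, a simple polling loop (an optional sketch) waits until no cluster reports that status:

# Poll every 30 seconds until no cluster reports PROVISIONING.
while gcloud container clusters list | grep -q PROVISIONING; do sleep 30; done
gcloud container clusters list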

Configure cluster credentials

Next, configure cluster credentials with memorable names. This makes it easier to switch between clusters when deploying resources across several clusters.

  1. Fetch the credentials for the cluster1, cluster2, and cluster3 clusters:

gcloud container clusters get-credentials cluster1 --zone={{{primary_project.default_zone_1|ZONE}}} --project={{{primary_project.project_id|PROJECT_ID}}}
gcloud container clusters get-credentials cluster2 --zone={{{primary_project.default_zone_1|ZONE}}} --project={{{primary_project.project_id|PROJECT_ID}}}
gcloud container clusters get-credentials cluster3 --zone={{{primary_project.default_zone_2|ZONE}}} --project={{{primary_project.project_id|PROJECT_ID}}}

This stores the credentials locally so that you can use your kubectl client to access the cluster API servers. By default, an auto-generated name is assigned to each credential's context.
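
To see those auto-generated context names before renaming them, you can optionally list the contexts:

kubectl config get-contexts -o name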

  2. Rename the cluster contexts so they are easier to reference later:

kubectl config rename-context gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_zone_1|ZONE}}}_cluster1 cluster1
kubectl config rename-context gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_zone_1|ZONE}}}_cluster2 cluster2
kubectl config rename-context gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_zone_2|ZONE}}}_cluster3 cluster3
  3. Enable the multi-cluster Gateway API on the cluster1 cluster:

gcloud container clusters update cluster1 --gateway-api=standard --zone={{{primary_project.default_zone_1|ZONE}}}

Note: It can take up to 5 minutes to enable the multi-cluster Gateway API.
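
As an optional check, you can confirm the cluster's Gateway API channel; the field path below is the one GKE exposes for this setting, and the expected value is CHANNEL_STANDARD:

gcloud container clusters describe cluster1 \
  --zone={{{primary_project.default_zone_1|ZONE}}} \
  --format="value(networkConfig.gatewayApiConfig.channel)"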

Click Check my progress to verify the objective.

Set up your environment for multi-cluster Gateways

Task 2. Register clusters to the fleet

Now it's time to register these clusters.

  1. Register the clusters to a fleet:
gcloud container fleet memberships register cluster1 \
  --gke-cluster {{{primary_project.default_zone_1|ZONE}}}/cluster1 \
  --enable-workload-identity \
  --project={{{primary_project.project_id|PROJECT_ID}}}

gcloud container fleet memberships register cluster2 \
  --gke-cluster {{{primary_project.default_zone_1|ZONE}}}/cluster2 \
  --enable-workload-identity \
  --project={{{primary_project.project_id|PROJECT_ID}}}

gcloud container fleet memberships register cluster3 \
  --gke-cluster {{{primary_project.default_zone_2|ZONE}}}/cluster3 \
  --enable-workload-identity \
  --project={{{primary_project.project_id|PROJECT_ID}}}
  2. Confirm that the clusters have successfully registered with a fleet:
gcloud container fleet memberships list --project={{{primary_project.project_id|PROJECT_ID}}}

The output should resemble the following:

NAME: cluster3
UNIQUE_ID: fe623d20-27bf-463c-bba3-5ff7288411e5
LOCATION: {{{primary_project.default_region_2|LOCATION}}}

NAME: cluster2
UNIQUE_ID: 32cfac32-f624-41f9-bf2f-399b6c0cbbe3
LOCATION: {{{primary_project.default_region_1|LOCATION}}}

NAME: cluster1
UNIQUE_ID: 3aaa25a8-1e1a-4ee7-a4bf-dfeffadcb770
LOCATION: {{{primary_project.default_region_1|LOCATION}}}

Click Check my progress to verify the objective.

Register clusters in a fleet

Task 3. Enable Multi-cluster Services (MCS)

In this task, you enable MCS in your fleet for the registered clusters. The MCS controller watches for Service exports and imports, so that Kubernetes Services become routable across clusters and traffic can be distributed across them.

MCS Controller architecture diagram.

  1. Enable multi-cluster Services in your fleet for the registered clusters:
gcloud container fleet multi-cluster-services enable \
  --project {{{primary_project.project_id|PROJECT_ID}}}

This enables the MCS controller for the three clusters that are registered to your fleet, so that it can start listening to and exporting Services.
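
Behind the scenes, enabling MCS installs the ServiceExport and ServiceImport CRDs (in the net.gke.io API group) on the fleet clusters. As an optional check, confirm they exist on cluster1; it can take a few minutes for them to appear:

kubectl get crds serviceexports.net.gke.io serviceimports.net.gke.io --context=cluster1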

  2. Confirm that MCS is enabled for the registered clusters:
gcloud container fleet multi-cluster-services describe --project={{{primary_project.project_id|PROJECT_ID}}}

You should see the memberships for the three registered clusters.

Note: It can take up to 5 minutes for all of the clusters to show. Wait and retry until you see a similar output.

The output is similar to the following:

createTime: '2024-04-08T10:42:08.727374460Z'
membershipStates:
  projects/652767849412/locations/{{{primary_project.default_region_2|LOCATION}}}/memberships/cluster3:
    state:
      code: OK
      description: Firewall successfully updated
      updateTime: '2024-04-08T10:45:05.098464061Z'
  projects/652767849412/locations/{{{primary_project.default_region_1|LOCATION}}}/memberships/cluster1:
    state:
      code: OK
      description: Firewall successfully updated
      updateTime: '2024-04-08T10:45:07.883462984Z'
  projects/652767849412/locations/{{{primary_project.default_region_1|LOCATION}}}/memberships/cluster2:
    state:
      code: OK
      description: Firewall successfully updated
      updateTime: '2024-04-08T10:45:06.552764591Z'
name: projects/qwiklabs-gcp-03-dafc3e3432a4/locations/global/features/multiclusterservicediscovery
resourceState:
  state: ACTIVE
spec: {}
updateTime: '2024-04-08T10:45:09.012060855Z'

Click Check my progress to verify the objective.

Enable Multi-cluster Services

Task 4. Install Gateway API CRDs and enable the MCG controller

Before using Gateway resources in GKE, you must install the Gateway API CustomResourceDefinitions (CRDs) in your config cluster and enable the MCG controller. The config cluster is the GKE cluster in which your Gateway and Route resources are deployed; it is the central place that controls routing across your clusters. You will use cluster1 as your config cluster.

Config cluster and MCG Controller architecture diagram

  1. Install the Gateway API CRDs in the cluster1 cluster:

kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.5.0" \
| kubectl apply -f - --context=cluster1
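
To optionally confirm that the CRDs installed correctly, list the CRDs in the gateway.networking.k8s.io group:

kubectl get crds --context=cluster1 | grep gateway.networking.k8s.io
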
  2. Enable the Multi-cluster Gateway controller for the cluster1 cluster:

gcloud container fleet ingress enable \
  --config-membership=cluster1 \
  --project={{{primary_project.project_id|PROJECT_ID}}} \
  --location={{{primary_project.default_region_1|LOCATION}}}
  3. Confirm that the global Gateway controller is enabled for the registered clusters:
gcloud container fleet ingress describe --project={{{primary_project.project_id|PROJECT_ID}}}

Your output should look similar to the following:

createTime: '2024-04-08T10:48:48.932255711Z'
membershipStates:
  projects/652767849412/locations/{{{primary_project.default_region_2|LOCATION}}}/memberships/cluster3:
    state:
      code: OK
      updateTime: '2024-04-08T10:49:54.240369901Z'
  projects/652767849412/locations/{{{primary_project.default_region_1|LOCATION}}}/memberships/cluster1:
    state:
      code: OK
      updateTime: '2024-04-08T10:49:54.240369031Z'
  projects/652767849412/locations/{{{primary_project.default_region_1|LOCATION}}}/memberships/cluster2:
    state:
      code: OK
      updateTime: '2024-04-08T10:49:54.240370591Z'
name: projects/qwiklabs-gcp-03-dafc3e3432a4/locations/global/features/multiclusteringress
resourceState:
  state: ACTIVE
spec:
  multiclusteringress:
    configMembership: projects/qwiklabs-gcp-03-dafc3e3432a4/locations/{{{primary_project.default_region_1|LOCATION}}}/memberships/cluster1
state:
  state:
    code: OK
    description: Ready to use
    updateTime: '2024-04-08T10:49:54.040880431Z'
updateTime: '2024-04-08T10:50:08.051148347Z'
  4. List the GatewayClasses:
kubectl get gatewayclasses --context=cluster1

Your output should look similar to the following:

NAME                                  CONTROLLER                  ACCEPTED   AGE
gke-l7-global-external-managed        networking.gke.io/gateway   True       23m
gke-l7-global-external-managed-mc     networking.gke.io/gateway   True       74s
gke-l7-gxlb                           networking.gke.io/gateway   True       23m
gke-l7-gxlb-mc                        networking.gke.io/gateway   True       75s
gke-l7-regional-external-managed      networking.gke.io/gateway   True       23m
gke-l7-regional-external-managed-mc   networking.gke.io/gateway   True       74s
gke-l7-rilb                           networking.gke.io/gateway   True       23m
gke-l7-rilb-mc                        networking.gke.io/gateway   True       75s

Click Check my progress to verify the objective.

Install Gateway API CRDs

Task 5. Deploy the demo application

Now that the MCG controller is enabled, deploy the demo application.

  1. Create the store Namespace and Deployment in the cluster2 and cluster3 clusters:
cat <<EOF > store-deployment.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: store
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store
  namespace: store
spec:
  replicas: 2
  selector:
    matchLabels:
      app: store
      version: v1
  template:
    metadata:
      labels:
        app: store
        version: v1
    spec:
      containers:
      - name: whereami
        image: gcr.io/google-samples/whereami:v1.2.1
        ports:
        - containerPort: 8080
EOF
kubectl apply -f store-deployment.yaml --context=cluster2
kubectl apply -f store-deployment.yaml --context=cluster3

The config cluster cluster1 can also host workloads, but in this lab, you only run the Gateway controllers and configuration on it.

  2. Create the Service and ServiceExports for the cluster2 cluster:

cat <<EOF > store-west-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: store
---
apiVersion: v1
kind: Service
metadata:
  name: store-west-2
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store-west-2
  namespace: store
EOF
kubectl apply -f store-west-service.yaml --context=cluster2
  3. Create the Service and ServiceExports for the cluster3 cluster:

cat <<EOF > store-east-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: store
---
apiVersion: v1
kind: Service
metadata:
  name: store-east-1
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store-east-1
  namespace: store
EOF
kubectl apply -f store-east-service.yaml --context=cluster3
  4. Ensure that the service exports have been successfully created:

kubectl get serviceexports --context cluster2 --namespace store
kubectl get serviceexports --context cluster3 --namespace store

Your output should look similar to the following:

# cluster2 cluster
NAME           AGE
store          2m40s
store-west-2   2m40s

# cluster3 cluster
NAME           AGE
store          2m25s
store-east-1   2m25s

This shows that the store Service contains store Pods across both clusters, while the store-west-2 and store-east-1 Services contain store Pods only on their respective clusters. These overlapping Services let you target either the Pods across multiple clusters or a subset of Pods on a single cluster.
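
Because both clusters exported these Services, the MCS controller creates matching ServiceImport resources across the fleet. As an optional check, you can view them from the config cluster; they can take a minute or two to propagate:

kubectl get serviceimports --context cluster1 --namespace store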

Click Check my progress to verify the objective.

Deploy the demo application

Task 6. Deploy the Gateway and HTTPRoute

Platform administrators manage and deploy Gateways to centralize security policies such as TLS, a security protocol that provides privacy and data integrity for Internet communications.

Service owners in different teams deploy HTTPRoutes in their own namespaces so that they can independently control their routing logic.

Once the applications have been deployed, you can configure a multi-cluster Gateway using the gke-l7-gxlb-mc GatewayClass. This Gateway creates an external Application Load Balancer configured to distribute traffic across your target clusters.

Gateway and HTTPRoute resources are deployed in the config cluster, which in this case is the cluster1 cluster. HTTPRoute is the Gateway API type for specifying the routing behavior of HTTP requests from a Gateway listener to a backend, such as a Service.

  1. Deploy the Gateway in the cluster1 config cluster:
cat <<EOF > external-http-gateway.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: store
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-http
  namespace: store
spec:
  gatewayClassName: gke-l7-gxlb-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
EOF
kubectl apply -f external-http-gateway.yaml --context=cluster1
  2. Deploy the HTTPRoute in the cluster1 config cluster:

cat <<EOF > public-store-route.yaml
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: public-store-route
  namespace: store
  labels:
    gateway: external-http
spec:
  parentRefs:
  - name: external-http
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /west
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-west-2
      port: 8080
  - matches:
    - path:
        type: PathPrefix
        value: /east
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-east-1
      port: 8080
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store
      port: 8080
EOF
kubectl apply -f public-store-route.yaml --context=cluster1

Requests that match neither path are handled by the default rule, which sends them to the closest healthy backend. If a request's path begins with /west, it is routed to the Service in the cluster2 cluster; if it begins with /east, it is routed to the cluster3 cluster.

Routing architecture diagram depicting the requests being rerouted.

  3. View the status of the Gateway you just created in cluster1:

kubectl describe gateway external-http --context cluster1 --namespace store

The output is similar to the following:

Status:
  Addresses:
    Type:   IPAddress
    Value:  35.190.90.199
  Conditions:
    Last Transition Time:  2024-04-08T11:11:44Z
    Message:               The OSS Gateway API has deprecated this condition, do not depend on it.
    Observed Generation:   1
    Reason:                Scheduled
    Status:                True
    Type:                  Scheduled
    Last Transition Time:  2024-04-08T11:11:44Z
    Message:
    Observed Generation:   1
    Reason:                Accepted
    Status:                True
    Type:                  Accepted
    Last Transition Time:  2024-04-08T11:11:44Z
    Message:
    Observed Generation:   1
    Reason:                Programmed
    Status:                True
    Type:                  Programmed
    Last Transition Time:  2024-04-08T11:11:44Z
    Message:               The OSS Gateway API has altered the "Ready" condition semantics and reserved it for future use. GKE Gateway will stop emitting it in a future update, use "Programmed" instead.
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
    Last Transition Time:  2024-04-08T11:11:44Z
    Message:
    Observed Generation:   1
    Reason:                Healthy
    Status:                True
    Type:                  networking.gke.io/GatewayHealthy
  Listeners:
    Attached Routes:  0
    Conditions:
      Last Transition Time:  2024-04-08T11:11:44Z
      Message:
      Observed Generation:   1
      Reason:                Programmed
      Status:                True
      Type:                  Programmed
      Last Transition Time:  2024-04-08T11:11:44Z
      Message:               The OSS Gateway API has altered the "Ready" condition semantics and reserved it for future use. GKE Gateway will stop emitting it in a future update, use "Programmed" instead.
      Observed Generation:   1
      Reason:                Ready
      Status:                True
      Type:                  Ready
    Name:  http
    Supported Kinds:
      Group:  gateway.networking.k8s.io
      Kind:   HTTPRoute
Events:
  Type    Reason  Age                  From                   Message
  ----    ------  ----                 ----                   -------
  Normal  ADD     3m9s                 mc-gateway-controller  store/external-http
  Normal  SYNC    68s (x13 over 3m1s)  mc-gateway-controller  store/external-http
  Normal  UPDATE  54s (x3 over 3m9s)   mc-gateway-controller  store/external-http
  Normal  SYNC    54s                  mc-gateway-controller  SYNC on store/external-http was a success

Transient errors sometimes appear in the Events section. Wait until you see the SYNC on store/external-http was a success message.

Note: It can take up to 10 minutes for the Gateway to fully deploy and serve traffic.
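
Rather than polling the describe output manually, you can optionally block until the Gateway reports the Programmed condition shown above (a sketch using kubectl wait):

kubectl wait gateway external-http --for=condition=Programmed \
  --namespace store --context cluster1 --timeout=600s
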
  4. It takes some time for the external IP to be created. To ensure that it is, run the following command until you see the external IP:
kubectl get gateway external-http -o=jsonpath="{.status.addresses[0].value}" --context cluster1 --namespace store | xargs echo -e

If it doesn't return an external IP, wait a few minutes and run it again.

  5. Once the Gateway has deployed successfully, retrieve the external IP address from the external-http Gateway:

EXTERNAL_IP=$(kubectl get gateway external-http -o=jsonpath="{.status.addresses[0].value}" --context cluster1 --namespace store)
echo $EXTERNAL_IP

Make sure that the IP is not empty.

  6. Send traffic to the root path of the domain:
curl http://${EXTERNAL_IP}

This load balances traffic to the store ServiceImport, which spans the cluster2 and cluster3 clusters. The load balancer sends your traffic to the region closest to you, so you might not see responses from the other region.

Your output should look similar to the following:

{ "cluster_name": "cluster2", "host_header": "35.190.90.199", "node_name": "gke-cluster2-default-pool-3f8a6304-22g0.{{{primary_project.default_zone_1|LOCATION}}}.c.qwiklabs-gcp-03-dafc3e3432a4.internal", "pod_name": "store-5b8c6b7df-kmp9g", "pod_name_emoji": "🧑🏽‍🚒", "project_id": "qwiklabs-gcp-03-dafc3e3432a4", "timestamp": "2024-04-08T11:18:39", "zone": "{{{primary_project.default_zone_1|LOCATION}}}" }

If you see a default backend - 404 or curl: (52) Empty reply from server message, the Gateway is not ready yet. Wait a couple of minutes and try again.
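
Because routing favors the backend closest to the client, a short loop (an optional sketch) makes it easier to see which cluster answers each request:

# Send several requests and print only the serving cluster's name.
for i in $(seq 1 5); do curl -s http://${EXTERNAL_IP} | grep cluster_name; done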

  7. Next, send traffic to the /west path to access the application located in the cluster2 cluster:
curl http://${EXTERNAL_IP}/west

This routes traffic to the store-west-2 ServiceImport, which has Pods running only on the cluster2 cluster. A cluster-specific ServiceImport, like store-west-2, enables an application owner to explicitly send traffic to a specific cluster, rather than letting the load balancer make the decision.

The output confirms that the request was served by a Pod in the cluster2 cluster:

{ "cluster_name": "cluster2", "host_header": "35.190.90.199", "node_name": "gke-cluster2-default-pool-3f8a6304-22g0.{{{primary_project.default_zone_1|LOCATION}}}.c.qwiklabs-gcp-03-dafc3e3432a4.internal", "pod_name": "store-5b8c6b7df-m5lb9", "pod_name_emoji": "👩🏾‍❤️‍💋‍👩🏻", "project_id": "qwiklabs-gcp-03-dafc3e3432a4", "timestamp": "2024-04-08T11:19:46", "zone": "{{{primary_project.default_zone_1|LOCATION}}}" }
  1. Finally, send traffic to the /east path to access the application located in the cluster3 cluster:
curl http://${EXTERNAL_IP}/east

The output confirms that the request was served by a Pod in the cluster3 cluster:

{ "cluster_name": "cluster3", "host_header": "35.190.90.199", "node_name": "gke-cluster3-default-pool-d455657f-srg8.{{{primary_project.default_region_2|LOCATION}}}-c.c.qwiklabs-gcp-03-dafc3e3432a4.internal", "pod_name": "store-5b8c6b7df-bl8ps", "pod_name_emoji": "🚍", "project_id": "qwiklabs-gcp-03-dafc3e3432a4", "timestamp": "2024-04-08T11:20:23", "zone": "{{{primary_project.default_zone_2|ZONE}}}" }

Click Check my progress to verify the objective.

Deploy the Gateway and HTTPRoute

Congratulations!

In this lab, you registered GKE clusters to a fleet, enabled and configured the MCS and MCG controllers, deployed a Gateway and HTTPRoute in the config cluster, and ran a distributed application across multiple clusters with a single load balancer routing traffic to the right Pods. Along the way, you explored the multi-cluster networking capabilities of GKE Enterprise and how they scale.

Manual Last Updated May 03, 2024

Lab Last Tested April 08, 2024

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.