In this lab, you learn to install
Cloud Service Mesh (CSM) on
Google Kubernetes Engine. Cloud Service Mesh is a managed service based on
Istio, the leading open source service mesh.
A service mesh gives you a framework for connecting, securing,
and managing microservices. It provides a networking layer on top of Kubernetes
with features such as advanced load balancing capabilities, service-to-service
authentication, and monitoring without requiring any changes in service
code.
Cloud Service Mesh has a suite of additional features and tools that help you
observe and manage secure, reliable services in a unified way. In this lab you
also learn how to use some of these features:
Service metrics and logs for HTTP(S) traffic within your mesh's GKE
cluster are automatically ingested to Google Cloud.
Preconfigured service dashboards give you the information you need to
understand your services.
In-depth telemetry lets you dig deep into your metrics and logs,
filtering and slicing your data on a wide variety of attributes.
Service-to-service relationships at a glance help you understand who
connects to which service and the services that each service depends on.
Service-level objectives (SLOs) provide insights into the health of
your services. You can easily define an SLO and alert on your own standards
of service health.
Cloud Service Mesh is the easiest and richest way to implement an Istio-based
service mesh on your GKE Enterprise clusters.
Objectives
In this lab, you learn how to perform the following tasks:
Install Cloud Service Mesh, with tracing enabled and configured to use Cloud
Trace as the backend.
Deploy Bookinfo, an Istio-enabled multi-service application.
Enable external access using an Istio Ingress Gateway.
Use the Bookinfo application.
Evaluate service performance using Cloud Trace features within Google Cloud.
Create and monitor service-level objectives (SLOs).
Leverage the Cloud Service Mesh Dashboard to understand service performance.
Setup and requirements
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning.
When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges.
Accept the terms and skip the recovery resource page.
After you complete the initial sign-in steps, the project dashboard appears.
Click Select a project, highlight your GCP Project ID, and click
Open to select your project.
Activate Google Cloud Shell
Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In Cloud console, on the top right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
You can list the active account name with this command:
gcloud auth list
You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = qwiklabs-gcp-44776a13dea667a6
Note: Full documentation of gcloud is available in the gcloud CLI overview guide.
Note:
The lab environment has already been partially configured: a GKE cluster named gke has been created and registered.
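You can confirm the registration yourself by listing the fleet memberships in Cloud Shell (a quick check; the membership name should match the cluster name, gke):
# List the clusters registered to this project's fleet.
gcloud container fleet memberships list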
Task 1. Install Cloud Service Mesh with tracing enabled
A Google Kubernetes Engine (GKE) cluster named gke has already been created
and registered. You will install Cloud Service Mesh onto this cluster and
override the standard configuration to enable the optional tracing components.
Configure cluster access for kubectl and verify the cluster
To set environment variables for use in scripts, in Cloud Shell, run the
following commands:
Set the Name environment variable.
CLUSTER_NAME=gke
Set the Zone and Region environment variables.
CLUSTER_ZONE={{{ project_0.default_zone| "Zone added at lab start" }}}
CLUSTER_REGION={{{ project_0.default_region| "Region added at lab start" }}}
Set the Project ID environment variable.
PROJECT_ID={{{ project_0.project_id | "PROJECT ID added at lab start" }}}
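Fetch the cluster credentials so kubectl can reach the cluster. This is the standard gcloud command for a zonal cluster (the lab environment may have already configured this for you):
# Configure kubectl credentials for the gke cluster.
gcloud container clusters get-credentials $CLUSTER_NAME \
    --zone $CLUSTER_ZONE --project $PROJECT_ID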
Verify that kubectl can access the cluster by listing its config maps:
kubectl get configmap
Output:
NAME               DATA   AGE
kube-root-ca.crt   1      48m
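The installation itself can be driven through the fleet API. A minimal sketch of enabling managed Cloud Service Mesh on the registered cluster follows (the membership name is assumed to match the cluster name; the overlay that enables the optional Cloud Trace components is a separate configuration step not shown here):
# Enable the Cloud Service Mesh fleet feature, then turn on managed
# provisioning for the gke membership.
gcloud container fleet mesh enable --project $PROJECT_ID
gcloud container fleet mesh update \
    --management automatic \
    --memberships $CLUSTER_NAME \
    --project $PROJECT_ID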
Congratulations!
You now have a GKE cluster with
Cloud Service Mesh installed. Kubernetes Metrics are being recorded to
Cloud Monitoring, logs are being recorded to Cloud Logging, and distributed
trace information is being sent to Cloud Trace.
Task 2. Install the microservices-demo application on the cluster
Online Boutique is a cloud-native microservices demo application. Online
Boutique consists of a 10-tier microservices application. The application is a
web-based ecommerce app where users can browse items, add them to the cart,
and purchase them.
Google uses this application to demonstrate technologies like Kubernetes/GKE, Istio/ASM, Google Cloud's operations suite, gRPC, and OpenCensus. The application works on any Kubernetes cluster (such as a local one) and on Google Kubernetes Engine, and it's easy to deploy with little to no configuration.
For more information about the application, refer to the GitHub repo.
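If you need to deploy it yourself, a minimal sketch uses the upstream release manifests from the GoogleCloudPlatform/microservices-demo repo (manifest path assumed; check the repo README for the current location):
# Deploy Online Boutique from the published Kubernetes manifests.
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml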
Return to the Kubernetes Engine console. Click Workloads, and then, under Networking, click Gateways, Services & Ingress, and verify that the new deployments and services have been created on the gke cluster.
Note: You can filter these pages by cluster,
object type, and namespace to make it easier to parse the information
presented.
Take a couple of minutes to investigate the demo application using the console and UI.
Note:
When the workloads show an OK status, find the IP address associated with the frontend-external service on the Services & Ingress page in the console.
Open a new browser tab and enter that IP address.
Click through various pages to get a sense of the application.
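You can also read that IP address directly from Cloud Shell:
# Print the external IP of the frontend-external LoadBalancer service.
kubectl get service frontend-external \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'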
Task 3. Review Google Cloud's operations suite functionality
When you install a GKE or an Anthos cluster, you can enable cluster logs and
metrics to be collected and forwarded to Cloud Logging and Cloud Monitoring.
That gives you visibility into the cluster, the nodes, the pods, and even the
containers in that cluster. However, GKE and Anthos don't monitor the
communication between microservices.
With Cloud Service Mesh, because every request goes through an Envoy proxy,
microservice telemetry information can be collected and inspected. Envoy proxy
extensions then send that telemetry to Google Cloud, where you can inspect it.
Use Cloud Trace dashboards to investigate requests and their latencies and
obtain a breakdown from all services involved in a request.
In the Google Cloud Console, on the Navigation menu, click Trace.
A trace graph displays service requests made within the demo application.
Click a dot that displays higher up in the graph (representing a higher overall request time).
How long did the request take?
When did the request occur?
What service was being called?
What other services were called during execution of this request?
Where was most of the time spent in processing this request?
Task 4. Deploy a canary release that has high latency
In this task, you deploy a new version of a service that has an issue causing high latency. In subsequent tasks, you use the observability tools to diagnose and resolve the issue.
In Cloud Shell, clone the repository that has the configuration files you need for this part of the lab (the sample lives in the GoogleCloudPlatform/istio-samples repo):
git clone https://github.com/GoogleCloudPlatform/istio-samples.git ~/istio-samples
Apply the canary configuration (the destination rule and v2 deployment manifests in the same canary directory are applied the same way):
kubectl apply -f ~/istio-samples/istio-canary-gke/canary/vs-split-traffic.yaml
Note: You are creating:
A DestinationRule to set routing of requests between the service versions
A new deployment of the product catalog service that has high latency
A VirtualService to split product catalog traffic 75% to v1 and 25% to v2
You can open each of the configuration files in the Cloud Shell editor to
better understand the definition of each new resource.
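For example, the traffic split in vs-split-traffic.yaml is expressed as an Istio VirtualService along these lines (a sketch of the expected shape, not the verbatim file contents):
# Sketch of a VirtualService that sends 75% of productcatalogservice
# traffic to subset v1 and 25% to the high-latency v2 subset.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice
  http:
  - route:
    - destination:
        host: productcatalogservice
        subset: v1
      weight: 75
    - destination:
        host: productcatalogservice
        subset: v2
      weight: 25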
Task 5. Define your service level objective
When you are not using Cloud Service Mesh, you can define SLOs with the
Service Monitoring
API. When you are using Cloud Service Mesh, as with the gke cluster, you
can define and monitor SLOs via the Cloud Service Mesh dashboard.
In the Google Cloud Console, on the Navigation menu, click Kubernetes Engine to open the GKE Dashboard.
Notice that there is one cluster registered in the fleet.
On the side panel, under Features, click Service Mesh to go to the Cloud Service Mesh dashboard.
A summary of service performance, including SLO information, is displayed. You
will define a new SLO for the product catalog service.
In the Services list, click productcatalogservice.
In the menu pane, click Health.
Click +Create SLO.
In the Set your SLI slideout, for Metric, select Latency.
Select Request-based as the method of evaluation.
Click Continue.
Set Latency Threshold to 1000, and click Continue.
Set Period type to Calendar.
Set Period length to Calendar day.
Note: Ninety-nine percent availability over a single day is different
from 99% availability over a month. The first SLO would not permit more than about 14 minutes of consecutive downtime (24 hrs × 1% ≈ 14.4 min), but the second SLO would allow consecutive downtime of up to ~7 hours (30 days × 1% ≈ 7.2 hrs).
Set Performance goal to 99.5%.
The Preview graph shows how your goal is reflected
against real historical data.
The autogenerated JSON document is also displayed. You could use the APIs
instead to automate the creation of SLOs in the future.
To create the SLO, click +Create SLO.
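As a sketch of that automation, an equivalent SLO can be created by POSTing to the Service Monitoring API. The service ID and distribution filter below are placeholders; copy the real values from the autogenerated JSON shown in the console:
# Create a request-based latency SLO via the Service Monitoring API.
# SERVICE_ID and the distribution filter are placeholders, not values
# from this lab.
SERVICE_ID=your-mesh-service-id
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://monitoring.googleapis.com/v3/projects/${PROJECT_ID}/services/${SERVICE_ID}/serviceLevelObjectives" \
  -d '{
    "displayName": "99.5% of requests faster than 1000 ms, per calendar day",
    "goal": 0.995,
    "calendarPeriod": "DAY",
    "serviceLevelIndicator": {
      "requestBased": {
        "distributionCut": {
          "distributionFilter": "<copy from the console JSON>",
          "range": { "max": 1000 }
        }
      }
    }
  }'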
Task 6. Diagnose the problem
Use service metrics to see where the problem is
Click on your SLO entry in the SLO list.
This displays an expanded view.
Your SLO will probably show that you are already out of error budget. If not,
wait 3-5 minutes and refresh the page. Eventually, you will exhaust
your error budget, because too many of the requests to
this service will hit the new backend, which has high latency.
In the menu pane, click Metrics.
Scroll down to the Latency section of the Metrics view and note
that the service latency increased a few minutes earlier, around the
time you deployed the canary version of the service.
From the Breakdown By dropdown, select Source service.
Which pods are showing high latency and causing the overall failure to hit
your SLO?
To return to the Service Mesh page, in the menu pane, click Service Mesh.
One SLO is flagged as out of error budget, and a warning indicator is displayed next to the
problem service in the Services listing.
Note:
You have only defined a single SLO for a single service. In a real
production environment, you would probably have multiple SLOs for
each service.
Also, you have not defined any alerting policy for your SLO.
In production, you would probably have Cloud Monitoring fire an alert if you
are exhausting your error budget faster than expected.
Use Cloud Trace to better understand where the delay is
In the Google Cloud Console, on the Navigation menu, click Trace > Trace explorer.
Click a dot plotted at around 3000 ms; it should represent one of
the requests to the product catalog service.
Note that all the time seems to be spent within the catalog service itself.
Although calls are made to other services, they all appear to return very
quickly, and something within the product catalog service
is taking a long time.
Note:
Cloud Service Mesh is automatically collecting information about network calls within the mesh and providing trace data that documents time spent on these calls. This is useful and required no extra developer effort.
However, how time is spent within the workload, in this case the product catalog service pod, isn't instrumented directly by Istio. If needed, to get this level of detail, the developer would add instrumentation logic within the service itself.
Task 7. Roll back the release and verify an improvement
In Cloud Shell, back out the canary release:
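A minimal sketch of one way to do this, assuming the manifests applied in Task 4 (the v2 deployment name productcatalogservice-v2 is assumed, not given in this lab):
# Remove the 75/25 traffic split, then remove the high-latency v2 backend.
kubectl delete -f ~/istio-samples/istio-canary-gke/canary/vs-split-traffic.yaml
kubectl delete deployment productcatalogservice-v2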
In the Google Cloud Console, on the Navigation menu, click Kubernetes Engine > Service Mesh.
Click on productcatalogservice, and then in the menu pane, click Health.
Note the current compliance percentage.
Click Metrics.
On the latency chart, all the latency series show a dip that corresponds to when you rolled back the
bad version of the workload.
Return to the Health page.
Compare the current compliance metric with the one you saw earlier. It should be higher now, reflecting the fact that you are no longer seeing high-latency requests.
Task 8. Visualize your mesh with the Cloud Service Mesh dashboard
On the Navigation menu, click Kubernetes Engine > Service Mesh.
View the Topology on the right side.
A chart representing your service mesh is displayed.
If you don't see a full topology chart, it's possible that not all
the topology data has been collected. It can take 10 or more minutes for this
data to be reflected in the chart. You can proceed to the next
section and return later to see the chart.
Click on the frontend workload node and note the services called by that
workload.
Take a couple of minutes to explore further and better understand the architecture
of the application. You can rearrange nodes, drill down into workloads to see
constituent deployments and pods, change time spans, etc.
Congratulations! You've used Google Cloud's operations suite
tooling to evaluate, troubleshoot, and improve service performance on your
GKE Enterprise cluster.
Review
In this lab, you learned about logging, monitoring, and tracing using
Google Cloud's operations suite.
End your lab
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
1 star = Very dissatisfied
2 stars = Dissatisfied
3 stars = Neutral
4 stars = Satisfied
5 stars = Very satisfied
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.