
Before you begin
- Labs create a Google Cloud project and resources for a fixed time.
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin.
Lab tasks:
- Create a Firestore Database (50 points)
- Deploy the Autoscaler (50 points)
The Autoscaler tool for Cloud Spanner is an open source tool that allows you to automatically increase or reduce the compute capacity in one or more Spanner instances, based on their utilization.
Cloud Spanner is a fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability.
When you create a Cloud Spanner instance, you choose the number of nodes or processing units that provide compute resources for the instance. As the instance's workload changes, Cloud Spanner does not automatically adjust the number of nodes or processing units in the instance.
The Autoscaler monitors your instances and automatically adds or removes compute capacity to ensure that they stay within the recommended maximums for CPU utilization and the recommended limit for storage per node.
In this lab you'll deploy the Autoscaler in the per-project configuration. In this deployment configuration the Autoscaler is located in the same project as the Cloud Spanner instance being autoscaled.
The diagram above shows the components of the Cloud Spanner Autoscaler and the interaction flow:
The Poller component, made up of Cloud Scheduler, Cloud Pub/Sub and the Poller Cloud Run Function, queries the Cloud Monitoring API to retrieve the utilization metrics for each Spanner instance. For each instance, the Poller function pushes one message into the Scaling Pub/Sub topic containing the utilization metrics for the specific Spanner instance, and some of its corresponding configuration parameters.
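As a concrete illustration, the message the Poller pushes into the Scaling Pub/Sub topic might carry a payload along these lines. All field names and values below are illustrative assumptions based on the description above, not the tool's exact schema:

```json
{
  "projectId": "my-project",
  "instanceId": "autoscale-test",
  "units": "PROCESSING_UNITS",
  "currentSize": 300,
  "scalingMethod": "LINEAR",
  "metrics": {
    "high_priority_cpu": 78.32,
    "rolling_24_hr_cpu": 40.1,
    "storage_utilization": 25.0
  }
}
```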
The Scaler component is made up of Cloud Pub/Sub, Scaler Cloud Run Function and Cloud Firestore. For each message the Scaler function compares the Spanner instance metrics against the recommended thresholds, plus or minus an allowed margin. Using the chosen scaling method it determines if the instance should be scaled, and the number of nodes or processing units that it should be scaled to.
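The threshold comparison can be sketched in plain shell arithmetic. The 65% CPU threshold appears later in this lab; the 5-point margin and the function name are assumptions for illustration only:

```shell
#!/bin/bash
# Hedged sketch of the Scaler's threshold check.
# Prints the action for a CPU reading against threshold +/- margin.
scaling_action() {
  local cpu=$1 threshold=$2 margin=$3
  if (( cpu > threshold + margin )); then
    echo "scale up"
  elif (( cpu < threshold - margin )); then
    echo "scale down"
  else
    echo "within range"
  fi
}

# The lab's 78% reading against the 65% threshold with an assumed 5-point margin:
scaling_action 78 65 5   # prints "scale up"
```

A reading inside the margin (for example 66%) would produce no scaling action, which is how the margin prevents the Autoscaler from reacting to small fluctuations.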
Throughout the flow, the Spanner Autoscaler writes a step by step summary of its recommendations and actions to Cloud Logging for tracking and auditing.
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.
Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.
Click through the subsequent pages:
After a few moments, the Cloud Console opens in this tab.
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
Click Authorize.
Your output should now look like this:
For more information on gcloud in Google Cloud, refer to the gcloud CLI overview guide.
If you were running gcloud on your own machine, the config settings would persist across sessions. In Cloud Shell, however, you need to set them again for every new session or reconnection.
Click Check my progress to verify the objective.
Set the environment variable TF_VAR_spanner_name to autoscale-test, the name of the Spanner instance that was created for you during lab setup. This causes Terraform to configure the Autoscaler for a Cloud Spanner instance named autoscale-test and to update IAM on that instance.
Specifying the name of an existing instance is what you would typically do for production deployments where there is already a Cloud Spanner deployment.
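In Cloud Shell, setting the variable amounts to a single export. The variable and instance names come from this lab:

```shell
# Point Terraform at the existing Spanner instance created during lab setup.
export TF_VAR_spanner_name=autoscale-test

# Confirm the value Terraform will pick up:
echo "$TF_VAR_spanner_name"   # prints "autoscale-test"
```

Because Cloud Shell config does not persist across sessions, re-run the export if you reconnect before applying the Terraform configuration.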
Run the terraform apply -parallelism=2 command in Cloud Shell. Enter yes when prompted, after reviewing the resources that Terraform intends to create.

Go to Navigation menu > Firestore, then click your database id (default).
Now, click the Switch to Native Mode button, and then click Switch Modes.
If you see Collections failed to load, wait a few minutes and refresh the page.

Click Check my progress to verify the objective.
Click the menu icon (three lines) at the top left to open the Navigation menu, then click View all products, then Databases, and finally Spanner. The main Spanner page loads.
Click on the instance name autoscale-test, then click on System Insights on the left where you will see various Spanner metrics.
In the graph you can see two spikes that go over the 65% recommended threshold for CPU utilization.
The Autoscaler monitors the Spanner instance, and when the CPU utilization goes over 65%, it adds compute capacity. In this example, it adds more processing units each time. The number of processing units or nodes that the Autoscaler adds is determined by the scaling method that the Autoscaler uses.
Under Query results you can see all the messages from the Autoscaler functions. As the poller only runs every 2 minutes, you may need to re-run the query to receive the log messages.
Under Query results you will see only the messages from the poller function. As the poller only runs every 2 minutes, you may need to re-run the query to receive the log messages.
The Poller function is continuously monitoring the Spanner instance:
In this example, the Poller function gets the High Priority CPU, Rolling 24hr CPU and Storage metrics and publishes a message for the Scaler function. Note that the High Priority CPU is 78.32% at this point.
The Scaler function receives that message, and decides if the Spanner instance should be scaled.
In this example the LINEAR scaling method suggests scaling from 300 to 400 processing units based on the High Priority CPU value. Since the last scaling operation was done more than five minutes ago, the Scaler makes the decision to scale to 400 processing units.
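The 300-to-400 step in this example is consistent with a simple proportional rule: scale the current size by the ratio of observed utilization to the threshold, then round up to the next step of 100 processing units. This is a sketch of the idea under that assumed formula, not the tool's exact code:

```shell
#!/bin/bash
# Hedged sketch of a LINEAR scaling calculation (assumed formula):
#   suggested = current_size * observed_cpu / threshold,
#   rounded up to the next multiple of 100 processing units.
linear_suggestion() {
  local current=$1 cpu=$2 threshold=$3
  local raw=$(( current * cpu / threshold ))
  echo $(( (raw + 99) / 100 * 100 ))   # round up to the next 100 PU
}

# The lab's values, with 78.32% CPU truncated to 78 for integer shell math:
linear_suggestion 300 78 65   # prints 400, matching the 300 to 400 step in the log
```

300 * 78 / 65 gives 360 processing units, which rounds up to 400, the size the Scaler chose once the five-minute cooldown since the last scaling operation had passed.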
You've now implemented the Autoscaler tool for Cloud Spanner, which allows you to automatically increase or reduce compute capacity based on workload needs. You practiced using Cloud Run Functions, Cloud Spanner, Cloud Scheduler, and Cloud Monitoring.
Manual Last Updated November 19, 2024
Lab Last Tested November 19, 2024
Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.