Deploy an Auto-Scaling HPC Cluster with Slurm


1 hour 5 Credits

GSP690


Overview

Google Cloud teamed up with SchedMD to release a set of tools that make it easier to launch the Slurm workload manager on Compute Engine, and to expand your existing cluster dynamically when you need extra resources. This integration was built by the experts at SchedMD in accordance with Slurm best practices.

In this lab, you will learn how to provision and operate an auto-scaling Slurm cluster on Google Cloud.

If you're planning on using the Slurm on Google Cloud integrations, or if you have any questions, please consider joining our Google Cloud & Slurm Community Discussion Group!


About Slurm

Basic architectural diagram of a stand-alone Slurm Cluster in Google Cloud.

Slurm is one of the leading workload managers for HPC clusters around the world. Slurm provides an open-source, fault-tolerant, and highly-scalable workload management and job scheduling system for small and large Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions:

  1. It allocates exclusive or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work.

  2. It provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes.

  3. It arbitrates contention for resources by managing a queue of pending work.
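As a rough orientation before you start, the sketch below maps each of these functions to standard Slurm commands you will meet later in this lab (the options shown are illustrative, and my_job.sh is a hypothetical script name):

salloc -N 1 -t 00:10:00      # function 1: request an allocation of 1 node for 10 minutes
srun hostname                # function 2: launch and monitor work on the allocated nodes
sbatch my_job.sh             # function 2: submit a batch script (my_job.sh is hypothetical)
squeue                       # function 3: inspect the queue of pending and running work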

Objectives

In this lab, you will learn:

  • How to set up a Slurm cluster.

  • How to run a job using Slurm.

  • How to query cluster information and monitor running jobs in Slurm.

  • How to autoscale nodes to accommodate specific job parameters and requirements.

  • Where to find help with Slurm.

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents conflicts between your personal account and the Student account, which could cause extra charges to be incurred on your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.

  4. Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.

    Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  5. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Note: You can view the menu with a list of Google Cloud Products and Services by clicking the Navigation menu at the top-left.

Task 1. Deploy Slurm Cluster using Google Marketplace

This deployment method leverages GCP Marketplace to make setting up clusters a breeze without leaving your browser. While this method is simpler and less flexible, it is great for exploring what slurm-gcp is!

  1. In the Google Cloud Console, go to Navigation menu > Marketplace. Search for SchedMD-Slurm-GCP in the search bar and press Enter.

  2. Click on the result and then click Launch.


  3. Fill in the following details in the form that appears:

    • Cluster name: g1
    • Zone: us-east1-b
    • Network: Check the box for Controller External IP and Login External IP.
    • In Slurm Controller machine type:
      • Series: E2
      • Machine type: e2-medium (2 vCPU, 4 GB memory)
    • In Slurm Login:
      • Select SHOW SLURM LOGIN OPTIONS
      • Series: E2
      • Machine type: e2-small
    • Select machine type for compute partition nodes:
      • Series: E2
      • Machine type: e2-standard-2
  4. Leave the rest of the options as default. Select I accept the GCP Marketplace Terms of Service and SchedMD/Slurm Terms of Service.

  5. Click Deploy.

Using the Cluster

  1. SSH to the Login node by clicking the SSH TO SLURM LOGIN NODE button.


If you see an error related to IAP on your first try, click retry.

Once you SSH to the Slurm login node, you will see the following:


If the following message appears upon login:

*** Slurm is currently being installed/configured in the background. ***

Wait and do not proceed with the lab until you see this message (approx 5 mins):

*** Slurm login setup complete ***
  2. Once you see the above message, you will have to log out and log back in to g1-login0 to continue the lab. To do so, press CTRL + C to end the task, then execute the following command to log out of your instance:

exit

Now, reconnect to your login VM the same way as before.

Tour of the Slurm CLI tools

You're now logged in to your cluster's Slurm login node. This is the node that's dedicated to user/admin interaction, scheduling Slurm jobs, and administrative activity.

Let's run a couple commands to introduce you to the Slurm command line.

  1. Execute the sinfo command to view the status of our cluster's resources:

sinfo

Sample output of sinfo appears below. sinfo reports the nodes available in the cluster, the state of those nodes, and other information like the partition, availability, and any time limitation imposed on those nodes.

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
p1*          up   infinite     10  idle~ g1-compute-0-[0-9]

You can see our 10 nodes, dictated by the debug partition’s “max_node_count” of 10, are marked as “idle~” (the node is in an idle and non-allocated mode, ready to be spun up).
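If you want more detail than the default sinfo view, the standard sinfo options below give a per-node listing and a custom column layout (illustrative; the format fields are documented in the sinfo man page):

sinfo -N -l                       # one line per node, long format with CPU and memory details
sinfo -o "%P %a %l %D %t %N"      # custom columns: partition, availability, time limit, node count, state, nodelist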

  2. Next, execute the squeue command to view the status of your cluster's queue:

squeue

The expected output of squeue appears below. squeue reports the status of the queue for a cluster. This includes the job ID of each job scheduled on the cluster, the partition the job is assigned to, the name of the job, the user that launched the job, the state of the job, the wall clock time the job has been running, and the nodes the job is allocated to. You don’t have any jobs running yet, so the output contains only the header line.

JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)

The Slurm commands “srun” and “sbatch” are used to run jobs that are put into the queue. “srun” runs parallel jobs, and can be used as a wrapper for mpirun. “sbatch” is used to submit a batch job to Slurm, and can call srun once or many times in different configurations. “sbatch” can take batch scripts, or can be used with the --wrap option to run the entire job from the command line.
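As a small illustration of the difference (you do not need to run these now; the lab's actual job comes next), the two commands below do roughly the same work, one interactively and one through the batch queue:

srun -N 2 hostname                     # interactive: output streams back to your terminal
sbatch -N 2 --wrap="srun hostname"     # batch: the same work is queued, and output goes to slurm-%j.out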

  3. Run a job so you can see Slurm in action and get a job in your queue!

Task 2. Run a Slurm job and scale the cluster

Now that you have your Slurm cluster running, let’s run a job and scale the cluster up.

  1. While logged in to g1-login0, run the following command to create a "hostname_batch" file:

tee hostname_batch << END
#!/bin/sh
sbatch -N2 --wrap="srun hostname"
END

This command writes a short script containing the Slurm batch command. It specifies that sbatch will run on 2 nodes with the “-N” option, and that those nodes will run an “srun hostname” command via the “--wrap” option.

By default, sbatch will write its output to “slurm-%j.out” in the working directory the command is run from, where %j is substituted for the Job ID according to the Slurm Filename Patterns. In this example, sbatch is run from the user’s /home folder, which is an NFS-based shared file system hosted on the controller by default. This allows compute nodes to share input and output data if desired. In a production environment, the working storage should be separate from the /home storage to avoid performance impacts on cluster operations. Separate storage mounts can be specified in the tfvars file in the “network_storage” options.
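If you want to direct sbatch output somewhere other than the default slurm-%j.out, you can pass the --output option with a filename pattern; for example (the filename shown here is illustrative):

sbatch -N2 --output="hostname-%j.out" --wrap="srun hostname"   # writes output to hostname-<jobid>.out instead of slurm-<jobid>.out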

  2. Execute the sbatch script using the sbatch command line:

sbatch hostname_batch

Running sbatch will return a Job ID for the scheduled job, for example:

Submitted batch job 2

You can use the Job ID returned by the sbatch command to track and manage the job execution and resources.
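For example, assuming the returned Job ID was 2, the standard Slurm tools below let you inspect, account for, or cancel that job (sacct only returns data if job accounting is enabled on the cluster):

scontrol show job 2    # detailed scheduler view of the job: state, nodes, resources, working directory
sacct -j 2             # accounting record for the job, if accounting is enabled
scancel 2              # cancel the job if it is no longer needed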

  3. Execute the following command to view the Slurm job queue:

squeue

You will likely see the job you submitted listed, similar to the output below:

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
    2        p1 hostname student- CF       0:19      1 g1-compute-0-0

Since you didn’t have any compute nodes provisioned, Slurm will automatically create compute instances according to the job requirements. The automatic nature of this process has two benefits.

First, it eliminates the work typically required in an HPC cluster of manually provisioning nodes, configuring the software, integrating the node into the cluster, and then deploying the job. Second, it allows users to save money because idle, unused nodes are scaled down until the minimum number of nodes is running.
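You can also watch this from the Google Cloud side with a hedged gcloud check; the instance name prefix below assumes the g1 cluster name used in this lab (run it from Cloud Shell if gcloud is not configured on the login node):

gcloud compute instances list --filter="name~^g1-compute"   # newly provisioned Slurm compute VMs appear here as they are created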

  4. You can also execute the sinfo command to view the Slurm cluster info:

sinfo

This will show the nodes listed in squeue in the “idle” state, meaning the nodes are being created:

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
p1*          up   infinite      8  idle~ g1-compute-0-[2-9]
p1*          up   infinite      2   idle g1-compute-0-[0-1]

You can also check the VM instances section in Cloud Console to view the newly provisioned nodes. It will take a few minutes to spin up the nodes and get Slurm installed before the job is allocated to the newly provisioned nodes. Your VM instances list will soon resemble the following:

The VM instances page with a list of newly provisioned nodes.

Once the nodes are running the job, the instances will move to an “alloc” state, meaning the nodes are allocated to a job:

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
p1*          up   infinite      8  idle~ g1-compute-0-[2-9]
p1*          up   infinite      2  alloc g1-compute-0-[0-1]

Once a job is complete, it will no longer be listed in squeue, and the “alloc” nodes in sinfo will return to the “idle” state. Run “squeue” periodically until the job completes, which should take a minute or two.

The output file slurm-%j.out will have been written to your NFS-shared /home folder and will contain the hostnames. Open or cat the output file (typically slurm-3.out); it will contain:

g1-compute-0-0
g1-compute-0-1
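Since the exact Job ID (and therefore the output filename) can differ on your cluster, a convenient, illustrative way to print the most recently written output file is:

cat $(ls -t slurm-*.out | head -n 1)   # show the newest slurm-<jobid>.out in the current directory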

Great work, you’ve run a job and scaled up your Slurm cluster!

Note: sbatch is the Slurm command which runs Slurm batch scripts. More Info.

srun is the Slurm command which runs commands in parallel. More Info.

squeue is the Slurm queue monitoring command line tool. It lists all running jobs, and the resources they are associated with. More Info.

sinfo is the Slurm command which lists the information about the Slurm cluster. This lists the Slurm partition, availability, time limit, and current state of the nodes in the cluster. More Info.

Click Check my progress to verify the objective. Run a Slurm Job

Task 3. Run an MPI job

  1. Now let’s run an MPI job across our nodes. While logged in to g1-login0, use curl to download an MPI program written in the C programming language:

curl -O https://raw.githubusercontent.com/mpitutorial/mpitutorial/gh-pages/tutorials/mpi-hello-world/code/mpi_hello_world.c
  2. To use OpenMPI tools, load the OpenMPI module by running this command (an optional verification check follows below):

module load openmpi
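To confirm the module loaded correctly, you can run these optional checks; openmpi should appear among the loaded modules and the mpicc wrapper should now be on your PATH:

module list    # openmpi should appear in the list of currently loaded modules
which mpicc    # the OpenMPI compiler wrapper should resolve to a path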
  3. Use the mpicc tool to compile the MPI C code. Execute the following command:

mpicc mpi_hello_world.c -o mpi_hello_world

This compiles the C code to machine code so that you can run it across the cluster through Slurm.
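If you are curious what mpicc actually does, it wraps the system C compiler with the MPI include and library flags; with OpenMPI you can ask the wrapper to print the underlying command without running it (optional, illustrative):

mpicc --showme mpi_hello_world.c -o mpi_hello_world   # print the underlying compiler command mpicc would run, without compiling
file mpi_hello_world                                  # confirm the earlier compile produced an executable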

  4. Next, run the following to create an sbatch script called “helloworld_batch”:

tee helloworld_batch << END
#!/bin/bash
#
#SBATCH --job-name=hello_world
#SBATCH --output=hello_world-%j.out
#
#SBATCH --nodes=2
srun mpi_hello_world
END

This script defines the Slurm batch execution environment and tasks. First, the execution environment is defined as bash. Next, the script defines the Slurm options with the “#SBATCH” lines. The job name is defined as “hello_world”.

The output file is set as “hello_world-%j.out” where %j is substituted for the Job ID according to the Slurm Filename Patterns. This output file is written to the directory the sbatch script is run from. In our example this is the user’s /home folder, which is an NFS-based shared file system. This allows compute nodes to share input and output data if desired. In a production environment, the working storage should be separate from the /home storage to avoid performance impacts on cluster operations.

Finally, the number of nodes this script should run on is defined as 2.

After the options are defined the executable commands are provided. This script will run the mpi_hello_world code in a parallel manner using the srun command, which is a drop-in replacement for the mpirun command.
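If you later want finer control over rank placement, sbatch accepts task-level options as well. The hedged variant below (illustrative only; continue the lab with the helloworld_batch script you just created, and note that the helloworld_4rank name is hypothetical) would run two ranks on each of the two nodes:

tee helloworld_4rank << END
#!/bin/bash
#
#SBATCH --job-name=hello_world_4rank
#SBATCH --output=hello_world_4rank-%j.out
#
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
srun mpi_hello_world
END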

  5. Then execute the sbatch script using the sbatch command line:

sbatch helloworld_batch

Running sbatch will return a Job ID for the scheduled job, for example:

Submitted batch job 4

This will run the mpi_hello_world executable across 2 nodes, with one task per node, and print the output to the hello_world-4.out file.

Since you had 2 nodes already provisioned this job will run quickly.

  6. Monitor squeue until the job has completed and is no longer listed:

squeue
  7. Once completed, open or cat the hello_world-4.out file and confirm it ran on two compute nodes (typically g1-compute-0-[0-1]):

cat hello_world-4.out

Expected output (the exact node names may differ):

Hello world from processor g1-compute-0-2, rank 0 out of 2 processors
Hello world from processor g1-compute-0-3, rank 1 out of 2 processors

After being idle for 5 minutes (configurable with the YAML’s suspend_time field, or slurm.conf’s SuspendTime field) the dynamically provisioned compute nodes will be de-allocated to release resources. You can validate this by running sinfo periodically and observing the cluster size fall back to 0.
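If you want to confirm the suspend timeout your cluster is actually using, you can query the live Slurm configuration from the login node (optional, illustrative):

scontrol show config | grep -i suspendtime   # prints the SuspendTime value the controller is configured with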

For reference, while the MPI job was running, squeue showed output similar to the following:

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
    2     debug hello_wo username  R       0:10      2 g1-compute-0-[0-1]

Click Check my progress to verify the objective. Run an MPI Job

  8. Log out of the Slurm login node by running this command:

exit

Summary

What was covered

  • How to deploy Slurm on GCP.

  • How to run a job using Slurm on GCP.

  • How to query cluster information and monitor running jobs in Slurm.

  • How to autoscale nodes with Slurm on GCP to accommodate specific job parameters and requirements.

  • How to compile and run MPI applications on Slurm on GCP.

Congratulations!

Congratulations, you’ve created a Slurm cluster on Google Cloud Platform and used its latest features to auto-scale your cluster to meet workload demand! You can use this model to run any variety of jobs, and it scales to hundreds of instances in minutes by simply requesting the nodes in Slurm.

Are you building something cool using Slurm's new Google Cloud-native functionality? Have questions? Have a feature suggestion? Reach out to the Google Cloud team today through Google Cloud's High Performance Computing Solutions website, or chat with us in the Google Cloud & Slurm Discussion Group!

Find Slurm support

If you need support using these integrations in testing or production environments, please contact SchedMD directly using their contact page.

You may also use SchedMD's Troubleshooting guide.

Finally, you may also post your question to the Google Cloud & Slurm Discussion Group.

Note: This community discussion group is not an official Slurm support channel, and a response is not guaranteed. Any support will be provided as best-effort, outside of traditional Slurm support channels. For official support please contact SchedMD.

Next steps / Learn more


Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated October 11, 2022
Lab Last Tested October 11, 2022

Copyright 2023 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.