Checkpoints
- Create Persistent Disks (/15)
- Attach Persistent Disks to the Virtual Machine (/15)
- Configure and Mount Linux LVM volumes within Linux Virtual Machine (/10)
- Create a Cloud Storage Bucket (/20)
- Set permission on Cloud Storage Bucket (/20)
SAP Landing Zone: Add and Configure Storage to SAP VMs
GSP1070
Overview
In this lab, you will learn about the key activities required to add and configure the disk storage of SAP VMs, as part of deploying the various Google Cloud services and resources needed to build the majority of SAP Landing Zones on Google Cloud.
The tasks in this lab are based on the activities required to plan and deploy the disk storage of the virtual machine(s) that host a provided mock SAP estate.
Objectives
In this lab, you will learn how to:
- Create "Persistent Disk(s)"
- Attach "Persistent Disk(s)" to VM(s)
- Configure and mount "Linux LVM volume(s)" within a Linux VM
- Create "Cloud Storage Bucket(s)"
- Set "IAM permissions" on Cloud Storage Bucket(s)
Setup and Requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
- Time to complete the lab. Remember, once you start, you cannot pause a lab.
How to start your lab and sign in to the Google Cloud Console
- Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
  - The Open Google Console button
  - Time remaining
  - The temporary credentials that you must use for this lab
  - Other information, if needed, to step through this lab
- Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.
  Tip: Arrange the tabs in separate windows, side-by-side.
  Note: If you see the Choose an account dialog, click Use Another Account.
- If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.
- Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.
  Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials.
  Note: Using your own Google Cloud account for this lab may incur extra charges.
- Click through the subsequent pages:
  - Accept the terms and conditions.
  - Do not add recovery options or two-factor authentication (because this is a temporary account).
  - Do not sign up for free trials.
After a few moments, the Cloud Console opens in this tab.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
- Click Activate Cloud Shell at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
- (Optional) You can list the active account name with this command:
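For example, you can use the gcloud CLI's standard command for listing credentialed accounts:

gcloud auth list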
- Click Authorize.
- Your output should now look like this:
Output:
- (Optional) You can list the project ID with this command:
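For example, you can use the gcloud CLI's standard command for listing the configured project:

gcloud config list project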
Output:
Example output:
Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.
Prerequisites
Although not mandatory, to maximize your learning in this lab, the following labs should be completed first:
- Cloud Foundation Toolkit (CFT) Training: Getting Started
- SAP Landing Zone - Lab 1: Plan & deploy the SAP network
- SAP Landing Zone - Lab 2: Plan & deploy SAP VMs
Executing activities in this lab
This lab will focus on the following “Landing Zone” topic areas:
- Google Cloud services and resources to be deployed
This lab covers a sample of the key activities involved in deploying the various Google Cloud services and resources used to build the majority of "SAP Landing Zones" on Google Cloud.
The activities in this lab build on each other, so you must work through them in the order stated. This is intentional: it is representative of real-world scenarios, in which there are dependencies between Google Cloud resources, and it highlights the need to work accurately, since corrective action can be very effort intensive.
Google Cloud supports many different options which can be used to deploy Google Cloud resources. These include but are not limited to:
- Manually via the Google Cloud Console
- Command line (e.g. gcloud, gsutil, etc.)
- API calls
- Various programming languages (e.g. Go, Java, Node.js, Python)
- Infrastructure as Code tools (e.g. Terraform)
Which option(s) are adopted for a customer-specific "SAP Landing Zone" would be defined during the design process.
In this lab it is assumed that all activities will be executed from the command line, and the required command line instructions will be provided. However, students may opt to use any supported option to perform the activities. Where available, links to the relevant "quickstart" online documentation will be provided, which include syntax instructions for the other options.
In this lab a sample naming convention is provided for all resources (please reference the naming convention section and table provided in the pre-reading material for more information). The bulk of the activities require the Google Cloud resources to be deployed in accordance with the naming convention and the configuration specifications provided, as this is representative of what would be expected in real-world scenarios. Various activities in these labs will only be marked as completed if they are fully compliant with both the expected naming and the stated configuration specifications.
Parameter value color coding key:
To help you understand what the various parameter values relate to, the following color coding of the parameter values has been adopted:
- Green = refers to the ID of the resource being created
- Orange = refers to free text entry, generally used for descriptions
- Blue = refers to the ID of a resource which has previously been created
- Purple = refers to a configuration parameter of the resource being created
Timekeeping
This lab is time-restricted to 60 minutes. At the end of that time the lab environment will automatically shut down and any resources deployed will be erased. Recommencing the lab will require all tasks to be executed afresh.
In order to ensure you have sufficient time to complete all the tasks, before clicking Start Lab it is highly advisable that you:
- Read through all the tasks of the lab
- Decide which option(s) you want to use to deploy the Google Cloud resources
- Prepare the deployment code and tooling to be executed
Mock SAP Estate
To help you visualize the activities in this lab and relate them back to your own SAP estate, a mock SAP estate has been provided on which all activities of this lab are based.
SAP Estate Overview
The table below outlines the key configuration attributes of the SAP estate this lab is based on. The table should be referenced in combination with the naming convention table provided in the next section.
The components highlighted in yellow in the table are those that will be deployed by the activities of this lab.
Lab pre-read material: “SAP on Google Cloud” onboarding
For additional real-world context on the customer's cloud "onboarding" journey, you are advised to read the "Lab pre-read: SAP on Google Cloud onboarding" material.
Task 1. Create “Persistent Disk(s)”
Although disks are often defined at the time a virtual machine is created, it is quite common for additional disks to be required later.
Real world considerations:
On Google Cloud the performance of disk storage is determined by a combination of the number of vCPUs, the type of disk, and its cumulative capacity. Because of the way the Google disk storage subsystem works, having multiple disks of the same type will produce the same performance as one disk of the same cumulative capacity (e.g. 50GB + 1000GB + 500GB pd-balanced disks have the same performance characteristics as a single 1550GB pd-balanced disk).
There are a number of factors which determine whether a customer should opt for one large disk or multiple smaller disks of the same type with the same cumulative size.
In the case of SAP HANA, and similarly with other SAP-supported DBs, if disk storage snapshots (orchestrated by the RDBMS) are to be adopted as a backup approach, it is essential to ensure the DB data files, the DB log space/area, and the DB executables (where the backup catalog is stored) each reside on separate disks, so that recovery activities can be successfully actioned.
In this lab, it will be assumed that the customer will be adopting disk storage snapshots as a backup approach and that all systems are for non-productive SAP HANA systems based on the following:
Total disk size = Disk 01 + Disk 02 + Disk 03
- Disk 01 -> hana/shared = 50 GB
- Disk 02 -> hana/data = (RAM x 1.2) + 10%
- Disk 03 -> hana/log = (based on RAM) + 10%
One of the key benefits of Google Cloud is that you only pay for what you consume, and you can grow your resources quickly and almost without limit when needed. This means that during initial deployment activities very small VM and disk sizes are highly recommended, so as to avoid wasted costs. This can allow significant cost savings during initial deployment phases, prior to certain configuration activities being performed.
This is particularly relevant with SAP HANA VMs, which may have 128GB+ of RAM & 800GB+ of disk storage when live; until both the SAP HANA system and the application data load are deployed to the VM, virtually all of these resources will be idle.
Online Documentation:
Dependencies:
In order to be able to successfully deploy the required “Persistent Disk(s)” the following dependencies will need to be correctly deployed before proceeding with this activity:
- GCP account setup
- IAM setup
- Resource hierarchy setup
- GCP project to host VM resources is set up
Naming Convention:
- ID:
- Character limit: 62
- Structure: <01><02>-disk--<03>-<04>-<05>--<06>--data-disk-<07>--<08>
<01> - company id
<02> - business entity id
<03> - workload id
<04> - operational area id
<05> - environment id
<06> - system id
<07> - disk count no.
<08> - short virtual hostname
- Description:
- Character limit: unlimited
- Structure: “<01>-<02> disk = <03>-<04>-<05> - <06> (<07>) Data disk <08> - virtual host=<09>”
<01> - Company
<02> - Business entity ID
<03> - Workload ID
<04> - Operational area ID
<05> - Environment ID
<06> - Application ID
<07> - System ID
<08> - disk count no.
<09> - short virtual hostname
Resources to be created (Count = 3):
Using any supported method, create the "Persistent Disk(s)" with the following configuration details:
#01 of 03:
Parameter | Value |
---|---|
ID | xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 |
Description | XYZ-Global disk = CERPS-BaU-Dev - SAP HANA 1 (DH1) - Data disk 01 - virtualhost=d-cerpshana1 |
Zone | |
SAP HANA DB - Live size | 128 GB RAM |
Disk - Role | hana/shared |
Disk - Type | pd-balanced |
Disk - Deployment size (GB) | 50 |
Disk - Live size (GB) | 50 |
#02 of 03:
Parameter | Value |
---|---|
ID | xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1 |
Description | XYZ-Global disk = CERPS-BaU-Dev - SAP HANA 1 (DH1) - Data disk 02 - virtualhost=d-cerpshana1 |
Zone | |
SAP HANA DB - Live size | 128 GB RAM |
Disk - Role | hana/data |
Disk - Type | pd-balanced |
Disk - Deployment size (GB) | 50 |
Disk - Live size (GB) | 180 |
#03 of 03:
Parameter | Value |
---|---|
ID | xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1 |
Description | XYZ-Global disk = CERPS-BaU-Dev - SAP HANA 1 (DH1) - Data disk 03 - virtualhost=d-cerpshana1 |
Zone | |
SAP HANA DB - Live size | 128 GB RAM |
Disk - Role | hana/log |
Disk - Type | pd-balanced |
Disk - Deployment size (GB) | 50 |
Disk - Live size (GB) | 70 |
Command line:
Command 01 of 01 - Create a “Persistent Disk” with a custom name for a VM:
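As an illustration (not necessarily the lab's original command), disk #01 could be created with the gcloud CLI as follows; replace <ZONE> with the zone assigned to your lab project and repeat the command for disks #02 and #03 with their respective IDs and descriptions:

gcloud compute disks create xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 \
    --description="XYZ-Global disk = CERPS-BaU-Dev - SAP HANA 1 (DH1) - Data disk 01 - virtualhost=d-cerpshana1" \
    --type=pd-balanced \
    --size=50GB \
    --zone=<ZONE>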
(Example output)
Review and validate in the Cloud Console:
Confirm the “Persistent Disk(s)” have been deployed in the Cloud Console using the following path “Navigation menu -> COMPUTE -> Compute Engine -> Storage -> Disks”.
Ensure all lab resources have finished deploying
You can confirm the lab is completely deployed by checking that there are 6 VM instances deployed in the Cloud Console using the following path: Navigation menu -> COMPUTE -> Compute Engine -> Virtual Machines -> VM instances. Navigate to Compute Engine > VM instances and wait until all the instances have been created.
Click Check my progress to verify the objective.
Task 2. Attach “Persistent Disk(s)” to Virtual Machine(s)
Although disks are often defined at the time a virtual machine is created, it is quite common for additional disks to be required later. Once a disk is created, it must be attached to a VM before it can be used by that VM.
Real world considerations:
Disks connected to VMs have 2 levels of management:
- At the hypervisor level - For defining the size and type of disks
- At the VM level - For defining how the VM will access the disk (e.g.: via file system, as raw devices, etc.)
A VM can have many disks attached to it (max 127), so it is important to be able to easily identify which disk at the VM level relates to the corresponding disk at the hypervisor level for various actions (e.g. increasing the size of a disk). Google Cloud provides the "--device-name" parameter when attaching a disk to a VM, which can be populated with free text. When using commands such as "ls -l /dev/disk/by-id", the OS will list the disks using the value defined for the "--device-name" parameter.
Online Documentation:
Dependencies:
In order to be able to successfully attach the required "Persistent Disk(s)", the following dependencies will need to be correctly deployed before proceeding with this activity:
- GCP account setup
- IAM setup
- Resource hierarchy setup
- GCP project to host VPC setup
- VPC network(s) deployed
- VPC subnets deployed
- VPC firewall rule(s) deployed
- GCP project to host the VM(s) and store the disk(s) is set up
- Service account(s) deployed
- Persistent disk to be attached to a VM has been created
- VM to which the disk(s) are to be attached has been created
Resources to be created (Count = 3):
Using any supported method attach the “Persistent Disk(s)” to the required VM(s) with the following configuration details:
#01 of 03:
Parameter | Value |
---|---|
VM - ID | xgl-vm--cerps-bau-dev--dh1--be1ncerps001 |
Disk - ID | xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 |
Zone | |
OS device name | xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 |
#02 of 03:
Parameter | Value |
---|---|
VM - ID | xgl-vm--cerps-bau-dev--dh1--be1ncerps001 |
Disk - ID | xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1 |
Zone | |
OS device name | xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1 |
#03 of 03:
Parameter | Value |
---|---|
VM - ID | xgl-vm--cerps-bau-dev--dh1--be1ncerps001 |
Disk - ID | xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1 |
Zone | |
OS device name | xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1 |
Command line:
Command 01 of 01 - Attach a “Persistent Disk” to a VM:
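As an illustration (not necessarily the lab's original command), disk #01 could be attached with the gcloud CLI as follows; replace <ZONE> with the VM's zone and repeat for disks #02 and #03:

gcloud compute instances attach-disk xgl-vm--cerps-bau-dev--dh1--be1ncerps001 \
    --disk=xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 \
    --device-name=xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 \
    --zone=<ZONE>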
(Example output)
Review and validate in the Cloud Console:
Confirm the “Persistent Disk(s)” have been attached to the required VM(s) in the Cloud Console using the following path “Navigation menu -> COMPUTE -> Compute Engine -> Storage -> Disks”.
Click Check my progress to verify the objective.
Task 3. Configure and Mount “Linux LVM volume(s)” within Linux Virtual Machine(s)
Once a newly created disk has been attached to a VM, it still needs to be prepared before it can be accessed via the OS. Linux LVM is commonly used to partition disks for usage at the OS level.
Real world considerations:
With Google Cloud the performance of disk storage is determined by a combination of the number of vCPUs, the type of disk, and its cumulative capacity. Because of the way the Google disk storage subsystem works, having multiple disks of the same type will produce the same performance as one disk of the same cumulative capacity (e.g. 50GB + 1000GB + 500GB pd-balanced disks have the same performance characteristics as a single 1550GB pd-balanced disk).
Some other cloud providers recommend using multiple disks configured in a software RAID 0 group (e.g. using LVM) to achieve the required performance. On GCP there is no performance benefit from configuring multiple disks in a RAID 0 group, and this should therefore be avoided (unless there is some other, non-performance-related justification).
Online Documentation:
Dependencies:
In order to be able to successfully configure the "Linux LVM volume(s)", the following dependencies will need to be correctly deployed before proceeding with this activity:
- GCP account setup
- IAM setup
- Resource hierarchy setup
- GCP project to host VPC setup
- VPC network(s) deployed
- VPC subnets deployed
- VPC firewall rule(s) deployed
- GCP project to host the VM(s) and store the disk(s) is set up
- Service account(s) deployed
- VM to which the disk(s) are to be attached has been created
- Persistent disk to be attached to a VM has been created
- Persistent disk has been attached to the required VM
Naming Convention:
- Google Persistent Disk ID in Linux:
- Structure: google-<01>
<01> - OS device name
- LVM - Volume Group:
- Structure: vg--<01>
<01> - OS device name
- LVM - Logical Volume:
- Structure: lv--<01>
<01> - the directory mount point of the logical volume (replace “/” with “-”)
Resources to be created (Count = 3):
Navigate to the Compute Engine > VM instances page, start and SSH into the required VM, and then create and mount the following "Linux LVM volume(s)" with the following configuration details:
#01 of 03:
Parameter | Value |
---|---|
VM - ID | xgl-vm--cerps-bau-dev--dh1--be1ncerps001 |
OS device name | xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 |
LVM - Volume group name | vg--xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 |
LVM - Logical volume name | lv--hana-shared-DH1 |
Filesystem - Mount point | /hana/shared/DH1 |
Filesystem - Parent directory | /hana |
Filesystem - Type | xfs |
#02 of 03:
Parameter | Value |
---|---|
VM - ID | xgl-vm--cerps-bau-dev--dh1--be1ncerps001 |
OS device name | xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1 |
LVM - Volume group name | vg--xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1 |
LVM - Logical volume name | lv--hana-data-DH1 |
Filesystem - Mount point | /hana/data/DH1 |
Filesystem - Parent directory | /hana |
Filesystem - Type | xfs |
#03 of 03:
Parameter | Value |
---|---|
VM - ID | xgl-vm--cerps-bau-dev--dh1--be1ncerps001 |
OS device name | xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1 |
LVM - Volume group name | vg--xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1 |
LVM - Logical volume name | lv--hana-log-DH1 |
Filesystem - Mount point | /hana/log/DH1 |
Filesystem - Parent directory | /hana |
Filesystem - Type | xfs |
Command line:
Command 01 of 14 - List disks (by-ID) attached to a VM:
Make yourself the root user by executing the following command:
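For example (assuming your lab user has sudo privileges; the exact commands used by the lab may differ):

sudo su -
ls -l /dev/disk/by-id/google-*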
(Example output)
Command 02 of 14 - Create LVM physical volume from an attached disk:
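For example, for disk #01 (repeat for the other disks, adjusting the device name):

pvcreate /dev/disk/by-id/google-xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1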
(Example output)
Command 03 of 14 - Display LVM physical volume(s):
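For example:

pvdisplay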
(Example output)
Command 04 of 14 - Create LVM volume group within an attached disk:
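For example, for disk #01, following the volume group naming convention above (repeat for the other disks):

vgcreate vg--xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1 \
    /dev/disk/by-id/google-xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1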
(Example output)
Command 05 of 14 - Display LVM volume group(s):
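For example:

vgdisplay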
(Example output)
Command 06 of 14 - Create LVM logical volume(s) within the volume group:
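For example, for the hana/shared volume on disk #01, assuming each logical volume uses all free space in its volume group (repeat for lv--hana-data-DH1 and lv--hana-log-DH1 in their respective volume groups):

lvcreate -l 100%FREE -n lv--hana-shared-DH1 vg--xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1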
(Example output)
Command 07 of 14 - Display LVM logical volume(s):
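For example:

lvdisplay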
(Example output)
Command 08 of 14 - Create xfs filesystem on the logical volume(s):
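For example, for the hana/shared logical volume (repeat for the other logical volumes):

mkfs.xfs /dev/vg--xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1/lv--hana-shared-DH1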
(Example output)
Command 09 of 14 - Create filesystem mount point(s):
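For example:

mkdir -p /hana/shared/DH1 /hana/data/DH1 /hana/log/DH1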
Command 10 of 14 - Change ownership of the parent directory recursively:
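The target owner and group are not specified here; as an illustration, using <owner>:<group> as placeholders (for an SAP HANA system these would typically be the <sid>adm user and sapsys group once created):

chown -R <owner>:<group> /hana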
Command 11 of 14 - Change permissions of the parent directory recursively:
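The target mode is not specified here; as an illustration, using 775 as an example value:

chmod -R 775 /hana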
Command 12 of 14 - Add file system details to the /etc/fstab file:
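For example, for the hana/shared filesystem, assuming the defaults and nofail mount options (repeat for the other two filesystems, adjusting the logical volume path and mount point):

echo "/dev/vg--xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1/lv--hana-shared-DH1 /hana/shared/DH1 xfs defaults,nofail 0 2" >> /etc/fstab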
Command 13 of 14 - Mount all filesystems defined in the /etc/fstab file:
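For example:

mount -a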
Command 14 of 14 - Confirm the file system(s) have been mounted:
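For example:

df -h | grep DH1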
(Example output)
Click Check my progress to verify the objective.
Task 4. Create "Cloud Storage Bucket(s)"
GCS buckets are commonly used to securely write and store SAP HANA Backint DB backups on a more cost-effective backup medium than disks attached to the VM.
Real world considerations:
There are a number of options which can be adopted with regard to storage bucket deployment, each with its own pros, cons & limitations. Customers should carefully review their security policies and the requirements of the storage buckets, so as to ensure backup data written to these storage buckets is adequately secured and not too onerous from an ongoing support & maintenance perspective.
Different backup policy requirements may call for different storage types (e.g. storage class), locations (e.g. in a specific region, out of region, multi-regional, etc.) and minimum retention periods for the backups, which could influence the configuration of the storage bucket.
In this lab it will be assumed that every SAP HANA DB will have a dedicated single storage bucket in the same region as the system.
Online Documentation:
- Terraform resource to create a storage bucket (google_storage_bucket)
- Terraform view on GitHub - terraform-google-modules/storage/new_bucket/main.tf
Dependencies:
In order to be able to successfully create the required “Cloud Storage Bucket(s)” the following dependencies will need to be correctly deployed before proceeding with this activity:
- GCP account setup
- IAM setup
- Resource hierarchy setup
- GCP project to host the VM(s) and store the bucket(s) is set up (or an alternative project)
Naming Convention:
- ID:
- Character limit: 63
- Structure: <01><02>-bucket--<03>-<04>-<05>--<06>--db-backups-<07>
<01> - company id
<02> - business entity id
<03> - workload id
<04> - operational area id
<05> - environment id
<06> - system id
<07> - bucket no.
Resources to be created (Count = 1):
Using any supported method deploy the “Cloud Storage Bucket(s)” with the following configuration details:
#01 of 01:
Parameter | Value |
---|---|
ID | xgl-bucket--cerps-bau-dev--dh1--db-backups-{random number} |
Location | us-central1 |
Storage class | standard |
Min retention | 7d |
Command line:
Command 01 of 01 - Create a “Cloud Storage Bucket(s)”:
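As an illustration (not necessarily the lab's original command), using gsutil; replace <PROJECT_ID> with your lab project ID and {random number} with the value shown in your lab:

gsutil mb -p <PROJECT_ID> -l us-central1 -c standard --retention 7d gs://xgl-bucket--cerps-bau-dev--dh1--db-backups-{random number}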
(Example output)
Review and validate in the Cloud Console:
Confirm the “Cloud Storage Bucket(s)” have been deployed in the Cloud Console using the following path “Navigation menu -> Cloud Storage -> Buckets”.
Click Check my progress to verify the objective.
Task 5. Set permission on “Cloud Storage Bucket(s)”
In order for an SAP HANA system to be able to access the GCS bucket for SAP HANA Backint DB backups, the service account of the VM hosting SAP HANA needs to be assigned the IAM permissions to read from & write to the GCS bucket.
Real world considerations:
There are a number of options which can be adopted with regard to storage bucket deployment, each with its own pros, cons & limitations. Customers should carefully review their security policies and the requirements of the storage buckets, so as to ensure backup data written to these storage buckets is adequately secured and not too onerous from an ongoing support & maintenance perspective.
In this lab it will be assumed that each SAP HANA DB system has access to its own dedicated storage bucket.
Online Documentation:
- Command line - iam - Get, set, or change bucket and/or object IAM permissions
- Terraform resource to set the IAM policy for a Cloud Storage Bucket (google_storage_bucket_iam_policy)
Dependencies:
In order to be able to successfully set the required permissions on the "Cloud Storage Bucket(s)", the following dependencies will need to be correctly deployed before proceeding with this activity:
- GCP account setup
- IAM setup
- Resource hierarchy setup
- Service account(s) deployed
- GCP project to host the VM(s) and store the bucket(s) is set up (or an alternative project)
- "Cloud Storage Bucket(s)" deployed
Resources to be created (Count = 1):
Using any supported method, assign the required IAM roles on the "Cloud Storage Bucket(s)" with the following configuration details:
Parameter | Value |
---|---|
Service Account - ID | xgl-sa--cerps-bau-dev--dh1-db |
Bucket - ID | xgl-bucket--cerps-bau-dev--dh1--db-backups-{random number} |
IAM roles | objectCreator,objectViewer |
Command line:
Command 01 of 01 - Grant VM “service account” access to the “cloud storage bucket”:
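As an illustration (not necessarily the lab's original command), using gsutil and assuming the service account resides in your lab project; replace <PROJECT_ID> and {random number} with your lab's values:

gsutil iam ch \
    serviceAccount:xgl-sa--cerps-bau-dev--dh1-db@<PROJECT_ID>.iam.gserviceaccount.com:objectCreator,objectViewer \
    gs://xgl-bucket--cerps-bau-dev--dh1--db-backups-{random number}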
Review and validate in the Cloud Console:
Confirm the "Cloud Storage Bucket(s)" have been assigned the "IAM roles" in the Cloud Console using the following path "Navigation menu -> Cloud Storage -> Buckets -> [bucket name] -> Permissions -> Principals".
Click Check my progress to verify the objective.
Congratulations!
Summary of Activities:
The components highlighted in yellow in the table below are those that have been deployed by the activities of this lab.
Next Steps / Learn More
Be sure to check out the following documentation for more practice with SAP:
Google Cloud training and certification
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated June 02, 2023
Lab Last Tested June 02, 2023
Copyright 2023 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.