
SAP Landing Zone: Add and Configure Storage to SAP VMs


1 hour 1 Credit

GSP1070


Overview

In this lab, you will learn about the key activities required to add and configure disk storage for SAP VMs, as part of deploying the various Google Cloud services and resources required to build the majority of SAP Landing Zones on Google Cloud.

The tasks of this lab are based on the activities required to plan and deploy the disk storage of the virtual machine(s) that host a provided mock SAP estate.

Note: Building a SAP Landing Zone requires a number of Google Cloud resources to be deployed. It is not the goal of this lab to reduce the design process/effort required to define a customer specific “SAP Landing Zone” into a set of prescriptive steps which can be executed to deploy an enterprise-ready “SAP Landing Zone” in an end-to-end manner. Please refer to the “lab pre-reading” material for more information.

Objectives

In this lab, you will learn how to:

  • Create “Persistent Disk(s)”

  • Attach “Persistent Disk(s)” to VM(s)

  • Configure and mount “Linux LVM volume(s)” within a Linux VM

  • Create “Cloud Storage Bucket(s)”

  • Set “IAM permission” on Cloud Storage Bucket(s)

Setup and Requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which may cause extra charges incurred to your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.

  4. Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.

    Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  5. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Note: You can view the menu with a list of Google Cloud Products and Services by clicking the Navigation menu at the top-left.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:

Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  2. (Optional) You can list the active account name with this command:

gcloud auth list
  3. Click Authorize.

  4. Your output should now look like this:

Output:

ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  5. (Optional) You can list the project ID with this command:

gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.

Prerequisites

Although not mandatory, to maximize your learning in this lab, the following labs should be completed first:

  • Cloud Foundation Toolkit (CFT) Training: Getting Started

  • SAP Landing Zone - Lab 1 : Plan & deploy the SAP network

  • SAP Landing Zone - Lab 2 : Plan & deploy SAP VMs

Executing activities in this lab

This lab will focus on the following “Landing Zone” topic areas:

  • Google Cloud services and resources to be deployed

This lab will cover a sample of the key activities involved in deploying the various Google Cloud services and resources used to build the majority of "SAP Landing Zones" on Google Cloud.

The activities in this lab build on each other, therefore you must work through them in the order stated. This is intentional, as it is representative of real-world scenarios in which there are dependencies between Google Cloud resources, and it highlights the need to work accurately, since performing corrective action can be very effort intensive.

Google Cloud supports many different options which can be used to deploy Google Cloud resources. These include but are not limited to:

  • Manually via the Google Cloud Console
  • Command line (e.g. gcloud, gsutil, etc.)
  • API calls
  • Various programming languages (e.g. Go, Java, Node.js, Python)
  • Infrastructure as Code tools (e.g. Terraform)

Which option(s) are adopted for a customer-specific “SAP Landing Zone” would be defined during the design process.

In this lab it is assumed that all activities will be executed from the command line, and the required command-line instructions are provided. However, students may opt to use any supported option to perform the activities. Where available, links to the relevant “quickstart” online documentation are provided, which include syntax instructions for the other options.

In this lab a sample naming convention is provided for all resources (please reference the naming convention section and table provided in the pre-reading material for more information). For the bulk of the activities, the Google Cloud resources must be deployed in accordance with the naming convention and the configuration specifications provided, as this is representative of what would be expected in real-world scenarios. Various activities in these labs will only be marked as completed if they are fully compliant with both the expected naming and the stated configuration specifications.

Parameter value color coding key:

To help you understand what the various parameter values relate to, the following color coding of the parameter values has been adopted:

  • Green = refers to the ID of the resource being created

  • Orange = refers to free text entry, generally used for descriptions

  • Blue = refers to the ID of a resource which has previously been created

  • Purple = refers to a configuration parameter of the resource being created

Timekeeping

This lab is time restricted to 60 min. At the end of this time the lab environment will automatically shut down and any resources deployed will be erased. Restarting the lab will require all tasks to be executed afresh.

In order to ensure you have sufficient time to complete all the tasks, before clicking Start Lab it is highly advisable that you:

  • Read through all the tasks of the lab

  • Decide which option(s) you want to use to deploy the Google Cloud resources

  • Prepare the deployment code and tooling to be executed

Mock SAP Estate

To help the student both visualize and relate the activities in this lab back to their own SAP estates, a mock SAP estate has been provided on which all activities of this lab are based.

SAP Estate Overview

The below table outlines the key configuration attributes of the SAP estate this lab is based on. The table should be referenced in combination with the naming convention table provided in the next section.

The table highlights in yellow the components which will be deployed by the activities of this lab.

deploy_activities.png

Lab pre-read material: “SAP on Google Cloud” onboarding

For additional real-world context about the customer's cloud “onboarding” journey, you are advised to read the “Lab pre-read: “SAP on Cloud” onboarding” material.

Note: If you wish to obtain the “SAP landing zone” skill badge, you will be required to read and answer a series of questions related to this pre-reading material.

Task 1. Create “Persistent Disk(s)”

Although disks are often defined at the time of creating the virtual machine, it is quite common that additional disks need to be created later.

Real world considerations:

On Google Cloud the performance of disk storage is determined by a combination of the number of vCPUs, the type of disk, and its cumulative capacity. Because of the way the Google disk storage subsystem works, having multiple disks of the same type produces the same performance as one disk of the same cumulative capacity (e.g.: 50GB + 1000GB + 500GB pd-balanced disks have the same performance characteristics as a single 1550GB pd-balanced disk).

There are a number of factors which will determine whether a customer should opt for one large disk or multiple smaller disks of the same type with the same cumulative size.

In the case of SAP HANA, and similarly with other SAP-supported DBs, if disk storage snapshots (orchestrated by the RDBMS) are to be adopted as a backup approach, it is essential to ensure the DB data files, DB log space/area, and DB executables (where the backup catalog is stored) are each on separate disks, so that recovery activities can be successfully actioned.

In this lab, it will be assumed that the customer will be adopting disk storage snapshots as a backup approach and that all systems are non-productive SAP HANA systems, sized based on the following:

Total disk size = Disk 01 + Disk 02 + Disk 03

  • Disk 01 -> hana/shared = 50 GB
  • Disk 02 -> hana/data = (RAM x 1.2) + 10%
  • Disk 03 -> hana/log = (based on RAM) + 10%
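As a worked example, the sizing rules above can be applied to the 128 GB RAM system used in this lab. This is a sketch: the log-volume rule used here (half of RAM, a common SAP sizing guideline for smaller systems) and the round-to-nearest-GB policy are assumptions, and the lab's tables round the hana/data value up further.

```shell
# Sketch: compute suggested HANA volume sizes (GB) from RAM, per the
# rules above. The "RAM / 2" log rule and rounding are assumptions.
ram_gb=128
shared_gb=50   # fixed 50 GB per the rule above
data_gb=$(awk -v r="$ram_gb" 'BEGIN { printf "%d", r * 1.2 * 1.1 + 0.5 }')
log_gb=$(awk -v r="$ram_gb" 'BEGIN { printf "%d", (r / 2) * 1.1 + 0.5 }')
echo "hana/shared=${shared_gb}GB hana/data=${data_gb}GB hana/log=${log_gb}GB"
# prints: hana/shared=50GB hana/data=169GB hana/log=70GB
```

Note how the computed hana/log value matches the 70 GB live size in the tables below, while hana/data (169 GB) is rounded up to 180 GB in the lab's table.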

One of the key benefits of Google Cloud is that you only pay for what you consume, and you have an almost limitless ability to quickly grow your resources when needed. This means that during initial deployment activities, very small VM and disk sizes are highly recommended so as to avoid wasted cost. This can allow significant cost savings during the initial deployment phases, prior to certain configuration activities being performed.

This is particularly relevant for SAP HANA VMs, which may have 128GB+ of RAM and 800GB+ of disk storage when live; until both the SAP HANA system and the application data load are deployed to the VM, virtually all of these resources will be idle.

Note: For productive use, VMs hosting SAP HANA systems need to comply with certain disk storage performance requirements to meet SAP performance certification status.

Online Documentation:

Dependencies:

In order to be able to successfully deploy the required “Persistent Disk(s)” the following dependencies will need to be correctly deployed before proceeding with this activity:

  • GCP account setup

  • IAM setup

  • Resource hierarchy setup

  • GCP project to host VM resources is setup

Naming Convention:

  • ID:
    • Character limit: 62
    • Structure: <01><02>-disk--<03>-<04>-<05>--<06>--data-disk-<07>--<08>

                             <01> - company id

                             <02> - business entity id

                             <03> - workload id

                             <04> - operational area id

                             <05> - environment id

                             <06> - system id

                             <07> - disk count no.

                             <08> - short virtual hostname

  • Description:
    • Character limit: unlimited
    • Structure: “<01>-<02> disk = <03>-<04>-<05> - <06> (<07>) Data disk <08> - virtual host=<09>”

                             <01> - Company

                             <02> - Business entity ID

                             <03> - Workload ID

                             <04> - Operational area ID

                             <05> - Environment ID

                             <06> - Application ID

                             <07> - System ID

                             <08> - disk count no.

                             <09> - short virtual hostname

Resources to be created (Count = 3):

Using any supported method, create the “Persistent Disk(s)” with the following configuration details:

#01 of 03:

Parameter Value
ID xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1
Description XYZ-Global disk = CERPS-BaU-Dev - SAP HANA 1 (DH1) - Data disk 01 - virtualhost=d-cerpshana1
Zone
SAP HANA DB - Live size 128 GB RAM
Disk - Role hana/shared
Disk - Type pd-balanced
Disk - Deployment size 50
Disk - Live size 50

#02 of 03:

Parameter Value
ID xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1
Description XYZ-Global disk = CERPS-BaU-Dev - SAP HANA 1 (DH1) - Data disk 02 - virtualhost=d-cerpshana1
Zone
SAP HANA DB - Live size 128 GB RAM
Disk - Role hana/data
Disk - Type pd-balanced
Disk - Deployment size 50
Disk - Live size 180

#03 of 03:

Parameter Value
ID xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1
Description XYZ-Global disk = CERPS-BaU-Dev - SAP HANA 1 (DH1) - Data disk 03 - virtualhost=d-cerpshana1
Zone
SAP HANA DB - Live size 128 GB RAM
Disk - Role hana/log
Disk - Type pd-balanced
Disk - Deployment size 50
Disk - Live size 70

Command line:

Command 01 of 01 - Create a “Persistent Disk” with a custom name for a VM:

gcloud compute disks create [insert ID here] \
    --description="[insert description here]" \
    --project=[insert your GCP project ID here] \
    --zone=[insert zone here] \
    --type=[insert disk type here] \
    --size=[insert deployment disk size here]

(Example output)

Created [https://www.googleapis.com/compute/v1/projects/[GCP project ID]/zones/[zone ID]/disks/[disk ID]].
NAME: [disk ID]
ZONE: [zone ID]
SIZE_GB: [disk size GB]
TYPE: [disk type]
STATUS: READY
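Since all three disks share the same naming pattern and deployment size, the create calls can be generated in a loop. This is a sketch assuming the Bash shell: PROJECT and ZONE are placeholders, descriptions are omitted for brevity, and each command is echoed for review rather than executed.

```shell
# Sketch: print the three "gcloud compute disks create" commands for this
# task. PROJECT/ZONE are placeholders; remove "echo" to actually run them.
PROJECT="[your GCP project ID]"
ZONE="[zone]"
PREFIX="xgl-disk--cerps-bau-dev--dh1--data-disk"
HOST="d-cerpshana1"
for n in 01 02 03; do
  echo gcloud compute disks create "${PREFIX}-${n}--${HOST}" \
    --project="$PROJECT" --zone="$ZONE" \
    --type=pd-balanced --size=50GB
done
```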

Review and validate in the Cloud Console:

Confirm the “Persistent Disk(s)” have been deployed in the Cloud Console using the following path “Navigation menu -> COMPUTE -> Compute Engine -> Storage -> Disks”.

data_disks_created_summary

Ensure all lab resources have finished deploying

Note: The lab will take approx. 10-15 min to complete deploying all the resources from the time the “Start lab” button was clicked.

You can confirm the lab is completely deployed by ensuring that there are 6 VM instances deployed in the Cloud Console using the following path Navigation menu -> COMPUTE -> Compute Engine -> Virtual Machines -> VM instances.

pre-build_summary

NOTE: Before continuing the lab, make sure that all the VM instances have been created. Navigate to Compute Engine > VM instances and wait until all the instances are created.

Click Check my progress to verify the objective.

Create “Persistent Disk(s)”

Task 2. Attach “Persistent Disk(s)” to Virtual Machine(s)

Although disks are often defined at the time of creating the virtual machine, it is quite common that additional disks need to be created later. Once a disk is created, it must be attached to a VM before it can be used by that VM.

Real world considerations:

Disks connected to VMs have 2 levels of management:

  • At the hypervisor level - For defining the size and type of disks
  • At the VM level - For defining how the VM will access the disk (e.g.: via file system, as raw devices, etc.)

A VM can have many disks attached to it (max 127), therefore it is important to be able to easily identify which disk at the VM level relates to the corresponding disk at the hypervisor level for various actions (e.g.: increasing the size of a disk). Google Cloud provides the “--device-name” parameter when attaching a disk to a VM, which can be populated with free text. When using commands such as “ls -l /dev/disk/by-id”, the OS will list the disks at the OS level using the value defined for the “--device-name” parameter.

Note: It is strongly recommended that the disk ID at the hypervisor level is specified for the “--device-name” parameter when attaching a disk to a VM as it will allow for a consistent naming of the disk across both the hypervisor and VM.

Online Documentation:

Dependencies:

In order to be able to successfully attach the required “Persistent Disk(s)” the following dependencies will need to be correctly deployed before proceeding with this activity:

  • GCP account setup

  • IAM setup

  • Resource hierarchy setup

  • GCP project to host VPC setup

  • VPC network(s) deployed

  • VPC subnets deployed

  • VPC firewall rule(s) deployed

  • GCP project to host the VM(s) and store the disk(s) is setup

  • Service account(s) deployed

  • Persistent disk to be attached to a VM has been created

  • VM to which the disk(s) will be attached has been created

Resources to be created (Count = 3):

Using any supported method attach the “Persistent Disk(s)” to the required VM(s) with the following configuration details:

#01 of 03:

Parameter Value
VM - ID xgl-vm--cerps-bau-dev--dh1--be1ncerps001
Disk - ID xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1
Zone
OS device name xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1

#02 of 03:

Parameter Value
VM - ID xgl-vm--cerps-bau-dev--dh1--be1ncerps001
Disk - ID xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1
Zone
OS device name xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1

#03 of 03:

Parameter Value
VM - ID xgl-vm--cerps-bau-dev--dh1--be1ncerps001
Disk - ID xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1
Zone
OS device name xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1

Command line:

Command 01 of 01 - Attach a “Persistent Disk” to a VM:

gcloud compute instances attach-disk [insert VM ID here] \
    --project=[insert your GCP project ID here] \
    --zone=[insert zone here] \
    --disk=[insert disk ID here] \
    --device-name=[insert os device name here] \
    --mode=rw

(Example output)

Updated [https://www.googleapis.com/compute/v1/projects/[GCP project ID]/zones/[zone ID]/instances/[VM ID]].
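Because all three attachments target the same VM and reuse the disk ID as the “--device-name” (as recommended above), the calls can also be generated in a loop. This is a sketch assuming the Bash shell; PROJECT and ZONE are placeholders, and each command is echoed for review rather than executed.

```shell
# Sketch: print the three attach-disk commands for this task, reusing the
# disk ID as --device-name. Remove "echo" to actually run them.
PROJECT="[your GCP project ID]"
ZONE="[zone]"
VM="xgl-vm--cerps-bau-dev--dh1--be1ncerps001"
for n in 01 02 03; do
  DISK="xgl-disk--cerps-bau-dev--dh1--data-disk-${n}--d-cerpshana1"
  echo gcloud compute instances attach-disk "$VM" \
    --project="$PROJECT" --zone="$ZONE" \
    --disk="$DISK" --device-name="$DISK" --mode=rw
done
```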

Review and validate in the Cloud Console:

Confirm the “Persistent Disk(s)” have been attached to the required VM(s) in the Cloud Console using the following path “Navigation menu -> COMPUTE -> Compute Engine -> Storage -> Disks”.

data_disks_attached_summary

Click Check my progress to verify the objective.

Attach “Persistent Disk(s)”

Task 3. Configure and Mount “Linux LVM volume(s)” within Linux Virtual Machine(s)

Once a newly created disk has been attached to a VM, it still needs to be prepared before it can be accessed via the OS. Linux LVM is commonly used to partition disks for usage at the OS level.

Real world considerations:

With Google Cloud the performance of disk storage is determined by a combination of the number of vCPUs, the type of disk, and its cumulative capacity. Because of the way the Google disk storage subsystem works, having multiple disks of the same type produces the same performance as one disk of the same cumulative capacity (e.g.: 50GB + 1000GB + 500GB pd-balanced disks have the same performance characteristics as a single 1550GB pd-balanced disk).

Some other cloud providers recommend using multiple disks configured in a software RAID 0 group (e.g. using LVM) to achieve the required performance. On GCP there is no performance benefit from configuring multiple disks in a RAID 0 group, and it should therefore be avoided (unless there is some other, non-performance-related justification).

Note: For VMs running on Google Cloud, when configuring “Linux LVM volume(s)” it is generally recommended that only a single persistent disk be added to an LVM volume group; however, you can have multiple logical volumes within a volume group.

Online Documentation:

Dependencies:

In order to be able to successfully configure the “Linux LVM volume(s)” the following dependencies will need to be correctly deployed before proceeding with this activity:

  • GCP account setup

  • IAM setup

  • Resource hierarchy setup

  • GCP project to host VPC setup

  • VPC network(s) deployed

  • VPC subnets deployed

  • VPC firewall rule(s) deployed

  • GCP project to host the VM(s) and store the disk(s) is setup

  • Service account(s) deployed

  • VM to which the disk(s) will be attached has been created

  • Persistent disk to be attached to a VM has been created

  • Persistent disk has been attached to the required VM

Naming Convention:

  • Google Persistent Disk ID in Linux:
    • Structure: google-<01>

                             <01> - OS device name

  • LVM - Volume Group:
    • Structure: vg--<01>

                             <01> - OS device name

  • LVM - Logical Volume:
    • Structure: lv--<01>

                             <01> - the directory mount point of the logical volume (replace “/” with “-”)
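The naming rules above can be expressed as a small derivation, sketched here assuming the Bash shell. The two values computed for disk #01 reproduce the volume group and logical volume names used in the configuration tables below.

```shell
# Sketch: derive LVM names from the naming convention above.
# The mount point's "/" separators become "-" for the logical volume.
os_device_name="xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1"
mount_point="/hana/shared/DH1"

vg_name="vg--${os_device_name}"
lv_name="lv--$(echo "${mount_point#/}" | tr '/' '-')"

echo "$vg_name"   # vg--xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1
echo "$lv_name"   # lv--hana-shared-DH1
```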

Resources to be created (Count = 3):

Navigate to the Compute Engine > VM instances page, start and SSH into the required VM, and then create and mount the following “Linux LVM volume(s)” with the following configuration details:

#01 of 03:

Parameter Value
VM - ID xgl-vm--cerps-bau-dev--dh1--be1ncerps001
OS device name xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1
LVM - Volume group name vg--xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1
LVM - Logical volume name lv--hana-shared-DH1
Filesystem - Mount point /hana/shared/DH1
Filesystem - Parent directory /hana
Filesystem - Type xfs

#02 of 03:

Parameter Value
VM - ID xgl-vm--cerps-bau-dev--dh1--be1ncerps001
OS device name xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1
LVM - Volume group name vg--xgl-disk--cerps-bau-dev--dh1--data-disk-02--d-cerpshana1
LVM - Logical volume name lv--hana-data-DH1
Filesystem - Mount point /hana/data/DH1
Filesystem - Parent directory /hana
Filesystem - Type xfs

#03 of 03:

Note: The student must first perform an OS login to the VM in question to perform the following commands.
Parameter Value
VM - ID xgl-vm--cerps-bau-dev--dh1--be1ncerps001
OS device name xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1
LVM - Volume group name vg--xgl-disk--cerps-bau-dev--dh1--data-disk-03--d-cerpshana1
LVM - Logical volume name lv--hana-log-DH1
Filesystem - Mount point /hana/log/DH1
Filesystem - Parent directory /hana
Filesystem - Type xfs

Command line:

Note: The student must first perform an OS logon to the VM in question to perform the following commands.

Command 01 of 14 - List disks (by-ID) attached to a VM:

Make yourself the root user by executing the following command:

sudo -i
ls -l /dev/disk/by-id

(Example output)

lrwxrwxrwx 1 root root  9 Aug 12 20:50 google-os-disk -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 12 20:50 google-os-disk-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 12 20:50 google-os-disk-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 12 20:50 google-os-disk-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Aug 12 20:50 google-[OS device name] -> ../../sdb
lrwxrwxrwx 1 root root  9 Aug 12 20:50 scsi-0Google_PersistentDisk_os-disk -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 12 20:50 scsi-0Google_PersistentDisk_os-disk-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Aug 12 20:50 scsi-0Google_PersistentDisk_os-disk-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Aug 12 20:50 scsi-0Google_PersistentDisk_os-disk-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Aug 12 20:50 scsi-0Google_PersistentDisk_[OS device name] -> ../../sdb

Command 02 of 14 - Create LVM physical volume from an attached disk:

sudo pvcreate /dev/disk/by-id/google-[insert os device name here]

(Example output)

Physical volume "/dev/disk/by-id/google-[OS device name]" successfully created.

Command 03 of 14 - Display LVM physical volume(s):

sudo pvdisplay

(Example output)

PV Name               /dev/sd[?]
VG Name
PV Size               [disk size] GiB
Allocatable           NO
PE Size               0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               [xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx]

Command 04 of 14 - Create LVM volume group on an attached disk:

sudo vgcreate \
    [insert LVM volume group name here] \
    /dev/disk/by-id/google-[insert os device name here]

(Example output)

Volume group "vg--[OS device name]" successfully created

Command 05 of 14 - Display LVM volume group(s):

sudo vgdisplay

(Example output)

--- Volume group ---
VG Name               vg--[OS device name]
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               [disk size] GiB
PE Size               4.00 MiB
Total PE              12799
Alloc PE / Size       0 / 0
Free  PE / Size       12799 / [disk size] GiB
VG UUID               [xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx]

Command 06 of 14 - Create LVM logical volume(s) within the volume group:

sudo lvcreate -l 100%FREE -n [insert LVM logical volume name here] [insert LVM volume group name here]

(Example output)

Logical volume "[LVM logical volume name]" created.

Command 07 of 14 - Display LVM logical volume(s):

sudo lvdisplay

(Example output)

--- Logical volume ---
LV Path                /dev/vg--[OS device name]/[LVM logical volume name]
LV Name                [LVM logical volume name]
VG Name                vg--[OS device name]
LV UUID                [xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx]
LV Write Access        read/write
LV Creation host, time [hostname], 2022-08-12 20:55:44 +0000
LV Status              available
# open                 0
LV Size                [disk size] GiB
Current LE             12799
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     1024
Block device           254:0

Command 08 of 14 - Create xfs filesystem on the logical volume(s):

sudo mkfs \
    -t [insert filesystem type here] \
    /dev/[insert LVM volume group name here]/[insert LVM logical volume name here]

(Example output)

meta-data=/dev/vg--[OS device name]/[LVM logical volume name] isize=512 agcount=4, agsize=3276544 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=13106176, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=6399, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

Command 09 of 14 - Create filesystem mount point(s):

sudo mkdir -p [insert filesystem mount point here]

Command 10 of 14 - Change ownership of the parent directory recursively:

sudo chown -R nobody:nogroup [insert filesystem parent directory here]

Command 11 of 14 - Change permissions of the parent directory recursively:

sudo chmod -R 755 [insert filesystem parent directory here]

Command 12 of 14 - Add file system details to the /etc/fstab file:

Note: The following command automatically adds the line to the /etc/fstab file. If errors are made, or the command is run multiple times for the same line, the file will need to be manually edited and corrected.

sudo su -
echo "/dev/[insert LVM volume group name here]/[insert LVM logical volume name here] [insert filesystem mount point here] [insert filesystem type here] defaults,discard,nofail 0 2" >>/etc/fstab
exit

Command 13 of 14 - Mount all filesystems defined in the /etc/fstab file:

sudo mount -a

Command 14 of 14 - Confirm the file system(s) have been mounted:

sudo df -h

(Example output)

Filesystem                                                  Size  Used Avail Use% Mounted on
devtmpfs                                                    4.0M  8.0K  4.0M   1% /dev
tmpfs                                                       278M     0  278M   0% /dev/shm
tmpfs                                                       112M  3.8M  108M   4% /run
tmpfs                                                       4.0M     0  4.0M   0% /sys/fs/cgroup
/dev/sda3                                                    50G  1.7G   49G   4% /
/dev/sda2                                                    20M  2.9M   18M  15% /boot/efi
tmpfs                                                        56M     0   56M   0% /run/user/1378871550
/dev/mapper/vg--[OS device name]-[LVM logical volume name]   50G   84M   50G   1% [Mount point]
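For reference, commands 02 through 13 can be combined into a single pass for disk #01 of 03. This is a sketch in dry-run form: the run() helper only prints each command so the whole sequence can be reviewed end-to-end; to apply it for real, change run() to execute its arguments and run the script as root on the target VM.

```shell
#!/bin/bash
# Sketch: LVM setup steps (commands 02-13 above) for disk #01 of 03.
# run() only PRINTS each command (dry run); replace its body with "$@"
# and execute as root on the VM to actually apply the changes.
run() { echo "+ $*"; }

DEV="xgl-disk--cerps-bau-dev--dh1--data-disk-01--d-cerpshana1"
VG="vg--${DEV}"
LV="lv--hana-shared-DH1"
MNT="/hana/shared/DH1"
PARENT="/hana"

run pvcreate "/dev/disk/by-id/google-${DEV}"
run vgcreate "$VG" "/dev/disk/by-id/google-${DEV}"
run lvcreate -l 100%FREE -n "$LV" "$VG"
run mkfs -t xfs "/dev/${VG}/${LV}"
run mkdir -p "$MNT"
run chown -R nobody:nogroup "$PARENT"
run chmod -R 755 "$PARENT"
run sh -c "echo '/dev/${VG}/${LV} ${MNT} xfs defaults,discard,nofail 0 2' >> /etc/fstab"
run mount -a
```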

Click Check my progress to verify the objective.

Configure and Mount Linux LVM volumes within Linux Virtual Machine

Task 4. Create “Cloud Storage Bucket(s)”

Cloud Storage buckets are commonly used to securely write and store SAP HANA Backint DB backups on a more cost-effective backup medium than disks attached to the VM.

Real world considerations:

There are a number of options which can be adopted with regards to storage bucket deployment, each with their own pros, cons, and limitations. Customers should carefully review their security policies and the requirements of the storage buckets, so as to ensure backup data written to these storage buckets is adequately secured without being too onerous from an ongoing support and maintenance perspective.

Different backup policy requirements may require different types (e.g.: storage class), locations (e.g.: in a specific region, out of region, multi-regional, etc.), and minimum retention periods for the backups, which could influence the configuration of the storage bucket.

In this lab it will be assumed that every SAP HANA DB will have a dedicated single storage bucket in the same region as the system.

Note: Some customers want to ensure that all SAP HANA Backint DB backups for productive systems are written to a secondary region, so that even in the case of an entire region failure the availability of the backups is unaffected. Google Cloud supports this, but the customer should be aware that there will be additional network costs for the inter-region egress traffic.

Online Documentation:

Dependencies:

In order to be able to successfully create the required “Cloud Storage Bucket(s)” the following dependencies will need to be correctly deployed before proceeding with this activity:

  • GCP account setup

  • IAM setup

  • Resource hierarchy setup

  • GCP project to host the VM(s) and store the bucket(s) is setup (or an alternative project)

Naming Convention:

Note: Google Cloud Storage buckets must have a globally unique name which may present challenges for customers to adhere to a defined naming convention as there is nothing which prevents another customer from creating the bucket first with the same desired name. More information can be found here: Bucket naming guidelines
  • ID:
    • Character limit: 63
    • Structure: <01><02>-bucket--<03>-<04>-<05>--<06>--db-backups-<07>

                             <01> - company id

                             <02> - business entity id

                             <03> - workload id

                             <04> - operational area id

                             <05> - environment id

                             <06> - system id

                             <07> - bucket no.

Resources to be created (Count = 1):

Using any supported method deploy the “Cloud Storage Bucket(s)” with the following configuration details:

#01 of 01:

Parameter Value
ID xgl-bucket--cerps-bau-dev--dh1--db-backups-{random number}
Location us-central1
Storage class standard
Min retention 7d
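Because bucket names must be globally unique, the {random number} suffix in the ID has to be generated at deployment time. The following is a sketch assuming the Bash shell; using a timestamp as the suffix is an assumption for illustration, not a lab requirement.

```shell
# Sketch: build a candidate globally-unique bucket name. A timestamp
# stands in for the "{random number}" suffix (assumption).
BASE="xgl-bucket--cerps-bau-dev--dh1--db-backups"
SUFFIX="$(date +%s)"
BUCKET="${BASE}-${SUFFIX}"
echo "gs://${BUCKET}"
```

Even a generated suffix does not guarantee uniqueness; if the gsutil mb command below fails with a name conflict, generate a new suffix and retry.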

Command line:

Command 01 of 01 - Create a “Cloud Storage Bucket(s)”:

gsutil mb \
    -p [insert your GCP project ID here] \
    -l [insert location here] \
    -c [insert storage class here] \
    --retention [insert retention period here] \
    --pap enforced \
    gs://[insert ID here]

(Example output)

Creating gs://[cloud storage bucket ID]/...

Review and validate in the Cloud Console:

Confirm the “Cloud Storage Bucket(s)” have been deployed in the Cloud Console using the following path “Navigation menu -> Cloud Storage -> Buckets”.

storage_bucket_summary

Click Check my progress to verify the objective.

Create “Cloud Storage Bucket(s)”

Task 5. Set permission on “Cloud Storage Bucket(s)”

In order for a SAP HANA system to be able to access the Cloud Storage bucket for SAP HANA Backint DB backups, the service account of the VM hosting the SAP HANA system needs to be assigned the IAM permissions to read from and write to the bucket.

Real world considerations:

There are a number of options which can be adopted with regards to storage bucket deployment, each with their own pros, cons, and limitations. Customers should carefully review their security policies and the requirements of the storage buckets, so as to ensure backup data written to these storage buckets is adequately secured without being too onerous from an ongoing support and maintenance perspective.

In this lab it will be assumed that each SAP HANA DB system will have access to its own dedicated storage bucket.

Online Documentation:

Dependencies:

In order to be able to successfully set the required “IAM permission(s)” the following dependencies will need to be correctly deployed before proceeding with this activity:

  • GCP account setup

  • IAM setup

  • Resource hierarchy setup

  • Service account(s) deployed

  • GCP project to host the VM(s) and store the bucket(s) is setup (or an alternative project)

  • “Cloud Storage Bucket(s)” deployed

Resources to be created (Count = 1):

Using any supported method, assign the required “IAM role(s)” on the “Cloud Storage Bucket(s)” with the following configuration details:

Parameter Value
Service Account - ID xgl-sa--cerps-bau-dev--dh1-db
Bucket - ID xgl-bucket--cerps-bau-dev--dh1--db-backups-{random number}
IAM roles objectCreator,objectViewer

Command line:

Command 01 of 01 - Grant VM “service account” access to the “cloud storage bucket”:

gsutil iam ch \
    serviceAccount:[insert service account ID here]@[insert your GCP project ID here].iam.gserviceaccount.com:[insert iam roles here] \
    gs://[insert bucket ID here]
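A fully-populated form of the grant for this task, as a sketch: the project ID and the bucket's random-number suffix are placeholders you must substitute, and the command is echoed for review rather than executed.

```shell
# Sketch: the IAM grant with this task's values filled in. PROJECT and
# the bucket suffix are placeholders; remove "echo" to apply the grant.
PROJECT="[your GCP project ID]"
BUCKET="xgl-bucket--cerps-bau-dev--dh1--db-backups-[random number]"
SA="xgl-sa--cerps-bau-dev--dh1-db@${PROJECT}.iam.gserviceaccount.com"
echo gsutil iam ch "serviceAccount:${SA}:objectCreator,objectViewer" "gs://${BUCKET}"
```

gsutil expands the objectCreator and objectViewer shorthand to the roles/storage.objectCreator and roles/storage.objectViewer roles shown in the console.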

Review and validate in the Cloud Console:

Confirm the “Cloud Storage Bucket(s)” have been assigned the “IAM roles” in the Cloud Console using the following path “Navigation menu -> Cloud Storage -> Buckets -> [bucket name] -> Permissions -> Principals”.

storage_bucket_iam_summary

Click Check my progress to verify the objective.

“IAM role(s)” assigned to Cloud Storage Bucket(s)

Congratulations!

Summary of Activities:

The below table highlights in yellow the components which have been deployed by the activities of this lab.

activity_summary

Next Steps / Learn More

Be sure to check out the following documentation for more practice with SAP:

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated June 02, 2023

Lab Last Tested June 02, 2023

Copyright 2023 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.