
Protecting Data with NetApp BlueXP & Cloud Volumes ONTAP for Google Cloud


Lab: 1 hour 30 minutes, 1 Credit, Introductory
Note: This lab may incorporate AI tools to support your learning.

This lab was developed with our partner, NetApp. Your personal information may be shared with NetApp, the lab sponsor, if you have opted-in to receive product updates, announcements, and offers in your Account Profile.

GSP876

Google Cloud self-paced labs logo

Overview

In this lab, you will learn how to implement a full data protection strategy with NetApp Cloud Volumes ONTAP and BlueXP capabilities. You will get practical experience in managing NetApp Snapshots and use them for recovery and cloning purposes. You will also be introduced to the Cloud Volumes ONTAP replication engine, which allows you to implement cross-region and cross-cloud disaster recovery. This lab is derived from BlueXP documentation published by NetApp.

What you'll learn

In this lab, you will learn how to perform the following tasks:

  • Manage NetApp Snapshots for local protection
  • Use NetApp Snapshots for recovery and cloning purposes
  • Replicate data cross-regions for disaster recovery
  • Enable ransomware protection

Prerequisites

This is a fundamental lab and no prior knowledge is required. It is recommended that you be familiar with NetApp BlueXP and know how to perform foundational tasks in Google Cloud. The lab Getting Started with NetApp BlueXP & Cloud Volumes ONTAP for Google Cloud is a good introduction.

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which could cause extra charges to be incurred on your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Cloud console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).

    The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username below and paste it into the Sign in dialog.

    {{{user_0.username | "Username"}}}

    You can also find the Username in the Lab Details panel.

  4. Click Next.

  5. Copy the Password below and paste it into the Welcome dialog.

    {{{user_0.password | "Password"}}}

    You can also find the Password in the Lab Details panel.

  6. Click Next.

    Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  7. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.

Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:

Your Cloud Platform project in this session is set to {{{project_0.project_id | "PROJECT_ID"}}}

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  2. (Optional) You can list the active account name with this command:
gcloud auth list
  3. Click Authorize.

Output:

ACTIVE: *
ACCOUNT: {{{user_0.username | "ACCOUNT"}}}
To set the active account, run:
$ gcloud config set account `ACCOUNT`
  4. (Optional) You can list the project ID with this command:
gcloud config list project

Output:

[core]
project = {{{project_0.project_id | "PROJECT_ID"}}}

Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.

Set up service accounts

  1. To start, in your Cloud Shell window, run the following commands to set some environment variables:
export PROJECT_ID=$(gcloud config get-value project)
export serviceAccount="netapp-cloud-manager@"$PROJECT_ID".iam.gserviceaccount.com"
export serviceAccount2="netapp-cvo@"$PROJECT_ID".iam.gserviceaccount.com"
  2. Next, run the following commands to assign roles to the pre-created netapp-cloud-manager service account:
# assign roles to the netapp-cloud-manager service account
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount}" \
  --role='roles/iam.serviceAccountAdmin'
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount}" \
  --role='roles/storage.objectAdmin'
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount}" \
  --role='roles/deploymentmanager.editor'
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount}" \
  --role='roles/logging.admin'
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount}" \
  --role='roles/compute.admin'
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount}" \
  --role='roles/cloudkms.admin'
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount}" \
  --role='roles/storage.admin'

Click Check my progress to verify that you've performed the above task.

Assign roles to service account
  3. Lastly, create the netapp-cvo service account and assign it the relevant roles:
# create netapp-cvo service account
gcloud iam service-accounts create netapp-cvo --display-name=netapp-cvo
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount2}" \
  --role='roles/storage.objectAdmin'
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:${serviceAccount2}" \
  --role='roles/storage.admin'
gcloud iam service-accounts add-iam-policy-binding ${serviceAccount2} \
  --member="serviceAccount:${serviceAccount}" \
  --role='roles/iam.serviceAccountUser'

Click Check my progress to verify that you've performed the above task.

Set up Service Accounts

Set up a new ONTAP storage environment

In this section, you will log in to NetApp BlueXP, NetApp’s data lifecycle management platform, and create a new Cloud Volumes ONTAP storage environment on Google Cloud infrastructure.

  1. From the Navigation Menu, go to Compute Engine > VM instances. You should see the cloudmanager VM already deployed for you.

cloudmanager VM with checkmark alongside it

  2. Click the External IP to connect to the VM in a new browser window, using http://<IP Address>/.
Note: If Chrome gives you a privacy error, you can still continue by clicking Advanced, then Proceed.
  3. Once you are on the BlueXP SaaS setup page, set up your NetApp account.

First click Sign Up, then enter your email address and a password. Fill out the rest of the fields to authenticate yourself with NetApp's Cloud Central, and click Sign Up to log in to BlueXP.

The deployed cloudmanager VM

  4. Next, you will be prompted to create an Account Name and a BlueXP Name. For these, you can just use user. Click Let's Start, then click Let's go to BlueXP.

  5. Before you add a working environment, navigate to the gear icon on the top-right of the page, then click Connector Settings > General.

  6. For Automatic Cloud Volumes ONTAP update during deployment, click the box and then uncheck the Automatically update Cloud Volumes ONTAP box to turn the updates off. Click Save. Your configuration settings should resemble the following:

Cloud Manager Connector Settings page

  7. Click BlueXP on the top left to navigate to the home page, then click Add Working Environment.
Note: If you happen to run into any issues adding a Working Environment (i.e. a server fault), this can be due to BlueXP automatically updating. If this happens, wait a few minutes, refresh the page, and try again.
  8. Add Working Environment: select Google Cloud Platform.

  9. Choose Type: Cloud Volumes ONTAP Single Node and click Add New.

  10. Specify a cluster name, optionally add labels, and then specify a password for the default admin account. Click Continue.

Note: The service account netapp-cvo has been pre-created for you with the Storage Admin role.
  11. On the Services page, toggle off Backup to Cloud (you will enable it later in the lab) and click Continue.

  12. Location & Connectivity: For the GCP Region and GCP Zone, use the region and zone assigned to your lab. For the VPC, Subnet, and Firewall Policy, leave the defaults. Select the checkbox to confirm network connectivity to Google Cloud storage for data tiering, and click Continue.

  13. Cloud Volumes ONTAP Charging Methods & NSS Account: Select Freemium (Up to 500 GiB) and click Continue.

  14. Preconfigured Packages: click Change Configuration.

  15. In the Licensing section, first click Change version and select the ONTAP-9.7P5 version from the drop-down. Click Apply. Next, from the Machine Type drop-down list, select n1-highmem-4. Click Continue.

  16. Underlying Storage Resources: Choose Standard, and for the Google Cloud Disk Size select 100 GB. Click Continue.

  17. WORM (write once, read many): keep the settings as default and click Continue.

  18. Create Volume: Create a NetApp volume that will be used in the next sections. Use the following details:

  • Set Volume Name to data1
  • Set Size to 10 (size set in GB)
  • Set Snapshot Policy to none
  • Select NFS as Protocol
  • Verify Access Control is set to Custom export policy
  • Verify Custom export policy is set with the VPC’s CIDR range
  • Click Continue

Your configuration settings should resemble the following:

Create Volume page with two sections: Details & Protection, and Protocol

  19. Create Volume - Usage Profile, Disk Type & Tiering Policy: use the following configurations:
  • Verify Storage Efficiency is checked (to enable NetApp's data footprint reduction technologies).
  • Verify the Disk Type is set to Standard (the volume will be created on a Standard-backed persistent disk).
  • Leave Tiering data to object storage with the default settings.
  • Click Continue.
  20. Review & Approve: Review and confirm your selections:
  • Review details about the configuration.
  • Click More information to review details about support and the Google Cloud resources that BlueXP will purchase.
  • Select both checkboxes.
  • Click Go.

Great! You're done with your working environment setup. Now, sit back while BlueXP deploys your Cloud Volumes ONTAP system and creates a NetApp volume. This should take around ten minutes to complete. In the meantime, you can check out the NetApp documentation for Cloud Volumes ONTAP for Google Cloud.

Click Check my progress to verify that you've performed the above task.

Set up a new ONTAP Storage Environment

NetApp Snapshots management

In this section, you will learn about NetApp Snapshots and leverage them for instant recovery and cloning. A NetApp Snapshot is a highly efficient, read-only, point-in-time image of a NetApp volume, providing it with local protection. NetApp Snapshots do not affect system performance; they are created instantly regardless of the data size in the volume, and they only consume storage space when changes are made to existing data. Go to the NetApp documentation center for more information.
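NetApp Snapshots are pointer-based: creating one freezes references to existing blocks rather than copying data. As a rough conceptual analogy you can run on any Linux box (this is NOT how ONTAP is implemented, just an illustration of the pointer idea), a hardlink tree is near-instant to create and preserves old content as long as files are replaced rather than edited in place:

```shell
# Conceptual analogy only -- NOT ONTAP's actual mechanism.
# A hardlink tree mimics a pointer-based snapshot: instant to create,
# and it preserves old data when files are replaced (not edited in place).
set -e
rm -rf /tmp/vol /tmp/vol_snap
mkdir -p /tmp/vol
echo "v1" > /tmp/vol/file.txt
cp -al /tmp/vol /tmp/vol_snap      # "snapshot": links only, no data copied
rm /tmp/vol/file.txt               # replace the file rather than editing it
echo "v2" > /tmp/vol/file.txt
cat /tmp/vol_snap/file.txt         # the "snapshot" still shows v1
```

The key property this illustrates: the snapshot costs nothing at creation time and only starts "holding" space once the live copy diverges.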

Access the volume

In this section, you set up access to the Cloud Volumes ONTAP volume created in the previous section from a Linux VM, using the NFS protocol.

  1. Go back to the Google Cloud console. Run the following commands in Cloud Shell to reset your PROJECT_ID environment variable and create a new linux-vm instance:
export PROJECT_ID=$(gcloud config get-value project)
gcloud beta compute --project=$PROJECT_ID instances create linux-vm --zone={{{ my_primary_project.default_zone | "ZONE" }}} --machine-type=e2-medium
  2. After the VM has been deployed, navigate back to the NetApp BlueXP page and double-click your Cloud Volumes ONTAP working environment to access the Volumes page.

  3. On the Volumes page, locate the data1 volume and click its triple-bar.

data1 volume page

  4. From the list of operations revealed, click Mount Command.

  5. Then click Copy to copy the command to your clipboard.

  6. Back in the Cloud console, from the Navigation Menu, navigate to Compute Engine > VM instances.

  7. On the VM instances page, locate the linux-vm instance and connect to it by clicking SSH.

  8. In the linux-vm SSH window, create a local directory under /mnt named data1 that will serve as a mount point:

sudo mkdir /mnt/data1
  9. Before mounting, use the following command to install nfs-common:
sudo apt install nfs-common -y
  10. Next, paste the Mount Command (copied earlier) and replace <dest_dir> with the /mnt/data1 directory you created. Also add -o vers=3 to the command to mount with NFS v3. Press ENTER to mount the Cloud Volumes ONTAP volume using NFS:
## Your command should resemble this, but with your IP
sudo mount -o vers=3 10.142.0.36:/data1 /mnt/data1
  11. Verify that your volume is properly mounted using the following native Linux command:
df -h

You should see the data1 volume mounted on /mnt/data1.

  12. Finally, create a file with a single line within the mounted volume using the following native Linux command:
echo In this qwiklab > /mnt/data1/cvo.google

Great! You have just mounted the volume created in the previous section, now let's see what these NetApp Snapshots can do.
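When scripting against an NFS mount like this one, it helps to fail fast if the volume is not actually mounted before writing to it. A small hedged sketch (the mount point is a parameter of your choosing; in this lab it would be /mnt/data1, and the script defaults to / only so it can run anywhere):

```shell
# Sketch: verify a path is a mount point before writing to it.
# Defaults to / so it runs anywhere; pass /mnt/data1 in the lab.
MOUNT_POINT="${1:-/}"
if grep -qs " $(readlink -f "$MOUNT_POINT") " /proc/mounts; then
  echo "mounted: $MOUNT_POINT"
else
  echo "NOT mounted: $MOUNT_POINT" >&2
  exit 1
fi
```

This prevents the failure mode where the mount silently dropped and your writes land on the VM's local disk instead of the ONTAP volume.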

Click Check my progress to verify that you've performed the above task.

Access the Volume

Creating NetApp Snapshots

Up to 1,023 Snapshots can be retained in each NetApp volume. Now you'll configure Cloud Volumes ONTAP to create NetApp Snapshots automatically, and also create one manually.

  1. Go back to BlueXP and double-click your Cloud Volumes ONTAP working environment to enter the Volumes page.

  2. On the Volumes page, locate the data1 volume, click its triple-bar, and from the list of operations revealed click Edit.

  3. From the Edit page, set the Snapshot Policy to default. Note the schedules at which NetApp Snapshots are created automatically and the retention for each. Click Update.

Note: The default snapshot policy is automatically associated with any new volume created, unless specified otherwise (like you did in the first section). Additional policies can be created through the ONTAP System Manager.
  4. On the Volumes page, click the triple-bar of the data1 volume, and from the list of operations revealed click Create a Snapshot copy.

  5. On the Create a Snapshot page, leave the Snapshot Copy Name unchanged and click Create. Note the notification that appears when redirected back to the Volumes page.

Well done! You have just enabled automatic Snapshot creation for your volume (if you stay long enough, you can see them accumulating) and created a manual snapshot instantly. Now, see what you can do with it.

Click Check my progress to verify that you've performed the above task.

Create NetApp Snapshots

Restoring from NetApp Snapshots

It's time to use a Snapshot copy to recover data. You can recover individual files and directories, LUNs (when using iSCSI), or recover the entire content of a volume. In this section you will practice file restore through an NFS client as well as volume restore through BlueXP.

  1. In the linux-vm SSH window, use the following native Linux command to append a line to the file you have previously created:
echo we are learning NetApp data protection >> /mnt/data1/cvo.google
  2. Go back to the Volumes page, click the triple-bar of the data1 volume, and from the list of operations revealed click Create a Snapshot copy.

  3. On the Create a Snapshot page, leave the Snapshot Copy Name unchanged and click Create.

  4. Return to your linux-vm SSH window. List the /mnt/data1 directory, including hidden items, using the following command:

ls -la /mnt/data1

Your output should look like the following. Note the file you created and the .snapshot directory.

Output

  5. All the Snapshots created for that volume are accessible (read-only) through the .snapshot directory. Next, list the volume's Snapshots by running the following command from your linux-vm:
ls -l /mnt/data1/.snapshot

Your output should look similar to the following (note the two manual Snapshots you have created):

Output

  6. Recover the first version of the cvo.google file by simply copying it from the first manual snapshot you created. Use the following cp command, replacing <timestamp> with the timestamp of the first manual snapshot. (If typing, use the Tab key for auto-completion.)
cp /mnt/data1/.snapshot/manually.<timestamp>/cvo.google /mnt/data1/cvo.google.v1
  7. When the copy is done, list the /mnt/data1 directory, including hidden items, using the following command:
ls -la /mnt/data1

Your output should look like this:

Output

  • You can use the following commands to see the differences between the files:
cat /mnt/data1/cvo.google /mnt/data1/cvo.google.v1
diff /mnt/data1/cvo.google /mnt/data1/cvo.google.v1

Restoring single file, checked! Now see how it's done for the whole volume.
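The manual snapshot-to-file copy above can also be scripted: pick the most recent snapshot directory and restore from it. This hedged sketch fabricates two snapshot directories under /tmp so it runs anywhere; in the lab, the directory would be /mnt/data1/.snapshot and the entries would be named manually.<timestamp>:

```shell
# Sketch with fabricated snapshot dirs -- in the lab, the snapshot
# directory would be /mnt/data1/.snapshot (read-only, maintained by ONTAP).
set -e
DEMO=/tmp/restore_demo
rm -rf "$DEMO"
mkdir -p "$DEMO/.snapshot/manually.1" "$DEMO/.snapshot/manually.2"
echo "old contents" > "$DEMO/.snapshot/manually.1/cvo.google"
echo "new contents" > "$DEMO/.snapshot/manually.2/cvo.google"
touch -d '2020-01-01' "$DEMO/.snapshot/manually.1"   # backdate the older one
latest=$(ls -1t "$DEMO/.snapshot" | head -n 1)       # newest snapshot dir
cp "$DEMO/.snapshot/$latest/cvo.google" "$DEMO/cvo.google.restored"
cat "$DEMO/cvo.google.restored"
```

Because the .snapshot directory is read-only, a script like this can never damage the snapshots themselves; the worst case is restoring the wrong version.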

Click Check my progress to verify that you've performed the above task.

Create copy of Snapshots

  8. Go back to BlueXP, and from the Volumes page, click the triple-bar of the data1 volume. From the list of operations revealed, click Restore from Snapshot copy.

  9. On the Restore from Snapshot copy page, select the last manual snapshot created, use the default Restored Volume Name, and click Restore.

Restore from Snapshot copy for volume data1 page

When the restore is completed, the volume's page should look like this:

Volumes page with sections for data1 and data1 restore

Restore is instant regardless of the volume's size or used capacity. There's no copying of data; it's all a matter of NetApp's WAFL file system pointers. Check out this video to see how it works. Now, access the restored volume.

  10. Click the triple-bar of the data1_restore volume. From the list of operations revealed, click Mount Command, then click Copy to copy the command to your clipboard.

  11. In the linux-vm SSH window, create a local directory under /mnt named data1_restore that will serve as a mount point:

sudo mkdir /mnt/data1_restore
  12. Next, paste the Mount Command (copied earlier), replace <dest_dir> with the /mnt/data1_restore directory you created, and add -o vers=3 to the command to use NFS v3. Press ENTER to mount the restored volume:
## Your mount command should look like this, with the exception of the IP
sudo mount -o vers=3 10.142.0.36:/data1_restore /mnt/data1_restore
  13. Verify that your volume is properly mounted and list its content using the following native Linux commands:
df -h
ls -la /mnt/data1_restore

Since a snapshot represents a point in time, and you recovered the volume to a snapshot created before the single-file restore, only the file named cvo.google appears.

Instantly restoring the entire volume, checked! Ready to move on?

Click Check my progress to verify that you've performed the above task.

Restore from NetApp Snapshots

Cloning from NetApp Snapshots

Snapshots can also be used for creating instant thin clones using Cloud Volumes ONTAP's FlexClone technology. With FlexClone, you copy even the largest datasets almost instantaneously. That makes it ideal for situations in which you need multiple copies of identical datasets or temporary copies of a dataset, like when testing your disaster recovery copy (discussed later in the Cross-Cloud / Cross-Region Disaster Recovery section). Let's do some cloning!

  1. From the Volumes page, click on the triple-bar of the data1 volume. From the list of operations revealed, click on Clone.

  2. On the Clone page, use the default Clone Volume Name and click Clone.

When cloning is completed, the volume's page should look like this:

Volumes page with sections for data1, data1 clone, and data1 restore

As with restore, cloning is instant regardless of the volume's size or used capacity, without any data copying. Now, let's access the cloned volume.

  3. Click the triple-bar of the data1_clone volume. From the list of operations revealed, click Mount Command, then click Copy to copy the command to your clipboard.

  4. In the linux-vm SSH window, create a local directory under /mnt named data1_clone that will serve as a mount point:

sudo mkdir /mnt/data1_clone
  5. Next, paste the Mount Command (copied earlier), replace <dest_dir> with the /mnt/data1_clone directory you created, and add -o vers=3 to the command to use NFS v3. Press ENTER to mount the cloned volume:
## Your mount command should look like this, with the exception of the IP.
sudo mount -o vers=3 10.142.0.36:/data1_clone /mnt/data1_clone
  6. Verify that your volume is properly mounted and list its content using the following native Linux commands:
df -h
ls -la /mnt/data1_clone

The clone operation creates a new snapshot that represents the current state of the volume after the single-file restore. Therefore, two files, cvo.google and cvo.google.v1, appear.

Instantly cloning a volume, checked! Ready to try something new?
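To see why a clone is more than a read-only snapshot, here is a conceptual stand-in you can run on any Linux box (this is NOT FlexClone's actual mechanism): a hardlink tree behaves like an instant, space-efficient, writable clone that diverges from its source when files are replaced rather than edited in place, leaving the source untouched:

```shell
# Conceptual stand-in for FlexClone, not the real block-sharing mechanism:
# a hardlink tree gives an instant "clone" that can diverge independently,
# as long as files are replaced rather than edited in place.
set -e
rm -rf /tmp/base /tmp/base_clone
mkdir -p /tmp/base
echo "shared" > /tmp/base/data.txt
cp -al /tmp/base /tmp/base_clone   # instant "clone": no data copied
rm /tmp/base_clone/data.txt        # diverge the clone
echo "changed" > /tmp/base_clone/data.txt
cat /tmp/base/data.txt             # source still shows "shared"
cat /tmp/base_clone/data.txt       # clone now shows "changed"
```

This is why clones are so useful for test/dev: the copy is free until you change something, and changes never leak back into the source.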

Cross-Cloud / cross-region disaster recovery

Any ONTAP system, including Cloud Volumes ONTAP for Google Cloud, includes the NetApp SnapMirror data replication engine. With SnapMirror, you can replicate data between working environments located on BlueXP's Storage Canvas. SnapMirror is a highly efficient data transfer engine that replicates NetApp Snapshots from one ONTAP system to another while preserving storage efficiencies (deduplication and compression), enabling cross-region and cross-cloud replication. SnapMirror use cases include disaster recovery, long-term retention, and migration.

BlueXP simplifies data replication between volumes on separate systems using SnapMirror. BlueXP purchases the required disks, configures relationships, applies the replication policy, and then initiates the baseline transfer between volumes. All you have to do is drag one working environment onto the other to identify the source and destination volumes, and then choose a replication policy and schedule.
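SnapMirror's baseline-then-increments model can be sketched with plain file tools (a purely conceptual stand-in: SnapMirror replicates changed blocks from Snapshots and preserves deduplication and compression, which this file-level sketch does not):

```shell
# Conceptual sketch of baseline + incremental transfers -- NOT SnapMirror.
set -e
rm -rf /tmp/src /tmp/dst
mkdir -p /tmp/src /tmp/dst
echo "baseline" > /tmp/src/a.txt
cp -a /tmp/src/. /tmp/dst/                 # baseline transfer: everything
touch /tmp/last_sync                       # remember when we last synced
sleep 1
echo "delta" > /tmp/src/b.txt              # new data written after baseline
# incremental update: copy only files newer than the last sync marker
find /tmp/src -type f -newer /tmp/last_sync -exec cp -a {} /tmp/dst/ \;
ls /tmp/dst
```

The expensive transfer happens exactly once; every subsequent update ships only the delta, which is what makes scheduled cross-region replication practical.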

Together with local Snapshots and backup to Google Cloud Storage, SnapMirror completes a full data protection strategy.

Note: Due to lab quota limitations, a second Cloud Volumes ONTAP instance cannot be deployed to try SnapMirror out.

Ransomware prevention (optional)

FPolicy is a file access notification framework that is used to monitor and manage file access events on Cloud Volumes ONTAP. FPolicy can operate in two modes. One mode uses external FPolicy servers to process and act upon notifications. The other uses the internal, native FPolicy server for simple file blocking based on extensions. Both act as a proactive, preventive solution to limit ransomware and other malware infections. In this section, you will enable the native FPolicy server.
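The native mode's extension blocking can be pictured as a simple blocklist check applied before any file create. This shell sketch is only an illustration of the idea (the listed extensions are made-up samples, and safe_touch is a hypothetical helper, not part of FPolicy or ONTAP):

```shell
# Sketch of extension-based blocking -- the extensions listed here are
# illustrative samples, NOT the real FPolicy blocklist.
BLOCKED="lol! locky crypt"
safe_touch() {
  ext=$(printf '%s' "${1##*.}" | tr '[:upper:]' '[:lower:]')
  for b in $BLOCKED; do
    if [ "$ext" = "$b" ]; then
      echo "touch: cannot touch '$1': Permission denied" >&2
      return 1
    fi
  done
  touch "$1"
}
rm -f /tmp/notes.txt '/tmp/ransomware.LOL!'
safe_touch /tmp/notes.txt && echo "created /tmp/notes.txt"
safe_touch '/tmp/ransomware.LOL!' || true   # blocked, like FPolicy would
```

The real FPolicy check happens inside ONTAP for NFS and SMB operations, so clients cannot bypass it the way they could a client-side wrapper like this.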

  1. Go back to BlueXP and double-click your Cloud Volumes ONTAP working environment to enter the Volumes page.

  2. On the right-side menu of the page, locate the orange shield (Ransomware Protection) and click it.

  3. On the Ransomware Protection page, click Activate FPolicy.

Once the status has changed, the page will look like the following, and the shield will be colored blue (it takes a few seconds for the status to change; you can click Volumes and then the shield again to accelerate it).

Ransomware Protection page with FPolicy is Active message in the Block Ransomware File Extensions section

  4. Test this: In the linux-vm SSH window, try to create a file with a well-known ransomware extension in the data1 volume, using the following command:
touch /mnt/data1/ransomware.LOL!

The FPolicy engine prevents file operations on files with an extension that appears in the blocking policy. Therefore you should see a permission denied message like the following:

touch: cannot touch `/mnt/data1/ransomware.LOL!`: Permission denied

Great! By enabling FPolicy, BlueXP created a policy that blocks any file operation done through NFS and SMB clients on files with known ransomware extensions. And remember, NetApp Snapshots are immutable, so in case there was a security breach and ransomware wreaked havoc, you can easily and instantly recover your data to a point-in-time (like you did in Restoring From NetApp Snapshots section).

Interested in knowing more? Check out this guidebook.

Congratulations!

In this lab, you honed your NetApp cloud solutions skills by learning how your mission-critical data can be fully protected using Cloud Volumes ONTAP and BlueXP. You also learned how to manage and leverage NetApp Snapshots and strengthen your anti-ransomware posture. In addition, you learned how data can be replicated cross-cloud and cross-region for disaster recovery scenarios.

Next steps / Learn more

Be sure to check out the following to receive more hands-on practice with NetApp:

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated July 28, 2023

Lab Last Tested July 28, 2023

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
