Getting Started with Elasticsearch on Google Cloud

1 hour 1 Credit

This lab was developed with our partner, Elastic. Your personal information may be shared with Elastic, the lab sponsor, if you have opted in to receive product updates, announcements, and offers in your Account Profile.

GSP817

Overview

This lab focuses on creating a simple Elasticsearch deployment on Google Cloud. The process involves installing Elasticsearch, Kibana, and Beats. Elasticsearch and Kibana will be installed on the same system, though in production the environments can be separated. You will then install Beats, Elastic's lightweight data shippers, which send operational data such as logs and metrics from systems and applications to Elasticsearch, where it can later be visualized in Kibana.

This lab focuses on two Beats: Metricbeat (system metrics) and Filebeat (log data), though many others are available out of the box. More information about getting started with Elasticsearch, Kibana, and Beats can be found here.

Objectives

In this lab, you will learn how to perform the following tasks:

  • Install a self-managed Elastic Stack (Elasticsearch and Kibana)

  • Install and configure Metricbeat and Filebeat to send system metrics and logs to Elasticsearch

  • Launch and learn about Kibana to visualize the ingested data

Prerequisites

This is an introductory-level lab. No prior knowledge of Elastic and its various products and features is required, though basic system administration skills, such as knowing how to run commands on a Linux command line and how to edit text files with a Unix-based text editor such as Vim, are advantageous.

There is a Vim Tips section in this lab guide in case you need a little help or refresher to get the standard text editing accomplished.

Caution: Be careful when modifying YAML files; stray spaces and other formatting irregularities can lead to a failed configuration file. Avoid copying and pasting quotation marks between systems, as they can also cause issues with these configuration files. Edit the files directly; there is very little to edit.
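As a concrete illustration of the quotation-mark pitfall: "curly" quotes copied from a web page are different bytes from the plain ASCII quotes YAML parsers expect. A quick sketch of how to spot them (the file paths here are just for the demonstration, not part of the lab):

```shell
# Write two versions of the same setting: one pasted with curly quotes
# (as often happens when copying from a web page) and one with plain ASCII quotes.
printf 'server.host: \xe2\x80\x9clocalhost\xe2\x80\x9d\n' > /tmp/bad.yml
printf 'server.host: "localhost"\n' > /tmp/good.yml

# Bytes outside the printable ASCII range stand out with a bracket expression
LC_ALL=C grep -n '[^ -~]' /tmp/bad.yml && echo "bad.yml contains non-ASCII quotes"
LC_ALL=C grep -q '[^ -~]' /tmp/good.yml || echo "good.yml is clean"
```

If a configuration file refuses to parse after a copy-paste, a check like this is a fast way to find invisible culprits.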

Setup and Requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which may cause extra charges incurred to your personal account.
  • Time to complete the lab; remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.

  4. Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.

    Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  5. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Note: You can view the menu with a list of Google Cloud Products and Services by clicking the Navigation menu at the top-left.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:

Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  1. (Optional) You can list the active account name with this command:

gcloud auth list

Output:

ACTIVE: *
ACCOUNT: student-01-xxxxxxxxxxxx@qwiklabs.net

To set the active account, run:
$ gcloud config set account `ACCOUNT`

  2. (Optional) You can list the project ID with this command:

gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.

Vim Tips

  1. You can go to a specific line by typing : (a colon), then the line number, then pressing the Enter key.

  2. Use the arrow keys to move up, down, left, or right.

  3. Use the letter x to delete characters or symbols, unless you are in insert mode, where you use the keyboard's normal Delete/Backspace keys.

  4. Press the letter i to enter insert mode, where you can add or remove text.

  5. Once your edit is complete, press the Esc key to exit insert mode, then write/save the file by entering everything below, including the colon:

:wq

That is, you enter a : then wq, which writes the file and quits the Vim editor as soon as you press the Enter key.

Install Elasticsearch

Elasticsearch is a real-time, distributed storage, search, and analytics engine. It can be used for many purposes, but one context where it excels is indexing streams of semi-structured data, such as logs or decoded network packets.

In this lab, the Elasticsearch package has already been downloaded on a VM for you. In this section, you will modify the configuration files and proceed with the installation.

  1. In the Navigation Menu, go to Compute Engine > VM Instances. You should see a VM named elasticsearch-vm running. Because this lab uses an internal IP structure, similar to many other organizations, you must modify the elasticsearch.yml text file.

  2. Click SSH to connect to the VM.

  3. In the SSH window, use the Vim editor to edit the file by executing the following command:

sudo vim /etc/elasticsearch/elasticsearch.yml
  4. Uncomment the following lines by removing the beginning # (hash sign), leaving no space:

cluster.name: my-application
node.name: node-1

  5. Navigate to the following setting, which is around line number 55:

network.host: 192.168.0.1

  6. Uncomment the line and modify it so it resembles the following:

network.host: 0.0.0.0

  7. Next, navigate to the following settings, which are located around line numbers 70 and 74:

discovery.seed_hosts: ["host1", "host2"]
cluster.initial_master_nodes: ["node-1", "node-2"]

  8. Uncomment the lines and modify them so they resemble the following, replacing <INTERNAL-IP> with the Internal IP of the deployed Elasticsearch VM:

discovery.seed_hosts: ["<INTERNAL-IP>"]
cluster.initial_master_nodes: ["node-1"]

  9. Once you have updated the elasticsearch.yml file, save your changes by pressing the Esc key, then typing :wq. This writes the file and quits the Vim editor when you press the Enter key.
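The same uncomment-and-rewrite edits can be expressed non-interactively with sed, which is handy if you ever script this setup. The sketch below operates on a miniature stand-in file rather than the real /etc/elasticsearch/elasticsearch.yml, and the IP address used is an assumed placeholder:

```shell
# Illustration only: a miniature stand-in for elasticsearch.yml so the
# sketch can run anywhere without touching the real file.
cat > /tmp/es-sample.yml <<'EOF'
#cluster.name: my-application
#node.name: node-1
#network.host: 192.168.0.1
#discovery.seed_hosts: ["host1", "host2"]
#cluster.initial_master_nodes: ["node-1", "node-2"]
EOF

INTERNAL_IP="10.128.0.2"  # assumption: substitute your VM's actual Internal IP

# Mirror the manual Vim edits: uncomment each setting and rewrite the values
sed -i \
  -e 's|^#cluster\.name: my-application|cluster.name: my-application|' \
  -e 's|^#node\.name: node-1|node.name: node-1|' \
  -e 's|^#network\.host: .*|network.host: 0.0.0.0|' \
  -e "s|^#discovery\.seed_hosts: .*|discovery.seed_hosts: [\"${INTERNAL_IP}\"]|" \
  -e 's|^#cluster\.initial_master_nodes: .*|cluster.initial_master_nodes: ["node-1"]|' \
  /tmp/es-sample.yml

cat /tmp/es-sample.yml
```

For this lab, editing the file directly in Vim as described above is the intended path; this is only a sketch of the equivalent automation.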

Start the Elasticsearch service

  1. Run the following command to start the Elasticsearch service:

sudo service elasticsearch start

  2. You can validate the service by typing the following:

sudo service elasticsearch status

You should see output indicating that the service is active (running).

If you need to change the configuration file, you can stop the service by typing sudo service elasticsearch stop, then restarting it as before.

  3. Verify that the Elasticsearch service is running and accepting connections by executing the following command, replacing <INTERNAL-IP> with the Internal IP of the deployed VM:

curl http://<INTERNAL-IP>:9200

You should see output similar to the following:

{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "rY0Hf96cSBqNZovOU4bTzw",
  "version" : {
    "number" : "7.9.3",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91",
    "build_date" : "2020-09-01T21:22:21.964974Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
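Single fields of that cluster-info response are easy to pull out in the shell. In the sketch below, a trimmed copy of the response is inlined so the snippet is self-contained; on the VM you would instead pipe `curl -s http://<INTERNAL-IP>:9200` into the same command:

```shell
# Extract the Elasticsearch version number from the cluster-info JSON.
# RESPONSE is an inlined, trimmed copy of the response shown above.
RESPONSE='{"name":"Elasticsearch","cluster_name":"elasticsearch","version":{"number":"7.9.3"},"tagline":"You Know, for Search"}'
echo "$RESPONSE" | python3 -c 'import json, sys; print(json.load(sys.stdin)["version"]["number"])'
# prints: 7.9.3
```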

Click Check my progress to verify the objective. Install Elasticsearch

Install Kibana

Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.

In this lab, the Kibana package has already been downloaded. In this section, you will modify the configuration files and proceed with the installation.

Configure Kibana

Because this lab uses an internal IP structure, similar to many other organizations, you must modify the kibana.yml text file.

  1. Edit the kibana.yml file in Vim by typing the following command:

sudo vim /etc/kibana/kibana.yml
  2. Navigate to the following setting, which is around line number 2. Uncomment it by removing the beginning # (hash sign), leaving no space:

server.port: 5601

  3. Next, navigate to the following setting, which is around line number 7. Uncomment it by removing the beginning # (hash sign), leaving no space:

server.host: "localhost"

  4. On the same setting, replace "localhost" with the Internal IP of the deployed Elasticsearch VM:

server.host: "<INTERNAL-IP>"

  5. Navigate to the following setting, which is around line 32. Uncomment it by removing the beginning # (hash sign), leaving no space:

elasticsearch.hosts: ["http://localhost:9200"]

  6. On the same setting, replace "localhost" with the Internal IP of the deployed Elasticsearch VM:

elasticsearch.hosts: ["http://<INTERNAL-IP>:9200"]

  7. Once you have updated the kibana.yml file, save your changes by pressing the Esc key, then typing :wq.

Start the Kibana service

Starting Kibana is a continuous process whose output is shown on the screen. You will see important information, such as which IP address the Kibana server is running on and how to access it on port 5601.

  1. Run the following command to start the Kibana service.

sudo -i service kibana start

If you need to change the configuration file, you can stop the service by typing sudo service kibana stop, then restarting it as before.

Launch the Kibana web interface

  1. Launch the Kibana web interface by pointing your browser to the External IP of the elasticsearch-vm instance, using port 5601. The IP address is visible on the Compute Engine > VM Instances page.

http://<EXTERNAL-IP>:5601

The first time you launch Kibana, you are prompted to either Add integrations, which is an excellent way to see real-world data come together in predeveloped dashboards, or Explore on my own, which does not load sample data, predeveloped visualizations, or dashboards.

For this lab's purposes, you will load the sample data so you can explore more than just the few visualizations you will be loading with system metrics.

Click Check my progress to verify the objective. Launch the Kibana web interface

  1. Click Try sample data.

  2. Under Sample flight data, click Add data.

  3. Once loaded, click View data > Dashboard, where you will be able to see the amazing visualizations that can come together under one view.


Install Beats

Beats are open source data shippers that you install as agents on your servers to send operational data to Elasticsearch. Beats can send data directly to Elasticsearch or via Logstash, where you can further process and enhance the data.

Each Beat is a separately installable product. In this section, you will learn how to install and run Metricbeat and Filebeat, each with the system module enabled to collect system metrics and logs. Both come with helpful predeveloped visualizations and dashboards, similar to what you saw with the sample data.

Install and Configure Metricbeat

In a production environment, you can use Kibana to step through the instructions for installing and configuring Metricbeat: from Kibana Home, click Add metric data, choose System metrics, then the operating system you are running. In this lab, however, the Metricbeat package has already been downloaded to the Linux host, so you can proceed directly with the installation.

Once the package has been installed, you will need to enable the appropriate module. Metricbeat ships with many different modules, many with predeveloped visualizations and dashboards such as the Google Cloud module.

The system module collects system-level metrics, such as CPU usage, memory, file system, disk IO, and network IO statistics, as well as top-like statistics for every process running on your system.
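To get a feel for the raw data behind those metrics, the cpu and memory metricsets ultimately draw on kernel counters you can inspect yourself. A simplified, Linux-only illustration (Metricbeat's actual collection is considerably richer):

```shell
# Aggregate CPU jiffies (user, nice, system, idle, ...) that cpu metrics derive from
head -1 /proc/stat

# Memory counters that memory metrics derive from
grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo
```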

  1. Back in the SSH window, run the following command to set up the system module and start collecting system metrics:

sudo metricbeat modules enable system
  2. You can validate that the module is enabled, and view all available modules, by running the following command; the enabled modules appear at the beginning of the output list:

sudo metricbeat modules list

Connect to the Elastic Stack

Connections to Elasticsearch and Kibana are required to set up Metricbeat. You will need to modify the metricbeat.yml file in order to accomplish this task.

  1. Enter the Vim editor to edit the file by executing the following command:

sudo vim /etc/metricbeat/metricbeat.yml
  2. Navigate to the following setting, which is around line number 67:

host: "localhost:5601"

  3. Remove the beginning # (hash sign) and replace "localhost" with the Internal IP of the deployed Elasticsearch VM:

host: "<INTERNAL-IP>:5601"

  4. Next, navigate to the following setting, which is around line 94:

hosts: ["localhost:9200"]

  5. If the line is commented out, remove the beginning # (hash sign). Then, replace "localhost" with the Internal IP of the deployed Elasticsearch VM:

hosts: ["<INTERNAL-IP>:9200"]

  6. Once you have updated the metricbeat.yml file, save your changes by pressing the Esc key, then typing :wq.

  7. Set up the initial environment, which loads the predeveloped Kibana dashboards:

sudo metricbeat setup -e

Note: The -e flag is optional and sends output to standard error instead of syslog so you can see if there is an issue. Wait a couple of minutes for this command to finish before continuing.

Click Check my progress to verify the objective. Install Beats

  8. Finally, start Metricbeat:

sudo service metricbeat start

Install Filebeat

In this lab, the Filebeat package has already been downloaded, so you can just proceed with the installation.

Enable the system module

Once Filebeat has been installed and configured, you will need to enable the appropriate module. Filebeat ships with many different modules, many with predeveloped visualizations and dashboards, such as the Google Cloud module.

The system module collects and parses logs created by the system logging service of common Unix/Linux based distributions.
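For a sense of what that parsing involves, a syslog line has a well-known shape that splits into timestamp, host, and process fields. A crude awk sketch (illustration only; Filebeat's system module uses full ingest pipelines in Elasticsearch, and the sample line here is invented):

```shell
# A typical syslog line and a coarse field split
LINE='Nov  1 12:00:01 elasticsearch-vm sudo[1234]: pam_unix(sudo:session): session opened'
echo "$LINE" | awk '{print "timestamp:", $1, $2, $3; print "host:", $4; print "process:", $5}'
```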

  1. Run the following command to set up the system module and start collecting system metrics:

sudo filebeat modules enable system
  2. You can validate that the module is enabled, and view all available modules, by running the following command; the enabled modules appear at the beginning of the output list:

sudo filebeat modules list

Connect to the Elastic Stack

Connections to Elasticsearch and Kibana are required to set up Filebeat. You will need to modify the filebeat.yml file in order to accomplish this task.

  1. Enter the Vim editor to edit the file by executing the following command:

sudo vim /etc/filebeat/filebeat.yml
  2. Navigate to the following setting, which is around line number 107:

host: "localhost:5601"

  3. Remove the beginning # (hash sign) and replace "localhost" with the Internal IP of the deployed Elasticsearch VM:

host: "<INTERNAL-IP>:5601"

  4. Next, navigate to the following setting, which is around line 134:

hosts: ["localhost:9200"]

  5. Remove the beginning # (hash sign) and replace "localhost" with the Internal IP of the deployed Elasticsearch VM:

hosts: ["<INTERNAL-IP>:9200"]

  6. Once you have updated the filebeat.yml file, save your changes by pressing the Esc key, then typing :wq.

  7. Now set up the initial environment, which loads the predeveloped Kibana dashboards:

sudo filebeat setup -e

Note: The -e flag is optional and sends output to standard error instead of syslog so you can see if there is an issue. Wait a couple of minutes for this command to finish before continuing.

Click Check my progress to verify the objective. Install Filebeat

  8. Finally, start Filebeat:

sudo service filebeat start

View metrics and logs in Kibana

  1. Launch the Kibana web interface, if it is closed or was not previously opened, by pointing your browser to the External IP of the elasticsearch-vm instance, using port 5601:

http://<EXTERNAL-IP>:5601

  2. Click Dashboard in the left-side navigation menu.

  3. In the Kibana Dashboard search bar, enter host overview.

  4. Click [Metricbeat System] Host overview ECS.

This displays many important metrics for the systems installed with the Metricbeat system module.

In this lab, you have only installed the module on one host system, but you can already see the insights Kibana provides out of the box. Filtering by host takes only a few clicks. Now you can check out what the preconfigured Filebeat dashboards look like.

  5. Click Dashboard at the top of the window.

  6. Again in the Kibana Dashboard search bar, type system.

  7. This time, click [Filebeat System] Sudo commands ECS, where you will see a list of sudo commands executed in this lab.

  8. Check out the [Filebeat System] Syslog dashboard ECS and [Filebeat System] SSH login attempts ECS as well, to see how Kibana can bring all sorts of metrics and logs together in one place.

Congratulations!

In this lab, you installed and configured Elasticsearch and Kibana. You then deployed Metricbeat and Filebeat with the system module on a Linux host to see a few of the metrics and logs that these predeveloped modules provide.

Next Steps / Learn More

  • Elastic on the Google Cloud Marketplace!
  • Elastic Cloud offers a fully managed experience so that you can focus on your data instead of managing the cluster. You can check it out here and play with a free trial.
  • Check out the Elastic Integrations page and see how easily you can stream in logs, metrics, traces, content, and more from your apps, endpoints, infrastructure, cloud, network, workplace tools, and every other common source in your ecosystem.
  • Leverage the out-of-the-box solutions to give your Elastic project a jump start!

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated May 25, 2022

Lab Last Tested May 25, 2022

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.