Vertex AI PaLM API: Qwik Start

Lab · 1 hour · 1 Credit · Introductory

GSP1155

Overview

The Vertex AI PaLM API enables you to test, customize, and deploy instances of Google's PaLM large language models (LLMs) so that you can leverage the capabilities of PaLM in your applications. The PaLM family of models supports text completion and multi-turn chat.

In this lab, you will explore how to use the Vertex AI PaLM API for both of these use cases.

Objectives

In this lab, you will learn how to use the PaLM family of models to support:

  • Text completion use cases
  • Multi-turn chat integration

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which may cause extra charges to be incurred on your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Cloud console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).

    The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username below and paste it into the Sign in dialog.

    {{{user_0.username | "Username"}}}

    You can also find the Username in the Lab Details panel.

  4. Click Next.

  5. Copy the Password below and paste it into the Welcome dialog.

    {{{user_0.password | "Password"}}}

    You can also find the Password in the Lab Details panel.

  6. Click Next.

    Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  7. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.

Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left.

Enable the Vertex AI API

  1. In the Google Cloud Console, enter Vertex AI API in the top search bar.

  2. Click on the result for Vertex AI API under Marketplace.

  3. Click Enable.
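Note: If you prefer the command line, you can typically enable the same API from Cloud Shell with gcloud. This is an optional alternative to the console steps above:

gcloud services enable aiplatform.googleapis.com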

Click Check my progress to verify the objectives.

Enable the Vertex AI API

Text Prompts

The following parameters need to be configured when you call the Vertex AI PaLM API for text:

prompt
Description: Text input used to generate the model response. Prompts can include a preamble, questions, suggestions, instructions, or examples.
Acceptable values: Text

temperature
Description: The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic and less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic: the highest-probability response is always selected. For most use cases, try starting with a temperature of 0.2.
Acceptable values: 0.0–1.0 (default: 0)

maxOutputTokens
Description: Maximum number of tokens that can be generated in the response. Specify a lower value for shorter responses and a higher value for longer responses. A token may be smaller than a word; a token is approximately four characters, and 100 tokens correspond to roughly 60-80 words.
Acceptable values: 1–1024 (default: 0)

topK
Description: Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled; the tokens are then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
Acceptable values: 1–40 (default: 40)

topP
Description: Top-p changes how the model selects tokens for output. Starting from the most probable of the topK candidates, tokens are selected until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses.
Acceptable values: 0.0–1.0 (default: 0.95)
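These parameters are passed in the body of a predict request alongside your prompt. As a rough sketch of the request shape (the complete, working curl commands follow in the next task), a text request body looks like this:

{
  "instances": [
    { "prompt": "Your text prompt here" }
  ],
  "parameters": {
    "temperature": 0.2,
    "maxOutputTokens": 256,
    "topK": 40,
    "topP": 0.95
  }
}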

Task 1. Sample text prompts

You will perform the tasks in this lab using curl within Cloud Shell.

  1. In the Google Cloud Console, open Cloud Shell.

Summarization prompt example

  1. Run the following curl command in Cloud Shell:
MODEL_ID="text-bison" PROJECT_ID=$DEVSHELL_PROJECT_ID curl \ -X POST \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \ $'{ "instances": [ { "prompt": "Provide a summary with about two sentences for the following article: The efficient-market hypothesis (EMH) is a hypothesis in financial \ economics that states that asset prices reflect all available \ information. A direct implication is that it is impossible to \ \\"beat the market\\" consistently on a risk-adjusted basis since market \ prices should only react to new information. Because the EMH is \ formulated in terms of risk adjustment, it only makes testable \ predictions when coupled with a particular model of risk. As a \ result, research in financial economics since at least the 1990s has \ focused on market anomalies, that is, deviations from specific \ models of risk. The idea that financial market returns are difficult \ to predict goes back to Bachelier, Mandelbrot, and Samuelson, but \ is closely associated with Eugene Fama, in part due to his \ influential 1970 review of the theoretical and empirical research. \ The EMH provides the basic logic for modern risk-based theories of \ asset prices, and frameworks such as consumption-based asset pricing \ and intermediary asset pricing can be thought of as the combination \ of a model of risk with the EMH. Many decades of empirical research \ on return predictability has found mixed evidence. Research in the \ 1950s and 1960s often found a lack of predictability (e.g. Ball and \ Brown 1968; Fama, Fisher, Jensen, and Roll 1969), yet the \ 1980s-2000s saw an explosion of discovered return predictors (e.g. \ Rosenberg, Reid, and Lanstein 1985; Campbell and Shiller 1988; \ Jegadeesh and Titman 1993). Since the 2010s, studies have often \ found that return predictability has become more elusive, as \ predictability fails to work out-of-sample (Goyal and Welch 2008), \ or has been weakened by advances in trading technology and investor \ learning (Chordia, Subrahmanyam, and Tong 2014; McLean and Pontiff \ 2016; Martineau 2021). Summary: "} ], "parameters": { "temperature": 0.2, "maxOutputTokens": 256, "topK": 40, "topP": 0.95 } }'

You should see a response that summarizes the article using the parameters defined earlier in this lab. You can modify the maxOutputTokens parameter to shorten or lengthen the summary that the PaLM API generates.
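If you want only the generated text rather than the full JSON response, you can pipe the response through jq, which is preinstalled in Cloud Shell. This is a minimal sketch: it reuses the MODEL_ID and PROJECT_ID variables set above and assumes the generated text is returned at predictions[0].content, the usual location for text model responses.

# Send a short summarization request and print only the generated text.
# Assumes MODEL_ID and PROJECT_ID are still set from the previous command.
RESPONSE=$(curl -s \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \
'{"instances": [{"prompt": "Summarize in one sentence: The efficient-market hypothesis states that asset prices reflect all available information."}], "parameters": {"temperature": 0.2, "maxOutputTokens": 64, "topK": 40, "topP": 0.95}}')

# Extract just the model output from the JSON response.
echo "${RESPONSE}" | jq -r '.predictions[0].content'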

Ideation prompt example

The Vertex AI PaLM API can be used for ideation in addition to summarization.

  1. Run the following curl command in Cloud Shell and review the output that the PaLM API returns:
MODEL_ID="text-bison" PROJECT_ID=$DEVSHELL_PROJECT_ID curl \ -X POST \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \ $'{ "instances": [ { "prompt": "Give me ten interview questions for the role of program manager."} ], "parameters": { "temperature": 0.2, "maxOutputTokens": 1024, "topK": 40, "topP": 0.8 } }'

You should see a response from the API call that provides ten questions related to the role of program manager.

  2. Run the following commands to save the API responses to text files:
MODEL_ID="text-bison" PROJECT_ID=$DEVSHELL_PROJECT_ID curl \ -X POST \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \ $'{ "instances": [ { "prompt": "Provide a summary with about two sentences for the following article: The efficient-market hypothesis (EMH) is a hypothesis in financial \ economics that states that asset prices reflect all available \ information. A direct implication is that it is impossible to \ \\"beat the market\\" consistently on a risk-adjusted basis since market \ prices should only react to new information. Because the EMH is \ formulated in terms of risk adjustment, it only makes testable \ predictions when coupled with a particular model of risk. As a \ result, research in financial economics since at least the 1990s has \ focused on market anomalies, that is, deviations from specific \ models of risk. The idea that financial market returns are difficult \ to predict goes back to Bachelier, Mandelbrot, and Samuelson, but \ is closely associated with Eugene Fama, in part due to his \ influential 1970 review of the theoretical and empirical research. \ The EMH provides the basic logic for modern risk-based theories of \ asset prices, and frameworks such as consumption-based asset pricing \ and intermediary asset pricing can be thought of as the combination \ of a model of risk with the EMH. Many decades of empirical research \ on return predictability has found mixed evidence. Research in the \ 1950s and 1960s often found a lack of predictability (e.g. Ball and \ Brown 1968; Fama, Fisher, Jensen, and Roll 1969), yet the \ 1980s-2000s saw an explosion of discovered return predictors (e.g. \ Rosenberg, Reid, and Lanstein 1985; Campbell and Shiller 1988; \ Jegadeesh and Titman 1993). Since the 2010s, studies have often \ found that return predictability has become more elusive, as \ predictability fails to work out-of-sample (Goyal and Welch 2008), \ or has been weakened by advances in trading technology and investor \ learning (Chordia, Subrahmanyam, and Tong 2014; McLean and Pontiff \ 2016; Martineau 2021). Summary: "} ], "parameters": { "temperature": 0.2, "maxOutputTokens": 256, "topK": 40, "topP": 0.95 } }' > summarization_prompt_example.txt MODEL_ID="text-bison" PROJECT_ID=$DEVSHELL_PROJECT_ID curl \ -X POST \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \ $'{ "instances": [ { "prompt": "Give me ten interview questions for the role of program manager."} ], "parameters": { "temperature": 0.2, "maxOutputTokens": 1024, "topK": 40, "topP": 0.8 } }' > ideation_prompt_example.txt
  3. Run the following command to copy these files to a pre-created Cloud Storage bucket to track your progress:
export PROJECT_ID=$(gcloud config get-value project)
gsutil cp *.txt gs://$PROJECT_ID
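Before you check your progress, you can optionally confirm that the files landed in the bucket. The command below assumes the bucket name matches your project ID, as in the copy command above:

gsutil ls gs://$PROJECT_ID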

Click Check my progress to verify the objectives.

Sample text prompts

Task 2. Sample chat prompts

For chat API calls, the context, examples, and messages combine to form the prompt. The following parameters need to be configured when you call the Vertex AI PaLM API for chat:

context (optional)
Description: Context shapes how the model responds throughout the conversation. For example, you can use context to specify words the model can or cannot use, topics to focus on or avoid, or the response format or style.
Acceptable values: Text

examples (optional)
Description: A list of structured messages that shows the model how it should respond in the conversation.
Acceptable values: List[Structured Message], for example:
   "input": {"content": "provide content"},
   "output": {"content": "provide content"}

messages (required)
Description: Conversation history provided to the model in a structured alternate-author form. Messages appear in chronological order: oldest first, newest last. When the history of messages causes the input to exceed the maximum length, the oldest messages are removed until the entire prompt is within the allowed limit.
Acceptable values: List[Structured Message], for example:
   "author": "user",
   "content": "user message"

temperature
Description: The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic and less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic: the highest-probability response is always selected. For most use cases, try starting with a temperature of 0.2.
Acceptable values: 0.0–1.0 (default: 0)

maxOutputTokens
Description: Maximum number of tokens that can be generated in the response. Specify a lower value for shorter responses and a higher value for longer responses. A token may be smaller than a word; a token is approximately four characters, and 100 tokens correspond to roughly 60-80 words.
Acceptable values: 1–1024 (default: 0)

topK
Description: Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled; the tokens are then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
Acceptable values: 1–40 (default: 40)

topP
Description: Top-p changes how the model selects tokens for output. Starting from the most probable of the topK candidates, tokens are selected until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses.
Acceptable values: 0.0–1.0 (default: 0.95)
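Putting these together, a chat request nests context, examples, and messages inside a single instance. The sketch below illustrates the alternate-author message structure; the "bot" author label for model turns is an assumption made for illustration (the working curl call in the next step sends a single user message), and the parameter values mirror those used below:

{
  "instances": [{
    "context": "Instructions that apply to the whole conversation.",
    "examples": [
      { "input": {"content": "Example user turn"}, "output": {"content": "Example model reply"} }
    ],
    "messages": [
      { "author": "user", "content": "Oldest user message" },
      { "author": "bot", "content": "Model reply to the oldest message" },
      { "author": "user", "content": "Newest user message, last in the list" }
    ]
  }],
  "parameters": { "temperature": 0.3, "maxDecodeSteps": 200, "topP": 0.8, "topK": 40 }
}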

  1. Run the following curl call to query the PaLM API chat model with the chat prompt provided:
MODEL_ID="chat-bison" PROJECT_ID=$DEVSHELL_PROJECT_ID curl \ -X POST \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \ '{ "instances": [{ "context": "My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.", "examples": [ { "input": {"content": "Who do you work for?"}, "output": {"content": "I work for Ned."} }, { "input": {"content": "What do I like?"}, "output": {"content": "Ned likes watching movies."} }], "messages": [ { "author": "user", "content": "Are my favorite movies based on a book series?", }], }], "parameters": { "temperature": 0.3, "maxDecodeSteps": 200, "topP": 0.8, "topK": 40 } }'

You should see a response from the PaLM API confirming that your favorite movies are based on a book series. You can modify the inputs and the content of the curl call to experiment with this API further.
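As with the text model, you can extract only the reply by piping the response through jq. This is a minimal sketch: it reuses the MODEL_ID and PROJECT_ID variables set above and assumes the reply is returned at predictions[0].candidates[0].content, the usual location for chat responses from this model family.

# Ask a short follow-up question and print only the model's reply.
# Assumes MODEL_ID and PROJECT_ID are still set from the previous command.
CHAT_RESPONSE=$(curl -s \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \
'{"instances": [{"context": "My name is Ned. You are my personal assistant.", "messages": [{"author": "user", "content": "Who do you work for?"}]}], "parameters": {"temperature": 0.3, "maxDecodeSteps": 200, "topP": 0.8, "topK": 40}}')

# Extract just the reply text from the JSON response.
echo "${CHAT_RESPONSE}" | jq -r '.predictions[0].candidates[0].content'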

  2. Run the following command to save the API response to a text file:
MODEL_ID="chat-bison" PROJECT_ID=$DEVSHELL_PROJECT_ID curl \ -X POST \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:predict -d \ '{ "instances": [{ "context": "My name is Ned. You are my personal assistant. My favorite movies are Lord of the Rings and Hobbit.", "examples": [ { "input": {"content": "Who do you work for?"}, "output": {"content": "I work for Ned."} }, { "input": {"content": "What do I like?"}, "output": {"content": "Ned likes watching movies."} }], "messages": [ { "author": "user", "content": "Are my favorite movies based on a book series?", }], }], "parameters": { "temperature": 0.3, "maxDecodeSteps": 200, "topP": 0.8, "topK": 40 } }' > sample_chat_prompts.txt
  3. Run the following command to copy this file to a pre-created Cloud Storage bucket to track your progress:
export PROJECT_ID=$(gcloud config get-value project)
gsutil cp sample_chat_prompts.txt gs://$PROJECT_ID

Click Check my progress to verify the objectives.

Sample chat prompts

Congratulations!

In this lab, you learned how to use the Vertex AI PaLM API to create text and chat responses. You have taken the first step on your journey with the Vertex AI PaLM API and other Generative AI development tools!

Next Steps

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated November 21, 2023

Lab Last Tested: November 21, 2023

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.