Getting Started with the Gemini API in Vertex AI
GSP1209
Overview
Gemini is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases. The Gemini API gives you access to the Gemini 1.0 Pro and Gemini 1.0 Pro Vision models. In this lab, you will learn how to use the Gemini API in Vertex AI to interact with the Gemini 1.0 Pro (gemini-1.0-pro) model and the Gemini 1.0 Pro Vision (gemini-1.0-pro-vision) model.
Gemini API in Vertex AI
The Gemini API in Vertex AI provides a unified interface for interacting with Gemini models. There are currently two models available in the Gemini API:
- Gemini 1.0 Pro model (gemini-1.0-pro): Designed to handle natural language tasks, multi-turn text and code chat, and code generation.
- Gemini 1.0 Pro Vision model (gemini-1.0-pro-vision): Supports multimodal prompts. You can include text, images, and video in your prompt requests and get text or code responses.
You can interact with the Gemini API using the following methods:
- Use the Vertex AI Studio for quick testing and command generation
- Use cURL commands
- Use the Vertex AI SDK
This lab focuses on using the Vertex AI SDK for Python to call the Gemini API in Vertex AI.
For more information, see the Generative AI on Vertex AI documentation.
Prerequisites
Before starting this lab, you should have familiarity with the following concepts:
- Basic understanding of Python programming
- General knowledge of how APIs work
- Running Python code in a Jupyter notebook in Vertex AI Workbench
Objectives
In this lab, you will learn how to perform the following tasks:
- Install the Vertex AI SDK for Python
- Use the Gemini 1.0 Pro (gemini-1.0-pro) model to generate text
- Use the Gemini 1.0 Pro Vision (gemini-1.0-pro-vision) multimodal model to generate text from a combination of text, images, and video
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
- Time to complete the lab---remember, once you start, you cannot pause a lab.
How to start your lab and sign in to the Google Cloud console
1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
   - The Open Google Cloud console button
   - Time remaining
   - The temporary credentials that you must use for this lab
   - Other information, if needed, to step through this lab
2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
   The lab spins up resources, and then opens another tab that shows the Sign in page.
   Tip: Arrange the tabs in separate windows, side-by-side.
   Note: If you see the Choose an account dialog, click Use Another Account.
3. If necessary, copy the Username below and paste it into the Sign in dialog.
   {{{user_0.username | "Username"}}}
   You can also find the Username in the Lab Details panel.
4. Click Next.
5. Copy the Password below and paste it into the Welcome dialog.
   {{{user_0.password | "Password"}}}
   You can also find the Password in the Lab Details panel.
6. Click Next.
   Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
   Note: Using your own Google Cloud account for this lab may incur extra charges.
7. Click through the subsequent pages:
   - Accept the terms and conditions.
   - Do not add recovery options or two-factor authentication (because this is a temporary account).
   - Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.
Task 1. Open the notebook in Vertex AI Workbench
1. In the Google Cloud Console, on the Navigation menu, click Vertex AI > Workbench.
2. Find the instance and click on the Open JupyterLab button.
The JupyterLab interface for your Workbench instance will open in a new browser tab.
Task 2. Set up the notebook
1. Click on the file.
2. Run through the Getting Started and Import libraries sections of the notebook.
   - For Project ID, use , and for the Location, use .
In the following sections, you will run through the notebook cells to see how to use the Gemini API in Vertex AI.
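The setup step boils down to initializing the SDK with your project and location. Here is a minimal sketch, assuming the Vertex AI SDK for Python (the google-cloud-aiplatform package) is installed; the project ID and location strings are placeholders for the values in your Lab Details panel:

```python
# Minimal setup sketch. The import sits inside the function so the helper
# can be defined even without the SDK installed; in the notebook you would
# simply run `import vertexai` at the top of a cell.
def init_vertexai(project_id: str, location: str) -> None:
    import vertexai  # provided by the google-cloud-aiplatform package
    vertexai.init(project=project_id, location=location)

# In the notebook, something like (placeholders shown):
# init_vertexai("your-project-id", "us-central1")
```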
Task 3. Use the Gemini 1.0 Pro model
The Gemini 1.0 Pro (gemini-1.0-pro) model is designed to handle natural language tasks, multi-turn text and code chat, and code generation. In this task, run through the notebook cells to see how to use the Gemini 1.0 Pro model to generate text from text prompts.
Generate text from text prompts
Send a text prompt to the model. The Gemini 1.0 Pro (gemini-1.0-pro) model provides a streaming response mechanism: you don't need to wait for the complete response, because you can start processing fragments as soon as they arrive.
- Run through the Generate text from text prompts section of the notebook.
Click Check my progress to verify the objectives.
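The streaming behavior described above can be sketched as follows. This is a hedged example, assuming vertexai.init() has already been run (Task 2); the prompt is illustrative, not the one used by the notebook:

```python
def stream_text(prompt: str) -> str:
    """Stream a gemini-1.0-pro response, printing fragments as they arrive."""
    from vertexai.generative_models import GenerativeModel
    model = GenerativeModel("gemini-1.0-pro")
    pieces = []
    # stream=True yields partial responses; each chunk can be processed
    # immediately instead of waiting for the full answer.
    for chunk in model.generate_content(prompt, stream=True):
        print(chunk.text, end="", flush=True)
        pieces.append(chunk.text)
    return "".join(pieces)

# Example call (requires lab credentials):
# stream_text("Why is the sky blue?")
```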
Task 4. Use the Gemini 1.0 Pro Vision model
Gemini 1.0 Pro Vision (gemini-1.0-pro-vision) is a multimodal model: you can include text, image(s), and video in your prompt requests and get text or code responses. In this task, run through the notebook cells to see how to use the Gemini 1.0 Pro Vision model to generate text from text and image prompts, and then generate text from a video file.
Generate text from local image and text
- Run through the Generate text from local image and text section of the notebook.
Click Check my progress to verify the objective.
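The local-image flow in this section can be sketched as below, assuming vertexai.init() has already been run; the file path and question are placeholders:

```python
def ask_about_local_image(image_path: str, question: str) -> str:
    """Send a local image plus a text question to gemini-1.0-pro-vision."""
    from vertexai.generative_models import GenerativeModel, Image
    model = GenerativeModel("gemini-1.0-pro-vision")
    image = Image.load_from_file(image_path)  # reads the image bytes from disk
    # The prompt is a list mixing modalities: an image part and a text part.
    response = model.generate_content([image, question])
    return response.text

# Example call (requires lab credentials and a local file):
# ask_about_local_image("image.jpg", "Describe this image.")
```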
Generate text from text and image prompts
- Run through the Generate text from text & image(s) section of the notebook.

Click Check my progress to verify the objective: Generate text from text and image(s).
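For images stored in Cloud Storage, the notebook's approach can be sketched with Part.from_uri, which needs a MIME type. The guess_image_mime helper below is our own illustrative addition (not part of the SDK), and the URIs are placeholders:

```python
import mimetypes

def guess_image_mime(uri: str) -> str:
    """Infer a MIME type from the file extension (pure helper, illustrative)."""
    mime, _ = mimetypes.guess_type(uri)
    return mime or "image/jpeg"  # fall back to JPEG if the extension is unknown

def ask_about_images(image_uris, prompt: str) -> str:
    """Send one or more Cloud Storage images plus a text prompt to the model."""
    from vertexai.generative_models import GenerativeModel, Part
    model = GenerativeModel("gemini-1.0-pro-vision")
    contents = [Part.from_uri(u, mime_type=guess_image_mime(u)) for u in image_uris]
    contents.append(prompt)  # text goes in the same contents list
    return model.generate_content(contents).text

# Example call (requires lab credentials):
# ask_about_images(["gs://your-bucket/photo.png"], "What do these images have in common?")
```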
Combining multiple images and text prompts for few-shot prompting
- Run through the Combining multiple images and text prompts for few-shot prompting section of the notebook.

Click Check my progress to verify the objective: Perform few-shot prompting.
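Few-shot prompting with images works by interleaving labeled example images with a final unlabeled one, so the model completes the pattern. A sketch under the same assumptions as above; build_few_shot_contents is our own illustrative helper, and the "image/jpeg" MIME type and "caption:" label are assumptions:

```python
def build_few_shot_contents(examples, query_uri: str):
    """Pure helper: interleave (image URI, caption) pairs with a final query image.

    Returns ("image", uri) / ("text", caption) tuples to be mapped onto
    SDK parts when calling the model.
    """
    contents = []
    for uri, caption in examples:
        contents.append(("image", uri))
        contents.append(("text", caption))
    contents.append(("image", query_uri))
    contents.append(("text", "caption:"))  # ask the model to complete the pattern
    return contents

def run_few_shot(examples, query_uri: str) -> str:
    from vertexai.generative_models import GenerativeModel, Part
    model = GenerativeModel("gemini-1.0-pro-vision")
    parts = [Part.from_uri(item, mime_type="image/jpeg") if kind == "image" else item
             for kind, item in build_few_shot_contents(examples, query_uri)]
    return model.generate_content(parts).text
```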
Generate text from a video file
- Run through the Generate text from a video file section of the notebook.

Click Check my progress to verify the objective: Generate text from a video file.
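A video prompt follows the same pattern as images, with a video MIME type. A minimal sketch, assuming vertexai.init() has been run and the video is an MP4 file in Cloud Storage; the URI and prompt are placeholders:

```python
def ask_about_video(video_uri: str, prompt: str) -> str:
    """Send a Cloud Storage video plus a text prompt to gemini-1.0-pro-vision."""
    from vertexai.generative_models import GenerativeModel, Part
    model = GenerativeModel("gemini-1.0-pro-vision")
    video = Part.from_uri(video_uri, mime_type="video/mp4")  # assumes an MP4 file
    return model.generate_content([prompt, video]).text

# Example call (requires lab credentials):
# ask_about_video("gs://your-bucket/sample.mp4", "What is shown in this video?")
```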
Congratulations!
In this lab, you used the Gemini API in Vertex AI with the Vertex AI SDK for Python to interact with two models: the Gemini 1.0 Pro (gemini-1.0-pro) model and the Gemini 1.0 Pro Vision (gemini-1.0-pro-vision) model. Through these exercises, you gained practical experience with the capabilities of the Gemini API in Vertex AI and its integration with the Python SDK.
Next steps / learn more
- Check out the Generative AI on Vertex AI documentation.
- Learn more about Generative AI on the Google Cloud Tech YouTube channel.
- Google Cloud Generative AI official repo
- Example Gemini notebooks
Google Cloud training and certification
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated November 21, 2024
Lab Last Tested November 31, 2024
Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.