In this lab, you will create containerized training applications for ML models in TensorFlow, PyTorch, XGBoost, and Scikit-learn. You will then use these images as ops in a Kubeflow pipeline and train multiple models in parallel. Finally, you will set up recurring runs of the Kubeflow pipeline in the UI.
Objectives
Create the training script
Package training script into a Docker Image
Build and push training image to Google Cloud Container Registry
Build a Kubeflow pipeline that queries BigQuery to create training/validation splits and export results as sharded CSV files in GCS
Launch AI Platform training jobs with the four containerized training applications, using the exported CSV data as input
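The train/validation split described in these objectives is commonly done in BigQuery with a hash-based sampling query, so the same row always lands in the same split. The sketch below is an illustration of that pattern, not the lab's actual code; the table and column names are hypothetical placeholders.

```python
# Sketch of a repeatable BigQuery train/validation split, a pattern
# commonly used in pipelines like the one built in this lab.
# Table and column names are hypothetical placeholders.

def build_split_query(table: str, key_column: str, split: str,
                      valid_buckets: int = 10) -> str:
    """Return a query selecting a deterministic train or validation slice.

    FARM_FINGERPRINT hashes the key column, so a given row always falls
    into the same bucket; one bucket out of `valid_buckets` is held out
    for validation.
    """
    op = "=" if split == "validation" else "!="
    return (
        f"SELECT * FROM `{table}` "
        f"WHERE MOD(ABS(FARM_FINGERPRINT(CAST({key_column} AS STRING))), "
        f"{valid_buckets}) {op} 0"
    )

train_sql = build_split_query("project.dataset.table", "id", "training")
valid_sql = build_split_query("project.dataset.table", "id", "validation")
print(train_sql)
print(valid_sql)
```

In the pipeline, queries like these feed a BigQuery op whose results are exported as sharded CSV files in GCS, which the four training ops then consume.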
Setup and requirements
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning.
When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges.
Accept the terms and skip the recovery resource page.
Activate Cloud Shell
Cloud Shell is a virtual machine that contains development tools. It offers a persistent 5-GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources. gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.
Click the Activate Cloud Shell button at the top right of the console.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are also authenticated, and the project is set to your PROJECT_ID.
Click Check my progress to verify the objective.
Add the Editor permission for Cloud Build service account
Task 3. Create an instance of AI Platform Pipelines
In the Google Cloud Console, on the Navigation menu, scroll down to AI Platform and pin the section for easier access later in the lab.
Navigate to AI Platform > Pipelines.
Then click New Instance.
Click Configure.
To create the cluster, select a Zone, check Allow access to the following Cloud APIs, leave the name as is, and then click Create New Cluster.
Note:
The cluster creation will take 3 - 5 minutes. You need to wait until this step completes before you proceed to the next step.
Scroll to the bottom of the page, accept the marketplace terms, and click Deploy. You will see the individual services of KFP deployed to your GKE cluster. Wait for the deployment to finish before proceeding to the next task.
In Cloud Shell, run the following command to configure kubectl command-line access:
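The command itself is not shown in these instructions. A typical invocation, assuming the default cluster name cluster-1 and the zone you selected when creating the cluster (both placeholders here), would be:

```shell
# Fetch credentials for the GKE cluster backing your KFP deployment so
# that kubectl targets it. The cluster name and zone are placeholders;
# substitute the values you chose when creating the cluster.
gcloud container clusters get-credentials cluster-1 \
  --zone us-central1-a \
  --project $DEVSHELL_PROJECT_ID
```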
In Cloud Shell, run the following command to get the ENDPOINT of your KFP deployment:
kubectl describe configmap inverse-proxy-config | grep googleusercontent.com
Important: In task 6, you will need to set the endpoint for your KFP in one of the cells in your notebook. Remember to use the above output as your ENDPOINT.
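If you prefer to extract the endpoint programmatically rather than copying it by hand, a small helper like the one below works. The sample output string is invented for illustration; the actual hostname in your lab will differ.

```python
import re

# Hypothetical sample of `kubectl describe configmap inverse-proxy-config`
# output; the real hostname in your lab will differ.
sample_output = (
    "Hostname: 1a2b3c4d5e-dot-us-central1.pipelines.googleusercontent.com"
)

def extract_endpoint(text: str) -> str:
    """Pull the KFP inverse-proxy hostname out of the configmap output."""
    match = re.search(r"\S+\.googleusercontent\.com", text)
    if not match:
        raise ValueError("no googleusercontent.com hostname found")
    return match.group(0)

endpoint = extract_endpoint(sample_output)
print(endpoint)
# In the notebook, this value is what you would pass as ENDPOINT,
# e.g. client = kfp.Client(host=endpoint)
```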
Click Check my progress to verify the objective.
Creating an instance of AI Platform Pipelines
Task 4. Create an instance of Vertex AI Platform Notebooks
An instance of Vertex AI Platform Notebooks is used as a primary experimentation/development workbench.
In the Cloud Console, on the Navigation menu, click Vertex AI > Workbench.
Click ENABLE NOTEBOOKS API if it is not enabled yet.
On the Workbench page, click CREATE NEW.
In the New instance dialog, select the Region and the Zone specified for your lab.
Next, select Debian 10 as the Operating system, and select Python 3 (with Intel MKL and CUDA 11.3) as the Environment.
Leave all other fields as default, and then click Create.
Note: It may take 5 minutes for the notebook instance to appear.
Note: Wait for the instance to become available before proceeding to the next step.
Click Open JupyterLab. A JupyterLab window will open in a new tab.
When the "Build recommended" pop-up displays, click Build. If the build fails, you can ignore the error.
Click Check my progress to verify the objective.
Create an instance of AI Platform Notebooks
Task 5. Clone the example repo within your AI Platform Notebooks instance
To clone the mlops-on-gcp notebook in your JupyterLab instance:
In JupyterLab, click the Terminal icon to open a new terminal.
At the command-line prompt, type in the following command and press Enter:
git clone https://github.com/GoogleCloudPlatform/mlops-on-gcp
Note: If the cloned repo does not appear in the JupyterLab UI, you can use the top line menu and under Git > Clone a repository, clone the repo (https://github.com/GoogleCloudPlatform/mlops-on-gcp) using the UI.
Confirm that you have cloned the repository by double clicking on the mlops-on-gcp directory and ensuring that you can see its contents. The files for all the Jupyter notebook-based labs throughout this course are available in this directory.
Run the following command to install the necessary packages:
pip install kfp==0.2.5 fire gcsfs requests-toolbelt==0.10.1
Task 6. Navigate to the lab notebook
In JupyterLab UI, navigate to mlops-on-gcp/continuous_training/kubeflow/labs and open multiple_frameworks_lab.ipynb.
Clear all the cells in the notebook (look for the Clear button on the notebook toolbar), and then run the cells one by one. Note that some cells contain a #TODO where you must write code before running the cell.
When prompted, come back to these instructions to check your progress.
If you need more help, you can review the complete solution by navigating to mlops-on-gcp/continuous_training/kubeflow/solutions and opening multiple_frameworks_kubeflow.ipynb.
Task 7. Run your training job in the cloud
Test completed tasks - Create the BigQuery dataset and table, build the images, and push them to your project's Container Registry
Click Check my progress to verify the objective.
Create the BigQuery dataset and table, build the images, and push them to your project's Container Registry
Test completed tasks - Deploy your Kubeflow Pipeline
Click Check my progress to verify the objective.
Deploy your Kubeflow Pipeline
Test completed tasks - Create Pipeline Runs
Click Check my progress to verify the objective.
Create Pipeline Runs
Congratulations!
In this lab, you've learned how to develop a training application, package it as a Docker image, and run it on AI Platform Training.
End your lab
When you have completed your lab, click End Lab. Qwiklabs removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
1 star = Very dissatisfied
2 stars = Dissatisfied
3 stars = Neutral
4 stars = Satisfied
5 stars = Very satisfied
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.