Joe Boyd
Member since 2023
Diamond League
37025 points
This course equips machine learning practitioners with the essential tools, techniques, and best practices for evaluating both generative and predictive AI models. Model evaluation is a critical discipline for ensuring that ML systems deliver reliable, accurate, and high-performing results in production. Participants will gain a deep understanding of various evaluation metrics, methodologies, and their appropriate application across different model types and tasks. The course will emphasize the unique challenges posed by generative AI models and provide strategies for tackling them effectively. By leveraging Google Cloud's Vertex AI platform, participants will learn how to implement robust evaluation processes for model selection, optimization, and continuous monitoring.
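As a small illustration of the kind of predictive-model evaluation the course covers, the sketch below computes a few standard classification metrics with scikit-learn; the labels, scores, and 0.5 threshold are made-up placeholders rather than anything prescribed by the course.

```python
# Minimal sketch: common classification metrics for a predictive model.
# The labels and scores below are hypothetical placeholders.
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # ground-truth labels
y_score = [0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6]   # model probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]     # thresholded predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, y_score))
```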
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Learners will get hands-on practice using Vertex AI Feature Store's streaming ingestion at the SDK layer.
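As a rough sketch of what streaming ingestion at the SDK layer can look like, the snippet below writes feature values directly to a Feature Store entity type with the Vertex AI Python SDK. The project, region, featurestore, entity type, and feature IDs are placeholder assumptions, and the exact method surface may vary by SDK version.

```python
# Hedged sketch: streaming ingestion into Vertex AI Feature Store via the Python SDK.
# Project, region, and resource IDs below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Look up an existing entity type inside an existing featurestore.
entity_type = aiplatform.featurestore.EntityType(
    entity_type_name="users",
    featurestore_id="my_featurestore",
)

# Stream feature values for a single entity; inner keys are feature IDs.
entity_type.write_feature_values(
    instances={"user_123": {"age": 34, "lifetime_value": 250.0}}
)
```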
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best performing models.
Complete the intermediate Develop Gen AI Apps with Gemini and Streamlit skill badge course to demonstrate skills in text generation, applying function calls with the Python SDK and Gemini API, and deploying a Streamlit application with Cloud Run. In this course, you learn Gemini prompting, test Streamlit apps in Cloud Shell, and deploy them as Docker containers in Cloud Run.
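A minimal sketch of the pattern this badge exercises is shown below: a Streamlit app that sends a user prompt to Gemini through the Vertex AI Python SDK. The project ID, region, and model name are assumptions, not values from the course.

```python
# Hedged sketch: a minimal Streamlit app that sends a prompt to Gemini on Vertex AI.
# Project ID, region, and model name are placeholder assumptions.
import streamlit as st
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

st.title("Gemini text generation demo")
prompt = st.text_input("Enter a prompt")

if prompt:
    response = model.generate_content(prompt)
    st.write(response.text)
```

In practice, an app like this is packaged with a Dockerfile that runs `streamlit run` and then deployed as a container to Cloud Run, which is the workflow the lab walks through.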
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps you use Google products and services to develop, test, deploy, and manage applications. With help from Gemini, you learn how to develop and build a web application, fix errors in the application, develop tests, and query data. Using a hands-on lab, you experience how Gemini improves the software development lifecycle (SDLC). Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps engineers manage infrastructure. You learn how to prompt Gemini to find and understand application logs, create a GKE cluster, and investigate how to create a build environment. Using a hands-on lab, you experience how Gemini improves the DevOps workflow. Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps you secure your cloud environment and resources. You learn how to deploy example workloads into an environment in Google Cloud, identify security misconfigurations with Gemini, and remediate security misconfigurations with Gemini. Using a hands-on lab, you experience how Gemini improves your cloud security posture. Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps network engineers create, update, and maintain VPC networks. You learn how to prompt Gemini to provide specific guidance for your networking tasks, beyond what you would receive from a search engine. Using a hands-on lab, you experience how Gemini makes it easier for you to work with Google Cloud VPC networks. Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps analyze customer data and predict product sales. You also learn how to identify, categorize, and develop new customers using customer data in BigQuery. Using hands-on labs, you experience how Gemini improves data analysis and machine learning workflows. Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps administrators provision infrastructure. You learn how to prompt Gemini to explain infrastructure, deploy GKE clusters and update existing infrastructure. Using a hands-on lab, you experience how Gemini improves the GKE deployment workflow. Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps developers build applications. You learn how to prompt Gemini to explain code, recommend Google Cloud services, and generate code for your applications. Using a hands-on lab, you experience how Gemini improves the application development workflow. Duet AI was renamed to Gemini, our next-generation model.
This course introduces concepts of AI interpretability and transparency. It discusses the importance of AI transparency for developers and engineers. It explores practical methods and tools to help achieve interpretability and transparency in both data and AI models.
This course is dedicated to equipping you with the knowledge and tools needed to uncover the unique challenges faced by MLOps teams when deploying and managing Generative AI models, and exploring how Vertex AI empowers AI teams to streamline MLOps processes and achieve success in Generative AI projects.
This course introduces concepts of responsible AI and AI principles. It covers techniques to practically identify fairness and bias and mitigate bias in AI/ML practices. It explores practical methods and tools to implement Responsible AI best practices using Google Cloud products and open source tools.
Complete the intermediate Inspect Rich Documents with Gemini Multimodality and Multimodal RAG skill badge to demonstrate skills in the following: using multimodal prompts to extract information from text and visual data, generating a video description, and retrieving extra information beyond the video using multimodality with Gemini; building metadata of documents containing text and images, getting all relevant text chunks, and printing citations by using Multimodal Retrieval Augmented Generation (RAG) with Gemini. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
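As an illustration of the multimodal prompting this badge covers, the sketch below mixes an image and a question in a single Gemini request. The Cloud Storage URI and model name are placeholder assumptions.

```python
# Hedged sketch: a multimodal Gemini prompt combining an image and a text question.
# The Cloud Storage URI and model name are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

image = Part.from_uri("gs://my-bucket/product-photo.jpg", mime_type="image/jpeg")
response = model.generate_content(
    [image, "Describe this image and list any text it contains."]
)
print(response.text)
```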
Explore AI-powered search technologies, tools, and applications in this course. Learn about semantic search using vector embeddings, hybrid search that combines semantic and keyword approaches, and retrieval-augmented generation (RAG), which grounds responses in retrieved content to minimize AI hallucinations. Gain practical experience with Vertex AI Vector Search to build your own intelligent search engine.
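A toy version of the semantic-search idea is sketched below: embed a handful of documents and a query, then rank documents by cosine similarity. The embedding model name is an assumption, and at scale Vertex AI Vector Search would replace the brute-force numpy comparison.

```python
# Hedged sketch: semantic search with vector embeddings and cosine similarity.
# The embedding model name and documents are illustrative placeholders.
import numpy as np
import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-project", location="us-central1")
model = TextEmbeddingModel.from_pretrained("text-embedding-004")

documents = [
    "How to reset a Cloud SQL root password",
    "Pricing for BigQuery on-demand queries",
    "Deploying a container to Cloud Run",
]
doc_vectors = np.array([e.values for e in model.get_embeddings(documents)])
query_vector = np.array(model.get_embeddings(["run my app in the cloud"])[0].values)

# Cosine similarity between the query and every document.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print(documents[int(np.argmax(scores))])
```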
This course introduces Vertex AI Studio, a tool to interact with generative AI models, prototype business ideas, and launch them into production. Through an immersive use case, engaging lessons, and a hands-on lab, you’ll explore the prompt-to-product lifecycle and learn how to leverage Vertex AI Studio for Gemini multimodal applications, prompt design, prompt engineering, and model tuning. The aim is to enable you to unlock the potential of gen AI in your projects with Vertex AI Studio.
This course teaches you how to create an image captioning model by using deep learning. You learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. By the end of this course, you will be able to create your own image captioning models and use them to generate captions for images.
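A skeletal version of the encoder-decoder pairing described here is sketched below in Keras: a small CNN encoder produces image features, and an LSTM decoder conditioned on those features predicts caption words. Vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the course's exact architecture.

```python
# Hedged sketch: the skeleton of an encoder-decoder image captioning model in Keras.
# Vocabulary size, sequence length, and layer sizes are illustrative assumptions.
import tensorflow as tf

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 5000, 20, 256

# Encoder: a small CNN turns an image into a fixed-length feature vector.
image_input = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(image_input)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
image_features = tf.keras.layers.Dense(EMBED_DIM, activation="relu")(x)

# Decoder: an LSTM conditioned on the image features predicts the next caption word.
caption_input = tf.keras.Input(shape=(MAX_LEN,))
embedded = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(caption_input)
decoder_out = tf.keras.layers.LSTM(EMBED_DIM)(
    embedded, initial_state=[image_features, image_features]
)
next_word = tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax")(decoder_out)

model = tf.keras.Model([image_input, caption_input], next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```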
This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. This course is estimated to take approximately 45 minutes to complete.
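The self-attention mechanism mentioned above reduces to a single matrix expression, softmax(QKᵀ/√d)V; the numpy sketch below shows that computation for a toy sequence, with shapes and weights chosen purely for illustration.

```python
# Hedged sketch: scaled dot-product self-attention, the core operation inside
# a Transformer layer. Shapes and values are illustrative.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_*: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq_len, seq_len) attention logits
    weights = softmax(scores)                 # each row sums to 1
    return weights @ v                        # weighted sum of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                   # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```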
This course gives you a synopsis of the encoder-decoder architecture, which is a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you'll code a simple implementation of the encoder-decoder architecture in TensorFlow from scratch for poetry generation.
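A bare-bones Keras version of the encoder-decoder pattern for text is sketched below: the encoder's final states initialize the decoder, which predicts the target sequence. The vocabulary sizes and layer dimensions are illustrative assumptions, not the lab's exact model.

```python
# Hedged sketch: a bare-bones encoder-decoder (seq2seq) model in Keras.
# Vocabulary sizes and dimensions are illustrative assumptions.
import tensorflow as tf

SRC_VOCAB, TGT_VOCAB, UNITS = 8000, 8000, 128

# Encoder reads the source sequence and summarizes it into its final states.
enc_in = tf.keras.Input(shape=(None,))
enc_emb = tf.keras.layers.Embedding(SRC_VOCAB, UNITS)(enc_in)
_, state_h, state_c = tf.keras.layers.LSTM(UNITS, return_state=True)(enc_emb)

# Decoder generates the target sequence, initialized from the encoder states.
dec_in = tf.keras.Input(shape=(None,))
dec_emb = tf.keras.layers.Embedding(TGT_VOCAB, UNITS)(dec_in)
dec_out = tf.keras.layers.LSTM(UNITS, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c]
)
logits = tf.keras.layers.Dense(TGT_VOCAB)(dec_out)

model = tf.keras.Model([enc_in, dec_in], logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```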
This course will introduce you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works, and how it can be used to improve the performance of a variety of machine learning tasks, including machine translation, text summarization, and question answering. This course is estimated to take approximately 45 minutes to complete.
This course introduces diffusion models, a family of machine learning models that recently showed promise in the image generation space. Diffusion models draw inspiration from physics, specifically thermodynamics. Within the last few years, diffusion models became popular in both research and industry. Diffusion models underpin many state-of-the-art image generation models and tools on Google Cloud. This course introduces you to the theory behind diffusion models and how to train and deploy them on Vertex AI.
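The physics-inspired idea behind diffusion models starts with a forward process that gradually adds Gaussian noise to an image; the numpy sketch below shows that noising step under a simple linear variance schedule (the schedule values and toy "image" are illustrative).

```python
# Hedged sketch: the forward (noising) step of a diffusion model in numpy.
# Schedule parameters and the toy image are illustrative.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear variance schedule
alphas_bar = np.cumprod(1.0 - betas)      # cumulative product of (1 - beta_t)

def noise_image(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps, eps

x0 = np.random.default_rng(1).uniform(-1, 1, size=(32, 32, 3))  # a toy "image"
x_t, eps = noise_image(x0, t=500)
# A denoising network would be trained to predict eps from (x_t, t).
print(x_t.shape)
```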
Complete the introductory Prompt Design in Vertex AI skill badge to demonstrate skills in the following: prompt engineering, image analysis, and multimodal generative techniques, within Vertex AI. Discover how to craft effective prompts, guide generative AI output, and apply Gemini models to real-world marketing scenarios.
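A small example of the prompt-engineering pattern this badge practices is sketched below: a few-shot prompt for a marketing-style task sent to Gemini with an explicit generation configuration. The project, region, model name, and example taglines are assumptions for illustration only.

```python
# Hedged sketch: a few-shot prompt applied to a marketing-style task with Gemini.
# Project, region, model name, and example content are placeholder assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig

vertexai.init(project="my-project", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

prompt = """Write a one-sentence product tagline in the same style as the examples.

Product: reusable water bottle -> Tagline: Hydration that never goes to waste.
Product: noise-cancelling headphones -> Tagline: Hear only what matters.
Product: solar phone charger -> Tagline:"""

response = model.generate_content(
    prompt,
    generation_config=GenerationConfig(temperature=0.4, max_output_tokens=64),
)
print(response.text)
```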
As the use of enterprise Artificial Intelligence and Machine Learning continues to grow, so too does the importance of building it responsibly. A challenge for many is that talking about responsible AI can be easier than putting it into practice. If you’re interested in learning how to operationalize responsible AI in your organization, this course is for you. In this course, you will learn how Google Cloud does this today, together with best practices and lessons learned, to serve as a framework for you to build your own responsible AI approach.
This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it's important, and how Google implements responsible AI in its products. It also introduces Google's 7 AI principles.
This is an introductory-level microlearning course that explores what large language models (LLMs) are, the use cases where they can be used, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own Gen AI apps.
This is an introductory-level microlearning course aimed at explaining what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own Gen AI apps.
This course covers how to implement the various flavors of production ML systems— static, dynamic, and continuous training; static and dynamic inference; and batch and online processing. You delve into TensorFlow abstraction levels, the various options for doing distributed training, and how to write distributed training models with custom estimators. This is the second course of the Advanced Machine Learning on Google Cloud series. After completing this course, enroll in the Image Understanding with TensorFlow on Google Cloud course.
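As a small illustration of one of the distributed-training options the course discusses, the sketch below uses synchronous data parallelism with tf.distribute.MirroredStrategy and a Keras model; the model, data, and batch size are toy placeholders.

```python
# Hedged sketch: synchronous data-parallel training with tf.distribute.MirroredStrategy.
# The model, data, and hyperparameters are toy placeholders.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()        # replicates the model on each local device
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():                             # variables created here are mirrored
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy data; each batch is split across the replicas during training.
x, y = np.random.rand(1024, 10), np.random.rand(1024, 1)
model.fit(x, y, batch_size=64, epochs=1, verbose=0)
```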
This course takes a real-world approach to the ML Workflow through a case study. An ML team faces several ML business requirements and use cases. The team must understand the tools required for data management and governance and consider the best approach for data preprocessing. The team is presented with three options to build ML models for two use cases. The course explains why they would use AutoML, BigQuery ML, or custom training to achieve their objectives.
This course explores the benefits of using Vertex AI Feature Store, how to improve the accuracy of ML models, and how to find which data columns make the most useful features. This course also includes content and labs on feature engineering using BigQuery ML, Keras, and TensorFlow.
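A tiny example of the Keras side of feature engineering is sketched below: a Normalization layer standardizes a numeric column and a StringLookup layer one-hot encodes a categorical one. The feature names and values are illustrative, not taken from the course labs.

```python
# Hedged sketch: feature engineering with Keras preprocessing layers.
# Feature names and values are illustrative placeholders.
import numpy as np
import tensorflow as tf

# Numeric feature: standardize with a Normalization layer adapted to the data.
fare = np.array([[12.0], [7.5], [30.0], [9.9]], dtype="float32")
normalizer = tf.keras.layers.Normalization()
normalizer.adapt(fare)

# Categorical feature: vocabulary lookup followed by one-hot encoding.
payment_type = np.array([["card"], ["cash"], ["card"], ["mobile"]])
lookup = tf.keras.layers.StringLookup(output_mode="one_hot")
lookup.adapt(payment_type)

print(normalizer(fare).numpy().round(2))
print(lookup(payment_type).numpy())
```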
This course covers building ML models with TensorFlow and Keras, improving the accuracy of ML models and writing ML models for scaled use.
The course begins with a discussion about data: how to improve data quality and perform exploratory data analysis. We describe Vertex AI AutoML and how to build, train, and deploy an ML model without writing a single line of code. You will understand the benefits of BigQuery ML. We then discuss how to optimize a machine learning (ML) model and how generalization and sampling can help assess the quality of ML models for custom training.
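To make the BigQuery ML option concrete, the sketch below trains a logistic regression model and evaluates it from Python using the BigQuery client; the dataset, table, and column names are placeholders.

```python
# Hedged sketch: training and evaluating a model with BigQuery ML from Python.
# Dataset, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Train a logistic regression model directly in BigQuery.
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT tenure_months, monthly_charges, churned
    FROM `my_dataset.customers`
""").result()

# Inspect standard evaluation metrics for the trained model.
for row in client.query(
    "SELECT * FROM ML.EVALUATE(MODEL `my_dataset.churn_model`)"
).result():
    print(dict(row))
```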
This course introduces the AI and machine learning (ML) offerings on Google Cloud that build both predictive and generative AI projects. It explores the technologies, products, and tools available throughout the data-to-AI life cycle, encompassing AI foundations, development, and solutions. It aims to help data scientists, AI developers, and ML engineers enhance their skills and knowledge through engaging learning experiences and practical hands-on exercises.