Loch Ness Living
Member since 2023
Silver League
26,025 points
This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. This course is estimated to take approximately 45 minutes to complete.
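For a flavor of the self-attention mechanism covered in the course, here is a minimal TensorFlow sketch (not taken from the course materials; the tensor shapes and layer sizes are illustrative only):

```python
import tensorflow as tf

# Toy "sentence": 4 tokens, each embedded in 8 dimensions.
x = tf.random.normal((1, 4, 8))            # (batch, seq_len, d_model)

d_k = 8
w_q = tf.keras.layers.Dense(d_k, use_bias=False)
w_k = tf.keras.layers.Dense(d_k, use_bias=False)
w_v = tf.keras.layers.Dense(d_k, use_bias=False)

q, k, v = w_q(x), w_k(x), w_v(x)                       # queries, keys, values
scores = tf.matmul(q, k, transpose_b=True) / tf.math.sqrt(float(d_k))
weights = tf.nn.softmax(scores, axis=-1)               # each token attends to every token
output = tf.matmul(weights, v)                         # context-aware token representations
print(weights.shape, output.shape)                     # (1, 4, 4) and (1, 4, 8)
```

BERT stacks many layers of this kind of (multi-head) self-attention to build its bidirectional representations.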
This course gives you a synopsis of the encoder-decoder architecture, a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you'll code a simple implementation of the encoder-decoder architecture in TensorFlow, from scratch, for poetry generation.
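As a rough sketch of the idea (not the lab's exact code; the vocabulary size and layer widths below are placeholders), an encoder-decoder model in Keras might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, embed_dim, units = 5000, 128, 256   # illustrative sizes

# Encoder: reads the input sequence and summarizes it into a state vector.
enc_inputs = layers.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(vocab_size, embed_dim)(enc_inputs)
_, state_h, state_c = layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generates the output sequence, conditioned on the encoder state.
dec_inputs = layers.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(vocab_size, embed_dim)(dec_inputs)
dec_out, _, _ = layers.LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = layers.Dense(vocab_size)(dec_out)

model = tf.keras.Model([enc_inputs, dec_inputs], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```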
This course will introduce you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works, and how it can be used to improve the performance of a variety of machine learning tasks, including machine translation, text summarization, and question answering. This course is estimated to take approximately 45 minutes to complete.
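To illustrate what "focusing on specific parts of an input sequence" means in practice, here is a small sketch using Keras's built-in additive (Bahdanau-style) attention layer; the shapes are illustrative stand-ins for real encoder outputs and a decoder query:

```python
import tensorflow as tf

# Pretend encoder outputs for a 6-token source sentence (batch=1, dim=16),
# and a single decoder query vector for the word currently being generated.
encoder_outputs = tf.random.normal((1, 6, 16))
decoder_query = tf.random.normal((1, 1, 16))

attention = tf.keras.layers.AdditiveAttention()
context, weights = attention(
    [decoder_query, encoder_outputs], return_attention_scores=True)

print(weights.numpy().round(2))   # one weight per source token; they sum to 1
print(context.shape)              # (1, 1, 16): weighted summary of the source
```

The attention weights tell the decoder which source tokens matter most for the current output step, which is what improves tasks like machine translation.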
This course introduces diffusion models, a family of machine learning models that have recently shown promise in the image generation space. Diffusion models draw inspiration from physics, specifically thermodynamics. Within the last few years, diffusion models have become popular in both research and industry. Diffusion models underpin many state-of-the-art image generation models and tools on Google Cloud. This course introduces you to the theory behind diffusion models and how to train and deploy them on Vertex AI.
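The core idea of the forward (noising) process can be sketched in a few lines of TensorFlow; this is an illustration of the standard formulation, not code from the course, and the noise schedule values are just common defaults:

```python
import tensorflow as tf

# Forward process: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
T = 1000
betas = tf.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas_bar = tf.math.cumprod(1.0 - betas)

def noisy_image(x0, t):
    noise = tf.random.normal(tf.shape(x0))
    a_bar = alphas_bar[t]
    return tf.sqrt(a_bar) * x0 + tf.sqrt(1.0 - a_bar) * noise

x0 = tf.random.uniform((1, 32, 32, 3))    # stand-in for a training image
x_500 = noisy_image(x0, 500)              # halfway through: heavily corrupted
# The model is trained to predict the added noise, so it can reverse this
# process and generate images from pure noise.
```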
This course introduces the products and solutions available on Google Cloud for solving NLP problems. Additionally, it explores the processes, techniques, and tools for developing an NLP project with neural networks by using Vertex AI and TensorFlow.
Complete the introductory Prompt Design in Vertex AI skill badge to demonstrate skills in prompt engineering, image analysis, and multimodal generative techniques within Vertex AI. Discover how to craft effective prompts, guide generative AI output, and apply Gemini models to real-world marketing scenarios.
This course describes different types of computer vision use cases and then highlights different machine learning strategies for solving these use cases. The strategies vary from experimenting with pre-built ML models through pre-built ML APIs and AutoML Vision to building custom image classifiers using linear models, deep neural network (DNN) models, or convolutional neural network (CNN) models. The course shows how to improve a model's accuracy with augmentation, feature extraction, and fine-tuning hyperparameters while trying to avoid overfitting the data. The course also looks at practical issues that arise, for example, when one doesn't have enough data, and how to incorporate the latest research findings into different models. In the labs, learners get hands-on practice building and optimizing their own image classification models on a variety of public datasets.
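A minimal sketch of a custom CNN image classifier with built-in augmentation, in the spirit of what the labs cover (the input size, class count, and layer choices here are arbitrary examples, not the labs' exact model):

```python
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 5                                   # illustrative

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    # Augmentation layers: active during training only, help reduce overfitting.
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.3),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```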
This course covers how to implement the various flavors of production ML systems: static, dynamic, and continuous training; static and dynamic inference; and batch and online processing. You delve into TensorFlow abstraction levels, the various options for doing distributed training, and how to write distributed training models with custom estimators. This is the second course of the Advanced Machine Learning on Google Cloud series. After completing this course, enroll in the Image Understanding with TensorFlow on Google Cloud course.
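One of the distributed training options, synchronous data parallelism across local GPUs, can be sketched as follows; this uses the Keras API rather than the custom estimators the course discusses, and the model here is a trivial placeholder:

```python
import tensorflow as tf

# Synchronous data-parallel training across all local GPUs.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():                  # variables are mirrored across replicas
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) then shards each batch across the replicas automatically.
```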
This course takes a real-world approach to the ML Workflow through a case study. An ML team faces several ML business requirements and use cases. The team must understand the tools required for data management and governance and consider the best approach for data preprocessing. The team is presented with three options to build ML models for two use cases. The course explains why they would use AutoML, BigQuery ML, or custom training to achieve their objectives.
This course explores the benefits of using Vertex AI Feature Store, how to improve the accuracy of ML models, and how to find which data columns make the most useful features. This course also includes content and labs on feature engineering using BigQuery ML, Keras, and TensorFlow.
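As a small taste of the feature engineering side, here is a sketch using Keras preprocessing layers (the fare values and bucket boundaries are made up for illustration):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Common feature-engineering steps expressed as Keras layers, so the
# transformations ship with the model at serving time.
raw_fare = tf.constant([[4.5], [12.0], [33.7], [8.1]])

# Numeric feature: standardize to zero mean / unit variance.
normalizer = layers.Normalization()
normalizer.adapt(raw_fare)                        # learns mean and variance

# Same feature, bucketized into coarse ranges (often a more useful signal).
bucketizer = layers.Discretization(bin_boundaries=[5.0, 10.0, 20.0])

print(normalizer(raw_fare).numpy().round(2))
print(bucketizer(raw_fare).numpy())               # bucket index per example
```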
This course covers building ML models with TensorFlow and Keras, improving the accuracy of ML models, and writing ML models for scaled use.
The course begins with a discussion about data: how to improve data quality and perform exploratory data analysis. We describe Vertex AI AutoML and how to build, train, and deploy an ML model without writing a single line of code. You will understand the benefits of BigQuery ML. We then discuss how to optimize a machine learning (ML) model and how generalization and sampling can help assess the quality of ML models for custom training.
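The generalization-and-sampling idea boils down to holding out data the model never trains on; a minimal sketch (with a synthetic stand-in dataset) looks like this:

```python
import numpy as np

# Hold out data the model never trains on; the gap between training and
# validation error is a simple check on how well the model generalizes.
rng = np.random.default_rng(42)
n = 1000
features = rng.normal(size=(n, 4))          # stand-in dataset
labels = rng.integers(0, 2, size=n)

idx = rng.permutation(n)                    # random sampling before splitting
train_idx, val_idx = idx[:800], idx[800:]   # 80/20 split
x_train, y_train = features[train_idx], labels[train_idx]
x_val, y_val = features[val_idx], labels[val_idx]

print(len(x_train), "training examples,", len(x_val), "held out for validation")
```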
This course introduces the AI and machine learning (ML) offerings on Google Cloud that build both predictive and generative AI projects. It explores the technologies, products, and tools available throughout the data-to-AI life cycle, encompassing AI foundations, development, and solutions. It aims to help data scientists, AI developers, and ML engineers enhance their skills and knowledge through engaging learning experiences and practical hands-on exercises.
As the use of enterprise Artificial Intelligence and Machine Learning continues to grow, so too does the importance of building it responsibly. A challenge for many is that talking about responsible AI can be easier than putting it into practice. If you’re interested in learning how to operationalize responsible AI in your organization, this course is for you. In this course, you will learn how Google Cloud does this today, together with best practices and lessons learned, to serve as a framework for you to build your own responsible AI approach.
Earn a skill badge by completing the Introduction to Generative AI, Introduction to Large Language Models, and Introduction to Responsible AI courses. By passing the final quiz, you'll demonstrate your understanding of foundational concepts in generative AI. A skill badge is a digital badge issued by Google Cloud in recognition of your knowledge of Google Cloud products and services. Share your skill badge by making your profile public and adding it to your social media profile.
This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it's important, and how Google implements responsible AI in their products. It also introduces Google's 3 AI principles.
This is an introductory-level microlearning course that explores what large language models (LLMs) are, the use cases in which they can be used, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own Gen AI apps.
This is an introductory-level microlearning course aimed at explaining what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own Gen AI apps.