Francesc Ges
Member since 2024
Diamond League
28,975 points
This course shows how to use AI/ML models in BigQuery for generative AI tasks. Through a real-world use case involving customer relationship management, you will learn the workflow for solving business problems with Gemini models. To make the material easier to follow, the course also provides step-by-step guidance through coded solutions that use SQL queries and Python notebooks.
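As a rough illustration of the kind of workflow the course walks through, the sketch below sends CRM text to a Gemini-backed remote model using BigQuery's ML.GENERATE_TEXT function, run from a Python notebook. The project, dataset, model, and table names are placeholders, not taken from the course materials.

from google.cloud import bigquery

client = bigquery.Client()

# Draft replies to customer complaints with a Gemini-backed remote model.
# `crm.gemini_remote_model` and `crm.customer_complaints` are hypothetical names.
query = """
SELECT
  customer_id,
  ml_generate_text_llm_result AS reply_draft
FROM ML.GENERATE_TEXT(
  MODEL `my_project.crm.gemini_remote_model`,
  (
    SELECT
      customer_id,
      CONCAT('Draft a short, polite reply to this complaint: ', complaint_text) AS prompt
    FROM `my_project.crm.customer_complaints`
  ),
  STRUCT(0.2 AS temperature, 256 AS max_output_tokens, TRUE AS flatten_json_output)
)
"""

for row in client.query(query).result():
    print(row.customer_id, row.reply_draft[:80])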
This course explores how Gemini in BigQuery, a suite of AI-powered features, assists the data-to-AI workflow. The features covered include data exploration and preparation, code generation and troubleshooting, and workflow discovery and visualization. Through conceptual explanations, real-world use cases, and hands-on labs, the course helps data practitioners become more productive and speed up pipeline development.
Complete the introductory skill badge course Build a Data Mesh with Dataplex to demonstrate skills in the following: building a data mesh with Dataplex to enable data security, governance, and discovery on Google Cloud. You will practice and test your skills in tagging assets, assigning IAM roles, and assessing data quality in Dataplex. A skill badge is an exclusive digital badge issued by Google Cloud that recognizes your proficiency with Google Cloud products and services; you earn it by passing an assessment in an interactive hands-on environment that shows you can apply what you have learned. Complete this skill badge course and the challenge lab that serves as the final assessment to receive a digital badge you can share with your network.
Complete the intermediate skill badge course Build a Data Warehouse with BigQuery to demonstrate the following skills: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery. A skill badge is an exclusive digital badge issued by Google Cloud that recognizes your proficiency with Google Cloud products and services; you earn it by passing an assessment in an interactive hands-on environment that shows you can apply what you have learned. Complete this skill badge course and the challenge lab that serves as the final assessment to receive a digital badge you can share with your network.
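As a hedged sketch of two of those skills, the Python snippet below creates a date-partitioned table and then joins it to another table while unnesting an array of structs. The dataset, table, and column names are illustrative, not the ones used in the labs.

from google.cloud import bigquery

client = bigquery.Client()

# Create a table partitioned by order date (hypothetical dataset and columns).
client.query("""
CREATE TABLE IF NOT EXISTS `my_project.sales.orders_partitioned`
PARTITION BY DATE(order_ts) AS
SELECT * FROM `my_project.sales.orders_raw`
""").result()

# Join the partitioned table to a customers table and unnest an ARRAY of STRUCTs.
rows = client.query("""
SELECT c.customer_name, o.order_id, item.sku, item.quantity
FROM `my_project.sales.orders_partitioned` AS o
JOIN `my_project.sales.customers` AS c USING (customer_id)
CROSS JOIN UNNEST(o.line_items) AS item
WHERE DATE(o.order_ts) = '2024-01-01'
""").result()

for row in rows:
    print(row.customer_name, row.order_id, row.sku, row.quantity)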
In the last installment of the Dataflow course series, we introduce the components of the Dataflow operational model. We examine tools and techniques for troubleshooting and optimizing pipeline performance, then review testing, deployment, and reliability best practices for Dataflow pipelines. We conclude with a look at Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons help ensure that your data platform stays stable and resilient in unanticipated circumstances.
In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to best practices that help maximize pipeline performance. Toward the end of the course, we introduce SQL and DataFrames as ways to represent business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
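For readers new to those windowing concepts, here is a minimal, self-contained Beam Python sketch of fixed windows, a watermark-based trigger, and allowed lateness applied to a keyed count. The elements and timestamps are made up; a real pipeline would read from an unbounded source such as Pub/Sub.

import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import (
    AccumulationMode,
    AfterProcessingTime,
    AfterWatermark,
)

with beam.Pipeline() as p:
    (
        p
        # (user, event_time_in_seconds) pairs standing in for a real unbounded source.
        | "Create" >> beam.Create([("user_a", 5), ("user_b", 12), ("user_a", 70)])
        # Attach event-time timestamps so the windowing below has something to act on.
        | "Timestamp" >> beam.Map(lambda kv: window.TimestampedValue((kv[0], 1), kv[1]))
        | "Window" >> beam.WindowInto(
            window.FixedWindows(60),  # 60-second fixed windows
            trigger=AfterWatermark(late=AfterProcessingTime(30)),
            accumulation_mode=AccumulationMode.ACCUMULATING,
            allowed_lateness=300,
        )
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )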
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Processing streaming data is becoming increasingly popular, as streaming enables businesses to get real-time metrics on their operations. This course covers how to build streaming data pipelines on Google Cloud. It describes how Pub/Sub handles incoming streaming data, how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records in BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud using Qwiklabs.
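A minimal sketch of that pattern, assuming a hypothetical Pub/Sub topic and BigQuery table: read messages, parse them, and stream the results into BigQuery with the Beam Python SDK.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True  # unbounded Pub/Sub source

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/clickstream")
        | "Parse" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my_project:analytics.clicks",  # hypothetical destination table
            schema="user_id:STRING,page:STRING,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )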
In this course, you will learn about data engineering on Google Cloud, the roles and responsibilities of data engineers, and the related Google Cloud products and services. You will also learn how to address data engineering challenges.
Data pipelines typically fall under one of the Extract and Load (EL), Extract, Load and Transform (ELT), or Extract, Transform and Load (ETL) paradigms. This course describes which paradigm to use, and when, for batch data. It also covers several Google Cloud technologies for data transformation, including BigQuery, running Spark on Dataproc, pipeline graphs in Cloud Data Fusion, and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
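As an illustration of the ELT paradigm mentioned above, the sketch below loads raw CSV files into BigQuery unchanged and then transforms them with SQL inside the warehouse; the bucket, dataset, and table names are hypothetical.

from google.cloud import bigquery

client = bigquery.Client()

# Extract + Load: ingest raw files as-is into a staging table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/orders_*.csv",  # hypothetical source files
    "my_project.staging.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,
        skip_leading_rows=1,
    ),
)
load_job.result()

# Transform: reshape the raw data into an analytics table inside BigQuery.
client.query("""
CREATE OR REPLACE TABLE `my_project.analytics.daily_revenue` AS
SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
FROM `my_project.staging.orders_raw`
GROUP BY order_date
""").result()

In an ETL variant, the reshaping step would instead run before the load, for example in Dataflow or in Spark on Dataproc.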
The two key components of any data pipeline are data lakes and warehouses. This course highlights use cases for each type of storage and dives into the available data lake and warehouse solutions on Google Cloud in technical detail. It also describes the role of a data engineer and the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.