Uladzimir Sernatski
Member since 2023
Kubernetes is the most popular container orchestration system, and Google Kubernetes Engine is purpose-built to support managed Kubernetes deployments in Google Cloud. In this advanced course, you get hands-on practice configuring Docker images and containers, and deploying fully fledged, production-ready Kubernetes Engine applications. This course teaches you the practical skills needed to integrate container orchestration into your own workflow. Looking for an interactive challenge lab to showcase your skills and test your knowledge? After completing this course, finish the additional challenge lab at the end of the Deploy Kubernetes Applications on Google Cloud course to receive an exclusive Google Cloud digital badge.
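As a rough illustration of the kind of deployment practiced in these labs, the sketch below uses the official Kubernetes Python client to create a Deployment on a GKE cluster. The image name, labels, and namespace are hypothetical, and it assumes cluster credentials have already been fetched (for example with gcloud container clusters get-credentials).

```python
from kubernetes import client, config

# Assumes kubeconfig already points at your GKE cluster
# (e.g. via `gcloud container clusters get-credentials`).
config.load_kube_config()

# Hypothetical image and labels, for illustration only.
container = client.V1Container(
    name="hello-app",
    image="gcr.io/my-project/hello-app:v1",
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```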
This Quest is most suitable for those working in a technology or finance role who are responsible for managing Google Cloud costs. You’ll learn how to set up a billing account, organize resources, and manage billing access permissions. In the hands-on labs, you'll learn how to view your invoice, track your Google Cloud costs with Billing reports, analyze your billing data with BigQuery or Google Sheets, and create custom billing dashboards with Looker Studio. Links referenced in the videos can be accessed in this Additional Resources document.
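For a sense of how the billing-analysis labs come together, here is a minimal sketch that queries a Cloud Billing export table in BigQuery and totals cost by service. The project, dataset, and table names are placeholders following the standard billing-export naming pattern.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder names; billing exports are created as
# gcp_billing_export_v1_<BILLING_ACCOUNT_ID> in the dataset you configure.
sql = """
SELECT service.description AS service, SUM(cost) AS total_cost
FROM `my-project.billing_dataset.gcp_billing_export_v1_XXXXXX`
WHERE usage_start_time >= TIMESTAMP('2024-01-01')
GROUP BY service
ORDER BY total_cost DESC
"""
for row in client.query(sql).result():
    print(f"{row.service}: {row.total_cost:.2f}")
```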
Data Catalog is deprecated and will be discontinued on January 30, 2026. You can still complete this course if you want to. For steps to transition your Data Catalog users, workloads, and content to Dataplex Catalog, see Transition from Data Catalog to Dataplex Catalog (https://cloud.google.com/dataplex/docs/transition-to-dataplex-catalog). Data Catalog is a fully managed and scalable metadata management service that empowers organizations to quickly discover, understand, and manage all of their data. In this quest you will start small by learning how to search and tag data assets and metadata with Data Catalog. After learning how to build your own tag templates that map to BigQuery table data, you will learn how to build MySQL, PostgreSQL, and SQL Server to Data Catalog connectors.
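As a small, hedged illustration of the search workflow described above (and bearing in mind the deprecation notice), the sketch below uses the Data Catalog Python client to search a project for BigQuery tables; the project ID and query string are placeholders.

```python
from google.cloud import datacatalog_v1

# Note: Data Catalog is deprecated; see the Dataplex Catalog transition guide above.
client = datacatalog_v1.DataCatalogClient()

# Placeholder project and query; searches for tables whose name contains "orders".
scope = datacatalog_v1.SearchCatalogRequest.Scope(include_project_ids=["my-project"])
for result in client.search_catalog(scope=scope, query="type=table name:orders"):
    print(result.relative_resource_name)
```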
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
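Because the course reviews testing best practices, here is a minimal sketch of a Beam unit test built with TestPipeline; the transform under test is a made-up example, not part of the course material.

```python
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

def test_uppercase_transform():
    # A made-up transform; real tests would exercise your pipeline's own DoFns.
    with TestPipeline() as p:
        output = p | beam.Create(["dataflow", "beam"]) | beam.Map(str.upper)
        assert_that(output, equal_to(["DATAFLOW", "BEAM"]))
```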
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge to demonstrate your skills in the following: building data transformation pipelines to BigQuery with Dataprep by Trifacta; using Cloud Storage, Dataflow, and BigQuery to build extract, transform, and load (ETL) workflows; and building machine learning models with BigQuery ML. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply that knowledge in an interactive, hands-on environment. Complete the skill badge course and the final assessment challenge lab to receive a digital badge that you can share with your network.
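A minimal sketch of the BigQuery ML step, assuming hypothetical dataset, table, and column names: training a logistic regression model is a single SQL statement submitted through the BigQuery client.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical dataset, table, and column names.
create_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.purchase_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['will_purchase']) AS
SELECT will_purchase, pageviews, time_on_site
FROM `my_dataset.sessions`
"""
client.query(create_model_sql).result()  # blocks until training completes
```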
Complete the intermediate Build a Data Warehouse with BigQuery skill badge to demonstrate your skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply that knowledge in an interactive environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
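To make the listed skills concrete, here is a small sketch (hypothetical table and column names) that creates a date-partitioned table containing an array of structs and then queries it with UNNEST.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical tables; the DDL builds a date-partitioned table with an ARRAY of STRUCTs.
ddl = """
CREATE OR REPLACE TABLE `my_dataset.orders_partitioned`
PARTITION BY order_date AS
SELECT
  order_date,
  customer_id,
  ARRAY_AGG(STRUCT(sku, quantity)) AS line_items
FROM `my_dataset.order_lines`
GROUP BY order_date, customer_id
"""
client.query(ddl).result()

# Querying the repeated STRUCT column with UNNEST.
sql = """
SELECT customer_id, item.sku, item.quantity
FROM `my_dataset.orders_partitioned`, UNNEST(line_items) AS item
WHERE order_date = '2024-01-01'
"""
for row in client.query(sql).result():
    print(row.customer_id, row.sku, row.quantity)
```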
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge to demonstrate your skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply that knowledge in an interactive, hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
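As an example of the ML API calls the badge covers, the sketch below sends a short text snippet to the Cloud Natural Language API for sentiment analysis; the sample sentence is arbitrary.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

# Arbitrary sample text for illustration.
document = language_v1.Document(
    content="Google Cloud skill badges are a great way to practice.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.analyze_sentiment(request={"document": document})
print(response.document_sentiment.score, response.document_sentiment.magnitude)
```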
Processing streaming data is becoming increasingly popular as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. Pub/Sub is described for handling incoming streaming data. The course also covers how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records to BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud by using Qwiklabs.
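A minimal sketch of such a streaming pipeline, with hypothetical topic, table, and field names: read JSON events from Pub/Sub, aggregate them per window with Dataflow's Beam SDK, and stream the results into BigQuery.

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

# Hypothetical topic, table, and field names.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
        | "Parse" >> beam.Map(json.loads)
        | "KeyByType" >> beam.Map(lambda e: (e["event_type"], 1))
        | "Window" >> beam.WindowInto(window.FixedWindows(60))  # 1-minute windows
        | "CountPerType" >> beam.CombinePerKey(sum)
        | "ToRow" >> beam.Map(lambda kv: {"event_type": kv[0], "event_count": kv[1]})
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-project:my_dataset.event_counts",
            schema="event_type:STRING,event_count:INTEGER",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```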
In this second installment of the Dataflow course series, we are going to be diving deeper on developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Towards the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam and how to iteratively develop pipelines using Beam notebooks.
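A compact sketch of event-time windowing with triggers and late-data handling, using made-up keys and a fixed timestamp; a real streaming job would read from an unbounded source rather than Create.

```python
import apache_beam as beam
from apache_beam.transforms import trigger, window

with beam.Pipeline() as p:
    (
        p
        | beam.Create([("user1", 1), ("user2", 1), ("user1", 1)])
        # Attach a made-up event timestamp to each element.
        | "AddTimestamps" >> beam.Map(lambda kv: window.TimestampedValue(kv, 1700000000))
        | "Window" >> beam.WindowInto(
            window.FixedWindows(300),                   # 5-minute event-time windows
            trigger=trigger.AfterWatermark(
                early=trigger.AfterProcessingTime(60),  # speculative early panes
                late=trigger.AfterCount(1),             # fire again per late element
            ),
            allowed_lateness=600,                       # accept data up to 10 minutes late
            accumulation_mode=trigger.AccumulationMode.ACCUMULATING,
        )
        | "CountPerKey" >> beam.combiners.Count.PerKey()
        | beam.Map(print)
    )
```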
Data pipelines typically fall under one of the Extract and Load (EL), Extract, Load and Transform (ELT) or Extract, Transform and Load (ETL) paradigms. This course describes which paradigm should be used, and when, for batch data. Furthermore, this course covers several technologies on Google Cloud for data transformation including BigQuery, executing Spark on Dataproc, pipeline graphs in Cloud Data Fusion and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
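To anchor the EL and ELT paradigms, here is a small sketch (hypothetical bucket, dataset, and table names) that first loads raw CSV files from Cloud Storage into BigQuery and then transforms them inside BigQuery.

```python
from google.cloud import bigquery

client = bigquery.Client()

# EL: load raw CSV files from Cloud Storage straight into a BigQuery table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/sales_*.csv",
    "my_dataset.raw_sales",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()

# ELT: transform after loading, using BigQuery itself as the transformation engine.
client.query("""
CREATE OR REPLACE TABLE `my_dataset.daily_sales` AS
SELECT sale_date, SUM(amount) AS total_amount
FROM `my_dataset.raw_sales`
GROUP BY sale_date
""").result()
```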
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). Also, this course covers how to productionize machine learning solutions by using Vertex AI.
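As a hedged sketch of the AutoML-to-Vertex-AI path described above, the snippet below trains an AutoML tabular model and deploys it to an endpoint; the project, bucket, dataset, and column names are placeholders.

```python
from google.cloud import aiplatform

# Placeholder project, bucket, and column names.
aiplatform.init(project="my-project", location="us-central1")

dataset = aiplatform.TabularDataset.create(
    display_name="sales-data",
    gcs_source=["gs://my-bucket/prepared/sales.csv"],
)

job = aiplatform.AutoMLTabularTrainingJob(
    display_name="sales-automl",
    optimization_prediction_type="regression",
)
model = job.run(
    dataset=dataset,
    target_column="amount",
    budget_milli_node_hours=1000,  # 1 node hour of training budget
)

# Productionize: deploy the trained model to a Vertex AI endpoint for online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
```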
The two key components of any data pipeline are data lakes and warehouses. This course highlights use cases for each type of storage and dives into the available data lake and warehouse solutions on Google Cloud in technical detail. Also, this course describes the role of a data engineer, the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
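A minimal sketch (hypothetical project, bucket, network, and service-account values) of the kinds of pipeline options the course discusses for controlling identity and network exposure when launching a Dataflow job:

```python
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder values; these options run the job as a dedicated service account
# on a private subnetwork, with workers that have no public IP addresses.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
    service_account_email="dataflow-runner@my-project.iam.gserviceaccount.com",
    subnetwork="regions/us-central1/subnetworks/dataflow-subnet",
    use_public_ips=False,
)
```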
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.