Pragna Katasani

Member since 2024

Building Batch Data Pipelines on Google Cloud Earned Jan 24, 2025 EST
Google Security Operations - Deep Dive Earned Sep 24, 2024 EDT
Google Security Operations - Fundamentals Earned Sep 23, 2024 EDT
Serverless Data Processing with Dataflow: Operations Earned Aug 22, 2024 EDT
Serverless Data Processing with Dataflow: Foundations Earned Aug 22, 2024 EDT
Modernizing Data Lakes and Data Warehouses with Google Cloud Earned Aug 20, 2024 EDT
Preparing for your Professional Data Engineer Journey Earned Aug 4, 2024 EDT

Data pipelines typically fall under one of the Extract and Load (EL); Extract, Load, and Transform (ELT); or Extract, Transform, and Load (ETL) paradigms. This course describes which paradigm to use, and when, for batch data. It also covers several Google Cloud technologies for data transformation, including BigQuery, Spark on Dataproc, pipeline graphs in Cloud Data Fusion, and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
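
As a minimal sketch of the ELT paradigm the course describes, the example below loads raw files from Cloud Storage into BigQuery and then transforms them with SQL inside the warehouse. The project, bucket, dataset, and table names are hypothetical placeholders, not part of the course material.

```python
# Minimal ELT sketch using the google-cloud-bigquery client.
# All resource names below are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Extract + Load: pull raw CSV files from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders_*.csv",      # hypothetical source files
    "example-project.staging.orders_raw",        # hypothetical staging table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()  # wait for the load to finish

# Transform: reshape the data with SQL inside BigQuery (the "T" happens after load).
transform_sql = """
CREATE OR REPLACE TABLE `example-project.analytics.daily_revenue` AS
SELECT order_date, SUM(amount) AS revenue
FROM `example-project.staging.orders_raw`
GROUP BY order_date
"""
client.query(transform_sql).result()
```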

Take the next steps in working with the Chronicle Security Operations Platform. Build on fundamental knowledge to go deeper into customization and tuning.

This course covers the baseline skills needed for the Chronicle Security Operations Platform. The modules cover the specific actions and features that security engineers should become familiar with to start using the toolset.

In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
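
As one illustration of the testing practices mentioned above, the sketch below unit-tests a Beam transform with the SDK's built-in TestPipeline and assert_that utilities before it would be deployed to Dataflow. The DoubleValues transform and its expected output are illustrative assumptions, not taken from the course.

```python
# Sketch of unit-testing a Beam transform prior to a Dataflow deployment.
# The DoubleValues transform and expected values are illustrative only.
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to


class DoubleValues(beam.PTransform):
    """Hypothetical transform under test: doubles every element."""
    def expand(self, pcoll):
        return pcoll | beam.Map(lambda x: x * 2)


def test_double_values():
    # TestPipeline runs the pipeline when the with-block exits.
    with TestPipeline() as p:
        result = p | beam.Create([1, 2, 3]) | DoubleValues()
        # assert_that checks the PCollection contents at pipeline run time.
        assert_that(result, equal_to([2, 4, 6]))
```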

This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
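
A minimal sketch of the Beam-to-Dataflow relationship described here: the same pipeline code can run locally with the DirectRunner or on Dataflow by switching the runner option. The project, region, and bucket values are hypothetical placeholders.

```python
# Minimal Apache Beam pipeline; swapping the runner moves the same code
# from local execution (DirectRunner) to Dataflow (DataflowRunner).
# Project, region, and bucket values are illustrative placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DirectRunner",          # change to "DataflowRunner" to run on Dataflow
    project="example-project",
    region="us-central1",
    temp_location="gs://example-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.Create(["alpha", "beta", "gamma"])
        | "Upper" >> beam.Map(str.upper)
        | "Print" >> beam.Map(print)
    )
```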

The two key components of any data pipeline are data lakes and warehouses. This course highlights use cases for each type of storage and dives into the available data lake and warehouse solutions on Google Cloud in technical detail. It also describes the role of a data engineer and the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
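
As a rough sketch of the lake-versus-warehouse distinction, the example below uses the BigQuery Python client to query raw files in a Cloud Storage data lake in place (via an external table definition) and then asks the same question of a managed warehouse table. All resource names are hypothetical.

```python
# Sketch contrasting the two storage patterns: querying a Cloud Storage
# data lake in place vs. querying a native BigQuery warehouse table.
# All resource names below are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Data lake pattern: define an external table over raw Parquet files in GCS,
# so BigQuery reads them in place without loading.
external_config = bigquery.ExternalConfig("PARQUET")
external_config.source_uris = ["gs://example-bucket/lake/events/*.parquet"]
job_config = bigquery.QueryJobConfig(
    table_definitions={"lake_events": external_config}
)
lake_rows = client.query(
    "SELECT COUNT(*) AS n FROM lake_events", job_config=job_config
).result()

# Data warehouse pattern: the same question against a managed BigQuery table,
# which benefits from native storage, clustering, and caching.
warehouse_rows = client.query(
    "SELECT COUNT(*) AS n FROM `example-project.analytics.events`"
).result()
```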

This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.
