
Apply your skills in the Google Cloud console

Laurence Carton

Member since 2023

Bronze League

18,770 points
Manage Kubernetes in Google Cloud Earned Feb 5, 2025 EST
Implement CI/CD Pipelines on Google Cloud Earned Jan 28, 2025 EST
Prepare Data for ML APIs on Google Cloud Earned Jun 4, 2024 EDT
Gemini for Data Scientists and Analysts Earned May 24, 2024 EDT
Serverless Data Processing with Dataflow: Operations Earned Oct 27, 2023 EDT
Serverless Data Processing with Dataflow: Foundations Earned Oct 3, 2023 EDT
Smart Analytics, Machine Learning, and AI on Google Cloud Earned Oct 2, 2023 EDT
Introduction to Vertex Forecasting and Time Series in Practice Earned Sep 29, 2023 EDT
Building Batch Data Pipelines on Google Cloud Earned Sep 29, 2023 EDT
Building Resilient Streaming Analytics Systems on Google Cloud Earned Sep 15, 2023 EDT
Modernizing Data Lakes and Data Warehouses with Google Cloud Earned Aug 11, 2023 EDT
Google Cloud Big Data and Machine Learning Fundamentals Earned Aug 4, 2023 EDT
Preparing for your Professional Data Engineer Journey Earned Jul 20, 2023 EDT

Complete the intermediate Manage Kubernetes in Google Cloud skill badge to demonstrate skills in the following: managing deployments with kubectl, monitoring and debugging applications on Google Kubernetes Engine (GKE), and continuous delivery techniques. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course, and the final assessment challenge lab, to receive a digital badge that you can share with your network.
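
The deployment-management tasks above center on kubectl. As a rough sketch (the deployment, container, and image names are hypothetical), one way to assemble the relevant commands programmatically:

```python
# Build kubectl argv lists for two common deployment-management tasks.
# Names below ("web", "app", the image tag) are illustrative only; the
# lists can be passed to subprocess.run() against a real cluster.

def kubectl_scale(deployment: str, replicas: int, namespace: str = "default") -> list[str]:
    """Argv for scaling a Deployment to a fixed replica count."""
    return ["kubectl", "scale", f"deployment/{deployment}",
            f"--replicas={replicas}", "-n", namespace]

def kubectl_set_image(deployment: str, container: str, image: str) -> list[str]:
    """Argv for a rolling image update on one container of a Deployment."""
    return ["kubectl", "set", "image", f"deployment/{deployment}",
            f"{container}={image}"]

print(kubectl_scale("web", 3))
print(kubectl_set_image("web", "app", "us-docker.pkg.dev/p/r/web:v2"))
```

Building the argv as a list (rather than a shell string) avoids quoting bugs when the commands are eventually executed.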


Earn the intermediate skill badge by completing the Implement CI/CD Pipelines on Google Cloud course, where you will learn how to use Artifact Registry, Cloud Build, and Cloud Deploy. You will interact with the Cloud console, Google Cloud CLI, Cloud Run, and GKE. This course will teach you how to build continuous integration pipelines, store and secure artifacts, scan for vulnerabilities, and attest to the validity of approved releases. Additionally, you'll get hands-on experience deploying applications to both GKE and Cloud Run. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services, and it tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge, and the final assessment challenge lab, to receive a digital badge that you can share with your network.
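
A typical pipeline of this shape builds an image, pushes it to Artifact Registry, and deploys it. As a minimal sketch (project, region, and repository names are made up), the steps could be expressed as data that you would serialize into a cloudbuild.yaml:

```python
# Sketch of Cloud Build steps as Python dicts, ready to dump to YAML.
# "my-project", "my-repo", and the region are hypothetical placeholders.

def cloudbuild_steps(project: str, repo: str, app: str, tag: str) -> list[dict]:
    image = f"us-central1-docker.pkg.dev/{project}/{repo}/{app}:{tag}"
    return [
        # Step 1: build the container image.
        {"name": "gcr.io/cloud-builders/docker",
         "args": ["build", "-t", image, "."]},
        # Step 2: push it to Artifact Registry (where it can be scanned).
        {"name": "gcr.io/cloud-builders/docker",
         "args": ["push", image]},
        # Step 3: deploy the pushed image to Cloud Run.
        {"name": "gcr.io/google.com/cloudsdktool/cloud-sdk",
         "entrypoint": "gcloud",
         "args": ["run", "deploy", app, "--image", image,
                  "--region", "us-central1"]},
    ]

steps = cloudbuild_steps("my-project", "my-repo", "web", "v1")
print(len(steps))  # three steps: build, push, deploy
```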


Earn the Prepare Data for ML APIs on Google Cloud skill badge to demonstrate foundational skills in the following areas: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Cloud Speech-to-Text API, and Video Intelligence API. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with its products and services; it also tests your ability to apply your knowledge in an interactive, hands-on environment. Complete the course-specific series of tasks and the challenge lab assessment to receive a skill badge that you can share with your network.
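
To illustrate what "calling ML APIs" looks like, here is a sketch of the request body for the Cloud Natural Language API's documents:analyzeSentiment method. Authentication and the actual HTTP call are omitted, and the sample text is invented:

```python
import json

# Build the JSON body for a documents:analyzeSentiment request.
# Sending it requires an authenticated client, which is not shown here.

def sentiment_request(text: str) -> dict:
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }

body = json.dumps(sentiment_request("Dataflow made this pipeline easy."))
print(body)
```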


In this course, you learn how Gemini, a generative AI-powered product from Google Cloud, can help you analyze customer data and predict product sales. You also learn how to use customer data in BigQuery to identify, categorize, and develop new customers. In hands-on labs, you see how Gemini streamlines data analysis and machine learning workflows. Duet AI has been renamed Gemini, our next-generation model.


In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.


This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
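
The core Beam idea described above is a pipeline of transforms applied to immutable collections. As a toy, in-memory model (this is deliberately NOT the real SDK, just an analogy for how chained transforms behave under a runner like DirectRunner):

```python
# A toy mental model of Beam's transform chaining: a "PCollection" is a
# list, and each transform returns a new collection. The real Apache Beam
# SDK defers execution to a runner; this sketch evaluates eagerly.

class PCollection:
    def __init__(self, elements):
        self.elements = list(elements)

    def map(self, fn):          # analogue of beam.Map
        return PCollection(fn(e) for e in self.elements)

    def filter(self, pred):     # analogue of beam.Filter
        return PCollection(e for e in self.elements if pred(e))

paths = PCollection(["gs://bucket/a", "local/b", "gs://bucket/c"])
result = paths.filter(lambda p: p.startswith("gs://")).map(str.upper)
print(result.elements)  # → ['GS://BUCKET/A', 'GS://BUCKET/C']
```

The separation between describing the pipeline and running it is exactly what lets Beam swap execution backends, the portability point made above.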


Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). Also, this course covers how to productionize machine learning solutions by using Vertex AI.
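
BigQuery ML trains models directly with SQL DDL. As a hedged sketch (the dataset, model, table, and label names are hypothetical), one might render the statement like this:

```python
# Render the BigQuery ML DDL for a simple linear regression model.
# All identifiers below are placeholders; run the resulting SQL with the
# BigQuery client or console.

def create_model_sql(dataset: str, model: str, label: str, table: str) -> str:
    return (
        f"CREATE OR REPLACE MODEL `{dataset}.{model}`\n"
        f"OPTIONS(model_type='linear_reg', input_label_cols=['{label}'])\n"
        f"AS SELECT * FROM `{dataset}.{table}`"
    )

print(create_model_sql("sales", "weekly_reg", "revenue", "training_data"))
```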


This course is an introduction to building forecasting solutions with Google Cloud. You start with sequence models and time series foundations. You then walk through an end-to-end workflow: from data preparation to model development and deployment with Vertex AI. Finally, you learn the lessons and tips from a retail use case and apply the knowledge by building your own forecasting models.
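
Forecasting workflows usually start from a simple reference model before anything is trained in Vertex AI. A common baseline is the seasonal naive forecast sketched below (the sales numbers and season length are invented for illustration):

```python
# Seasonal naive baseline: predict each future step with the value
# observed one season earlier. History and season length are toy values.

def seasonal_naive_forecast(history, season_length, horizon):
    """Return `horizon` forecasts by repeating the last full season."""
    forecast = []
    for h in range(horizon):
        # Same position within the most recent complete season.
        idx = len(history) - season_length + (h % season_length)
        forecast.append(history[idx])
    return forecast

sales = [10, 12, 14, 11, 13, 15]  # two "seasons" of length 3
print(seasonal_naive_forecast(sales, 3, 3))  # → [11, 13, 15]
```

Any trained model should beat this baseline; otherwise the extra complexity is not paying off.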


Data pipelines typically fall under one of the Extract and Load (EL); Extract, Load, and Transform (ELT); or Extract, Transform, and Load (ETL) paradigms. This course describes which paradigm to use, and when, for batch data. Furthermore, this course covers several technologies on Google Cloud for data transformation, including BigQuery, executing Spark on Dataproc, pipeline graphs in Cloud Data Fusion, and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
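
The ETL paradigm named above can be shown with a minimal in-memory example (the rows and field names are invented): data is transformed before it reaches the sink, in contrast to ELT, where raw rows are loaded first and transformed in the warehouse.

```python
# Minimal illustration of the ETL pattern: extract -> transform -> load.
# Source rows and the list "warehouse" are stand-ins for real systems.

def extract():
    # Stand-in for reading from a source system.
    return [{"name": "alice", "spend": "12.50"},
            {"name": "bob", "spend": "7.25"}]

def transform(rows):
    # Cleansing and typing happen before load (the "T" comes second in ETL).
    return [{"name": r["name"].title(), "spend": float(r["spend"])}
            for r in rows]

def load(rows, sink):
    sink.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)
```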


Processing streaming data is becoming increasingly popular as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. Pub/Sub is described for handling incoming streaming data. The course also covers how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records to BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud by using Qwiklabs.
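
The aggregation step mentioned above typically groups events into time windows before writing to BigQuery or Bigtable. A toy fixed-window count in plain Python (timestamps and event keys are invented; real pipelines would use Dataflow's windowing, not this):

```python
from collections import defaultdict

# Toy fixed-window aggregation over (timestamp_seconds, key) events,
# mimicking what a Dataflow windowed count produces before the sink.

def fixed_window_counts(events, window_secs):
    """Return {(window_start, key): count} for fixed, non-overlapping windows."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(3, "click"), (7, "click"), (12, "view"), (14, "click")]
print(fixed_window_counts(events, 10))
```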


The two key components of any data pipeline are data lakes and warehouses. This course highlights use-cases for each type of storage and dives into the available data lake and warehouse solutions on Google Cloud in technical detail. Also, this course describes the role of a data engineer, the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.


This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.


This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create their individual study plans.
