Vebri Satriadi
Member since 2020
Silver League
Points: 24630
The Google Cloud Computing Foundations courses are for individuals with little to no background or experience in cloud computing. They provide an overview of concepts central to cloud basics, big data, and machine learning, and where and how Google Cloud fits in. By the end of the series, learners will be able to articulate these concepts and demonstrate some hands-on skills. The courses should be completed in the following order:
1. Google Cloud Computing Foundations: Cloud Computing Fundamentals
2. Google Cloud Computing Foundations: Infrastructure in Google Cloud
3. Google Cloud Computing Foundations: Networking and Security in Google Cloud
4. Google Cloud Computing Foundations: Data, ML, and AI in Google Cloud
This final course in the series reviews managed big data services, machine learning and its value, and how to further demonstrate your Google Cloud skill set by earning skill badges.
In this self-paced training course, participants learn mitigations for attacks at many points in a Google Cloud-based infrastructure, including Distributed Denial-of-Service attacks, phishing attacks, and threats involving content classification and use. They also learn about Security Command Center, Cloud Logging and audit logging, and how to use Forseti to view overall compliance with your organization's security policies.
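For reference, the sketch below shows one way to pull recent Admin Activity audit log entries with the Cloud Logging Python client; the project ID, log filter, and time window are placeholder assumptions rather than steps from the course labs.

```python
# Minimal sketch: list recent Admin Activity audit log entries with the
# Cloud Logging Python client. Project ID and time window are placeholders.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # hypothetical project ID

# Admin Activity audit logs are written to this log name by default.
log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity" '
    'AND timestamp>="2024-01-01T00:00:00Z"'
)

for entry in client.list_entries(filter_=log_filter, max_results=10):
    print(entry.timestamp, entry.log_name, entry.payload)
```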
Business professionals in non-technical roles have a unique opportunity to lead or influence machine learning projects. If you have questions about machine learning and want to understand how to use it, without the technical jargon, this course is for you. Learn how to translate business problems into machine learning use cases and vet them for feasibility and impact. Find out how you can discover unexpected use cases, recognize the phases of an ML project and the considerations within each, and gain the confidence to propose a custom ML use case to your team or leadership, or to translate the requirements for a technical team.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps analyze customer data and predict product sales. You also learn how to identify, categorize, and develop new customers using customer data in BigQuery. Using hands-on labs, you experience how Gemini improves data analysis and machine learning workflows. Duet AI was renamed to Gemini, our next-generation model.
In this course, you learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps developers build applications. You learn how to prompt Gemini to explain code, recommend Google Cloud services, and generate code for your applications. Using a hands-on lab, you experience how Gemini improves the application development workflow. Duet AI was renamed to Gemini, our next-generation model.
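The labs use Gemini assistance directly in the console and IDE; purely as an illustration of the same idea in code, the sketch below asks a Gemini model on Vertex AI to explain a snippet. The project, location, and model ID are assumptions, not values from the course.

```python
# Illustrative sketch (not part of the course labs): asking a Gemini model on
# Vertex AI to explain a code snippet. Project, location, and model name are
# assumptions; the course itself uses Gemini assistance in the console and IDE.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical values

model = GenerativeModel("gemini-1.5-flash")  # assumed model ID
snippet = "def add(a, b):\n    return a + b"
response = model.generate_content(f"Explain what this Python function does:\n{snippet}")
print(response.text)
```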
The Generative AI Explorer - Vertex Quest is a collection of labs on how to use Generative AI on Google Cloud. Through the labs, you will learn how to use the models in the Vertex AI PaLM API family, including text-bison, chat-bison, and textembedding-gecko. You will also learn about prompt design and best practices, and how prompts can be used for ideation, text classification, text extraction, text summarization, and more. Finally, you will learn how to tune a foundation model by training it with Vertex AI custom training and deploying it to a Vertex AI endpoint.
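As a rough sketch of the API calls the quest covers, the snippet below generates text with text-bison and embeddings with textembedding-gecko through the Vertex AI SDK; the project and location are placeholders, and the PaLM model family has since been superseded by Gemini, so availability may vary.

```python
# Minimal sketch of calling the Vertex AI PaLM models covered in the quest.
# Project and location are placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel, TextEmbeddingModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical values

# Text generation with text-bison.
text_model = TextGenerationModel.from_pretrained("text-bison")
summary = text_model.predict(
    "Summarize: Dataflow is a managed service for running Apache Beam pipelines.",
    temperature=0.2,
    max_output_tokens=128,
)
print(summary.text)

# Embeddings with textembedding-gecko.
embedding_model = TextEmbeddingModel.from_pretrained("textembedding-gecko")
[embedding] = embedding_model.get_embeddings(["What is prompt design?"])
print(len(embedding.values))
```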
Earn a skill badge by completing the Analyze Sentiment with Natural Language API quest, where you learn how the API derives sentiment from text.
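A minimal sketch of the kind of request the quest teaches, using the Natural Language API Python client; the sample text is made up.

```python
# Minimal sketch: derive sentiment from text with the Natural Language API.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The new release is fast and a joy to use.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment

# Score ranges from -1.0 (negative) to 1.0 (positive); magnitude reflects strength.
print(f"score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")
```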
Earn a skill badge by completing the Analyze Images with the Cloud Vision API quest, where you discover how to leverage the Cloud Vision API for various tasks, including extracting text from images.
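A minimal sketch of text extraction with the Cloud Vision API Python client; the Cloud Storage URI is a placeholder.

```python
# Minimal sketch: extract text from an image with the Cloud Vision API.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(source=vision.ImageSource(image_uri="gs://my-bucket/receipt.png"))

response = client.text_detection(image=image)
if response.error.message:
    raise RuntimeError(response.error.message)

# The first annotation holds the full detected text block.
if response.text_annotations:
    print(response.text_annotations[0].description)
```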
Complete the intermediate Build Infrastructure with Terraform on Google Cloud skill badge to demonstrate skills in the following: Infrastructure as Code (IaC) principles using Terraform, provisioning and managing Google Cloud resources with Terraform configurations, effective state management (local and remote), and modularizing Terraform code for reusability and organization.
Complete the intermediate Manage Kubernetes in Google Cloud skill badge to demonstrate skills in the following: managing deployments with kubectl, monitoring and debugging applications on Google Kubernetes Engine (GKE), and continuous delivery techniques. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services; earning one tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a digital badge that you can share with your network.
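The labs work through kubectl; purely as an illustration of the same operation done programmatically, the sketch below lists deployments with the official Kubernetes Python client, assuming your kubeconfig already points at a GKE cluster.

```python
# Rough programmatic equivalent of `kubectl get deployments`: list deployments
# in a namespace with the official Kubernetes Python client. Assumes your
# kubeconfig already targets a GKE cluster (e.g. via
# `gcloud container clusters get-credentials`).
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
apps = client.AppsV1Api()

for deployment in apps.list_namespaced_deployment(namespace="default").items:
    ready = deployment.status.ready_replicas or 0
    print(f"{deployment.metadata.name}: {ready}/{deployment.spec.replicas} ready")
```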
Data Catalog is deprecated and will be discontinued on January 30, 2026. You can still complete this course if you want to. For steps to transition your Data Catalog users, workloads, and content to Dataplex Catalog, see Transition from Data Catalog to Dataplex Catalog (https://cloud.google.com/dataplex/docs/transition-to-dataplex-catalog). Data Catalog is a fully managed and scalable metadata management service that empowers organizations to quickly discover, understand, and manage all of their data. In this quest, you will start small by learning how to search and tag data assets and metadata with Data Catalog. After learning how to build your own tag templates that map to BigQuery table data, you will learn how to build MySQL, PostgreSQL, and SQL Server connectors to Data Catalog.
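As a small example of the "start small" portion, the sketch below searches the catalog for BigQuery tables with the Data Catalog Python client; the project ID and query are assumptions, and the same deprecation notice applies.

```python
# Minimal sketch: search for BigQuery tables with the Data Catalog client.
# Project ID and query string are placeholders; the service is deprecated in
# favor of Dataplex Catalog.
from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()
scope = datacatalog_v1.SearchCatalogRequest.Scope(include_project_ids=["my-project"])

results = client.search_catalog(scope=scope, query="system=bigquery type=table")
for result in results:
    print(result.relative_resource_name)
```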
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework, which lets a developer use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management (IAM) tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
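A minimal Apache Beam pipeline sketch in Python: it runs locally on the DirectRunner by default, and the same code targets Dataflow when you pass DataflowRunner plus project, region, and staging options. The element values are made up.

```python
# Minimal Apache Beam pipeline. Runs locally by default; pass
# --runner=DataflowRunner (plus --project, --region, --temp_location) to run
# the same pipeline on Dataflow.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["dataflow", "separates", "compute", "and", "storage"])
        | "Lengths" >> beam.Map(lambda word: (word, len(word)))
        | "Print" >> beam.Map(print)
    )
```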
Processing streaming data is becoming increasingly popular as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. Pub/Sub is described for handling incoming streaming data. The course also covers how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records in BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud by using Qwiklabs.
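As a sketch of the pipeline shape this course builds, the snippet below reads from Pub/Sub, parses JSON messages, and streams rows into BigQuery with Apache Beam; the subscription, table, and schema are placeholder assumptions.

```python
# Sketch of a streaming pipeline: Pub/Sub -> parse JSON -> BigQuery.
# Subscription, table, and schema names are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/my-sub")
        | "Parse" >> beam.Map(lambda message: json.loads(message.decode("utf-8")))
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:my_dataset.events",
            schema="user:STRING,action:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```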
The two key components of any data pipeline are data lakes and warehouses. This course highlights the use cases for each type of storage and dives into the available data lake and warehouse solutions on Google Cloud in technical detail. It also describes the role of a data engineer and the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
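As a small taste of the warehouse side, the sketch below runs a query against a BigQuery public dataset with the Python client; it assumes default credentials and a project with the BigQuery API enabled.

```python
# Minimal sketch: query a warehouse table with the BigQuery Python client,
# using a public dataset so it runs in any project with the API enabled.
from google.cloud import bigquery

client = bigquery.Client()  # uses your default project and credentials

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

for row in client.query(query).result():
    print(row.name, row.total)
```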