Felipe Mancilla (Zenta Group)
Member since 2022
Silver League
48005 points
This course provides the knowledge and tools MLOps teams need to overcome the challenges they face when deploying and managing generative AI models. It also aims to teach you how Vertex AI helps AI teams streamline their MLOps processes and succeed in generative AI projects.
Earn the intermediate skill badge by completing the Build and Deploy Machine Learning Solutions on Vertex AI course, where you will learn how to use Google Cloud's Vertex AI platform, AutoML, and custom training services to train, evaluate, tune, explain, and deploy machine learning models. This skill badge course is for professional Data Scientists and Machine Learning Engineers. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete the skill badge course, and the final assessment challenge lab, to receive a digital badge that you can share with your network.
In this course, you will be learning from ML Engineers and Trainers who work with the state-of-the-art development of ML pipelines here at Google Cloud. The first few modules cover TensorFlow Extended (or TFX), which is Google's production machine learning platform based on TensorFlow for management of ML pipelines and metadata. You will learn about pipeline components and pipeline orchestration with TFX. You will also learn how you can automate your pipeline through continuous integration and continuous deployment, and how to manage ML metadata. Then we will change focus to discuss how we can automate and reuse ML pipelines across multiple ML frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost. You will also learn how to use another tool on Google Cloud, Cloud Composer, to orchestrate your continuous training pipelines. And finally, we will go over how to use MLflow for managing the complete machine learning life cycle.
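For context, the sketch below shows roughly what a TFX pipeline definition and local orchestration look like. The data path, trainer module file, and step counts are hypothetical placeholders; a production pipeline would add further components (for example SchemaGen, Transform, Evaluator, and Pusher) and typically run on a managed orchestrator such as Vertex AI Pipelines.

```python
# Minimal sketch of a two-component TFX pipeline run locally.
# ./data and trainer.py are hypothetical placeholders.
from tfx import v1 as tfx

def create_pipeline(pipeline_name, pipeline_root, data_root,
                    module_file, metadata_path):
    # Ingest raw CSV files and convert them to TFRecord examples.
    example_gen = tfx.components.CsvExampleGen(input_base=data_root)

    # Train a model using user code supplied in the module file.
    trainer = tfx.components.Trainer(
        module_file=module_file,
        examples=example_gen.outputs['examples'],
        train_args=tfx.proto.TrainArgs(num_steps=100),
        eval_args=tfx.proto.EvalArgs(num_steps=10))

    return tfx.dsl.Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=[example_gen, trainer],
        # ML Metadata is recorded in a local SQLite database here.
        metadata_connection_config=(
            tfx.orchestration.metadata
            .sqlite_metadata_connection_config(metadata_path)))

# Orchestrate locally; the same definition can be submitted to
# Vertex AI Pipelines or Kubeflow Pipelines instead.
tfx.orchestration.LocalDagRunner().run(
    create_pipeline('demo_pipeline', './pipeline_root', './data',
                    'trainer.py', './metadata.db'))
```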
This course introduces the products and solutions to solve NLP problems on Google Cloud. Additionally, it explores the processes, techniques, and tools to develop an NLP project with neural networks by using Vertex AI and TensorFlow.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Learners will get hands-on practice using Vertex AI Feature Store's streaming ingestion at the SDK layer.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best performing models.
In this course, you apply your knowledge of classification models and embeddings to build an ML pipeline that functions as a recommendation engine. This is the fifth and final course of the Advanced Machine Learning on Google Cloud series.
This course describes different types of computer vision use cases and then highlights different machine learning strategies for solving these use cases. The strategies vary from experimenting with pre-built ML models through pre-built ML APIs and AutoML Vision to building custom image classifiers using linear models, deep neural network (DNN) models or convolutional neural network (CNN) models. The course shows how to improve a model's accuracy with augmentation, feature extraction, and fine-tuning hyperparameters while trying to avoid overfitting the data. The course also looks at practical issues that arise, for example, when one doesn't have enough data and how to incorporate the latest research findings into different models. Learners will get hands-on practice building and optimizing their own image classification models on a variety of public datasets in the labs they will work on.
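As a rough illustration of the custom-model end of that spectrum, here is a minimal Keras CNN classifier with augmentation layers of the kind used to reduce overfitting; the input size, class count, and layer sizes are hypothetical.

```python
# Hedged sketch: a small CNN image classifier with data augmentation.
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical number of image classes

# Augmentation layers are only active during training.
augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    augmentation,
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.3),  # another lever against overfitting
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```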
This course covers how to implement the various flavors of production ML systems— static, dynamic, and continuous training; static and dynamic inference; and batch and online processing. You delve into TensorFlow abstraction levels, the various options for doing distributed training, and how to write distributed training models with custom estimators. This is the second course of the Advanced Machine Learning on Google Cloud series. After completing this course, enroll in the Image Understanding with TensorFlow on Google Cloud course.
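The description mentions custom Estimators, which are now a legacy TensorFlow API; the hedged sketch below illustrates the same distributed-training idea with tf.distribute and Keras, the current approach. The model and data are toy placeholders.

```python
# Hedged sketch: synchronous data-parallel training with tf.distribute.
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across all local GPUs (or CPU).
strategy = tf.distribute.MirroredStrategy()

# Variables must be created inside the strategy scope so each replica
# holds a synchronized copy.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1)])
    model.compile(optimizer='adam', loss='mse')

# Toy data; in practice this would come from a tf.data input pipeline.
x = np.random.rand(1024, 10).astype('float32')
y = np.random.rand(1024, 1).astype('float32')
model.fit(x, y, epochs=2, batch_size=64)
```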
Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge to demonstrate skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; using Cloud Storage, Dataflow, and BigQuery to build extract, transform, and load (ETL) workflows; and building machine learning models using BigQuery ML. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete the skill badge course, and final assessment challenge lab, to receive a digital badge that you can share with your network.
This course explores the benefits of using Vertex AI Feature Store, how to improve the accuracy of ML models, and how to find which data columns make the most useful features. This course also includes content and labs on feature engineering using BigQuery ML, Keras, and TensorFlow.
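As a rough example of one approach the course touches on, the sketch below does simple feature engineering with Keras preprocessing layers, bucketizing a numeric column and one-hot encoding a categorical one. The column names, vocabulary, and bucket boundaries are hypothetical.

```python
# Hedged sketch: feature engineering with Keras preprocessing layers.
import tensorflow as tf

# Numeric feature: bucketize a raw distance into ranges, then one-hot it.
distance = tf.keras.Input(shape=(1,), name='distance_km')
distance_buckets = tf.keras.layers.Discretization(
    bin_boundaries=[1.0, 5.0, 10.0, 25.0])(distance)
distance_onehot = tf.keras.layers.CategoryEncoding(
    num_tokens=5, output_mode='one_hot')(distance_buckets)

# Categorical feature: map strings to a one-hot encoding.
payment_type = tf.keras.Input(shape=(1,), name='payment_type', dtype=tf.string)
payment_onehot = tf.keras.layers.StringLookup(
    vocabulary=['cash', 'card'], output_mode='one_hot')(payment_type)

# Combine the engineered features and feed a simple regressor.
features = tf.keras.layers.Concatenate()([distance_onehot, payment_onehot])
hidden = tf.keras.layers.Dense(16, activation='relu')(features)
output = tf.keras.layers.Dense(1)(hidden)

model = tf.keras.Model(inputs=[distance, payment_type], outputs=output)
model.compile(optimizer='adam', loss='mse')
```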
This course covers building ML models with TensorFlow and Keras, improving the accuracy of ML models and writing ML models for scaled use.
The course begins with a discussion about data: how to improve data quality and perform exploratory data analysis. We describe Vertex AI AutoML and how to build, train, and deploy an ML model without writing a single line of code. You will understand the benefits of BigQuery ML. We then discuss how to optimize a machine learning (ML) model and how generalization and sampling can help assess the quality of ML models for custom training.
This course introduces the AI and machine learning (ML) offerings on Google Cloud that build both predictive and generative AI projects. It explores the technologies, products, and tools available throughout the data-to-AI life cycle, encompassing AI foundations, development, and solutions. It aims to help data scientists, AI developers, and ML engineers enhance their skills and knowledge through engaging learning experiences and practical hands-on exercises.
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete the skill badge course, and final assessment challenge lab, to receive a skill badge that you can share with your network.
Complete the intermediate Build a Data Warehouse with BigQuery skill badge to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete the skill badge course, and final assessment challenge lab, to receive a digital badge that you can share with your network.
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
In this second installment of the Dataflow course series, we are going to be diving deeper on developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Towards the end of the course, we introduce SQL and Dataframes to represent your business logic in Beam and how to iteratively develop pipelines using Beam notebooks.
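To make the windowing and triggering vocabulary concrete, here is a hedged Beam Python sketch that groups a keyed collection into fixed windows with an early-firing trigger; the elements and timestamps are placeholders.

```python
# Hedged sketch: fixed windows with an early-firing trigger in Beam.
import apache_beam as beam
from apache_beam.transforms import window, trigger

with beam.Pipeline() as p:
    (p
     | 'Create' >> beam.Create([('user1', 1), ('user2', 1), ('user1', 1)])
     # Attach placeholder event-time timestamps to the elements.
     | 'AddTimestamps' >> beam.Map(
           lambda kv: window.TimestampedValue(kv, 0))
     # Group elements into 60-second fixed windows based on event time.
     | 'Window' >> beam.WindowInto(
           window.FixedWindows(60),
           # Emit speculative results every 30s of processing time before
           # the watermark passes, then a final result at the watermark.
           trigger=trigger.AfterWatermark(
               early=trigger.AfterProcessingTime(30)),
           accumulation_mode=trigger.AccumulationMode.DISCARDING)
     | 'CountPerKey' >> beam.CombinePerKey(sum)
     | 'Print' >> beam.Map(print))
```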
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). Also, this course covers how to productionalize machine learning solutions by using Vertex AI.
Processing streaming data is becoming increasingly popular as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. Pub/Sub is described for handling incoming streaming data. The course also covers how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records to BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud by using QwikLabs.
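A hedged sketch of such a pipeline in the Beam Python SDK is shown below: it reads JSON messages from Pub/Sub, counts events per key in one-minute windows, and streams the results to BigQuery. The project, topic, table, and message fields are hypothetical.

```python
# Hedged sketch: Pub/Sub -> Dataflow -> BigQuery streaming pipeline.
import json
import apache_beam as beam
from apache_beam.transforms import window
from apache_beam.options.pipeline_options import PipelineOptions

# Streaming mode; runner/project flags would be added for Dataflow.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | 'Read' >> beam.io.ReadFromPubSub(
           topic='projects/my-project/topics/events')
     | 'Parse' >> beam.Map(lambda msg: json.loads(msg.decode('utf-8')))
     | 'KeyByPage' >> beam.Map(lambda e: (e['page'], 1))
     | 'Window' >> beam.WindowInto(window.FixedWindows(60))
     | 'CountPerPage' >> beam.CombinePerKey(sum)
     | 'ToRow' >> beam.Map(lambda kv: {'page': kv[0], 'views': kv[1]})
     | 'Write' >> beam.io.WriteToBigQuery(
           'my-project:analytics.page_views_per_minute',
           schema='page:STRING,views:INTEGER',
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```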
Data pipelines typically fall under one of the Extract and Load (EL), Extract, Load and Transform (ELT) or Extract, Transform and Load (ETL) paradigms. This course describes which paradigm should be used and when for batch data. Furthermore, this course covers several technologies on Google Cloud for data transformation including BigQuery, executing Spark on Dataproc, pipeline graphs in Cloud Data Fusion and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
The two key components of any data pipeline are data lakes and warehouses. This course highlights use-cases for each type of storage and dives into the available data lake and warehouse solutions on Google Cloud in technical detail. Also, this course describes the role of a data engineer, the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
This course introduces Vertex AI Studio, a tool for interacting with generative AI models, prototyping business ideas, and putting them into real-world practice. Through real-life use cases, interactive lessons, and hands-on labs, you will explore the lifecycle from initial prompt to final product and learn how to leverage Vertex AI Studio for multimodal Gemini applications, prompt design, prompt engineering, and model tuning. The aim of this course is to enable you to harness generative AI in your projects by using Vertex AI Studio.
Earn a skill badge by completing the Introduction to Generative AI, Introduction to Large Language Models and Introduction to Responsible AI courses. By passing the final quiz, you'll demonstrate your understanding of foundational concepts in generative AI. A skill badge is a digital badge issued by Google Cloud in recognition of your knowledge of Google Cloud products and services. Share your skill badge by making your profile public and adding it to your social media profile.
This course teaches you how to create an image captioning model by using deep learning. During the course, you will learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. Learners who complete this course will be able to build their own image captioning models and use them to generate captions for images.
This course gives you a synopsis of the encoder-decoder architecture, a powerful machine learning architecture widely used for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. In the course, you will learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the lab walkthrough, you will code a simple implementation of the encoder-decoder architecture in TensorFlow to generate poetry from scratch.
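For orientation, the sketch below is a minimal Keras encoder-decoder along the lines of what such a lab implements; the vocabulary size, dimensions, and choice of GRU layers are assumptions, not the lab's exact code.

```python
# Hedged sketch: a minimal encoder-decoder (seq2seq) model in Keras.
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 8000, 128, 256  # hypothetical sizes

# Encoder: embed the input sequence and compress it into a state vector.
encoder_inputs = tf.keras.Input(shape=(None,), name='encoder_tokens')
enc_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(encoder_inputs)
_, enc_state = tf.keras.layers.GRU(HIDDEN_DIM, return_state=True)(enc_emb)

# Decoder: generate the output sequence conditioned on the encoder state.
decoder_inputs = tf.keras.Input(shape=(None,), name='decoder_tokens')
dec_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(decoder_inputs)
dec_outputs = tf.keras.layers.GRU(HIDDEN_DIM, return_sequences=True)(
    dec_emb, initial_state=enc_state)
logits = tf.keras.layers.Dense(VOCAB_SIZE)(dec_outputs)

model = tf.keras.Model([encoder_inputs, decoder_inputs], logits)
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```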
This course introduces diffusion models, a family of machine learning models that has shown great promise in image generation. Diffusion models take inspiration from physics, specifically thermodynamics. Within the last few years, diffusion models have become popular in both research and industry, and they underpin many of the state-of-the-art image generation models and tools on Google Cloud. This course introduces the theory behind diffusion models and how to train and deploy them on Vertex AI.
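To make the thermodynamics analogy concrete, the snippet below sketches the forward (noising) process that diffusion models are trained to reverse; the noise schedule and toy data are illustrative assumptions.

```python
# Hedged sketch: the closed-form forward (noising) process of a diffusion model.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (illustrative)
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal retention per step

def noisy_sample(x0, t, rng=np.random.default_rng()):
    """Sample x_t ~ q(x_t | x_0): scaled data plus Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = np.ones((8, 8))                 # a toy "image"
x_half = noisy_sample(x0, T // 2)    # heavily corrupted version of x0
```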
This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it is important, and how Google implements responsible AI in its products. It also introduces Google's 7 AI Principles.
This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as classification, question answering, and natural language inference. This course is estimated to take approximately 45 minutes to complete.
This course introduces you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works and how it can be used to improve the performance of a variety of machine learning tasks, including machine translation, text summarization, and question answering.
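As a concrete illustration, here is a compact NumPy sketch of scaled dot-product attention, the standard formulation of the mechanism; the shapes and random inputs are illustrative.

```python
# Hedged sketch: scaled dot-product attention.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weights each value by how well its key matches the query."""
    d_k = K.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)    # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V, weights

# One query attending over four input positions, each with 8-dim features.
Q = np.random.rand(1, 8)
K = np.random.rand(4, 8)
V = np.random.rand(4, 8)
context, attn = scaled_dot_product_attention(Q, K, V)
```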
This is an introductory-level microlearning course that explores what large language models (LLMs) are, the use cases where they can be utilized, and how you can use prompt tuning to enhance LLM performance. It also covers the Google tools that help you develop your own generative AI applications.
This is an introductory-level microlearning course aimed at explaining what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own generative AI applications.