Leonardo Vigil
Member since: 2020
Bronze League
400 points
Cloud Hero is played around the world, in person and online. Today, you have the opportunity to become a cloud hero! This game is all about how GCP helps you get the most out of your data. You will compete to see who can finish the game with the highest score. Earn points by completing the steps in each lab, and get bonus points for speed! Be sure to click "End" when you're done with each lab to get the maximum points. All players will be awarded the game badge.
Want to build machine learning models in minutes instead of hours, using just SQL? BigQuery ML democratizes machine learning by letting data analysts create, train, evaluate, and predict with machine learning models using their existing SQL tools and skills. In this series of labs, you will experiment with different model types and learn what makes a good model.
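As a rough sketch of the workflow these labs cover, a model can be trained, evaluated, and used for prediction entirely in BigQuery's standard SQL; the dataset, table, and column names below are hypothetical placeholders, not from any specific lab:

```sql
-- Train a logistic regression model directly in SQL
-- (`mydata.visits` and its columns are illustrative placeholders).
CREATE OR REPLACE MODEL `mydata.purchase_model`
OPTIONS (model_type = 'logistic_reg',
         input_label_cols = ['made_purchase']) AS
SELECT pageviews, time_on_site, made_purchase
FROM `mydata.visits`;

-- Evaluate the trained model with the same SQL skills...
SELECT * FROM ML.EVALUATE(MODEL `mydata.purchase_model`);

-- ...and generate predictions on new rows.
SELECT *
FROM ML.PREDICT(MODEL `mydata.purchase_model`,
  (SELECT pageviews, time_on_site FROM `mydata.visits`));
```

No Python, notebooks, or model-serving infrastructure is involved; training, evaluation, and prediction all run as ordinary BigQuery queries.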
In this advanced-level quest, you will learn how to harness serious Google Cloud computing power to run big data and machine learning jobs. The hands-on labs will give you use cases, and you will be tasked with implementing big data and machine learning practices utilized by Google's very own Solutions Architecture team. From running BigQuery analytics on tens of thousands of basketball games to training TensorFlow image classifiers, you will quickly see why Google Cloud is the go-to platform for running big data and machine learning jobs.
Big data, machine learning, and scientific data? It sounds like the perfect match. In this advanced-level quest, you will get hands-on practice with GCP services like BigQuery, Dataproc, and TensorFlow by applying them to use cases that employ real-life scientific datasets. By gaining experience with tasks like earthquake data analysis and satellite image aggregation, Scientific Data Processing will expand your skill set in big data and machine learning so you can start tackling your own problems across a spectrum of scientific disciplines.
This advanced-level quest builds on its predecessor and offers hands-on practice with the more advanced data integration features available in Cloud Data Fusion, while sharing best practices for building more robust, reusable, dynamic pipelines. Learners also get to try out the data lineage feature to derive interesting insights into their data's history.