Cloud Life Sciences: Variant Transforms Tool
GSP478
Overview
Cloud Life Sciences helps the life science community organize the world's genomic information and make it accessible and useful. Big genomic data is here today, with petabytes rapidly growing toward exabytes. Through extensions to Google Cloud Platform, you can apply the same technologies that power Google Search and Maps to securely store, process, explore, and share large, complex datasets.
Variant Transforms is an open-source tool used with Cloud Life Sciences. It is based on Apache Beam and uses Cloud Dataflow. Variant Transforms allows you to transform and load hundreds of thousands of files, millions of samples, and billions of records in a scalable manner. The tool also has a preprocessor which you can use to validate VCF files and identify inconsistencies.
The typical workflow for using the tool comprises the following:
- Storing raw VCF files in Cloud Storage.
- Using the Variant Transforms tool to load the VCF files from Cloud Storage into BigQuery.
You can then use BigQuery to analyze the variants.
In this lab you will use the Variant Transforms tool to transform and load VCF files from Cloud Storage into BigQuery.
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
- Time to complete the lab---remember, once you start, you cannot pause a lab.
How to start your lab and sign in to the Google Cloud Console
- Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
- The Open Google Console button
- Time remaining
- The temporary credentials that you must use for this lab
- Other information, if needed, to step through this lab
- Click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
- If necessary, copy the Username from the Lab Details panel and paste it into the Sign in dialog. Click Next.
- Copy the Password from the Lab Details panel and paste it into the Welcome dialog. Click Next.
Important: You must use the credentials from the left panel. Do not use your Google Cloud Skills Boost credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
- Click through the subsequent pages:
- Accept the terms and conditions.
- Do not add recovery options or two-factor authentication (because this is a temporary account).
- Do not sign up for free trials.
After a few moments, the Cloud Console opens in this tab.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
- Click Activate Cloud Shell at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session.
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
- (Optional) You can list the active account name with this command:
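A command along these lines lists the active account:

gcloud auth list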
- Click Authorize.
- Your output now shows the active account for this session.
- (Optional) You can list the project ID with this command:
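A command along these lines shows the project ID for the session:

gcloud config list project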
Note: For full documentation of gcloud, refer to the gcloud CLI overview guide.
Task 1. Enable the APIs
- Open the Navigation menu and select APIs & Services > Library, then search for "lifesciences".
- Click on the Google Cloud Life Sciences API tile, then click Enable.
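If you prefer to work from Cloud Shell, the API can likely also be enabled with its service name; the name lifesciences.googleapis.com is an assumption to verify against the API Library entry:

gcloud services enable lifesciences.googleapis.com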
Task 2. Create a Cloud Storage bucket
- In Cloud Shell, set a variable equal to your Project ID:
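The original command block is not preserved here; a minimal sketch, assuming the GOOGLE_CLOUD_PROJECT variable name referenced later in this lab:

export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)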
- Use the make bucket command to create a new regional bucket in the us-central1 region within your project:
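A sketch of the gsutil make bucket command; the bucket name (reusing the Project ID) is an assumption, so use the name given in your lab if it differs:

gsutil mb -l us-central1 gs://${GOOGLE_CLOUD_PROJECT}/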
Click Check my progress to verify the objective.
After storing raw VCF files, use the Variant Transforms tool to load them into BigQuery.
Task 3. Create the BigQuery dataset
- From the Navigation menu in the Console, click on BigQuery, then click Done.
- In the BigQuery UI, click on your project name, then click Create Dataset.
- Name the dataset genomicstest, then click Create dataset.
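Equivalently, a dataset with this name could be created from Cloud Shell; a minimal sketch:

bq mk ${GOOGLE_CLOUD_PROJECT}:genomicstest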
Click Check my progress to verify the objective.
Task 4. Loading genomic variants for analysis
Next you will use the Variant Transforms tool to transform and load VCF files directly into BigQuery for large-scale storage and analysis.
Transforming VCF files and importing into BigQuery
Running the tool
You can run the tool using a Docker image that has all of the necessary binaries and dependencies installed.
- Run the following script (sketched below) to start the tool. For the output table, substitute:
- Your own Project ID for the GOOGLE_CLOUD_PROJECT variable.
- genomicstest for BIGQUERY_DATASET.
- "deepvariant" for BIGQUERY_TABLE.
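The script block itself is not preserved in this copy of the lab. The following is a minimal sketch based on the publicly documented usage of the gcr.io/cloud-lifesciences/gcp-variant-transforms Docker image; the INPUT_PATTERN, TEMP_LOCATION, bucket name, and region are assumptions, so substitute the exact values provided in your lab:

# All values below are placeholders/assumptions; replace them with the
# values given in your lab environment.
GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)
BIGQUERY_DATASET=genomicstest
BIGQUERY_TABLE=deepvariant
INPUT_PATTERN=gs://${GOOGLE_CLOUD_PROJECT}/*.vcf     # path to the raw VCF file(s) in your bucket (assumed)
TEMP_LOCATION=gs://${GOOGLE_CLOUD_PROJECT}/temp      # any Cloud Storage path you can write to

# vcf_to_bq pipeline arguments, run on Dataflow
COMMAND="vcf_to_bq \
  --input_pattern ${INPUT_PATTERN} \
  --output_table ${GOOGLE_CLOUD_PROJECT}:${BIGQUERY_DATASET}.${BIGQUERY_TABLE} \
  --temp_location ${TEMP_LOCATION} \
  --job_name vcf-to-bigquery \
  --runner DataflowRunner"

# The Docker image wraps the pipeline launcher
docker run -v ~/.config:/root/.config \
  gcr.io/cloud-lifesciences/gcp-variant-transforms \
  --project "${GOOGLE_CLOUD_PROJECT}" \
  --region us-central1 \
  --temp_location "${TEMP_LOCATION}" \
  "${COMMAND}"

The wrapper launches an operation that runs the vcf_to_bq pipeline on Dataflow, which is why the job appears in the Dataflow UI a few minutes later.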
- When specifying the location of your VCF files in a Cloud Storage bucket, you can specify a single file or use a wildcard (*) to load multiple files at once. Acceptable file formats include GZIP, BZIP, and VCF. For more information, see Loading multiple files.
- The tool runs more slowly for compressed files because compressed files cannot be sharded. If you want to merge samples across files, see Variant merging.
- Note that the TEMP_LOCATION directory is used to store temporary files needed to run the tool. It can be any directory in Cloud Storage to which you have write access.
The command returns an operation ID.
Depending on several factors, such as the size of your data, it can take anywhere from several minutes to an hour or more for the job to complete. For this lab you’re using a small dataset, so once the Dataflow job starts, it should be finished in around 10 minutes, and the overall job should complete in around 20 minutes.
If the job fails, you can check the runner_logs file in the temp folder of the storage bucket to see a detailed message. Sometimes it takes a few minutes for the Dataflow API to be fully enabled and for the service account to be created, so if the job fails the first time, or the error logs say "Please ensure that the Dataflow API is enabled for your project.", wait a couple of minutes and run the docker run [...] command again.
- Because the tool uses Cloud Dataflow, go to Navigation menu > Dataflow to see a detailed view of the job. You will need to wait a couple of minutes for Dataflow to get started.
- On the Dataflow screen, click the vcf-to-bigquery job name to see the number of records processed, the number of workers, and detailed error logs.
In Cloud Shell, the output shows that pipeline execution completed when the operation finishes. A detailed description of the Operation resource can be found in the API documentation.
- While the job is running, read more about the script you just ran.
Click Check my progress to verify the objective.
Loading multiple files
You specified the VCF files you want to load into BigQuery using the --input_pattern flag in the script above.
- For example, to load all VCF files in the my-bucket Cloud Storage bucket, set the flag to the following:
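A value along these lines, using the wildcard described below, matches every VCF file in that bucket:

--input_pattern=gs://my-bucket/*.vcf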
When loading multiple files with the Variant Transforms tool, the following operations occur:
- A merged BigQuery schema is created that contains data from all matching VCF files listed in the --input_pattern flag. For example, the INFO and FORMAT fields shared between the VCF files are merged. This step assumes that fields defined in multiple files with the same key are compatible.
- Records from all of the VCF files are loaded into a single table. Any missing fields are set to null in their associated column.
When loading the VCF files, their field definitions and values must be consistent, or else the tool will fail. The tool can attempt to fix these inconsistencies if set to do so. For more information, see Handling malformed files.
- After the job completes, run the following command to list all of the tables in your dataset, replacing GOOGLE_CLOUD_PROJECT and BIGQUERY_DATASET with your names:
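The listing command itself is not preserved here; a bq invocation like the following (mirroring the bq show line below) lists the tables:

bq ls --format=pretty ${GOOGLE_CLOUD_PROJECT}:${BIGQUERY_DATASET}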
Check that the new table containing your VCF data is in the list, then inspect it with bq show. The __chr20 suffix in the table name comes from the tool's per-chromosome sharding of output tables.
bq show --format=pretty ${GOOGLE_CLOUD_PROJECT}:${BIGQUERY_DATASET}.deepvariant__chr20
Task 5. Analyzing variants with BigQuery
This task describes how to use BigQuery to analyze variants. The example below shows how to compute the ratio of transitions to transversions in SNPs in each chromosome for each sample.
Analyzing variants
The following example uses the data that was just imported with Variant Transforms.
To analyze the variants in the table:
- Go to the BigQuery UI.
- In the Query Editor, add the following query (a sketch is given below). Replace PROJECT_ID in the FROM line with your Project ID:
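The query block is not preserved in this copy. The sketch below computes the transition/transversion (Ti/Tv) ratio per chromosome and sample; it assumes the BigQuery schema produced by Variant Transforms (reference_bases, a repeated alternate_bases record with an alt field, and a repeated call record with name and genotype). If your table uses call.sample_id instead of call.name, adjust accordingly.

#standardSQL
WITH snp_calls AS (
  SELECT
    reference_name,
    c.name AS sample,  -- may be c.sample_id, depending on the tool version
    CONCAT(reference_bases, '->', alternate_bases[ORDINAL(1)].alt) AS mutation
  FROM
    `PROJECT_ID.genomicstest.deepvariant__chr20` AS v,
    UNNEST(v.call) AS c
  WHERE
    -- biallelic SNPs only
    reference_bases IN ('A', 'C', 'G', 'T')
    AND ARRAY_LENGTH(alternate_bases) = 1
    AND alternate_bases[ORDINAL(1)].alt IN ('A', 'C', 'G', 'T')
    -- keep calls with at least one non-reference allele
    AND EXISTS (SELECT g FROM UNNEST(c.genotype) AS g WHERE g > 0)
),
counts AS (
  SELECT
    reference_name,
    sample,
    SUM(IF(mutation IN ('A->G', 'G->A', 'C->T', 'T->C'), 1, 0)) AS transitions,
    SUM(IF(mutation IN ('A->C', 'C->A', 'A->T', 'T->A',
                        'G->C', 'C->G', 'G->T', 'T->G'), 1, 0)) AS transversions
  FROM snp_calls
  GROUP BY reference_name, sample
)
SELECT
  reference_name,
  sample,
  transitions,
  transversions,
  ROUND(transitions / transversions, 3) AS titv
FROM counts
WHERE transversions > 0
ORDER BY reference_name, sample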
- Click Run. The results show, for each chromosome and sample, the transition and transversion counts.
The titv column shows the transition to transversion ratio.
Congratulations!
You have now learned how to use Variant Transforms and BigQuery to perform analysis of genomic variants. BigQuery can be used to perform all types of analysis of variant data such as QA/QC, joint genotyping, population analysis, and comparing genotypic data to phenotype information hosted in other datasets.
Variant Transforms provides a way to import this data from a variety of sources, ranging from single sample exomes to multi-sample whole genomes, in a fast and scalable way. Variant Transforms is a gateway to unlocking the power of BigQuery to facilitate fast, reproducible, and massive scale analysis of genomic data.
Next steps / Learn more
- Troubleshooting the Variant Transforms Tool https://cloud.google.com/life-sciences/docs/how-tos/troubleshooting-variant-transforms.
- Google Cloud Life Sciences Discussion Group: gcp-life-sciences-discuss@googlegroups.com
Google Cloud training and certification
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated October 10, 2022
Lab Last Tested July 30, 2021
Copyright 2023 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.