
BigQuery Destination Plugin

Latest: v2.0.0

The BigQuery plugin syncs data from any CloudQuery source plugin(s) to a BigQuery database running on Google Cloud Platform.

The plugin currently supports only a streaming mode, via the legacy streaming API. This mode streams results directly to the BigQuery database and is suitable for small- to medium-sized datasets. A batch mode for larger datasets is under development but not yet available.

Streaming is not available for the Google Cloud free tier.

Before you begin

  1. Make sure that billing is enabled for your Cloud project.
  2. Create a BigQuery dataset that will contain the tables synced by CloudQuery (one way to create a dataset is sketched after this list). CloudQuery will automatically create the tables as part of a migration run on the first sync.
  3. Ensure that you have write access to the dataset. See Required Permissions for details.
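
As an illustration, one way to create the dataset from step 2 is with the bq command-line tool. This is only a sketch; the location, project and dataset names below are placeholders:

# Create the destination dataset (replace the names with your own)
bq --location=US mk --dataset my-project:my_dataset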

Example config

The following config reads the values for project_id and dataset_id from environment variables:

kind: destination
spec:
  name: bigquery
  path: cloudquery/bigquery
  version: "v2.0.0"
  write_mode: "append"
  spec:
    project_id: ${PROJECT_ID}
    dataset_id: ${DATASET_ID}

Note that the BigQuery plugin only supports the append write mode.
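
For example, assuming the config above is saved as bigquery.yml next to a source spec named source.yml (both file names are placeholders), a sync could then be run like this:

# Provide the values referenced by the config, then run the sync
export PROJECT_ID=my-project
export DATASET_ID=my_dataset
cloudquery sync source.yml bigquery.yml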

Authentication

The BigQuery plugin authenticates using your Application Default Credentials. The available options are:

Local environment:

  • gcloud auth application-default login (recommended when running locally)

Google Cloud-based development environment:

  • When you run on Cloud Shell or Cloud Code, credentials are already available.

Google Cloud containerized environment:

  • When running on GKE, you can attach a service account to your workloads, which CloudQuery will be able to utilize.

Google Cloud services that support attaching a service account:

  • Services such as Compute Engine, App Engine and Cloud Functions support attaching a user-managed service account, which CloudQuery will be able to utilize.

On-premises or another cloud provider:

  • The suggested way is to use Workload Identity Federation.
  • If that is not available, you can use a service account key and export the location of the key file via the GOOGLE_APPLICATION_CREDENTIALS environment variable. (This is not recommended, as long-lived keys are a security risk.)
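
If you do fall back to a key, a minimal sketch (the key file path is a placeholder):

# Point Application Default Credentials at a downloaded key file
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json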

BigQuery Spec

This is the top-level spec used by the BigQuery destination plugin.

  • project_id (string) (required)

    The ID of the project where the destination BigQuery database resides.

  • dataset_id (string) (required)

    The name of the BigQuery dataset within the project, e.g. my_dataset. This dataset needs to be created before running a sync or migration.

  • dataset_location (string) (optional)

    The data location of the BigQuery dataset. If set, it will be used as the default location for job operations. Pro tip: this can solve "dataset not found" errors for newly created datasets.

  • time_partitioning (string) (options: none, hour, day) (default: none)

    The time partitioning to use when creating tables. The partition time column used will always be _cq_sync_time so that all rows for a sync run will be partitioned on the hour/day the sync started.

  • service_account_key_json (string) (default: empty)

    GCP service account key content. This allows for using different service accounts for the GCP source and BigQuery destination. If using service account keys, it is best to use environment or file variable substitution.
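
As a hedged example of the environment variable substitution mentioned above, you could load the key content into a variable and reference it as ${SA_KEY} in the spec (the variable name and file path are placeholders):

# Hypothetical setup: load the key JSON into SA_KEY, then set
# service_account_key_json: ${SA_KEY} in the destination spec
export SA_KEY="$(cat /path/to/service-account-key.json)"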

Underlying library

We use the official cloud.google.com/go/bigquery package for the database connection.