
Getting Started

Set up your first Firestore to BigQuery pipeline

This guide walks you through setting up Fireconduit and creating your first pipeline to backfill Firestore data to BigQuery.

How Fireconduit Works

Fireconduit runs Dataflow jobs directly in your GCP project. You deploy a small Cloud Function that receives Pub/Sub trigger messages from Fireconduit via Eventarc and launches Dataflow jobs. This means:

  • Your data never leaves your GCP project
  • You pay Google directly for Dataflow compute costs
  • Full control over network configuration and data residency
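The trigger itself is a small Pub/Sub message; the exact payload schema isn't documented here, but as a purely hypothetical illustration (all field names invented), the Cloud Function might receive something like:

```json
{
  "pipelineId": "pl_abc123",
  "collection": "users",
  "dataset": "analytics",
  "table": "users_raw"
}
```

The function's only job is to translate that message into a Dataflow job launch inside your project, so no document data ever passes through Fireconduit's servers.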

Prerequisites

Before you begin, make sure you have:

  1. A Fireconduit account (sign up if you haven’t already)
  2. A GCP project with Firestore data and BigQuery enabled
  3. Terraform installed (installation guide)
  4. gcloud CLI installed and authenticated (installation guide)
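Before continuing, you can confirm the tools are on your PATH from a terminal. This loop is just a convenience sketch, not part of the Fireconduit setup (node is included because the setup wizard runs via npx):

```shell
# Check that each required tool is installed and report its status.
for tool in terraform gcloud node; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```

Note that the loop only confirms the binaries exist; `gcloud auth login` handles the authentication part.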

Step 1: Deploy Infrastructure

First, deploy the required infrastructure to your GCP project.

Get Your API Key

  1. Sign in to Fireconduit
  2. Go to Settings
  3. Copy your API Key

Run the Setup Wizard

The Fireconduit CLI generates Terraform configuration for you:

npx fireconduit init

The wizard will:

  • Check prerequisites (Terraform, gcloud, authentication)
  • Prompt for your GCP project ID and region
  • Prompt for your API key
  • Generate Terraform files in a ./.fireconduit directory

Deploy with Terraform

cd .fireconduit
terraform init    # download providers and initialize state
terraform apply   # review the plan, then confirm to create the resources

When Terraform completes, it automatically registers your Cloud Function with Fireconduit. Your GCP project will appear in your dashboard.

See Connect Your GCP Project for detailed configuration options.

Step 2: Create Your First Pipeline

With infrastructure deployed, create a pipeline to define what data to backfill:

  1. Navigate to Pipelines in the dashboard
  2. Click New Pipeline
  3. Follow the wizard steps:

Basics

Give your pipeline a name and optional description.

Source & Destination

  • Firestore Collection: Enter the collection path (e.g., users, orders)
  • BigQuery Dataset: Enter the dataset name
  • BigQuery Table: Enter the table name (created automatically if it doesn’t exist)

Schema

Configure how Firestore documents map to BigQuery columns:

  • Start with a template (Firebase Extension or Simple)
  • Add metadata columns (document ID, path, timestamps)
  • Map specific document fields to typed columns
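For reference, a BigQuery table carrying the metadata columns described above could use a schema along these lines. This is the standard BigQuery JSON schema format; the column names are illustrative, not Fireconduit's exact defaults, and since the table is created for you, you would not normally define it by hand:

```json
[
  {"name": "document_id",   "type": "STRING",    "mode": "REQUIRED"},
  {"name": "document_path", "type": "STRING",    "mode": "NULLABLE"},
  {"name": "created_at",    "type": "TIMESTAMP", "mode": "NULLABLE"},
  {"name": "email",         "type": "STRING",    "mode": "NULLABLE"}
]
```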

See the Schema Mapping guide for detailed options.

Trigger

Choose when backfills run:

  • Manual: Run backfills on-demand from the dashboard
  • Scheduled: Set up recurring backfills at regular intervals

Dataflow Settings

Configure Dataflow settings like region, worker machine type, and maximum workers. The defaults work well for most use cases.
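As a rough illustration, reasonable starting values for a small collection might look like the following. The field names and values here are assumptions for the sketch, not Fireconduit's exact settings:

```
region:              us-central1
worker_machine_type: n1-standard-2
max_workers:         5
```

Raising the worker count mainly helps with very large collections; for most backfills the defaults finish quickly without tuning.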

  4. Click Create Pipeline to save your configuration

Step 3: Run Your First Backfill

With your pipeline created, you’re ready to backfill data:

  1. Go to your pipeline’s detail page
  2. Click Run Backfill
  3. Confirm to start the job

The backfill job will appear in the Jobs tab. You can monitor progress in real time, including documents processed, throughput, and any errors.

Next Steps