BigQuery is a fast, serverless, and easy-to-use data warehouse built by Google. Because the pricing model is pay-as-you-go, BigQuery can be cost-effective for startups and enterprises alike. Dataform lets you manage all of the data processes happening in your BigQuery warehouse, turning raw data into datasets that power your company’s analytics.
Dataform provides a powerful alternative to BigQuery scheduled queries. Schedules can be triggered by API, by webhook, or at a time of your choosing. Success and failure alerts are sent to your team by Slack or email. Detailed run logs show exactly which SQL statements ran and when, making debugging simple. And our parallel execution strategy minimises schedule durations and simplifies dependency management.
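As a minimal illustration of how that dependency management works, dependencies are declared with ref() directly in SQLX, so Dataform can infer the graph and run independent queries in parallel. The dataset names below (customers, orders) are purely illustrative:

```sqlx
config { type: "table" }

-- Referencing other datasets with ref() lets Dataform infer the dependency graph:
-- "customers" and "orders" are built first, and unrelated nodes run in parallel.
select
  c.customer_id,
  count(o.order_id) as order_count
from ${ref("customers")} as c
left join ${ref("orders")} as o
  on o.customer_id = c.customer_id
group by 1
```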
BigQuery’s on-demand pricing charges $5 for every terabyte processed, so it’s important to keep track of how much data your pipelines process each day. Dataform provides simple reports for each of your schedules detailing how much each individual query within the schedule cost. When costs start to rise, use Dataform’s incremental tables to reduce your query costs with a few lines of code.
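Here is a minimal sketch of an incremental table, assuming a hypothetical events source with an event_timestamp column. On incremental runs, only rows newer than the latest date already in the output table are scanned, which keeps the bytes processed (and the bill) down:

```sqlx
config { type: "incremental" }

select
  date(event_timestamp) as event_date,
  event_name,
  count(*) as events
from ${ref("events")}
-- On incremental runs, only scan rows newer than what is already in the table.
${when(incremental(), `where date(event_timestamp) > (select max(event_date) from ${self()})`)}
group by 1, 2
```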
The Dataform web IDE is natively integrated with GitHub and GitLab. Version controlling your SQL has never been easier: create branches, commit changes, revert files and create pull requests without ever needing to touch the command line.
Being able to produce analytics tables that we are confident in (because of assertions) and that are as up to date as we need them to be (because of scheduling) makes our lives really easy. The UI is incredibly easy and intuitive to use, meaning we spend little of our time setting these things up, and most of our time writing SQL!
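The assertions mentioned here are data-quality checks declared alongside the query in the SQLX config block. A small sketch, with hypothetical table and column names:

```sqlx
config {
  type: "table",
  assertions: {
    -- Fail the run if customer_id is not unique, or if key columns contain nulls.
    uniqueKey: ["customer_id"],
    nonNull: ["customer_id", "signup_date"]
  }
}

select customer_id, signup_date, plan
from ${ref("raw_customers")}
```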
I love the dependency tree in Dataform. For me this is a central place for sanity checking my data flows, understanding if I'm reimplementing a dataset which already exists, and verifying logic. Secondly, I love SQLX for generating SQL of a similar structure again and again; it really speeds up development and lets you abstract away logic.
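One way SQLX enables that kind of reuse is through JavaScript helpers defined in a js block and called from the query body. The helper and column names below are hypothetical:

```sqlx
config { type: "table" }

js {
  // Hypothetical helper: returns the same CASE expression for any column name,
  // so the mapping is written once and reused wherever it is needed.
  function countryGroup(col) {
    return `case
      when ${col} in ('US', 'CA') then 'North America'
      when ${col} in ('GB', 'DE', 'FR') then 'Europe'
      else 'Other'
    end`;
  }
}

select
  ${countryGroup("country_code")} as region,
  count(*) as users
from ${ref("users")}
group by 1
```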
Having modeled data using other tools in the past, this is a much simpler and easier environment to code in. The code compiles in real time and lets you know if there are errors in the syntax. It also generates a dependency graph for the data pipeline, which is insanely useful.