Cloud data warehousing is changing the way companies approach data management and analytics. Because cloud warehouses store and process data cost-effectively, more and more companies are moving away from an ETL approach and towards an ELT approach for managing analytical data.
Source data is extracted from the original data source in a raw, often unstructured format. In traditional ETL processes this data is put into a temporary staging repository such as S3.
In the ELT model, data is copied directly into the data warehouse without significant modification. In an ETL model, data is transformed into a suitable format before being loaded into the warehouse.
Once loaded into the data warehouse, additional transformation must be done to clean and model the data before it is practically useful for other analytics applications, particularly when following the ELT model.
ETL requires the transformations to happen before the loading process. ETL extracts data from data sources and then deposits it into a staging area. Data is then cleaned, enriched, transformed, and finally loaded into the data warehouse.
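As a minimal sketch of that ETL flow, here is what "transform before load" looks like in application code. The records, table names, and cleaning rule (lower-casing and trimming emails) are hypothetical, and an in-memory SQLite database stands in for the warehouse:

```python
import sqlite3

# Hypothetical records extracted from an upstream source system.
extracted = [
    {"id": 1, "email": " ALICE@EXAMPLE.COM "},
    {"id": 2, "email": "bob@example.com"},
]

# Transform *before* loading: clean and normalise in application code,
# outside the warehouse.
transformed = [
    {"id": r["id"], "email": r["email"].strip().lower()} for r in extracted
]

# Load only the cleaned rows into the warehouse (SQLite stands in here).
conn = sqlite3.connect(":memory:")
conn.execute("create table users (id integer, email text)")
conn.executemany("insert into users values (:id, :email)", transformed)

print(conn.execute("select email from users order by id").fetchall())
# → [('alice@example.com',), ('bob@example.com',)]
```

The key point is that the warehouse only ever sees finished, cleaned rows; the transformation logic lives (and must be maintained) outside it.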
ELT is a modern variation of ETL where data cleaning, enrichment and transformation happen after the loading process. This is enabled by the fact that modern cloud data warehouses are extremely scalable and separate storage from compute resources.
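The same pipeline restructured as ELT loads the raw data first and then transforms it with SQL inside the warehouse itself. Again the table names and cleaning rule are hypothetical, with in-memory SQLite standing in for a cloud warehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Load the raw data as-is, with no up-front cleaning.
conn.execute("create table raw_users (id integer, email text)")
conn.executemany(
    "insert into raw_users values (?, ?)",
    [(1, " ALICE@EXAMPLE.COM "), (2, "bob@example.com")],
)

# Transform *after* loading, using the warehouse's own SQL engine.
conn.execute(
    """
    create table users as
    select id, lower(trim(email)) as email
    from raw_users
    """
)

print(conn.execute("select email from users order by id").fetchall())
# → [('alice@example.com',), ('bob@example.com',)]
```

Because the raw table is preserved, the transformation can be rewritten and re-run at any time without re-extracting from the source, which is exactly what scalable, storage/compute-separated warehouses make cheap.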
Being able to produce analytics tables whose output we are confident in (because of assertions) and that are as up to date as we need them to be (because of scheduling) makes our lives really easy. The UI is incredibly easy and intuitive to use, meaning we spend little of our time setting these things up, and most of our time writing SQL!
I love the dependency tree in Dataform. For me this is a central place for sanity checking my data flows, understanding if I'm reimplementing a dataset which already exists, and verifying logic. Secondly, I love SQLX for generating SQL of a similar structure again and again; it really speeds up development and lets you abstract away logic.
Having modeled data using other tools in the past, this is much simpler and an easier environment to code in. The code compiles in real time and lets you know if there are errors in the syntax. It also generates a dependency graph for the data pipeline, which is insanely useful.