Our team provides consulting services focusing on the most critical business issues.
At a high level, the implementation for ES takes audit and historical data from a variety of on-premises sources, which first lands in Azure Blob storage. PolyBase is then used to quickly load the data in parallel into Azure SQL Data Warehouse, where it is processed and transformed into dimension and fact tables. These dimension and fact tables are then moved into Analysis Services and SQL Server IaaS instances to support fast, highly concurrent access by a variety of business users. Across this solution, Azure Data Factory acts as the orchestrating ELT framework, providing a single interface to control the data flow for the majority of the pipeline.
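The PolyBase stage of this pipeline can be sketched roughly as follows. This is a minimal illustration, not the ES team's actual code: all object names (the storage account, external data source, file format, and table names) are assumptions, and a database-scoped credential would be needed for a non-public container.

```sql
-- Hypothetical PolyBase load: blob-staged files -> external table -> CTAS.
-- Storage account, schemas, and column names are illustrative.
CREATE EXTERNAL DATA SOURCE AuditBlobStore
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://audit@mystorageaccount.blob.core.windows.net'
);

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

CREATE EXTERNAL TABLE ext.AuditEvents (
    EventId     BIGINT,
    EventDate   DATE,
    EventDetail VARCHAR(4000)
)
WITH (
    LOCATION = '/audit/',
    DATA_SOURCE = AuditBlobStore,
    FILE_FORMAT = CsvFormat
);

-- CTAS reads the external files in parallel across all distributions.
CREATE TABLE stg.AuditEvents
WITH (DISTRIBUTION = HASH(EventId), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM ext.AuditEvents;
```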
8 Tips for SQL Data Warehouse
Putting together a solution like this, while keeping it performant, is not a trivial task. Below is some guidance straight from the ES team on designing solutions with Azure SQL Data Warehouse:
1. Stage the data in Azure SQL DW:
One of the guiding principles of our warehouse has been to stage the data in its native form, i.e. the way it is available in the source. There are various reasons to persist a copy of the source: performance, data quality, data persistence for validation, and so on. Because we stage the data, we are able to distribute it as per our needs and ensure minimal data skew. Rarely, if at all, have we used the round-robin distribution to store data.
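A staging table distributed on a join key, rather than round-robin, looks like this. The table and column names here are illustrative, not from the actual ES schema:

```sql
-- Hash-distribute the staged copy on the key used in downstream joins,
-- instead of the default ROUND_ROBIN (names are hypothetical).
CREATE TABLE stg.SalesOrder (
    SalesOrderId BIGINT NOT NULL,
    CustomerKey  INT    NOT NULL,
    OrderDate    DATE,
    Amount       DECIMAL(18, 2)
)
WITH (
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX
);
```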
2. Arriving at a common distribution key:
The data staged into the Azure SQL Data Warehouse instance was ingested from a 3NF data source. This allowed us to slightly change the schema from the source and include the base table's foreign key in all tables, which are then distributed on that same key. During our fact loads we join the tables on this set of keys, thereby minimizing DMS operations; in some cases no DMS operations occur at all, which gives us an edge in performance. As an example, the source systems have one-to-many relationships between tables; in our SQL DW, however, we have inserted a common distribution key across all tables, derived from business logic and populated when the ELT runs.
However, we would recommend checking the data skew before going ahead with this approach, as the distribution key must be chosen based on the skew one observes. When data skew is high or the joins are not distribution-compatible, we create an interim table distributed on the same key as the other table in the join. We use CTAS to accomplish this, which incurs one DMS operation to get all the keys but improves performance when there are complex joins.
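The skew check and the interim-table pattern can be sketched as follows. `DBCC PDW_SHOWSPACEUSED` is a built-in SQL DW command that reports rows and space per distribution; the table names and the chosen key are assumptions for illustration:

```sql
-- Inspect per-distribution row counts to spot skew (table name illustrative).
DBCC PDW_SHOWSPACEUSED('dbo.OrderLine');

-- If skew is high, or a join is not distribution-compatible, CTAS the data
-- into an interim table distributed on the same key as the other join side.
CREATE TABLE dbo.OrderLine_Interim
WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM dbo.OrderLine;
```

This pays the data-movement cost once, up front, instead of on every complex join.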
3. Vertical Partitioning of Wide Tables:
We had a row size limitation of 32 KB in Azure SQL DW. Since we had several wide tables with 150+ columns, many of them varchar(4000), we came up with an approach to vertically partition each table on the same distribution key. This helped us overcome the 32 KB limit and at the same time provide the required performance when joining the two halves, as the distribution key is the same.
Note: SQL Data Warehouse now supports 1MB wide rows
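A vertical split of this kind might look like the sketch below. The table, key, and column names are hypothetical; the point is that both halves share the same hash distribution key, so the join back together needs no data movement:

```sql
-- Split one wide source table into two tables on the same distribution key.
CREATE TABLE dbo.Claim_Core
WITH (DISTRIBUTION = HASH(ClaimKey), CLUSTERED COLUMNSTORE INDEX)
AS SELECT ClaimKey, ClaimDate, Amount   -- frequently used narrow columns
FROM stg.Claim;

CREATE TABLE dbo.Claim_Detail
WITH (DISTRIBUTION = HASH(ClaimKey), CLUSTERED COLUMNSTORE INDEX)
AS SELECT ClaimKey, Notes1, Notes2      -- remaining wide varchar columns
FROM stg.Claim;

-- Joining on the shared distribution key is distribution-compatible:
SELECT c.ClaimKey, c.Amount, d.Notes1
FROM dbo.Claim_Core c
JOIN dbo.Claim_Detail d ON c.ClaimKey = d.ClaimKey;
```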
4. Use the right resource class:
In several cases we had complex facts that needed more resources (memory and CPU) to speed up the fact loads. Not just facts: we also had dimensions with complex business rules and Type 2 (slowly changing dimension) implementations. We designed our ELT so that less complex facts and dimensions run under the smallrc resource class, allowing for more parallelism, while the more complex facts that need more resources run under the largerc resource class.
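Resource classes in SQL DW are assigned by adding a user to the corresponding database role; queries then run with that class's memory grant and concurrency slots. The user names below are illustrative:

```sql
-- Heavy fact loads run under a login in the largerc role (user names assumed).
EXEC sp_addrolemember 'largerc', 'etl_heavy_user';

-- smallrc is the default class, so lighter loads simply use a regular login,
-- trading per-query memory for more concurrent slots.
```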
5. Use the primary key as distribution column for master tables:
In the source from which we ingest data into SQL DW, we have many master tables that we use as lookups when building our facts. For those master tables with a reasonable amount of data (>1 million rows), we distribute on the primary key, which is a unique integer. This has given us the advantage of even data distribution (minimal to no data skew), making our lookup queries really fast.
6. Using Dynamic Scale up and Scale down for saving costs:
Our ELT pipelines in ADF are designed so that, prior to the scheduled ELT kickoff, we scale our instance up from 100 DWU to 600 DWU. This has led to huge cost savings. Our ELT runs for roughly 4-5 hours, during which DWU usage is capped at 600 DWU. At month end, when processing must finish sooner and businesses need data faster, we have the option of scaling to 1000 DWU. All of this is done as part of our ELT, with no manual intervention needed.
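The scale-up and scale-down steps themselves are single T-SQL statements run against the master database, which an ADF activity can issue before and after the load window (the database name here is an assumption):

```sql
-- Scale up before the ELT window starts (run against master; name illustrative).
ALTER DATABASE MyDataWarehouse MODIFY (SERVICE_OBJECTIVE = 'DW600');

-- ... ELT runs at DW600 ...

-- Scale back down once the load completes to keep costs low.
ALTER DATABASE MyDataWarehouse MODIFY (SERVICE_OBJECTIVE = 'DW100');
```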
7. Regular Maintenance:
In our case, we included updating statistics and rebuilding indexes as part of the ELT process. As soon as the dimension and fact loads complete, we check all the CCIs where the fragmentation is > 5% and rebuild those indexes. Similarly, for the key tables, we update statistics to ensure the best performance.
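For a single table, the two maintenance steps boil down to the statements below; in practice the ELT would loop over the tables whose measured fragmentation exceeds the 5% threshold. The fact table name is illustrative:

```sql
-- Rebuild the clustered columnstore index on a fragmented fact table
-- (recompresses open/deleted row groups); table name is hypothetical.
ALTER INDEX ALL ON dbo.FactSales REBUILD;

-- Refresh optimizer statistics on the key table after the load.
UPDATE STATISTICS dbo.FactSales;
```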
8. Leverage SQL Server 2016 for data marts:
Azure SQL DW is the primary data processing engine, whereas SQL Server 2016 running on DS14 IaaS VMs is our primary vehicle for data distribution. This has enabled us to leverage the high concurrency provided by SQL Server while using the power of Azure SQL DW for processing. More than 1,000 users consume the data provisioned from Azure SQL DW.
Using Azure SQL Data Warehouse as part of their solution, Microsoft Enterprise Services was able to reduce run times by up to 40% and provide insights to business users with reduced latency.