Database administrators often experience decreases in query performance with Amazon Redshift as the size of their data increases. The problem becomes even more significant for regularly executed queries. Design-time decisions that made sense given the statistics at the start of a project may no longer fit the current data and query patterns, which means the Amazon Redshift cluster requires some optimization. See how these Amazon Redshift performance tuning tips can improve query execution times when users start noticing performance issues.
Amazon introduced a feature called Redshift Optimization for the AWS Schema Conversion Tool (SCT) with the November 17, 2016 release. You can now use the Schema Conversion Tool to optimize existing Amazon Redshift deployments! This dramatically reduces the manual effort of addressing performance issues with your Redshift deployments.
Here’s why you should consider taking advantage of the Redshift Optimization option in the AWS Schema Conversion Tool.
Preparing the environment
NOTE: We are only using HammerDB for comparison purposes, to illustrate how changes in the configuration of Redshift impact performance. This cannot be considered an official TPC-H benchmark run that you can use to compare with other database platforms.
In HammerDB, we selected the TPC-H configuration which creates 8 tables for a modified run against Amazon Redshift with 13 queries out of the standard 22 queries used for an official TPC-H run.
To build the schema in HammerDB as shown below, we initially created all tables with the ‘even’ distribution style and without sort keys.
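As a sketch of that deliberately unoptimized starting point, the initial DDL looked roughly like this (the table is one of the eight TPC-H tables; the column list is abridged and the types shown are illustrative):

```sql
-- Initial DDL: EVEN distribution, no sort key.
-- Column list abridged; lineitem shown as an example.
CREATE TABLE public.lineitem (
    l_orderkey NUMERIC(18,0) NOT NULL,
    l_partkey  NUMERIC(18,0) NOT NULL,
    l_quantity NUMERIC(12,2),
    l_shipdate DATE
)
DISTSTYLE EVEN;
```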
Adding the data
After creating the empty database schema, we used the application’s Datagen option to generate the data files with a scale factor of 1000. The result was a set of delimited data files for each table, approximately 1.1 terabytes in total.
Next, we compressed the generated data files into archives using the following command:
gzip.exe -r D:\HammerDB\*.tbl
This reduced the size of the initial data set to around 320 GB.
We used S3 Browser to copy these archive files to an Amazon S3 bucket located in the same region as the Redshift cluster to minimize network latency. We then created scripts to load the data into Redshift using the COPY command. It was nice that Redshift applied LZO compression to the loaded data, which resulted in a stored data set of just 200 GB.
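The load scripts boiled down to one COPY statement per table. A hedged sketch, assuming gzip-compressed, pipe-delimited files (the bucket path and IAM role below are illustrative placeholders):

```sql
-- Load a gzip-compressed, pipe-delimited .tbl file from S3.
-- The bucket path and IAM role are placeholders, not the actual values used.
COPY public.lineitem
FROM 's3://my-tpch-bucket/lineitem.tbl.gz'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
GZIP
DELIMITER '|';
```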
Now everything was set to start the HammerDB benchmark.
What happens if your initial optimization strategy results in poor performance
In the first run, we configured the TPC-H driver to run with just one user and one query set for the 13 queries as shown below.
The first run took a little over 27 hours to complete! Here are the results for each of the queries in the first run. Note, there was no warm up run prior to running the benchmark.
Hammerdb Log @ Sun May 28 18:15:41 -0000 2017
Hammerdb Log @ Sun May 28 18:16:23 -0000 2017
Vuser 1:Executing Query (1 OF 13)
Vuser 1:10 ROWS returned IN 355.67 seconds
Vuser 1:Executing Query (2 OF 13)
Vuser 1:1 ROWS returned IN 229.854 seconds
Vuser 1:Executing Query (3 OF 13)
Vuser 1:1 ROWS returned IN 253.078 seconds
Vuser 1:Executing Query (4 OF 13)
Vuser 1:1 ROWS returned IN 4925.949 seconds
Vuser 1:Executing Query (5 OF 13)
Vuser 1:1 ROWS returned IN 29.741 seconds
Vuser 1:Executing Query (6 OF 13)
Vuser 1:100 ROWS returned IN 1104.394 seconds
Vuser 1:Executing Query (7 OF 13)
Vuser 1:100 ROWS returned IN 31989.634 seconds
Vuser 1:Executing Query (8 OF 13)
Vuser 1:0 ROWS returned IN 0.829 seconds
Vuser 1:Executing Query (9 OF 13)
Vuser 1:0 ROWS returned IN 1.075 seconds
Vuser 1:Executing Query (10 OF 13)
Vuser 1:0 ROWS returned IN 26868.327 seconds
Vuser 1:Executing Query (11 OF 13)
Vuser 1:0 ROWS returned IN 26372.198 seconds
Vuser 1:Executing Query (12 OF 13)
Vuser 1:100 ROWS returned IN 6070.607 seconds
Vuser 1:Executing Query (13 OF 13)
Vuser 1:3480 ROWS returned IN 343.742 seconds
Vuser 1:Completed query SET IN 98545 seconds
Vuser 1:Geometric mean OF query times returning ROWS IS 846.62011
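As a cross-check on that summary line: HammerDB’s geometric mean covers only the queries that actually returned rows, so queries 8–11 (0 rows each) are excluded. A quick sketch of the computation, with the times copied from the log above:

```python
import math

# Elapsed times in seconds for the nine queries that returned rows;
# queries 8-11 returned 0 rows and are excluded from the geometric mean.
times = [355.67, 229.854, 253.078, 4925.949, 29.741,
         1104.394, 31989.634, 6070.607, 343.742]

# Geometric mean computed via logs for numerical stability:
# exp(mean(ln t)) == (t1 * t2 * ... * tn) ** (1/n)
gm = math.exp(sum(math.log(t) for t in times) / len(times))
print(round(gm, 2))  # approximately 846.62, matching the reported value
```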
Using the Amazon Redshift optimizer
In our experience, this benchmark usually runs much faster on the same m4.xlarge cluster hardware once appropriate distribution styles and sort keys are chosen. To see what Redshift suggests for each of the problematic queries, you can run the following query:
SELECT query,
  SUBSTRING(event,0,200) AS event,
  SUBSTRING(solution,0,200) AS solution
FROM stl_alert_event_log
WHERE query IN (…)  -- the IDs of the problematic queries
ORDER BY query;
Here’s what we discovered:
The Amazon Redshift optimizer looks for ways to minimize network latency between compute nodes and to minimize file I/O latency when reading data.
The disk storage in Amazon Redshift for a compute node is divided into a number of slices. The number of slices per node depends on the node size of the cluster. When you load data into a table, Amazon Redshift distributes the rows of the table to each of the node slices according to the table’s distribution style, which can be ‘even’, ‘key’, or ‘all’.
With the ‘even’ distribution style, rows are distributed across slices evenly, regardless of the values in any particular column. With the ‘all’ style, a copy of the entire table is distributed to every node. The ‘key’ style distributes rows according to the values in one specified column. The idea is to pick a distribution key column that groups commonly accessed data, such as a column used in WHERE clauses or joins, so related rows stay together within node slices.
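The three styles can be sketched like this (hypothetical tables and column names, purely for illustration):

```sql
-- EVEN: rows are spread round-robin across slices.
CREATE TABLE demo_even (id INT, val VARCHAR(20)) DISTSTYLE EVEN;

-- ALL: a full copy of the table is stored on every node
-- (a good fit for small, frequently joined dimension tables).
CREATE TABLE demo_all (id INT, val VARCHAR(20)) DISTSTYLE ALL;

-- KEY: rows with the same id land on the same slice,
-- so joins and filters on id stay local to a node.
CREATE TABLE demo_key (id INT, val VARCHAR(20)) DISTSTYLE KEY DISTKEY (id);
```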
When you create a table, you can define one or more of its columns as sort keys. When data is initially loaded into the empty table, the rows are stored on disk in sorted order.
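For example, a compound sort key on a hypothetical table might look like this:

```sql
-- Rows are stored on disk ordered by ship date, then order key, so
-- range-restricted scans on l_shipdate can skip whole blocks.
CREATE TABLE demo_sorted (
    l_orderkey NUMERIC(18,0),
    l_shipdate DATE
)
SORTKEY (l_shipdate, l_orderkey);
```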
Amazon Redshift performance tuning tips to optimize cluster storage
Applying these Amazon Redshift performance tuning tips with Redshift Optimization requires several steps to optimize the sort keys and distribution strategy for your cluster storage.
Duplicating the original cluster. We created a snapshot of the original Redshift cluster in the AWS console. Then we created a new Redshift cluster, using the default option of the AWS console to restore it from the snapshot. We now had the original and the newly created clusters as the source and the target, respectively, in the AWS SCT project. Moreover, we didn’t need to migrate the data because we had already restored it from the snapshot.
Collecting statistics from the original database. We used offline statistics collection and then uploaded the files to AWS SCT. Alternatively, SCT provides an option to collect statistics online, directly from the tool.
Running the optimization. Amazon recommends using different clusters as the source and target for the optimization project. In addition, SCT lets you select the migration strategy when you launch the optimization: you can opt for ‘Migration to a copy’ or ‘Migration to a clean slate’. In our case, we selected the first option because we had already created a copy of the original cluster.
Starting the optimization
To launch the optimization, we selected the ‘Run optimization’ option in the pop-up menu of the SCT. We used the default optimization strategy.
According to Amazon, the default optimization parameters are based on best practices, so we took the chance to check the optimization results produced with the default values. We set speeding up the ‘expensive’, or ‘heavy’, queries as the main goal of the optimization procedure.
The results of optimization included SQL scripts to recreate tables. Here’s an example of the script for one of the tables.
CREATE TABLE IF NOT EXISTS public.nation(
n_nationkey NUMERIC(18,0) NOT NULL,
n_name CHARACTER VARYING(25),
n_regionkey NUMERIC(18,0) NOT NULL,
n_comment CHARACTER VARYING(152)
)
DISTSTYLE ALL
SORTKEY (n_nationkey);
As you can see, the generated SQL code now includes the sort keys and the distribution keys.
The results show a more than 8 times performance increase!
In AWS Schema Conversion Tool, you may save the SQL script of the new tables or apply it directly to the target cluster. We selected the second option and applied the code to the target. This operation took around 2 hours, and now we could run the HammerDB benchmark once again, using the same parameters as in the first run on the non-optimized database.
Here are the results of the HammerDB benchmark compared to the first run.
As you can see, SCT dramatically improved the performance of the ‘expensive’, or ‘heavy’, queries in the new cluster, where the tables included the sort and distribution keys. The HammerDB benchmark completed in about 3 hours. This was an 8.2 times performance gain!
It was great to see that SCT with the new Redshift Optimization feature made good decisions on distribution styles and sort keys for the queries that had performed poorly. The fact that three queries ran somewhat slower than before was expected.
NOTE: You can’t always expect an 8 times performance increase using these Amazon Redshift performance tuning tips with Redshift Optimization. These results are based on a specific benchmark test and won’t reflect your actual database design, size, and queries. However, Redshift Optimization eliminated just about all of the labor-intensive manual steps outlined in the Amazon Redshift tutorial for Tuning Table Design.
As you can see, using these Amazon Redshift performance tuning tips with the AWS Schema Conversion Tool Redshift Optimization feature greatly reduces the time needed to tune query performance, with good results. In our case, we showed how the Redshift Optimization feature improved the performance of queries by 8.2 times by providing recommendations for distribution styles and sort keys based on historical information about query execution.
It’s well worth the minimal effort to run the Redshift Optimization against your existing Amazon Redshift databases. If you do, please share your results with us!
If you have any questions about this great feature in AWS SCT, feel free to contact DB Best at [email protected]. DB Best has migrated more databases using SCT than any other partner in the world. So, if you want to learn more about how DB Best can help you with migration projects, check out our Jumpstart program at https://www.dbbest.com/services/migrate-to-aws/.