Improving HammerDB Performance with the New 2014 Cardinality Estimator
- Part 1: HammerDB and the New SQL Server 2014 Cardinality Estimator
- Part 2: Use the Query Store In SQL Server 2016 To Improve HammerDB Performance
T-SQL Tuesday #79 is being hosted by Michael J. Swart this month. His prompt was to check out what’s new with SQL Server 2016. I figured I would play around with the Query Store and our old friend HammerDB.
This post is a follow-up of sorts to my post HammerDB and the New SQL Server 2014 Cardinality Estimator. Read that first to get a basic overview of the problem we are trying to solve. Come back once you are finished.
Basic synopsis of that article if you don’t want to read it for yourself:
- With the new cardinality estimator introduced in SQL Server 2014, performance of the SLEV stored procedure used by HammerDB seriously regresses.
- Adding Trace Flag 9481 to the statement that regressed in the stored procedure brings the performance back to the same levels as SQL Server 2012.
The Workload on SQL 2016
First, we need to figure out if the HammerDB workload performs similarly to when we first upgraded to SQL 2014. I’m going to follow the same steps that I took when upgrading the database to 2014, except I will be changing the compatibility level to 130 (the new 2016 level) and at the end I will be enabling the query store on the database. (For my specific HammerDB build instructions, check out that original 2014 post).
```sql
/* Using this blog post from Thomas LaRock as a guideline for upgrading to 2014:
   thomaslarock.com/2014/06/upgrading-to-sql-server-2014-a-dozen-things-to-check/ */
USE [master]
GO

--Both data and log file on my C drive - not being fancy here
RESTORE DATABASE [tpcc]
FROM DISK = N'C:\2012INSTANCES\tpcc_blogtesting_10WH.bak'
WITH FILE = 1,
     MOVE N'tpcc' TO N'C:\2016INSTANCES\DATA\tpcc.mdf',
     MOVE N'tpcc_log' TO N'C:\2016INSTANCES\LOGS\tpcc_log.ldf',
     NOUNLOAD,
     STATS = 5,
     REPLACE
GO

--Be sure to set the database compatibility_level up to the newest 2016 version!
ALTER DATABASE [tpcc] SET COMPATIBILITY_LEVEL = 130;

USE [tpcc]
GO
DBCC CHECKDB WITH DATA_PURITY;
DBCC UPDATEUSAGE(tpcc);

EXEC sp_MSforeachtable @command1 = 'UPDATE STATISTICS ? WITH FULLSCAN';

--Enable Query Store on the tpcc database
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
```
After the database is restored and upgraded to 2016, we can run the TPC-C workload. Here are the results and the perfmon graph from the TPC-C workload running on my local 2016 instance:
HammerDB Results: 32,497 Transactions Per Minute; 7,171 New Orders Per Minute
CPU Average: 76%; Batch Requests/sec Average: ~527
Here are the results from the same workload on SQL 2012, 2014, and 2016 all in one table. You can see that we have the same issue on SQL 2016 that we had when first upgrading to SQL 2014.
|Workload|TPM|NOPM|%CPU Avg|Batch Requests/Sec Avg|
|---|---|---|---|---|
|2014 (Pre TF Fix)|31,542|6,894|74%|492|
|2016 (Pre Fix)|32,497|7,171|76%|527|
Sure enough, when we check the plan cache for the most expensive queries based on total_worker_time, we see the same query we would expect rise to the top. It is also using the same plan as it was on SQL 2014.
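A quick way to surface those top consumers is a sketch like the one below against the plan cache DMVs (the statement text and plan_handle values will of course differ on your instance):

```sql
--Top 5 statements in the plan cache by total worker time
SELECT TOP (5)
    qs.total_worker_time,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.plan_handle
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

The plan_handle column here is also where you would grab the value for the DBCC FREEPROCCACHE call later on.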
Can we use the new Query Store to force the ‘good 2012 plan’ for the query without modifying the stored procedure?
Let’s say, for whatever reason, that we don’t want to modify the stored procedure and manually insert the trace flag option onto the query. Is there still a way to force the ‘good plan’? Well, let’s find out.
First, I’m going to pull up the Top Resource Consuming Queries window in the Query Store.
You can see that our problem query is incredibly easy to find in the top left window based on total duration. Also notice that in the top right Plan summary window, there is currently only one available plan for the query (plan_id 49).
We need to figure out how we can get our ‘good plan’ using Trace Flag 9481 as an available plan that we can force using the Query Store.
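If you prefer T-SQL to the GUI, the Query Store catalog views expose the same query and plan information. This is a sketch that assumes the statement belongs to the dbo.SLEV stored procedure and filters on its object_id:

```sql
--Find the query_id and any known plan_ids for statements inside dbo.SLEV
SELECT q.query_id,
       p.plan_id,
       p.is_forced_plan,
       qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
WHERE q.object_id = OBJECT_ID(N'dbo.SLEV')
ORDER BY q.query_id, p.plan_id;
```

At this point we would expect only the one 'bad' plan per query; the goal of the next step is to get a second row per query into sys.query_store_plan.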
The Query Store Fix
Here is the code we can use to manually execute the SLEV stored procedure using TF9481 at the session level. This will get us back to the ‘good plan’ and hopefully the query store will recognize the ‘good plan’ as an available plan that we can force. (IMPORTANT: Set Options will need to be identical to the options the existing query in cache used. You can find these pretty easily by looking at the XML of the original plan in cache.)
```sql
USE [tpcc]
GO

--SET OPTIONS MUST BE IDENTICAL TO THE EXISTING PLAN IN CACHE!!
--(otherwise our 'good plan' won't show as available for the existing query)
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET NUMERIC_ROUNDABORT OFF
SET QUOTED_IDENTIFIER ON

--clear the plan of the bad query using the plan_handle
DBCC FREEPROCCACHE(0x05000500CE07091B409F80E73401000001000000000000000000000000000000000000000000000000000000)

--enable TF 9481 at the session level to revert to the legacy 2012 Cardinality Estimator
DBCC TRACEON(9481)

BEGIN TRAN
EXEC [dbo].[SLEV] @st_w_id = 8, @st_d_id = 9, @threshold = 10
ROLLBACK TRAN --we can even roll back the proc to prevent any actual data modification
GO 10 --execute 10 times just for fun

--disable the TF
DBCC TRACEOFF(9481)
```
Sure enough, the ‘good plan’ is listed as part of the Plan Summary for our problem query (plan_id 74). Now we can easily force this plan using the GUI or T-SQL:
```sql
--For query_id 49, force plan_id 74
EXEC sys.sp_query_store_force_plan @query_id = 49, @plan_id = 74;
```
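To confirm the force took effect (and to catch any plan forcing failures), you can check sys.query_store_plan directly:

```sql
--Verify that the plan is now marked as forced for query_id 49
SELECT plan_id,
       query_id,
       is_forced_plan,
       force_failure_count,
       last_force_failure_reason_desc
FROM sys.query_store_plan
WHERE query_id = 49;
```

If is_forced_plan shows 1 with a zero force_failure_count, the Query Store will use that plan on subsequent executions.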
Just so that we can get a cool perfmon graph, I’m going to start the HammerDB workload before forcing the ‘good plan’. After about 30 seconds or so, I’ll force the ‘good plan’ using the query store and we can see what impact it has on the workload performance.
You can see the plan was forced right around the 3:55:20 mark. It quickly brought batches/second up to around 1,800, close to the SQL Server 2012 benchmark from the original post.
The query is back down to around 700 reads per execution…
…and here are the stats after running a full workload with the Query Store fix in place. We still aren’t all the way back to the 2012 baseline levels, but we are much closer.
|Workload|TPM|NOPM|%CPU Avg|Batch Requests/Sec Avg|
|---|---|---|---|---|
|Query Store Fix|98,177|21,324|32%|1,633|
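Per-execution reads figures like the ~700 mentioned above can also be pulled from the Query Store runtime stats rather than the plan cache. A sketch, using the same query_id 49 from earlier:

```sql
--Average logical reads per execution, broken out by plan
SELECT p.plan_id,
       SUM(rs.count_executions) AS executions,
       SUM(rs.avg_logical_io_reads * rs.count_executions)
           / SUM(rs.count_executions) AS avg_logical_reads
FROM sys.query_store_plan AS p
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
WHERE p.query_id = 49
GROUP BY p.plan_id;
```

Comparing the two plan_id rows side by side makes the before/after difference easy to quantify.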
With new versions of SQL Server come new tools, and the Query Store is a very powerful one when you need to force a plan based on a trace flag but don’t necessarily want to hard-code the trace flag into the query or stored procedure. I can’t wait to start using the Query Store on production databases.
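One nice property of this approach is that it is completely reversible without touching any code. If the forced plan ever stops being the right one, removing it is a single call:

```sql
--Remove the forced plan if it is no longer the best choice
EXEC sys.sp_query_store_unforce_plan @query_id = 49, @plan_id = 74;
```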
Once again, always be sure to test workloads and applications before upgrading a production database to a newer version of SQL Server – especially when going from 2012 or older to 2014 and newer.