Viewing Always Encrypted Data in SSMS


One of the many new features introduced in SQL Server 2016 is Always Encrypted. In the words of Microsoft: “Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (but should have no access).”

If there is PII in a table or database, any Database Administrator (or anyone tasked with administering that database who has sufficient permissions) can view that data. Generally, those hired to manage a database should be trusted not to go digging through personal information; with Always Encrypted, however, you no longer have to think about who may or may not have access to the sensitive data. Because an encryption certificate is required to view the data, it becomes very easy to control who has access.

As a DBA, I can say that not even having to worry about what data I may or may not see is a huge relief.

In this post, I will cover:
1. How to encrypt a column with sensitive data using the GUI in SSMS.
2. How to view Always Encrypted Data from SSMS on the machine where the Always Encrypted certificate was generated.
3. How to export/import the Always Encrypted certificate so that Always Encrypted data can be viewed from multiple machines.

Encrypting a Column

I will be using the trusty AdventureWorks database for this demonstration (I know, I know. WorldWideImporters is sooo much better), and I will be looking at the Sales.CreditCard table.

Here is what the data in that CreditCard table looks like. Our goal will be to encrypt the CardNumber column.

Use AdventureWorks
SELECT TOP 5 * FROM [Sales].[CreditCard];


First, let’s create a quick copy of the CreditCard table. We’ll copy it over to a new table: CreditCard_Encrypted.

--create a copy of the CreditCard table
DROP TABLE IF EXISTS Sales.CreditCard_Encrypted;

SELECT *
INTO Sales.CreditCard_Encrypted
FROM Sales.CreditCard;


Since I am a lazy millennial, I’m just going to use the GUI to encrypt the CardNumber column.

First, right-click on the table and choose Encrypt Columns…


Next, select the column(s) you want to encrypt and choose the Encryption Type (Deterministic or Randomized) and the Encryption Key you want to use. I’m just going to be generating a new key for this demonstration.


Choose the Column Master Key. Once again, I’m generating a new master key and storing it in the Windows Certificate store.


Navigate through the remaining screens and let the wizard finish the encryption tasks:


Once that is complete, we can try querying the CreditCard_Encrypted table:

Use AdventureWorks
SELECT TOP 5 * FROM [Sales].[CreditCard_Encrypted];


Our CardNumber column is now encrypted! That wasn’t too difficult.

We can also see the Master Key and Encryption Key that were created:
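The same keys can be listed with the Always Encrypted catalog views (key names will match whatever the wizard generated):

```sql
--list the column master keys and column encryption keys in the database
SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;

SELECT name
FROM sys.column_encryption_keys;
```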


We can also script out the DDL for those keys…

--Master Key (CMK_Auto1 is the default name the wizard generates)
CREATE COLUMN MASTER KEY [CMK_Auto1]
WITH (
	KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
	KEY_PATH = N'CurrentUser/my/F7D16243AF2D3AA32FAF06EC759F435AB44CB919'
);

--Encryption Key (CEK_Auto1 is the default name the wizard generates)
CREATE COLUMN ENCRYPTION KEY [CEK_Auto1]
WITH VALUES (
	COLUMN_MASTER_KEY = [CMK_Auto1],
	ALGORITHM = 'RSA_OAEP',
	ENCRYPTED_VALUE = 0x016E000001630075007200720065006E00740075007300650072002F006D0079002F0066003700640031003600320034003300610066003200640033006100610033003200660061006600300036006500630037003500390066003400330035006100620034003400630062003900310039004CDE3BDBF014EAD0C066A97E5E93CCFFF36BE6F8A9B5EBB7F790659AD3CA626C48F0514D77C05052C4235A58BDF6908B054DF9DF9F01C2690AD89C5E71F5987A0054F913F161547DF740E7E9A3DF3695A8D0027BC5CAB779D1AB8CDDBED76649F56DC1EE107ACAC5A09BD4CCE8075707F0CCA3F00EF4376D428E56C35B12AEBF924978C8CA460F89F004AC1B9E0F58DBB3CE2B455DDC3F1C39526488A63AB7417CF0D5CD0D75E9F76F08736CF35DFF435BC1363C38C0FF3A0FB0C21D3CB4E0E3153F8C7928895E08F849274E3A16C146A48A4A9A0B574565D89E5727FA56ADFFFA77F50F6468FF21EBB03A3A7C432E18644D8EAFA91471E059729FEB9D6682A33E23A395B7915A425E24C75B1477302232C9657C1AE4F7ADF4439EC62E355944BFC9393458A319DF0191B2531E51965AA191D0EA1B4089DC5B426FB0511DD089ACE81F73A3AC3A45D11CA7F8EB5EC9026266DC596AB1E98A4F30194909BC464CF89978F0ABF6BF6BED36A0B5AC3ADC4B25C7AFCE3DC9AD0F0B20EE42871FA9713474C1D344879B6F0120B4FEDC6208296663C36823CDBA71AFBE1EDDBE87BCE451ED5A2ABD0275D203AA05310FAE4738FB7EEAF99CDB36C73BC3844224F96EDD9BFC5415575A42C54986775B36994C0B1C1F3B5C9EC51DF7D3F50B8CA74385A67E5549EF17DF010AAB0C7C529C5F82BC3C8837F0472C5CFF452CD2029AB46964
);

…and we can script out the CREATE statement for the table to see what the Always Encrypted syntax looks like:

CREATE TABLE [Sales].[CreditCard_Encrypted](
	[CreditCardID] [int] IDENTITY(1,1) NOT NULL,
	[CardType] [nvarchar](50) NOT NULL,
	[CardNumber] [nvarchar](25) COLLATE Latin1_General_BIN2 ENCRYPTED WITH
		(COLUMN_ENCRYPTION_KEY = [CEK_Auto1], ENCRYPTION_TYPE = DETERMINISTIC,
		ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
	[ExpMonth] [tinyint] NOT NULL,
	[ExpYear] [smallint] NOT NULL,
	[ModifiedDate] [datetime] NOT NULL
)

So what do we do if we actually want to view the encrypted data in SSMS?

We’ll need to pass in this connection parameter in the SSMS connection options: Column Encryption Setting = Enabled


Now I’ll reconnect and try to select from the Encrypted table once again:

Use AdventureWorks
SELECT TOP 5 * FROM [Sales].[CreditCard_Encrypted];


I am now able to view the encrypted data. Why is this?

Because I am connecting to the database from the database server, I have access to the Encryption Certificate that was generated. I can verify this by opening certmgr.msc and browsing to Personal -> Certificates:


Without access to that certificate, I would not be able to see the encrypted data.

We can test this by attempting to connect to the database with the encrypted table from another machine and passing in the “Column Encryption Setting = Enabled” connection option in SSMS.

When trying to do a select on CreditCard_Encrypted, we receive this error message:

Msg 0, Level 11, State 0, Line 0
Failed to decrypt column 'CardNumber'.
Msg 0, Level 11, State 0, Line 0
Failed to decrypt a column encryption key using key store provider: 'MSSQL_CERTIFICATE_STORE'. The last 10 bytes of the encrypted column encryption key are: '36-B6-E0-8F-9F-54-AC-CF-9B-CB'.
Msg 0, Level 11, State 0, Line 0
Certificate with thumbprint 'BA85A469AE80E4001B1FD9E71D89165593DE9009' not found in certificate store 'My' in certificate location 'CurrentUser'. Verify the certificate path in the column master key definition in the database is correct, and the certificate has been imported correctly into the certificate location/store.
Parameter name: masterKeyPath

What about viewing the encrypted data from SSMS on another machine?

So how can we get a copy of the Always Encrypted certificate over to the other machine?

We must go back to the original machine or server where the certificate was generated. Re-open certmgr.msc, browse to the Always Encrypted Certificate, right-click -> All Tasks -> Export…


I’ve had success selecting these options:


Give the certificate a password if you wish:


Choose a file name for the exported certificate:


Finish the export wizard and browse to the destination folder. We now have a copy of the Always Encrypted certificate.


The certificate can now be copied to the secondary machine that needs access to the encrypted data. Open ‘Manage User Certificates’ (AKA certmgr.msc). Browse to Personal -> Certificates. Right click on the Certificates folder -> All Tasks -> Import…


Browse to and select the Always Encrypted certificate that was copied from the database server:


Enter the password if one was used. Also, check Include all extended properties.


Place the certificate in the Personal store.


Step through the final screens and finish the import process.

After the import is finished, open certmgr.msc and ensure the Always Encrypted certificate shows up under Personal -> Certificates.

Now that the certificate has been imported, we can try to query the Encrypted table again.


BOOM! We can now query the encrypted data on a different machine with SSMS.



In this blog post, I’ve shown how to easily encrypt a column using Always Encrypted. I’ve also shown how to enable users to view that encrypted data from SSMS.

I’m pretty excited about Always Encrypted and think that it could potentially be a great solution for encrypting sensitive data.


Use the Query Store In SQL Server 2016 To Improve HammerDB Performance (T-SQL Tuesday #79)



T-SQL Tuesday #79 is being hosted by Michael J. Swart this month. His prompt was to check out what’s new with SQL Server 2016. I figured I would play around with the Query Store and our old friend HammerDB.

This post is a follow-up of sorts to my post HAMMERDB AND THE NEW SQL SERVER 2014 CARDINALITY ESTIMATOR. Read that first to get a basic overview of the problem we are trying to solve. Come back once you are finished.

Basic synopsis of that article if you don’t want to read it for yourself:

  • With the new cardinality estimator introduced in SQL Server 2014, performance of the SLEV stored procedure used by HammerDB seriously regresses.
  • Adding Trace Flag 9481 to the statement that regressed in the stored procedure brings the performance back to the same levels as SQL Server 2012.

The Workload on SQL 2016

First, we need to figure out if the HammerDB workload performs similarly to when we first upgraded to SQL 2014. I’m going to follow the same steps that I took when upgrading the database to 2014, except I will be changing the compatibility level to 130 (the new 2016 level) and at the end I will be enabling the query store on the database. (For my specific HammerDB build instructions, check out that original 2014 post).

Using this blog post from Thomas LaRock as a guideline for upgrading to 2014:
USE [master]
GO

--Both data and log file on my C drive - not being fancy here
RESTORE DATABASE [tpcc]
FROM DISK = N'C:\2012INSTANCES\tpcc_blogtesting_10WH.bak'
WITH MOVE N'tpcc' TO N'C:\2016INSTANCES\DATA\tpcc.mdf'
,MOVE N'tpcc_log' TO N'C:\2016INSTANCES\LOGS\tpcc_log.ldf'
,STATS = 5
GO

--Be sure to set the database compatibility_level up to the newest 2016 version!
ALTER DATABASE [tpcc] SET COMPATIBILITY_LEVEL = 130;
GO

USE [tpcc]
GO
EXEC sp_MSforeachtable @command1='UPDATE STATISTICS ? WITH FULLSCAN';
GO

--Enable Query Store on tpcc database
ALTER DATABASE [tpcc] SET QUERY_STORE = ON;
GO

After the database is restored and upgraded to 2016, we can run the TPC-C workload. Here are the results and the perfmon graph from the TPC-C workload running on my local 2016 instance:

HammerDB Results: 32,497 Transactions Per Minute;  7,171 New Orders Per Minute

CPU Average: 76%; Batch Requests / Sec average: ~ 527


Here are the results from the same workload on SQL 2012, 2014, and 2016 all in one table. You can see that we have the same issue on SQL 2016 that we had when first upgrading to SQL 2014.

Workload           TPM      NOPM    %CPU Avg   BatchRequests/Sec Avg
2012 (baseline)    114,901  25,035  35%        1,750
2014 (Pre TF Fix)  31,542   6,894   74%        492
2016               32,497   7,171   76%        527

Sure enough, when we check the plan cache for the most expensive queries based on total_worker_time, we see the same query we would expect rise to the top. It is also using the same plan as it was on SQL 2014.
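The plan cache check above is a standard DMV query; this variant (my own, results will differ per server) orders by total worker time:

```sql
--Top cached queries by total worker time
SELECT TOP (10)
    qs.total_worker_time,
    qs.execution_count,
    st.[text] AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```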



Can we use the new Query Store to force the ‘good 2012 plan’ for the query without modifying the stored procedure?

Let’s say, for whatever reason, that we don’t want to modify the stored procedure and manually insert the trace flag option onto the query. Is there still a way to force the ‘good plan’? Well, let’s find out.

First, I’m going to pull up the Top Resource Consuming Queries window in the Query Store.


You can see that our problem query is incredibly easy to find in the top left window based on total duration. Also notice that in the top right Plan summary window, there is currently only one available plan for the query (plan_id 49).

We need to figure out how we can get our ‘good plan’ using Trace Flag 9481 as an available plan that we can force using the Query Store.

The Query Store Fix

Here is the code we can use to manually execute the SLEV stored procedure using TF9481 at the session level. This will get us back to the ‘good plan’ and hopefully the query store will recognize the ‘good plan’ as an available plan that we can force. (IMPORTANT: Set Options will need to be identical to the options the existing query in cache used. You can find these pretty easily by looking at the XML of the original plan in cache.)

USE [tpcc]
GO

--Set Options must be identical to those used by the existing query in cache
--(otherwise our 'good plan' won't show as available for the existing query)
--the options below are the typical defaults - confirm against the XML of the original plan
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;
GO

--clear the plan of the bad query using the plan_handle
DBCC FREEPROCCACHE(0x05000500CE07091B409F80E73401000001000000000000000000000000000000000000000000000000000000)

--enable TF 9481 at the session level to revert to the legacy 2012 Cardinality Estimator
DBCC TRACEON(9481);
GO

BEGIN TRAN
EXEC [dbo].[SLEV]
  @st_w_id = 8,
  @st_d_id = 9,
  @threshold = 10
ROLLBACK TRAN --we can even rollback the proc to prevent any actual data modification

GO 10 --execute 10 times just for fun

--disable the TF
DBCC TRACEOFF(9481);
Now let’s check the Query Store Top Resource Consumers window again:

Sure enough, the ‘good plan’ is listed as part of the Plan Summary for our problem query (plan_id 74). Now we can easily force this plan using the GUI or T-SQL:

--For query_id 49, force plan_id 74
EXEC sys.sp_query_store_force_plan @query_id = 49, @plan_id = 74;
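To confirm the force took effect, we can check sys.query_store_plan (the query_id/plan_id values are the ones from this demo):

```sql
--Confirm the plan is now marked as forced
SELECT plan_id, query_id, is_forced_plan
FROM sys.query_store_plan
WHERE query_id = 49;
```

If the force ever needs to be undone, sys.sp_query_store_unforce_plan takes the same two parameters.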


Just so that we can get a cool perfmon graph, I’m going to start the HammerDB workload before forcing the ‘good plan’. After about 30 seconds or so, I’ll force the ‘good plan’ using the query store and we can see what impact it has on the workload performance.


You can see the plan was forced right around the 3:55:20 mark. It quickly brought batches/second up to around 1,800 – close to the SQL Server 2012 benchmark from the original post.

The query is back down to around 700 reads per execution…


…and here are the stats after running a full workload with the Query Store fix in place. We still aren’t all the way back to the 2012 baseline levels, but we are much closer.

Workload           TPM      NOPM    %CPU Avg   BatchRequests/Sec Avg
Query Store Fix    98,177   21,324  32%        1,633
2012 (baseline)    114,901  25,035  35%        1,750


With new versions of SQL Server come new tools, and the Query Store is a very powerful tool that can be useful when you need to force a plan based on a trace flag but don’t want to necessarily hard code the trace flag into the query or stored procedure. I can’t wait to start using the Query Store on production databases.

Once again, always be sure to test workloads and applications before upgrading a production database to a newer version of SQL Server – especially when going from 2012 or older to 2014 and newer.

Easily Generate SQL Server Database Snapshot Create Statements

The other day, a developer requested that a database backup be taken before we ran some vendor-provided upgrade scripts against a database. This was only a test server, so there wouldn’t be any traffic on the database. I thought about it for a second and figured that creating a snapshot of the database prior to running the upgrade script would be just as easy. And if anything in the upgrade script errored out or went wrong, we could easily revert the database back to the snapshot – much quicker than a full database restore.

I went to create the database snapshot and could not remember the syntax to save my life. I decided to make a simple script to help myself out in the future.

The entire script is available down below, but I’ll show a quick example of how to get it set up. I will be using the StackOverflow database, to which I have added two additional files/filegroups.


The only two changes that NEED to be made in the script are the @Target_DB and @Snap_Desc variables. Here, I am setting @Target_DB to ‘StackOverflow’ and @Snap_Desc to ‘Pre_Database_Upgrade_’ plus a timestamp. By default, the snapshot name will be a combination of these two variables (this can be easily changed if you like – but I like descriptive names).

/*--------SET THE VARIABLES HERE-----------------*/
--target database of which you are going to take the snapshot
set @Target_DB = 'StackOverflow'

--description or purpose of the snapshot (to make the snapshot name somewhat descriptive)
set @Snap_Desc = 'Pre_Database_Upgrade_' + CONVERT(varchar,GETDATE(), 114)

--Combining @Target_DB and @Snap_Desc to create a snapshot name.
--This can be changed to whatever you wish.
set @Snap_Name = @Target_DB + '_' + @Snap_Desc + '_ss'


Here is the example output from the script using the input variables from above. It generates the command to create the snapshot and also provides the command to revert the database back to the snapshot (just in case).

--Snapshot Name: StackOverflow_Pre_Database_Upgrade_14:55:28:607_ss
--Number of DB Files: 3

USE master;
CREATE DATABASE [StackOverflow_Pre_Database_Upgrade_14:55:28:607_ss] ON
( NAME = [StackOverflow],
		FILENAME = '...\StackOverflow.ss')
,( NAME = [StackOverflow_ETL],
		FILENAME = '...\StackOverflow_ETL.ss')
,( NAME = [StackOverflow_ReadOnly],
		FILENAME = '...\StackOverflow_ReadOnly.ss')
AS SNAPSHOT of [StackOverflow];

/*--------SNAPSHOT Revert Script-----------*/

USE master;

-- Reverting DB [StackOverflow] to [StackOverflow_Pre_Database_Upgrade_14:55:28:607_ss]
RESTORE DATABASE [StackOverflow] from
DATABASE_SNAPSHOT = 'StackOverflow_Pre_Database_Upgrade_14:55:28:607_ss';


As of now, the script places the snapshot sparse files (.ss) in the same locations as the existing database .mdf and .ndf files. This can be manually tweaked pretty easily, but I may add in some options to specify a new location for the sparse files at some point in the future. Also, this script doesn’t create the actual snapshot – it simply outputs the command needed to create the snapshot. You will still have to manually execute the output yourself.

Anyway, here is the entire script or click here for the raw code:


Author:		Justin Goodwin,

Date:		June 2nd, 2016

NOTES:		For now, this places the snapshot files (.ss) in the 
			same location as the original database files. The 
			command can be tweaked once it is generated if the 
			location needs to be changed.

  You may alter this code for your own *non-commercial* purposes. 
  You may republish altered code as long as you give due credit. 


use master

set nocount on; --remove '(x row(s) affected)' garbage

declare @Target_DB varchar(200) 
declare @Snap_Desc varchar(100) 

declare @Snap_Name varchar(300)

declare @Snap_Script varchar(max)
declare @File_Count_Script varchar(max)
declare @File_Count int

declare @filename varchar(100)
declare @physical_name varchar(300)
declare @cursorcount int
declare @cursorscript varchar(max)

/*--------SET THE VARIABLES HERE-----------------*/
--target database of which you are going to take the snapshot
set @Target_DB = 'Database_Name'

--description or purpose of the snapshot (to make the snapshot name somewhat descriptive)
set @Snap_Desc = 'Pre_Database_Upgrade_' + CONVERT(varchar,GETDATE(), 114)

--Combining @Target_DB and @Snap_Desc to create a snapshot name. 
	--This can be changed to whatever you wish.
set @Snap_Name = @Target_DB + '_' + @Snap_Desc + '_ss'


--check if the @Target_DB exists on the current server
IF NOT EXISTS(select 1 from sys.databases where name = @Target_DB)
BEGIN
	Print '@Target_DB ' + @Target_DB + ' does not exist on the server! Double Check the @Target_DB name.'
	GOTO FINISH
END

print '--Snapshot Name: ' + @Snap_Name

--create 2 temp holding tables
	if OBJECT_ID(N'tempdb..#tempcount') is not null
		drop table #tempcount
	create table #tempcount (count int)
	if OBJECT_ID(N'tempdb..#tempcursor') is not null
		drop table #tempcursor
	create table #tempcursor (
		name sysname
		,physical_name nvarchar(260)
	)

--determining how many DB data files exist for @Target_DB
set @File_Count_Script = '
select COUNT(name)
from ' + QUOTENAME(@Target_DB) + '.sys.database_files
where type = 0 --Rows only'

--by doing this insert..exec into a temp table, we can avoid having to be connected to the target_db
insert into #tempcount
exec (@File_Count_Script)

select @File_Count = count
from #tempcount

print '--Number of DB Files: ' + CAST(@File_Count as varchar(3))


set @cursorcount = 1 --the iterative loop counter

--begin creation of the create snapshot script here...
set @Snap_Script = 'USE master;
CREATE DATABASE ' + QUOTENAME(@Snap_Name) + ' ON
'

--if there is more than 1 database data file, we will need to iterate through each file....cursor time.
set @cursorscript = '
select name, physical_name
from ' + QUOTENAME(@Target_DB) + '.sys.database_files
where type = 0 --Rows only'

--more insert..exec...
insert into #tempcursor
exec (@cursorscript)

declare file_name_cursor cursor for
select name, physical_name
from #tempcursor

--start cursor
open file_name_cursor

fetch next
from file_name_cursor
into @filename, @physical_name

while @@fetch_status = 0
begin
	if (@cursorcount > 1) --we need a leading comma for each new file after the first file
		set @Snap_Script = @Snap_Script + '
,'

	--add each DB data file to the snapshot command being built
	set @Snap_Script = @Snap_Script + '( NAME = ' + QUOTENAME(@filename) + ', 
		FILENAME = ''' + REPLACE((REPLACE(@physical_name, '.mdf', '.ss')), '.ndf', '.ss') + ''')' --replace .mdf or .ndf with .ss

	set @cursorcount = @cursorcount + 1 --add to the loop counter after each data file

	fetch next
	from file_name_cursor
	into @filename, @physical_name
end

close file_name_cursor

deallocate file_name_cursor

--add the final piece to the snapshot create statement and build the snapshot revert statement
set @Snap_Script = @Snap_Script + '
AS SNAPSHOT of ' + QUOTENAME(@Target_DB) + ';

/*--------SNAPSHOT Revert Script-----------*/

USE master;

-- Reverting DB ' + QUOTENAME(@Target_DB) + ' to ' + QUOTENAME(@Snap_Name) + '
RESTORE DATABASE ' + QUOTENAME(@Target_DB) + ' from
DATABASE_SNAPSHOT = ''' + @Snap_Name + ''';'


--output the commands that we have built
print @Snap_Script

--cleanup temp tables
drop table #tempcount
drop table #tempcursor

FINISH: --if there is an error, skip to this label

SQL Server 2012 MEMORYCLERK_XE Memory Leak

Monday morning: a time to settle in, grab some coffee, and check on any alerts that might have sprung up over the weekend. On this particular Monday, we received an unusual alert from a Test server: “Job DBA_DatabaseIntegrityCheck has been executing for 53 hours.”

Well, that’s no good. I decided to investigate and quickly ran sp_WhoIsActive on the server to see what was going on.


This isn’t the original screenshot from when the job had been running for 53+ hours, but everything else was the same.

Right away I could see the DBCC CHECKDB statement running and waiting on RESOURCE_SEMAPHORE. So it appeared to be some sort of memory issue on the server. I decided to run the handy memory clerk usage query from Glenn Berry’s 2016 DMV Queries. Here is that query and the results that I received:

--From Glenn Berry's SQL Server Diagnostic Information Queries for May 2016
--* Copyright (C) 2016 Glenn Berry,
--* All rights reserved.
-- Memory Clerk Usage for instance  (Query 41) (Memory Clerk Usage)
SELECT TOP(10) mc.[type] AS [Memory Clerk Type],
CAST((SUM(mc.pages_kb)/1024.0) AS DECIMAL (15,2)) AS [Memory Usage (MB)]
FROM sys.dm_os_memory_clerks AS mc WITH (NOLOCK)
GROUP BY mc.[type]
ORDER BY SUM(mc.pages_kb) DESC OPTION (RECOMPILE);

Memory Clerk Type         Memory Usage (MB)
MEMORYCLERK_XE            1311.49
CACHESTORE_SQLCP          102.48
CACHESTORE_PHDR           37.34
CACHESTORE_OBJCP          22.79

Why would MEMORYCLERK_XE be taking up so much of the memory on the box? I knew that the MEMORYCLERK_XE clerk tracks the total amount of memory that the Extended Events sessions on the server are using. So, once again, time for further analysis.

select name
 ,total_buffer_size / 1024 as total_buffer_size_kb
from sys.dm_xe_sessions
order by total_buffer_size_kb desc

NAME                          total_buffer_size_kb
system_health                 4223
SpotlightResponseTimeWaitData 4223
SpotlightWorkloadAnalysis     4223
SpotlightDeadlockDetection    4031
sp_server_diagnostics session 383

You can see that there were 5 different XEvent sessions on the server (we use Dell Spotlight, which opens three XEvent sessions on every server it monitors) – and all of them combined were using about 16.5 MB of buffer space in memory. That was nowhere near the 1 GB+ of memory that MEMORYCLERK_XE was using! So what gives?

My next course of action was to simply drop all XEvent sessions on the instance and hope that the memory usage cleared up. I dropped all three Spotlight sessions and disabled the system_health session just for fun. MEMORYCLERK_XE usage dropped from 1,312 MB to 1,299 MB. It barely moved at all! What was going on?

After posting on dba.stackexchange and doing some googling, I finally came across this KB article from Microsoft. Apparently there was a known memory leak with Extended Events in early versions of SQL Server 2012. So I checked on the current version of the test server I had been working on:


Microsoft SQL Server 2012 (SP1) - 11.0.3437.0 (X64)
 May 29 2014 16:03:40
 Copyright (c) Microsoft Corporation
 Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)

Sure enough, it was only SP1. The KB article states there were CU fixes for both SP1 and SP2, but we decided to go ahead and just upgrade the instance to SP3 + CU3 to get to the most recent version we could.

After patching the instance, I set up a simple job to query the amount of memory MEMORYCLERK_XE was using and insert that value into a table every 30 minutes – that way I could track the usage history. The basic setup script for that job can be found here.
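The job's collection step boils down to something like this (the table name is my own placeholder, not necessarily what the linked script uses):

```sql
--one-time setup: hypothetical history table
CREATE TABLE dbo.MemoryClerkXE_History (
    capture_time     datetime      NOT NULL DEFAULT (GETDATE()),
    memory_usage_mb  decimal(15,2) NOT NULL
);

--job step: capture the current MEMORYCLERK_XE usage
INSERT INTO dbo.MemoryClerkXE_History (capture_time, memory_usage_mb)
SELECT GETDATE(), CAST(SUM(mc.pages_kb) / 1024.0 AS decimal(15,2))
FROM sys.dm_os_memory_clerks AS mc
WHERE mc.[type] = N'MEMORYCLERK_XE';
```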



I was a little nervous for the first couple of days, because it looked like the memory usage was slowly climbing on its own. However, it eventually dropped off and seems to be maintaining a steady usage of somewhere in the 30-50 mb range. More importantly, I can now drop the event sessions on the instance and the MEMORYCLERK_XE memory usage drops to 0!


Hopefully this might help someone else who may be having the same problem that we encountered. This is also just another example of why you should patch your SQL Servers regularly.


sp_cdc_enable_table – Error 6517: Failed to create AppDomain

I was enabling Change Data Capture (CDC) on some tables in a database the other day, and ran into a strange error I hadn’t encountered before. 

Here is the basic script I was running:

USE [my_db]

--Enable CDC on the database
EXEC sys.sp_cdc_enable_db

--Enable CDC on the dbo.Table_1 table
EXEC sys.sp_cdc_enable_table @source_schema = N'dbo'
	,@source_name = N'Table_1'
	,@role_name = NULL
	,@supports_net_changes = 1;

--Change the retention time to 14 days
EXECUTE sys.sp_cdc_change_job @job_type = N'cleanup'
	,@retention = 20160; --20160 minutes = 14 days

Here is the error I received when executing the sp_cdc_enable_table procedure:

Msg 22832, Level 16, State 1, Procedure sp_cdc_enable_table_internal, Line 630
Could not update the metadata that indicates table [dbo].[Table_1] is enabled for Change Data Capture. The failure occurred when executing the command ‘sp_cdc_create_change_enumeration_functions’. The error returned was 6517: ‘Failed to create AppDomain “mssqlsystemresource.dbo[runtime].1”.
Exception has been thrown by the target of an invocation.’. Use the action and error to determine the cause of the failure and resubmit the request.

As you can see, this same error was also being written to the Error Log on the server:


After doing some light googling, I came across these two links:

Both links combined seemed to point to 3 different potential solutions:

  1. Update SQL Server to the latest service pack or Cumulative Update
  2. There are pending Windows Updates waiting on a machine restart
  3. The server was renamed recently and requires an update using sp_addserver

The common solution between both links was option #2, so I decided to check with our Windows admin to see about the Windows Updates. Sure enough, some Windows updates had recently been installed and the server was pending a reboot to complete the installation.

Luckily for us, we had already scheduled the server reboot during a maintenance window during the coming weekend, so we only had to wait a couple of days before attempting to run the CDC script again.

After the server was restarted, I was able to run the script to enable CDC on the table with no problems! According to Microsoft:

It’s usually about some inconsistency with the CLR/.NET itself. It means the CLR installation is corrupted (unlikely), or, more likely, a .NET patch was recently installed and the machine is pending a reboot.
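Once the reboot is done and the script succeeds, a quick catalog check confirms CDC is enabled (standard catalog views, using the same my_db database from the script above):

```sql
--is CDC enabled on the database?
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'my_db';

USE [my_db]

--which tables are being tracked?
SELECT name, is_tracked_by_cdc
FROM sys.tables
WHERE is_tracked_by_cdc = 1;
```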

Hopefully, this will help someone out there who happens to run across this same problem.

Override Bad Table Hints With a Plan Guide

Unfortunately, we’ve all seen some third party applications pass some pretty awful SQL into databases. The worst of these always seem to be queries that cannot be edited or modified from within the application. Table/index and MAXDOP hints are a couple of the things I don’t like to see. These hints might be perfectly fine for a while, but as data grows and tables get larger, certain hints can prevent the query optimizer from working properly and providing the optimal execution plan.

I will be using the AdventureWorks2012 database for this example, and as part of the setup, I’ve already created the index below on the [Sales].[SalesOrderDetail] table. (It will be required for the FORCESEEK hint in my examples to work):

USE [AdventureWorks2012]
GO

--any nonclustered index on ProductID works here
CREATE NONCLUSTERED INDEX [IX_SalesOrderDetail_ProductID]
ON [Sales].[SalesOrderDetail]
([ProductID] ASC)
GO

Let’s take this simple query with and without an added FORCESEEK hint as an example.

--SalesOrderDetail - 372,956 reads
SELECT * FROM Sales.SalesOrderDetail s WITH (FORCESEEK)
INNER JOIN Production.Product p ON s.ProductID = p.ProductID;


--SalesOrderDetail - 1,246 reads
SELECT * FROM Sales.SalesOrderDetail s 
INNER JOIN Production.Product p ON s.ProductID = p.ProductID;


Using the WITH(FORCESEEK) hint, the query is doing 372,956 reads on the SalesOrderDetail table (with a pretty nasty Key Lookup), but without the hint, the query only does 1,246 reads (in this case, a scan is a good thing).

Let’s assume for this blog post that the application is passing in the query with the FORCESEEK hint, and that we cannot modify the application to remove the hint, even though we know, for a fact, that the query will perform much better without the hint. How are we supposed to remove the FORCESEEK hint and cut down on reads?

First, we need to take a look at adding another TABLE HINT to the query to override the existing FORCESEEK hint. There is a pretty nifty trick to do this. We can simply append ‘OPTION(TABLE HINT(s))’ to the end of the query and it’ll wipe out any prior TABLE HINT on the SalesOrderDetail table (aliased as ‘s’). Since we aren’t actually specifying any specific TABLE HINT, it basically resets all existing table hints on SalesOrderDetail. It brings us back down to 1,246 reads and gives us the query plan with the Clustered Index scan.

--SalesOrderDetail - 1,246 reads
SELECT * FROM Sales.SalesOrderDetail s WITH (FORCESEEK)
INNER JOIN Production.Product p ON s.ProductID = p.ProductID
OPTION(TABLE HINT(s));

Now that we’ve figured out how to add to the existing query to remove the FORCESEEK hint, we need to figure out how to automatically have this statement appended to this query every time it is executed. Two words: Plan Guide.

Here is the basic syntax for getting this plan guide added to the database. You must give it a name, specify which SQL statement the hint will be added to, the type of plan guide, and the hint that will be added to the SQL Statement. (NOTE: the SQL statement must be passed into @stmt EXACTLY as the application will pass it in to the database. This even includes trailing spaces – which have caused me headaches many times.)

USE [AdventureWorks2012]

EXEC sp_create_plan_guide 
@name = N'RemoveFORCESEEK', 
@stmt = N'SELECT * FROM Sales.SalesOrderDetail s WITH (FORCESEEK)
INNER JOIN Production.Product p ON s.ProductID = p.ProductID;', 
@type = N'SQL', 
@hints = N'OPTION(TABLE HINT (s))'


Now that the plan guide is created, we can test running the original query with the FORCESEEK hint.


We can see from the execution plan properties that the query is now using the Plan Guide and doing the Clustered Index scan on SalesOrderDetail for 1,246 reads. We’ve successfully circumvented the bad FORCESEEK hint and saved a ton of reads in the process.
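If the plan guide ever needs to be turned off (say, after the data distribution changes again), sp_control_plan_guide handles that:

```sql
--disable the plan guide to get the original hint behavior back...
EXEC sp_control_plan_guide @operation = N'DISABLE', @name = N'RemoveFORCESEEK';

--...re-enable it...
EXEC sp_control_plan_guide @operation = N'ENABLE', @name = N'RemoveFORCESEEK';

--...or drop it entirely
EXEC sp_control_plan_guide @operation = N'DROP', @name = N'RemoveFORCESEEK';
```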

Check out this MSDN link for further reading on Query Hints:

Find All Queries Using a Plan Guide in Cache

The inspiration for this post came a few days ago when a developer wanted to figure out if his application’s queries were still using plan guides that had been set up a couple of years ago. I asked him if he knew the specific queries in question, and of course, he did not.

My first thought was to create an Extended Events session to capture the “sqlserver.plan_guide_successful” and “sqlserver.plan_guide_unsuccessful” events. This would tell me, upon execution, if a query was successful or unsuccessful at using a plan guide. Unfortunately, the developer wasn’t able to execute the queries at the time, so this wouldn’t work in this case.
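For reference, a session along those lines might look like the sketch below (the session name and file target are illustrative, not from the original scenario):

```sql
-- Capture every plan guide match attempt, successful or not
CREATE EVENT SESSION [PlanGuideUsage] ON SERVER
ADD EVENT sqlserver.plan_guide_successful
    (ACTION(sqlserver.sql_text, sqlserver.database_name)),
ADD EVENT sqlserver.plan_guide_unsuccessful
    (ACTION(sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.event_file(SET filename = N'PlanGuideUsage');

ALTER EVENT SESSION [PlanGuideUsage] ON SERVER STATE = START;
```

Once the application actually fires the queries, reading the event_file target shows which statements matched (or failed to match) a plan guide.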

My next thought was to drill down into the database in Object Explorer -> Programmability -> Plan Guides and look to see what plan guides actually existed. Then I could drill into each individual plan guide and find the query that plan guide referenced. I could then look up that query in cache and find the execution plan to see if the query was indeed using the plan guide. (Note – you can also query sys.plan_guides to get a list of all plan guides created on the database)
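The sys.plan_guides route is easy to script; a minimal query against the catalog view looks like this:

```sql
-- List every plan guide in the current database,
-- along with the statement text each one targets
SELECT name,
       scope_type_desc,
       is_disabled,
       query_text
FROM sys.plan_guides
ORDER BY name;
```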

This can usually be done by looking at the properties window of the left-most operator (the Select operator in this case) of the query plan. If the query is using a plan guide, there will be two Properties: PlanGuideDB and PlanGuideName.


This could be tedious and take a little bit of time if you needed to look up queries for many different plan guides. There had to be another way.

If you look at the raw XML of the execution plan, you can see the same PlanGuideDB and PlanGuideName properties (on the StmtSimple element below):

<ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan" Version="1.2" Build="11.0.2100.60">
        <StmtSimple StatementText="select ? from Schema1.Object1" StatementId="1" StatementCompId="1" StatementType="SELECT" RetrievedFromCache="true" PlanGuideDB="AdventureWorks2012" PlanGuideName="PlanGuide_1" StatementSubTreeCost="0.00363" StatementEstRows="290" StatementOptmLevel="TRIVIAL" QueryHash="0x4578128A64C31C86" QueryPlanHash="0x4C7F74CCEDD0EAE0">
          <QueryPlan NonParallelPlanReason="MaxDOPSetToOne" CachedPlanSize="16" CompileTime="0" CompileCPU="0" CompileMemory="120">
            <MemoryGrantInfo SerialRequiredMemory="0" SerialDesiredMemory="0" />
            <OptimizerHardwareDependentProperties EstimatedAvailableMemoryGrant="207295" EstimatedPagesCached="51823" EstimatedAvailableDegreeOfParallelism="1" />
            <RelOp NodeId="0" PhysicalOp="Compute Scalar" LogicalOp="Compute Scalar" EstimateRows="290" EstimateIO="0" EstimateCPU="2.9e-005" AvgRowSize="11" EstimatedTotalSubtreeCost="0.00363" Parallel="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row">
                <ColumnReference Column="Expr1003" />
                    <ColumnReference Column="Expr1003" />
                    <ScalarOperator ScalarString="ScalarString1">
                      <Const ConstValue="Value1" />
                <RelOp NodeId="1" PhysicalOp="Index Scan" LogicalOp="Index Scan" EstimateRows="290" EstimateIO="0.003125" EstimateCPU="0.000476" AvgRowSize="9" EstimatedTotalSubtreeCost="0.003601" TableCardinality="290" Parallel="0" EstimateRebinds="0" EstimateRewinds="0" EstimatedExecutionMode="Row">
                  <OutputList />
                  <IndexScan Ordered="0" ForcedIndex="0" ForceSeek="0" ForceScan="0" NoExpandHint="0" Storage="RowStore">
                    <DefinedValues />
                    <Object Database="Database1" Schema="Schema1" Table="Object1" Index="Index1" IndexKind="NonClustered" />

Ah ha! We should be able to search through the execution plan XML to find all plans currently in cache that contain %PlanGuideName%. With that in mind, I came up with the following query. (Note: this will only find queries using a plan guide that are currently cached. It is entirely possible for a query that is not in the procedure cache to use a plan guide, too; but if it isn’t cached, we won’t find it.)

--Could take a while to execute on a server with a large plan cache, be patient

SELECT qt.text
	,DB_NAME(qt.dbid) AS db_name
	,LEFT(SUBSTRING(CAST(qp.query_plan AS nvarchar(max)), CHARINDEX('PlanGuideName', 
	CAST(qp.query_plan AS nvarchar(max))) + 15, 100), 
	CHARINDEX('"', SUBSTRING(CAST(qp.query_plan AS nvarchar(max)), 
	CHARINDEX ('PlanGuideName', CAST(qp.query_plan AS nvarchar(max))) + 16, 100))) AS PlanGuideName
	,qs.total_logical_reads / qs.execution_count AS [Avg Logical Reads]
	,qs.total_elapsed_time / qs.execution_count / 1000 AS [Avg Elapsed ms]
	,qs.creation_time as Compile_Time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE ('%PlanGuideName%')
	AND qt.text NOT LIKE '%PlanGuideName%' --filter out any DBA queries (like this one) that reference PlanGuideName while digging through the cache
ORDER BY qs.last_execution_time desc
OPTION (RECOMPILE, MAXDOP 1);--prevent this query from filling the cache & try to minimize the cpu impact

Sample output:


Voilà! Now we have a list of all the queries, currently in cache, that are using plan guides. Much easier than digging through each Plan Guide individually!