When it comes to deploying SQL Server onto an Azure IaaS VM, it is sometimes difficult to know the best option for deploying the SQL Server tempdb database.
Many of the SQL templates on the marketplace deploy it to C:\ by default, after which you should redeploy the database to either D:\ (local SSD) or to an attached premium disk (P10, P20, P30). The Microsoft SQL on IaaS Performance Best Practice article states both are possibilities under certain circumstances, however it does not provide empirical evidence as to which to use when.
For those who have not seen the article – read here – https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-sql-performance/
The key comments of interest are these…
For D-series, Dv2-series, and G-series VMs, the temporary drive on these VMs is SSD-based. If your workload makes heavy use of TempDB (e.g. for temporary objects or complex joins), storing TempDB on the D drive could result in higher TempDB throughput and lower TempDB latency.
For VMs that support Premium Storage (DS-series, DSv2-series, and GS-series), we recommend storing TempDB and/or Buffer Pool Extensions on a disk that supports Premium Storage with read caching enabled. There is one exception to this recommendation; if your TempDB usage is write-intensive, you can achieve higher performance by storing TempDB on the local D drive, which is also SSD-based on these machine sizes.
… AND SO – let's do some testing to validate this puppy!
Preparing the Environment
First, let's get everything we need to run our tests:
- The core of my tests were performed on an Azure DS3 VM (7GB RAM / 2 cores) from the marketplace with SQL Server 2016 Enterprise (RTM).
- I deployed a single P30 premium disk (5000 IOPS / 200MB/sec) – mainly as they are significantly faster than P10 (500 IOPS / 100MB/sec).
- Read caching is enabled on the disks.
- All disks are formatted using 64KB allocation units.
- Increased the TempDB size and added an additional data file (so we align one data file per core). The SQL script used to expand and move the TempDB database from C:\ to D:\ (which is the local SSD) is below.
- All other SQL Server services except the Database Service have been stopped.
NOTE: before you do this you need to grant the account that runs the SQL Server database engine FULL permissions on the root of D:\ so it can create the new database files.
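As an example, assuming the default instance runs under the virtual service account NT SERVICE\MSSQLSERVER (adjust to whatever account your instance actually runs as), the grant can be done from an elevated command prompt:

```
icacls D:\ /grant "NT SERVICE\MSSQLSERVER:(OI)(CI)F"
```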
-- MOVE TEMPDB TO D:\
USE master;
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb.mdf');
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\templog.ldf');
GO
-- ADD NEW FILE AND RESIZE TEMPDB
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'temp', FILENAME = N'D:\tempdb2.ndf', SIZE = 3170304KB, FILEGROWTH = 65536KB )
GO
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev', SIZE = 3170304KB )
GO
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'templog', SIZE = 3170304KB )
GO
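As a quick sanity check after the service restart, the data-file-to-core alignment can be confirmed with a simple query (my own helper, not part of the original script):

```sql
-- Compare visible core count with the tempdb data-file count; they should match
SELECT (SELECT cpu_count FROM sys.dm_os_sys_info) AS cores,
       (SELECT COUNT(*)
        FROM tempdb.sys.database_files
        WHERE type_desc = 'ROWS') AS tempdb_data_files;
```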
Preparing and Running the Test
To test the throughput I will run a set test with the TempDB database on D:\ (local SSD) and then rerun the test with TempDB moved onto F:\ (P30 premium disk). Between the tests SQL Server is restarted so we're starting with a clean cache and state.
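Between runs it is worth confirming where the tempdb files actually landed after the restart; a simple check (my own, not part of the original test harness) is:

```sql
-- List the physical location and size of every tempdb file
-- (size is in 8KB pages, hence the conversion to MB)
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
```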
The test SQL script will create a temporary table and then run a series of insert, update, select and delete queries against that table. We'll then capture and record the statistics and timings.
I will use the classic SQLQueryStress tool for my tests and stats collection – for those who want to grab this tool, it's here on GitHub: https://github.com/ErikEJ/SqlQueryStress. The tool was originally developed by Adam Machanic.
I am adapting a clever method highlighted by Brent Ozar for leveraging the tool to create a production-like performance load, however in my case I don't want it to be random but rather repeatable – see this article for more info – https://www.brentozar.com/archive/2015/05/how-to-fake-load-tests-with-sqlquerystress/
The SQLQueryStress tool is run on the server and is connected to the localhost and the TempDB database. Only a single thread and iteration are configured for the test.
The SQL test script being run executes 1,000,000 iterations of the various database calls. Due to an odd quirk in the tool (which I couldn't work out) I had to use a permanent table in tempdb instead of a # temp table.
-- RUN TEST
BEGIN
    SET NOCOUNT ON

    -- CREATE TEMP TABLE
    IF NOT EXISTS (SELECT 1 FROM tempdb.dbo.sysobjects WHERE type = 'U' AND name LIKE '%TempPerfTest%')
    BEGIN
        CREATE TABLE tempdb.dbo.TempPerfTest
        (
            MyID int NOT NULL IDENTITY(1, 1) PRIMARY KEY,
            MyText varchar(50) NOT NULL
        );

        -- LOAD STARTING DATA
        INSERT INTO tempdb.dbo.TempPerfTest ( MyText )
        SELECT TOP 1000000 REPLICATE('X', RAND(o1.object_id) * 50)
        FROM master.sys.objects o1
        CROSS JOIN master.sys.objects o2
        CROSS JOIN master.sys.objects o3;
    END

    DECLARE @Id INT = 0
    WHILE (@Id < 100000)
    BEGIN
        IF @Id % 4 = 0 -- SELECT
            SELECT * FROM tempdb.dbo.TempPerfTest WHERE MyID = (@Id * 10)
        ELSE IF @Id % 3 = 0 -- INSERT
            INSERT INTO tempdb.dbo.TempPerfTest (MyText)
            SELECT REPLICATE('X', RAND(@Id) * 50)
        ELSE IF @Id % 2 = 0 -- UPDATE
            UPDATE tempdb.dbo.TempPerfTest
            SET MyText = (SELECT REPLICATE('X', RAND(@Id) * 49))
            WHERE MyID = (@Id * 6)
        ELSE -- DELETE
            DELETE FROM tempdb.dbo.TempPerfTest WHERE MyID = (@Id * 3)

        SET @Id = @Id + 1
    END
END
And so the results are in for the 1,000,000 iterations
| TEST | TIME (SEC) | LOGICAL READS |
| --- | --- | --- |
| TempDB on D:\ (SSD) | 25.00 | 336,493 |
| TempDB on F:\ (P30) | 26.30 | 336,493 |
And then run again but with 10,000,000 iterations (10 times the initial workload)
| TEST | TIME (SEC) | LOGICAL READS |
| --- | --- | --- |
| TempDB on D:\ (SSD) | 208.00 | 3,010,789 |
| TempDB on F:\ (P30) | 226.00 | 3,010,789 |
The findings show some interesting things.
- My 1M workload executed about 5% faster on the local SSD than on the single P30. The 10M workload was roughly 8% faster (208 vs 226 seconds).
- The difference is fairly marginal, so it really comes down to whether you are looking to squeeze just a little bit more performance out of your application.
- I didn't try multiple P30 disks in a striped set, however it's a fair assumption that adding just one additional premium disk for TempDB would beat the local SSD performance.
- I didn't try different workload mixes, such as all writes or all reads, but again it's a fair assumption that a different mix will produce different performance results.
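For what it's worth, the relative slowdown of the P30 runs can be recomputed directly from the timing tables above (a quick arithmetic check of my own, not part of the test harness):

```sql
-- Percentage slowdown of P30 vs local SSD for each workload
SELECT CAST((26.30 - 25.00) / 25.00 * 100 AS DECIMAL(4,1)) AS pct_1M,    -- ~5.2%
       CAST((226.00 - 208.00) / 208.00 * 100 AS DECIMAL(4,1)) AS pct_10M; -- ~8.7%
```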
And as I always say, please test this yourself as your workload may vary which of course means your mileage may vary!
So there you have it – the data does not lie!
Disclaimer: all content on Mr. Fox SQL blog is subject to the disclaimer found here.