SQL Shootout: In-Memory Tables vs. Disk-Based Tables (Round 2)

Choosing the right index for Memory Optimized Tables (MOTs)

In Part 1 of our investigation into In-Memory Tables, we saw how changing a Hash index on a sample database table to a regular Nonclustered index improved the performance of our Select operation. In this article, we’ll look deeper into the types of indexes we can apply to memory-optimized tables, focusing on the two most common: hash and nonclustered indexes.

We hope to find differences in performance between these index types based on the types of queries we use against our sample tables. From these measurements, we can infer which index is best suited for the kinds of queries our users will actually run in our real-world data warehouses, and then apply those inferred rules to build better-performing data applications.

Following on from the claim that memory-optimized tables are more performant than disk-based tables, we also expect to see significant differences in Select performance between the two table types.

Setting up the test environment

For our test environment, we will be working with SQL Server 2016:

– Microsoft SQL Server Management Studio      (13.0.16100.1)

– Microsoft Data Access Components (MDAC)      (10.0.10586.0)

– Microsoft .NET Framework      (4.0.30319.42000)

– Operating System      (6.3.10586)

The test database SQLLab is created with Simple recovery model, provisioned for In-Memory Table operations, and set for auto-updates for index statistics. (All scripts to create our test environment are provided as attachments to this document.)
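The attached scripts do the actual provisioning, but the key steps look roughly like the sketch below (file paths and the filegroup name are placeholders, not taken from the attachments):

```sql
-- Rough provisioning sketch; the attached scripts are authoritative.
CREATE DATABASE SQLLab;

ALTER DATABASE SQLLab SET RECOVERY SIMPLE;

-- A memory-optimized filegroup is required before In-Memory tables can be created.
ALTER DATABASE SQLLab ADD FILEGROUP SQLLab_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SQLLab ADD FILE
    (NAME = 'SQLLab_mod', FILENAME = 'C:\Data\SQLLab_mod')
    TO FILEGROUP SQLLab_mod;

-- AUTO_UPDATE_STATISTICS is ON by default; shown here for completeness.
ALTER DATABASE SQLLab SET AUTO_UPDATE_STATISTICS ON;
```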

Stating our goals

We intend to use 3 tables (TestTable type) for our comparisons:

1. An in-memory table with HASH indexes

2. An in-memory table with NONCLUSTERED indexes

3. A disk-based table with NONCLUSTERED indexes

These 3 tables will be loaded with the same number of rows of sample data, with the differences listed above.

We intend to apply 4 types of Select queries to our 3 test tables. All 4 queries will Select 6 columns from each table, but with different WHERE clauses (the Filter types):

1. Filter on 1 column, using the EQUALITY operator and a parameter

2. Filter on 2 columns, using the EQUALITY operator and a parameter for each column

3. Filter on 1 column, using a RANGE operator with FROM and TO parameters

4. Filter on 4 columns, using the EQUALITY operator and a parameter for each column
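Sketched as WHERE clauses, the 4 Filter types look like this (parameter names and the column-to-filter pairings are assumptions; the attached scripts are authoritative):

```sql
-- Filter type 1: equality on 1 column
WHERE code1 = @Code1

-- Filter type 2: equality on 2 columns
WHERE col2 = @Col2 AND code1 = @Code1

-- Filter type 3: range on 1 column, with FROM and TO parameters
WHERE col2 BETWEEN @From AND @To

-- Filter type 4: equality on 4 columns
WHERE col2 = @Col2 AND code1 = @Code1 AND code2 = @Code2 AND code3 = @Code3
```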

Each of the 3 tables will run each query type 10 times, and we will capture the CPU Time and Elapsed Time from each run. We will then summarize the results according to TestTable type and Filter type.

Additionally, we will run the same queries using 3 sets of Joined tables:

– The in-memory table with HASH indexes LEFT-JOINED to 3 in-memory lookup tables with HASH indexes

– The in-memory table with NONCLUSTERED indexes LEFT-JOINED to 3 in-memory lookup tables with NONCLUSTERED indexes

– The disk-based table with NONCLUSTERED indexes LEFT-JOINED to 3 disk-based lookup tables with NONCLUSTERED indexes
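Each joined query follows the same shape; only the underlying tables change between the 3 sets. As a sketch (lookup table names, join columns, and the filter are assumptions):

```sql
-- Shape of the multi-table test queries; names are illustrative only.
SELECT t.col1, t.col2, t.code1, t.code2, t.code3, t.col4
FROM dbo.TestTable AS t
LEFT JOIN dbo.LookupCode1 AS l1 ON l1.code1 = t.code1
LEFT JOIN dbo.LookupCode2 AS l2 ON l2.code2 = t.code2
LEFT JOIN dbo.LookupCode3 AS l3 ON l3.code3 = t.code3
WHERE t.col2 = @Col2;
```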

Our goal is to understand the differences between our scenarios using the single-table results. The multi-table join scenario will then give us additional, more complete insight into the performance of database tables in our real-world production data-warehouse environments.

Building the main test tables 

The 3 main test tables are defined with the same basic structure: a primary key col1 and an integer reference column col2, which is paired with the 3 codeX columns for filtering (col3 and col4 are used for auxiliary tests not covered in this article).

For the table that uses HASH indexes, each index is created on column col2 paired with one of the codeX columns:
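A sketch of that HASH-indexed memory-optimized table follows; the table name, column types, and BUCKET_COUNT values are assumptions, and the attached CreateSampleTablesForComparison.sql is the authoritative version:

```sql
-- Sketch: memory-optimized table with composite HASH indexes (col2 + codeX).
CREATE TABLE dbo.TestTableInMemHash
(
    col1  INT         NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 262144),
    col2  INT         NOT NULL,
    code1 VARCHAR(10) NOT NULL,
    code2 VARCHAR(10) NOT NULL,
    code3 VARCHAR(10) NOT NULL,
    col3  INT         NOT NULL,
    col4  VARCHAR(20) NOT NULL,
    INDEX ix_code1 HASH (col2, code1) WITH (BUCKET_COUNT = 131072),
    INDEX ix_code2 HASH (col2, code2) WITH (BUCKET_COUNT = 131072),
    INDEX ix_code3 HASH (col2, code3) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```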

The same pattern applies to the tables (in-memory and disk-based) that use NONCLUSTERED indexes:
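The nonclustered twin can be sketched as below (again, names and types are assumptions). The disk-based version has the same columns but drops the WITH clause and uses ordinary CREATE INDEX statements:

```sql
-- Sketch: memory-optimized table with NONCLUSTERED (Bw-tree) indexes.
CREATE TABLE dbo.TestTableInMemNC
(
    col1  INT         NOT NULL PRIMARY KEY NONCLUSTERED,
    col2  INT         NOT NULL,
    code1 VARCHAR(10) NOT NULL,
    code2 VARCHAR(10) NOT NULL,
    code3 VARCHAR(10) NOT NULL,
    col3  INT         NOT NULL,
    col4  VARCHAR(20) NOT NULL,
    INDEX ix_code1 NONCLUSTERED (col2, code1),
    INDEX ix_code2 NONCLUSTERED (col2, code2),
    INDEX ix_code3 NONCLUSTERED (col2, code3)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```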

(See our .zip file that contains the script CreateSampleTablesForComparison.sql for more details.)

Loading the main test tables

The attached script (located in our .zip file) FillSampleTablesWithData.sql loads 200,000 rows of sample data into the 3 main tables. The sample consists of:

– Unique integers for columns col1 and col3

– Unique string values for column col4

– Repeating integer values, ranging from 1 to 11 for column col2

– Repeating string values for the codeX columns
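A loader matching that data shape can be sketched as a simple loop (the value formulas here are hypothetical; the attached FillSampleTablesWithData.sql is the authoritative version, and the same insert is repeated for the other 2 tables):

```sql
-- Hypothetical generator for 200,000 rows of the shape described above.
DECLARE @i INT = 1;
WHILE @i <= 200000
BEGIN
    INSERT INTO dbo.TestTableInMemHash (col1, col2, code1, code2, code3, col3, col4)
    VALUES (@i,                                  -- unique integer
            (@i % 11) + 1,                       -- repeating 1..11
            'A' + CAST(@i % 10 AS VARCHAR(9)),   -- repeating code values
            'B' + CAST(@i % 10 AS VARCHAR(9)),
            'C' + CAST(@i % 10 AS VARCHAR(9)),
            @i,                                  -- unique integer
            'ROW' + CAST(@i AS VARCHAR(17)));    -- unique string
    SET @i += 1;
END;
```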

Data warehouse developers will recognize a classic Fact table pattern that the main test tables are designed to emulate. Each row in the main test table represents a transaction made at some designated point. (I purposely avoided creating a datetime column to represent the “designated point”, focusing instead on merely providing a way to define unique rows in as generic a manner as possible.)

(See our .zip file that contains the script FillSampleTablesWithData.sql for more details.)

Running the tests and recording performance results

There are 6 scripts used to run the tests: 3 for the Single table tests and 3 for the Multiple table join tests. For each test script, you must enable SET STATISTICS TIME ON (Query Options\Execution\Advanced). This prints the CPU Time and Elapsed Time for each test script in the Messages tab of the Results window in SSMS.
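Equivalently, the statistics can be switched on inside the script itself; either way, the Messages tab reports the timings per statement in SQL Server's standard shape (the numbers below are illustrative, not measured):

```sql
SET STATISTICS TIME ON;

-- ... run a test query here ...

-- The Messages tab then shows, per statement, something like:
--   SQL Server Execution Times:
--      CPU time = 15 ms,  elapsed time = 21 ms.
```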

Each script contains the Select SQL statement for each test, a @Trial variable, and 4 sets of filters. The @Trial variable selects the Filter type being measured: setting @Trial to 1 and running the script will measure the performance of Filter type 1.
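The @Trial dispatch inside each script presumably follows a pattern like this sketch (variable names and filter values are assumptions):

```sql
-- Sketch of the per-script trial dispatch; names and values are illustrative.
DECLARE @Trial INT = 1;   -- set to the Filter type being measured (1..4)
DECLARE @Col2 INT = 5, @Code1 VARCHAR(10) = 'A1';

IF @Trial = 1
    SELECT col1, col2, code1, code2, code3, col4
    FROM   dbo.TestTableInMemHash
    WHERE  code1 = @Code1;
ELSE IF @Trial = 2
    SELECT col1, col2, code1, code2, code3, col4
    FROM   dbo.TestTableInMemHash
    WHERE  col2 = @Col2 AND code1 = @Code1;
-- ... Filter types 3 and 4 follow the same pattern.
```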

Single Table Tests

The single table tests are found in scripts SelectJoinInMemorytables_SingleTable.sql, SelectJoinInMemorytablesNC_SingleTable.sql, and SelectJoinRegularTables_SingleTable.sql.

Open the 3 scripts in SSMS.

1) Set the value of the @Trial variable in all scripts to 1, or the current Filter type you are measuring.

2) Run all 3 scripts:

– Click the Messages tab for each script, and copy the CPU Time and Elapsed Time to the Excel sheet InMemoryVsDiskTableShootout.xlsx.

– Choose the tab for the current trial (Trial-1 through Trial-4) and the In-Memory Tables (HASH) table if you are posting results from script SelectJoinInMemorytables_SingleTable.sql.

– Choose the same Trial tab and the DISK TABLES table if you are posting results from script SelectJoinRegularTables_SingleTable.sql.

– Choose the same Trial tab and the In-Memory Tables (NONCLUSTERED) table if you are posting results from script SelectJoinInMemorytablesNC_SingleTable.sql.

3) Return to Step 1, set the @Trial value to the next value (remember, for the Single table sets, the highest value you can set @Trial is 4.)

4) Repeat steps 2 to 4 until all Single table trials are run.

Each script runs the Select statement ten (10) times which provides an ample set of measurements for analysis. At the end of the Single table tests, you should have 120 measurements total: 3 Table types x 4 Filter types x 10 test runs.

Multiple Table Tests

The multi-table tests are contained in SelectJoinInMemorytables_MultiTableJoin.sql, SelectJoinInMemorytablesNC_MultiTableJoin.sql, and SelectJoinRegularTables_MultiTableJoin.sql. These scripts measure Select performance for each Table type joined to 3 Lookup tables.

The steps to run these scripts are similar to the Single Table steps:

Open the 3 scripts in SSMS.

5) Set the value of the @Trial variable in all scripts to 5, or the current Filter type you are measuring.

6) Run all 3 scripts.

7) Click the Messages tab for each script, and copy the CPU Time and Elapsed Time to the Excel sheet InMemoryVsDiskTableShootout.xlsx.

– Choose the tab for the current trial (Trial-5 through Trial-8) and the In-Memory Tables (HASH) table if you are posting results from script SelectJoinInMemorytables_MultiTableJoin.sql.

– Choose the same Trial tab and the DISK TABLES table if you are posting results from script SelectJoinRegularTables_MultiTableJoin.sql.

– Choose the same Trial tab and the In-Memory Tables (NONCLUSTERED) table if you are posting results from script SelectJoinInMemorytablesNC_MultiTableJoin.sql.

8) Return to Step 5, set the @Trial value to the next value (remember, for the Multiple table sets, the highest value you can set @Trial is 8.)

9) Repeat steps 6 to 9 until all Multiple table trials are run.

Summarizing Test Results

Post the average of the 10 trials per table in the SummaryInMemoryVsDiskTableShootout Excel sheet.

The summary is divided into 2 sections: Single Table and Multi-Table. For each section, the average of the 10 measurements is recorded for each of the 3 Table types and 4 Filter types (labeled AVG MS, per row). The average of all trials is then displayed, identified by the label AVERAGE DURATION MS. Finally, the average of all 3 table types in a section is summarized, identified by the label MULTITABLE DURATION MS.

For the overall summary, comparing Single to Multiple Table join results, we show the percentage gain or drop in the section labeled AVG MS. PCT DIFF SINGLE VS MULTITABLE.

Analyzing Test Results

6 Key Takeaways

Here are 6 key takeaways derived from the results in our summary matrix:

1. In-Memory tables performed better than Disk-based tables.

2. The In-Memory table with NONCLUSTERED indexes performed better than the In-Memory table with HASH indexes in the Multiple table tests.

3. The In-Memory table with NONCLUSTERED indexes and the In-Memory table with HASH indexes performed roughly equally well in the SINGLE table tests.

4. Multiple table tests show better performance results than Single table tests.

5. The Two-column Filter performed better than the One-column Filter for SINGLE In-Memory tables with a HASH index (Trial-1 and Trial-2 for the In-Memory HASH tests). Each hash index is created using column col2 and one of the columns code1, code2, or code3. According to the BOL, a hash index is not helpful unless the SELECT…WHERE clause has equality tests for every column in the key of the hash index. Clearly, the filter with 2 columns supports the higher performance levels.

6. The Range filter tests show better performance for In-Memory tables with a NONCLUSTERED index than for In-Memory tables with a HASH index (Trial-3 and Trial-7 for the In-Memory tests). This is expected, and is explained in the BOL: nonclustered indexes are optimized for range scans (inequality seek operations) and ordered scans.

Next, let’s address the elephant in the room: In-Memory tables with NONCLUSTERED indexes perform better than In-Memory tables with HASH indexes when they are joined with other tables (an average of 152 versus 166 milliseconds in our summary). In the Single table tests, we see our HASH-indexed table outperform our NONCLUSTERED table, 166 versus 176 milliseconds. So we expected the same trend in our multi-table tests, right?

Bucket Count (HASH Index) Analysis

Well, the answer lies in the data content of our tables, particularly the columns that constitute our hash indexes. Since we’re joining the main table to the 3 lookup tables using the columns col2 and code1, code2, and code3, we know where to look for the culprit that’s dragging down our HASH-indexed table and causing it to return rows at a slower rate than its NONCLUSTERED counterpart.

We can use the query below to get the details we need.
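A common formulation of that bucket-analysis query reads the sys.dm_db_xtp_hash_index_stats DMV (the query in the attachments may differ slightly in shape):

```sql
-- Bucket statistics for every hash index in the current database.
SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name                    AS index_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,      -- average rows chained per bucket
       hs.max_chain_length
FROM   sys.dm_db_xtp_hash_index_stats AS hs
JOIN   sys.indexes AS i
       ON i.object_id = hs.object_id
      AND i.index_id  = hs.index_id;
```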

And our results look like this:

Notice the unusually high values for the avg_chain_length of our hash indexes: 20,000, 25,000 and 22,222. That tells us that we have a ton of duplicates in our index keys. And since our table joins use like-structured joins (hash-indexed table left-joined to hash-indexed lookups, and nonclustered-indexed table left-joined to nonclustered-indexed lookups), the joins in our hash-indexed set act like equality filters that have to slog through the duplicates in the row chains of the hash index to retrieve rows.

The BOL recommends that when we see high avg_chain_length values during our bucket analysis of a hash index, we should use a nonclustered index instead.
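If we follow that advice, SQL Server 2016 lets us swap the index type on a memory-optimized table in place (the table and index names below are assumed, matching the sketches earlier in this article):

```sql
-- Replace a problematic hash index with a nonclustered (Bw-tree) index.
ALTER TABLE dbo.TestTableInMemHash
    DROP INDEX ix_code1;

ALTER TABLE dbo.TestTableInMemHash
    ADD INDEX ix_code1 NONCLUSTERED (col2, code1);
```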

To summarize our efforts: we created an effective test environment to study In-Memory table indexes in detail; we confirmed the expected performance boost In-Memory tables give us in SQL Server; and we learned how to use the bucket count query to examine the health of a hash index and explain anomalies that may be detrimental to performance.
