How to analyse Stored Procedure performance: part 2
Part 2 of this series focuses on Query Store and the execution statistics available for analysing performance. Without going too deep into Query Store itself, think of it as a flight recorder for the database it's enabled on.
The quickest way to take a look inside Query Store (when it's enabled) is to use the built-in SSMS reports:
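If Query Store isn't enabled yet, it can be switched on at the database level. A minimal sketch, assuming the StackOverflow2013 database used later in this post:

Transact-SQL
-- Enable Query Store and set it to collect data (READ_WRITE).
ALTER DATABASE [StackOverflow2013] SET QUERY_STORE = ON;
ALTER DATABASE [StackOverflow2013] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);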
Straight off the bat Top Resource Consuming Queries is a great report as it shows duration over time as well as execution plans involved.
Very simple to use, and quite flexible in that you can sort by numerous resource consumption criteria.
You can see object_name is present. That's how you can link the query to the stored proc. The problem, however, is that there is no way to search Query Store for the object name via the GUI.
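T-SQL fills that gap. A minimal sketch that finds every Query Store query belonging to a given proc (dbo.QS_Test here stands in for whatever object name you are after):

Transact-SQL
-- Find all Query Store queries that belong to a specific stored procedure.
SELECT  q.query_id,
        OBJECT_NAME(q.object_id) AS object_name,
        qt.query_sql_text
FROM    sys.query_store_query AS q
JOIN    sys.query_store_query_text AS qt
        ON qt.query_text_id = q.query_text_id
WHERE   q.object_id = OBJECT_ID(N'dbo.QS_Test');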
Via Extended Events you can quickly grab the T-SQL behind these reports. The built-in TSQL XEvent Profiler session is good enough here. Enable it, then open the Top Resource Consuming Queries report. Return to the XE Live Data screen and you will find the T-SQL behind the report:
Copy the cell text and you have the T-SQL driving these reports.
Below I will demonstrate how you can query the Query Store DMVs effectively, using an example proc against the StackOverflow2013 database, and show how to spot plan changes and force good plans.
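The proc definition isn't shown in this post; a hypothetical sketch, consistent with the calls made later (a single @top parameter against a posts_test copy of dbo.posts), might look something like this:

Transact-SQL
-- Hypothetical stand-in for the demo proc: one parameter-sensitive query.
CREATE OR ALTER PROCEDURE dbo.QS_Test
    @top INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT TOP (@top) [Id], [Score], [Title]
    FROM dbo.posts_test
    ORDER BY [Score] DESC;
END;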
I can see from the proc cache that the query executes in 1.34 ms:
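A minimal sketch of pulling that average from the plan cache (dbo.QS_Test is the demo proc):

Transact-SQL
-- Average elapsed time per execution from the plan cache.
-- total_elapsed_time is reported in microseconds.
SELECT  OBJECT_NAME(ps.object_id) AS proc_name,
        ps.execution_count,
        ps.total_elapsed_time * 1.0 / ps.execution_count / 1000.0 AS avg_elapsed_ms
FROM    sys.dm_exec_procedure_stats AS ps
WHERE   ps.object_id = OBJECT_ID(N'dbo.QS_Test');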
Query Store shows the same, and a single plan:
Insert more rows into dbo.posts_test:
Transact-SQL
INSERT INTO dbo.posts_test
    ([id],
    [acceptedanswerid],
    [answercount],
    [body],
    [closeddate],
    [commentcount],
    [communityowneddate],
    [creationdate],
    [favoritecount],
    [lastactivitydate],
    [lasteditdate],
    [lasteditordisplayname],
    [lasteditoruserid],
    [owneruserid],
    [parentid],
    [posttypeid],
    [score],
    [tags],
    [title],
    [viewcount])
SELECT TOP (500000) 100 AS [id], -- 5000000
    [acceptedanswerid],
    [answercount],
    [body],
    [closeddate],
    [commentcount],
    [communityowneddate],
    [creationdate],
    [favoritecount],
    [lastactivitydate],
    [lasteditdate],
    [lasteditordisplayname],
    [lasteditoruserid],
    [owneruserid],
    [parentid],
    [posttypeid],
    [score],
    [tags],
    [title],
    [viewcount]
FROM dbo.posts
WHERE Id > 20;
Table Stats are the same, although sys.dm_db_stats_properties is showing row modifications:
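A minimal sketch of checking that modification counter (the object name is the test table from above):

Transact-SQL
-- Row modification counters per statistic on the test table.
SELECT  s.name AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID(N'dbo.posts_test');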
Now I execute the proc with a different parameter value:
Transact-SQL
EXEC dbo.QS_Test @top = 50000;
Auto statistics update is enabled on the database, hence the update stats operation happening before execution. It is important to note that a statistics update is only triggered when cost-based optimization finds that statistics of interest to the query are stale.
We can now see the second execution in the cache and query store:
Over on the right you can see two plan IDs. This makes it really easy to spot plan regression. Once you have found the 'optimal plan', simply click the Force Plan button and that plan will be used in subsequent executions, unless plan forcing is prevented. This example only shows plan variation and how to force a good plan; don't assume Plan 3 is more optimal based on the example.
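Forcing can also be done in T-SQL rather than through the report. A minimal sketch (the query_id and plan_id values here are placeholders; take the real ones from the report or the DMV query below):

Transact-SQL
-- Force a specific plan for a specific query (IDs are placeholders).
EXEC sp_query_store_force_plan @query_id = 2, @plan_id = 3;

-- And to undo the forcing later:
EXEC sp_query_store_unforce_plan @query_id = 2, @plan_id = 3;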
Moving back to the DMVs, a filter on object_id is the best way to find all executions for a given proc name.
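A minimal sketch of that DMV query, aggregating runtime stats per plan so regressions stand out (dbo.QS_Test is the demo proc):

Transact-SQL
-- Executions and average duration per plan for one stored procedure.
-- avg_duration is reported in microseconds.
SELECT  OBJECT_NAME(q.object_id) AS proc_name,
        q.query_id,
        p.plan_id,
        SUM(rs.count_executions) AS executions,
        AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms
FROM    sys.query_store_query AS q
JOIN    sys.query_store_plan AS p
        ON p.query_id = q.query_id
JOIN    sys.query_store_runtime_stats AS rs
        ON rs.plan_id = p.plan_id
WHERE   q.object_id = OBJECT_ID(N'dbo.QS_Test')
GROUP BY q.object_id, q.query_id, p.plan_id
ORDER BY avg_duration_ms DESC;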
I like to play around with this stuff. Aggregating Query Store data isn't the most intuitive, and I am constantly updating these queries to see how I can get the most out of QS. Hopefully you can use them too.