Well, technically I think we would just call it Virginia, but I am speaking this weekend at SQL Saturday #61. My talk is ‘Performance Tuning Made Easy’ and you can read a little bit more about it here.
I am always trying to find ways to help people who are new to MSSQL understand the basics and give them a solid foundation to build upon. Performance tuning is hard for a lot of people because there is little to no structure around the idea. My talk helps to give some structure where none previously existed.
I even spend some time going over a process to help anyone (newbies and veterans) have some structure when trying to tune a particular query. I borrow the SQL Diagramming method from Dan Tow and summarize that into a 12-step process. Why 12 steps? Because every other program I have entered has 12 steps, so it just seemed natural. In my mind I wanted to create a process that anyone, no matter how many years of experience, could follow and have the end result be a step in the right direction. I’ll give you the rundown:
- List all tables in the query (what??? I don’t start by examining an execution plan? NO! you need some details first in order to be efficient in your tuning process)
- Gather rowcounts for each table (yeah, you’ll need this. I see people who dive into execution plans only to come back later to ask ‘how many rows in that table anyway?’, so do yourself a favor and get the info first; there’s a quick query for this in the sketches after the list)
- Find all filters (get info on the JOIN and WHERE clause of the query, list out the columns used)
- Calculate the selectivity (remember the rowcounts? Good. Now, using the info from the filters, figure out how many rows are being returned from each table. So if we have an orders table with 12,000 orders but we filter to only want 3,000 of those orders, then our selectivity is 3,000/12,000 or 0.25; a worked version of this is in the sketches after the list)
- Gather info on any additional columns used (look in the SELECT clause for this)
- Gather info on existing keys and indexes (some newbies may not have any idea about this stuff, but now is the time to learn. Dive in and make certain you are aware of what exists currently; the sketches after the list include one way to query for this)
- Examine the execution plan (finally! Go ahead and run the query and examine the execution plan; use SET STATISTICS IO ON and SET STATISTICS TIME ON as well, since you’ll want those numbers. See the last sketch after the list)
- Record your results from step 7 (otherwise how would you ever know if things are getting better?)
- Adjust the indexes for tables with the lowest selectivity first (by ‘lowest’ I mean the tables that are closest to zero from the calculation in step 4)
- Rerun the query and examine the results and execution plans
- Rinse, lather, repeat on each table in increasing order of selectivity
- Continue onward, reducing your logical and physical reads (you can focus on logical reads; ideally you wouldn’t have any physical reads. Now is a good time to remind you that this is simply a process to help people get some structure around performance tuning; it isn’t meant to be infallible for each and every query in existence)
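To make the gathering steps a little more concrete, here are a few T-SQL sketches. They are not part of the 12 steps themselves, just one way to do the legwork, and any table or column names (like dbo.Orders) are made up for illustration. First, the rowcounts, pulled from the catalog views so you don’t have to scan every table:

```sql
-- Approximate rowcount for every user table in the current database.
SELECT  t.name      AS table_name,
        SUM(p.rows) AS row_count
FROM    sys.tables AS t
        JOIN sys.partitions AS p
          ON  p.object_id = t.object_id
          AND p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY t.name
ORDER BY row_count DESC;
```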
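For the selectivity calculation, the same math as the orders example in the list (again, dbo.Orders and the OrderStatus filter are hypothetical):

```sql
-- Selectivity = rows returned by the filter / total rows in the table.
DECLARE @filtered DECIMAL(18, 2),
        @total    DECIMAL(18, 2);

SELECT @filtered = COUNT(*) FROM dbo.Orders WHERE OrderStatus = 'Open';  -- your filter here
SELECT @total    = COUNT(*) FROM dbo.Orders;

SELECT @filtered / @total AS selectivity;   -- 3,000 / 12,000 = 0.25 in the example
```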
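For the existing keys and indexes, sp_helpindex works fine, or you can query the catalog views and see every index along with its key columns:

```sql
-- All indexes on a table and the columns they contain.
SELECT  i.name          AS index_name,
        i.type_desc,
        i.is_primary_key,
        i.is_unique,
        c.name          AS column_name,
        ic.key_ordinal
FROM    sys.indexes AS i
        JOIN sys.index_columns AS ic
          ON  ic.object_id = i.object_id
          AND ic.index_id  = i.index_id
        JOIN sys.columns AS c
          ON  c.object_id = ic.object_id
          AND c.column_id = ic.column_id
WHERE   i.object_id = OBJECT_ID('dbo.Orders')   -- hypothetical table name
ORDER BY i.name, ic.key_ordinal;
```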
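And for examining the execution plan, wrap the query you are tuning in the statistics settings (and turn on the actual execution plan in SSMS) so you have reads and timings to record:

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- the query being tuned goes here; this just reuses the hypothetical orders example
SELECT OrderID, OrderDate
FROM   dbo.Orders
WHERE  OrderStatus = 'Open';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```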
And now for the disclaimer: WARNING! ADDING ADDITIONAL INDEXES IS NOT ALWAYS THE RIGHT THING TO DO!
That’s right, you need to examine the other DUI (Deletes, Updates, Inserts) statements that are hitting those same tables. If you are working with tables that have lots of modifications being done, then adding the additional indexes could hurt performance in other areas. Oh, sure, your query may run better, but you would have hosed someone else. And while I have come across MANY developers who don’t care about anyone else except their own performance, I am here to tell you that as a DBA it is your job to stand up and make certain you help maintain a performance balance for all users, not just one user and one query.
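One rough way to see how write-heavy a table already is before piling on another index (again, just a sketch of a common sanity check) is to compare reads and writes per index; keep in mind these counters reset when SQL Server restarts:

```sql
-- Reads vs. writes per index in the current database since the last restart.
SELECT  OBJECT_NAME(s.object_id) AS table_name,
        i.name                   AS index_name,
        s.user_seeks + s.user_scans + s.user_lookups AS reads,
        s.user_updates           AS writes
FROM    sys.dm_db_index_usage_stats AS s
        JOIN sys.indexes AS i
          ON  i.object_id = s.object_id
          AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()
ORDER BY writes DESC;
```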
I hope to see you this Saturday!
A new 12-step program. Great idea.
I so rarely see anyone mention the fact that adding indexes might (will) take away from performance somewhere else.
Thanks Karen, we could all use 12 steps I think…
I loved these steps, and used them to produce a document to share with coworkers. Gonna hang it in my office now.
Thanks TJay!
Your coworker, Dean Richards, gave a nice presentation at #sqlsat58 that included a good example of using diagramming; it is something I am looking forward to utilizing more going forward. (His example highlighted the difference between SSMS’s suggestion for an index and the one diagramming showed to be better.)
Yes, and Dean was the first person I ever heard speak about this, back in February at a SQL Saturday in Waltham, MA. When I joined Confio I asked him if I could also use SQL Diagramming in a talk I wanted to put together.
I give a demo in my talk where I discuss how to use the missing index feature in SSMS, and why you simply do not want to trust those details blindly.
Great list of steps. However, I think that if you mixed in some Powershell script you might get it down to 2 or 3 steps. =)
I will be there this weekend. See you there!
Good advice LaRock, thank you!
The worst thing is that I cannot attend SQL Saturday #61; the reason is that I am on another continent!
Have a nice stay in Washington!