I hear it all the time. It’s a familiar refrain. Most of the time it is innocent enough, and understandable. But there are times when hearing those words is like hearing the screech of fingernails on a chalkboard:
“We can’t change the code.”
There you are, listening to someone complain to you about the database server, wanting you to fix the problem. You examine the situation and can see an easy code change that would improve performance immediately, only to be told “no code changes allowed”. Your heart sinks. You get frustrated that they won’t make any code changes. The user gets frustrated that you don’t have any other options to offer. You ask about changing a stored procedure, or rewriting a view definition, or helping to redesign a few tables from the ‘database’ that is nothing more than an Excel spreadsheet. “No”, “no”, and “no” you are told, along with “stop blaming the code, you always blame the code!”
You are at an impasse. You wish you had a handy list of things to try.
And now you do.
Here are six ways you can improve database performance without changing code, listed in no particular order. They have different levels of difficulty but they all have something in common: no code changes necessary. And your end user will be happy that you can provide them some options.
1. Utilize a proper indexing strategy
I know I said “no code changes”, but indexes are more of a schema change than a code change. And unless you have code that utilizes index hints, you shouldn’t have to worry too much about needing to change code to avoid using an index.
There are more than a handful of blogs out there that talk about missing or unused indexes. I will point you to the MSDN article on how to find missing indexes, as that is likely to be the most useful for you. I will also give you this caveat: adding indexes is not always the right thing to do! You need to be aware of all the activity hitting the underlying table. If you decide to create an index to improve a specific query run monthly to generate a report, you could be doing a disservice to the hourly DUIs (Deletes, Updates, Inserts) that are happening. Be aware of the overall usage before you just start adding indexes for performance.
As for eliminating unused indexes, I would caution you there to make certain you know that the index is truly not being used. If you have been looking at index usage for the past week and find a few hundred that haven’t been touched, don’t go around dropping them because they could very well be used on a monthly basis. Again, know the system and workloads a bit before making these changes. One thing you are likely to be able to do right away is to remove duplicate indexes.
A proper indexing strategy involves periodically checking for indexes that are missing, duplicated, or unused.
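As a starting point for that periodic investigation, here is a sketch of a query against the missing-index DMVs that ship with SQL Server. Treat the output as suggestions, not commands, and remember the caveat above about write-heavy tables; note also that these DMVs are cleared whenever the instance restarts, so sample them over a representative period.

```sql
-- Top missing-index suggestions, ranked by a rough "impact" score.
-- These are hints from the optimizer, not a to-do list.
SELECT TOP (10)
    mid.statement           AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;
```

The companion view for the unused-index question is sys.dm_db_index_usage_stats: indexes with writes but no seeks, scans, or lookups over a long enough window are your candidates for removal.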
2. Add memory
Everything SQL Server does *has* to go through memory. Every query you run *has* to read/write pages from/to memory. So why not make certain you have enough memory on your server in order to store as many pages as possible? No, I can’t think of any reason either.
Here’s a link to MSDN that will get you started on memory architecture and proper memory configurations for SQL Server. I’ve seen cases where people have purchased extra memory but failed to configure SQL Server to take advantage of it properly. Don’t be one of them; spend a few minutes reading up so you know what you are doing before you make changes.
If your server memory settings are already configured properly for SQL, then consider purchasing more. Usually swapping in new memory is an easy enough process, and in some cases can be done on the fly without the need to restart the server.
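Checking and setting the memory cap is a matter of sp_configure. A minimal sketch; the 28 GB figure below is an assumption for a hypothetical 32 GB box, sized to leave headroom for the OS, so substitute numbers appropriate to your server:

```sql
-- Show the current setting (values are in MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Example only: cap SQL Server at 28 GB on an assumed 32 GB server,
-- leaving roughly 4 GB for the operating system
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```

The point of the cap is not to starve SQL Server but to stop it from pressuring the OS into paging, which hurts everyone.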
3. Check your Windows Power Plan Settings
Were you aware that your CPUs may be operating at only half of their designed speeds? Well, you are now. Of course this means you need to decide what is more important: the perception of being ‘green’ or not having your users standing in your cube complaining about bad performance. Sure, the idea behind Windows Power Plans is a good one. I won’t argue that. But for database servers it can be a real detriment to performance. If your company has a policy of enabling the power plans by default, you should have them revisit that policy for the database servers. The power savings you get are often not worth the performance you lose.
You can use a tool such as CPU-Z to see the speed your CPUs are actually running at. You can find a copy of CPU-Z for download here.
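Checking and switching the active plan can also be scripted from an elevated command prompt. A sketch; the GUID below is the well-known built-in identifier for the High Performance plan, but confirm it against the output of the first command on your own server:

```batch
:: List available power plans; the active one is marked with an asterisk
powercfg /list

:: Switch to the built-in High Performance plan
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```

Remember Joe’s point in the comments below does not apply here, but in general: software settings are only half the story, so verify the hardware is configured for performance too.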
4. Optimize your tempdb for performance
This one is often overlooked, as most folks wait around for a problem with tempdb performance before thinking about how best to optimize it right from the start. This SQLCAT link mentions this as well, and there is info over at MSDN that you should review before making changes. And yes, you should consider moving tempdb to its own dedicated set of drives.
In fact, I would tell you that placing data files, log files, backup files, and tempdb onto separate disks should be a part of any standard database server build. I emphasize the ‘should’ because for some odd reason I still see shops where this is not true. If that is the case for you, and you need to boost performance, then roll up your sleeves and get started on the work that should have been done from the outset. Speaking of which…
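Relocating tempdb is a pair of ALTER DATABASE statements followed by an instance restart. A sketch, assuming a dedicated T: drive (the drive letter, paths, and file sizes here are illustrations, not recommendations):

```sql
-- Move the existing tempdb files to a dedicated drive
-- (takes effect at the next instance restart)
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');

-- Optionally add more equally sized data files to reduce
-- allocation contention; common guidance is one per core, up to eight
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdb2.ndf', SIZE = 1024MB);
```

Verify the logical file names on your instance first with `SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('tempdb');` since they are not always tempdev/templog.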
5. Use distinct disks for data files, log files, and database backups
Stop putting everything you have onto one drive. It’s typically a bad thing to do for performance, and an even worse thing to do in terms of disaster recovery. And don’t think that putting six disks into a RAID5 array and then assigning three logical partitions is going to solve your performance (or possible disaster) issues, either. Drives fail, that’s what they all do eventually. And when they do you won’t want to have all your eggs in that basket.
Here’s a list of goodness from SQLCAT regarding best practices for SQL Server storage. Chances are you will find a handful of things on that list that will help with your performance.
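To make the separation concrete, here is a sketch of what it looks like at database-creation time. The database name, drive letters, and sizes are all assumptions for illustration; the point is simply that data, log, and backups each get their own spindle:

```sql
-- Data on D:, log on L: (hypothetical drive letters)
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data, FILENAME = 'D:\SQLData\Sales.mdf', SIZE = 10GB)
LOG ON
    (NAME = Sales_log,  FILENAME = 'L:\SQLLogs\Sales.ldf', SIZE = 2GB);

-- Backups go to a third location, never alongside the data files
BACKUP DATABASE Sales TO DISK = 'E:\SQLBackups\Sales.bak';
```

If a drive holding your data files fails, the log and backups on other drives are what make recovery possible; that is the disaster-recovery half of this tip.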
6. Get faster disks, like really fast disks
As long as we are talking about disk configurations I might as well mention that you could upgrade to use solid-state disks (SSDs). They’re fast, and getting cheaper with each passing day. So if you are planning on monkeying around with your disks anyway, now would be a good time to see if purchasing some SSDs would make sense for your shop.
Just don’t forget to put your data files, log files, and database backups on separate drives. We’ve talked about this already, remember? Disasters happen, that’s why they are called disasters. Don’t buy one big mack-daddy SSD and toss everything on there because you have performance to spare. And if you decide to do that anyway, get a head start on updating your resume now as well.
And there you go, six options for improving database performance without changing code. Look, I never said they were going to be easy, or cheap, or fast…just that you had some options.
Tom–in regards to power–you also need to check at the server bios level.
Thanks Joe!
Very nice post Thomas! Thanks for the link too.
You’re welcome, thanks for your original blog post regarding the issue with Power Plans.
And watch out for that pesky NORECOMPUTE clause…
Nice article. What about disk partition alignment which is often overlooked by disk admins and instant file initialization rights for the sql server service account?
Petur
Absolutely, disk partition alignment can boost your performance, no question about that.
As for instant file initialization, you are less likely to see a benefit in your query performance as a whole, but will see huge benefits in other areas.
Thanks for the comment!
Great post Tom you have a fantastic list of performance tuning steps without adding code changes. I would like to add one more item that could be done without changing code.
Recently, I fixed a performance problem by updating out of date statistics. Occasionally, I will come across databases that do not have auto update stats enabled or a really large table that may require a scheduled task to update statistics.
John
John,
Absolutely! Updating statistics could bring an immediate benefit, thanks for the comment!
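For readers following along, updating statistics as John describes can be as simple as the following sketch; `dbo.BigTable` is a placeholder for your own large table:

```sql
-- Refresh statistics on one large table with a full scan
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- Or sweep the entire database (sampled) with the system procedure
EXEC sp_updatestats;
```

For very large tables, a scheduled job running the targeted UPDATE STATISTICS is usually kinder to the server than waiting on the auto-update threshold.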
Very nice post, thank you for sharing.
Another good option is tuning concurrency, which tends to be problematic in OLTP workloads when the code is not good. But that would require another whole post alone.
Gianluca
Gianluca,
Interesting angle! And especially true if the performance issues are related to blocking. Thanks!
Can I also add a 7th option: delete unwanted data? I see many databases with no archival strategy, so data is being stored which is no longer required. Clearly this needs business/end-user approval, but as a very big generalization, your system should be more efficient if it reads less data.
Steve,
Sure! The less data that needs to be moved around, the better!
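For anyone acting on Steve’s suggestion, a common pattern is to purge in small batches so the transaction log and blocking stay under control. A sketch only; the table name, column, batch size, and seven-year retention window are all assumptions to adapt to your own archival policy:

```sql
-- Delete rows older than an assumed 7-year retention window,
-- 5000 rows at a time, until none remain
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.OrderHistory
    WHERE OrderDate < DATEADD(YEAR, -7, GETDATE());

    SET @rows = @@ROWCOUNT;
END;
```

And of course: archive or back up the data first if the business might ever want it back.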
OH how I wish I could get people to do and understand #5. I can’t even get some people to get backups onto a separate drive.