SQL University Archives - Thomas LaRock
https://thomaslarock.com/category/sql-university/
Thomas LaRock is an author, speaker, data expert, and SQLRockstar. He helps people connect, learn, and share. Along the way he solves data problems, too.

SQL University – Internals Week
https://thomaslarock.com/2011/05/sql-university-internals-week/
Mon, 16 May 2011 13:33:37 +0000

I am in Atlanta this week for Tech-Ed…right after I watch the launch of the Space Shuttle on Monday morning. Despite my commitments to both of those events I cannot ignore my SQL University duties, so here is this week’s post. We are going to talk about the internals of SQL Server, a topic that could never be covered properly with a simple blog post. I will, however, do my best to help you get started with a basic understanding of the topic.

OK, let’s get started…SQL Server is filled internally with sugar and spice and everything nice, and Oracle is filled with frogs and snails and puppy dog tails. Lesson over. What? You want more details than that?

*HEAVY SIGH*

Fine.

tempdb

First thing you need to understand is that MS SQL SERVER IS NOT A BLACK BOX!

THIS IS NOT SQL SERVER!

As a DBA you need to be aware of a few things. First and foremost would be that your role is to administer a piece of software that runs on top of an operating system that sits on top of some hardware. Next would be your awareness that you are not working with a black box; there is no magic involved here. The system does as it is instructed to do.

Speaking of that system, here is a little something you should know more about, the SQLOS:

Holy VLF, I had no idea this was in that black box!

Check out that diagram of the SQLOS and note that there are a LOT of moving parts. Most new DBAs have NO IDEA how complex things are under the hood. These are the internals you need to be aware of, and you need to know how to make them work for you. [This is where I like to insert a joke about how in Soviet Russia *you* worked for the internals, but I won’t do that here.]

With so many moving parts it can be very difficult (especially for new or accidental DBAs) to understand everything. And that is why I like to just focus on the major resource bottlenecks: disk I/O, memory, CPU, and network. Focus on those four things and you are many steps ahead of most others.

msdb

After you have developed an awareness of the SQLOS and internals in general you need to shift your thinking a bit. Now you want to start thinking about how you can optimize your shop with regards to how the SQLOS internals are operating. Think of this as being the difference between being able to change the oil in your car to being able to give it a full tune-up. Once you learn the internals then you have the chance to start doing your own tune-ups.

For example, one part of the SQLOS is the scheduler. This is how queries get executed. Most queries will likely touch at least these three states in their lifetime: running, runnable, and suspended (or waiting). The idea behind the waits and queues whitepaper is that if you know what your queries are waiting for then you can focus on fixing that resource bottleneck in order to make your system perform better.
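If you want to see those states for yourself, a quick look at `sys.dm_exec_requests` (available since SQL Server 2005) shows the status and current wait for each active request; a minimal sketch:

```sql
-- Peek at the scheduler state of each active request.
-- status shows 'running', 'runnable', or 'suspended' (waiting),
-- and wait_type tells you what a suspended request is waiting on.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,        -- milliseconds spent in the current wait
       r.last_wait_type
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;   -- filter out most system sessions
```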

But take a closer look at the SQLOS diagram above. You should be able to see the connections to each of the four resource bottlenecks (disk I/O, memory, CPU, and network) in that diagram. Now think about how your shop, or individual servers, are configured. Got everything on one disk? Chances are you are bound by I/O every now and then (my favorite example of this is when a user tells you to not do any backups because it kills performance).
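A quick sanity check for that everything-on-one-disk scenario is `sys.master_files`, which lists the physical location of every data and log file on the instance; a minimal sketch:

```sql
-- See which drives your data and log files actually live on.
-- If every row shows the same drive letter, you are likely I/O bound
-- whenever backups, batch loads, and user queries all collide.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,                          -- ROWS (data) or LOG
       physical_name,
       LEFT(physical_name, 1) AS drive
FROM sys.master_files
ORDER BY drive, database_name;
```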

What about your NICs? Are they as good as can be? And how about the amount of memory available as well as the underlying O/S? There are *many* factors to consider when it comes to the SQLOS running as smoothly as possible, and you need to not only be aware of them but to be able to take action.

model

For most database professionals the opportunity to have servers built to a set of ideal specifications is lost. You are tossed into the fray and asked to tame an environment that is most likely feral. One server has one really big drive with everything on it, another server has two drives mirrored, and a third server has a single NIC offering up a whopping 10 Mbps of traffic.

What this means is that at the model level you are often left to fight fires. This is necessary because of all the damage and neglect that has come before you as a result of a general lack of knowledge (either by you, or by others). You need to fight these fires before you will be given the opportunity to build your dream home (servers that are a perfect flavor of vanilla).

So you take your knowledge of SQL Server internals and you apply it to perform query tuning because that is the only real chance for you to demonstrate your value to your end users. Face it, the only time they want you to do something is when performance is not as expected, or needs to be better. That means when the time comes you need to be able to step up and offer some solutions as to how to resolve the bottleneck. And your suggestions will be based upon your knowledge of the SQLOS.

master

I believe that someone who is a master of the internals of SQL Server is someone who knows how to avoid or minimize problems by not allowing them to appear in the first place. You are able to have servers rolled out that are built to a set of specifications optimized for SQL Server performance before SQL Server is even installed. You are then able to install and configure SQL Server with a standard set of configuration options, ensuring that your instances are as alike as possible. Doing so allows you to troubleshoot problems easily and quickly when they arise.
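One simple way to keep instances alike is to snapshot the instance-level settings from `sys.configurations` and diff them against your standard build; a minimal sketch:

```sql
-- Capture the instance-level configuration so you can compare it
-- against your standard build on every other instance.
SELECT name,
       value,          -- configured value
       value_in_use    -- running value (differs until RECONFIGURE)
FROM sys.configurations
ORDER BY name;
```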

While you can find examples of standard server and instance builds and configurations online, the simple fact is that many of those won’t do you any good. Every shop is unique, and you need to apply that unique knowledge in order to build servers and instances that are right for you. You cannot just apply some random configuration and expect that it will be the perfect solution for you. Having an understanding of the internals, as well as knowledge of your particular business, is what will allow you to be a master and build out your shop in a way that avoids problems before they have a chance to start.

resourcedb

When it comes to the internals of SQL Server I would point to these four individuals as a good starting point.

  • Kalen Delaney (blog | @SQLQueen)
  • Kevin Kline (blog | @kekline)
  • Paul Randal (blog | @PaulRandal)
  • Kimberly Tripp (blog | @KimberlyLTripp)

The post SQL University – Internals Week appeared first on Thomas LaRock.

SQL University – SSAS Week
https://thomaslarock.com/2011/05/sql-university-ssas-week/
Mon, 09 May 2011 12:33:13 +0000

I may be at SQL Rally this week but that doesn’t excuse me from my SQL University duties. Today we are going to talk about SQL Server Analysis Services. While nearly everything in the BI stack is outside my comfort zone, I should remind you all that I stayed at a Holiday Inn Express last night and wrote most of this post while in the room there.

tempdb

It isn’t enough to just point to SSAS and tell someone “go learn that thing over there.” No, you need to understand more about why you would ever need to use SSAS in the first place. At what point should someone stop and think “yeah, this is exactly when I would want to be using SSAS.”

For most people that breaking point comes in the form of performance issues caused by queries that require an aggregation (a summation, or an average calculated over a specified time period). When you hit the point where your reporting needs are causing performance issues for other users, then you need to start thinking about the use of SSAS.

It’s as good a line as any other to be drawn with regards to answering the question “is now the time we need to think about using SSAS?”
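To make that line a bit more concrete, here is the shape of query that usually drags you toward SSAS; the `dbo.Sales` table and its columns are made up for illustration:

```sql
-- The kind of relational aggregation that, run over and over against
-- a big table, starts to hurt other users. SSAS pre-computes
-- aggregates like this inside a cube so the relational engine
-- doesn't have to scan the table for every report.
SELECT YEAR(OrderDate)  AS OrderYear,
       MONTH(OrderDate) AS OrderMonth,
       SUM(SalesAmount) AS TotalSales,
       AVG(SalesAmount) AS AvgSale
FROM dbo.Sales               -- hypothetical fact table
GROUP BY YEAR(OrderDate), MONTH(OrderDate)
ORDER BY OrderYear, OrderMonth;
```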

msdb

Let’s say you have hit that point where you feel the need to explore SSAS. How do you get started? Well, you could check out this article I wrote a while back (OMG! It’s been THREE YEARS since I wrote that!) Yes, it’s true, at one point I thought I would have the opportunity to dive deeper into all things BI. However, I never really got the chance (which also means I never really took the chance) to get started and dive as deep as I would have liked.

Anyway, the ideas in the article still hold true for someone looking to get started with SSAS, so have yourself a look. After looking, go and get your hands dirty. Follow the steps and build yourself an actual cube. Poke around SSAS using SQL Server Management Studio and get a feel for some of the security and administration aspects.

model

What else would I talk about here other than building a data model? No, not that kind of data model, this one. It is one thing to work your way to building a cube, it is a far greater thing to build yourself a data model that will satisfy actual business requirements. You’ll need more information on how to do that correctly than what you will learn from just a blog post.

As it so happens I have a handful of books in my library that will help get you started. Check out this book as well as this book. Put them into your library as well and use them as a reference.

master

By this point you should have enough experience and skills such that you can start teaching others. The teaching can be done internally, perhaps just for the developers in your shop at first. The teaching can also be external, perhaps with blog posts, paid articles, or even presenting at a SQL Saturday (or SQL Rally…or the PASS Summit…)

The point here is that to become a master of SSAS you will need to start making an effort to help others learn.

resourcedb

Looking for more information? Start following these folks for all things BI, they are quite willing to help.

  • Stacia Misner (blog | @StaciaMisner)
  • Erika Bakse (blog | @BakseDoesBI)
  • Jen Stirrup (blog | @jenstirrup)
  • Julie Smith (blog | @datachix1)
  • Audrey Hammonds (blog | @datachix2)
  • David Stein (blog | @Made2Mentor)


The post SQL University – SSAS Week appeared first on Thomas LaRock.

SQL University – Advanced SSIS Week Part Deux
https://thomaslarock.com/2011/05/sql-university-advanced-ssis-week-part-deux/
Mon, 02 May 2011 13:32:51 +0000

So, once again I find myself needing to do a SQL University post on a topic we have already covered this semester. Which also means that once again I am going to mail it in and just point you to the post I wrote earlier, which is also a re-post of something I had already written.

I suppose at some point I should really take a look at that SQL University syllabus more than a day or two before my post is due.


The post SQL University – Advanced SSIS Week Part Deux appeared first on Thomas LaRock.

SQL University – PowerShell Week Part Deux
https://thomaslarock.com/2011/04/sql-university-powershell-week-part-deux/
Mon, 25 Apr 2011 13:32:19 +0000

So, it looks like we have a second week of Powershell for SQL University. Honestly, I wish I could say I saw this coming in the lesson plan, but I didn’t. So I don’t have much new to share with you since my last post on Powershell. And since this isn’t sweeps week, I think it is safe to do a rerun.

For those that didn’t see this before it will have an “IT’S NEW FOR YOU” feel. For those that have read it before it will feel more like “OH NOES, NOT AGAIN”. To make things more wormhole-like, it turns out that the post I did earlier this semester was ALREADY a repeat of a post I did last year.

Yeah, that’s right, I *am* mailing it in this week.

Enjoy!


The post SQL University – PowerShell Week Part Deux appeared first on Thomas LaRock.

SQL University – HA/DR Week
https://thomaslarock.com/2011/04/sql-university-hadr-week-2/
Mon, 18 Apr 2011 13:31:41 +0000

Welcome back to another week of SQL University. The winter season is behind us and Spring has sprung (finally) and that’s why I haven’t been wearing pants or shoes since March 21st. Give it a try yourself and you’ll be surprised as to how the promise of good weather can really improve your daily outlook.

Today’s topic will be on HA/DR and I am here to help you get yourself familiar with what that means. Let’s get started!

tempdb

First things first, a few definitions to review:

HA – Stands for High Availability. The word you want to think about here is this: uptime. It’s that simple. If your servers have a high uptime percentage (five-nines) then they are highly available.

DR – Stands for Disaster Recovery. The word you want to think about here is this: recovery. It’s that simple. If you are able to recover your data then you have the makings of a DR plan.

Now, here is the very important piece of information that you need to know: HA IS NOT THE SAME AS DR. For the developers that might stumble upon this blog I would explain it like this: HA <> DR

There, I hope that clears everything up. You would be surprised as to how many people confuse these two terms. I know I was sure surprised that some folks would either confuse the terms, or try to classify issues as “events” versus a “disaster”. To me it doesn’t matter if one server or one hundred servers are wiped out, a disaster is a disaster and you need to be able to recover the data. That means you had best have a recovery plan along with a recovery point objective (RPO) and a recovery time objective (RTO).
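The uptime side of this is easy to quantify. Five-nines sounds impressive, and the arithmetic shows just how little downtime it actually allows per year:

```sql
-- Minutes of allowed downtime per (365-day) year at a given
-- uptime percentage: 365 days * 24 hours * 60 minutes * (1 - uptime).
SELECT 365.0 * 24 * 60 * (1 - 0.99999) AS five_nines_minutes_per_year,  -- about 5.3 minutes
       365.0 * 24 * 60 * (1 - 0.9999)  AS four_nines_minutes_per_year;  -- about 52.6 minutes
```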

For most folks the DR plan is simple: recover the server from a tape backup and restore the databases from backup files (also written to tape). Now, some folks will tell you that they have replication deployed as a DR solution. But I like to play a game called “what if?” So, if your shop is using SAN replication and claims it as their DR solution, ask some simple questions such as:

“What if a corruption happens at Site A and is replicated immediately to Site B?”

And see where that leads you. (HINT: it should lead you to your current DR solution, most likely recovering from tape.)
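The answer to that “what if?” is almost always a restore, and a point-in-time restore in particular. A sketch of the pattern, with hypothetical database and file names:

```sql
-- Point-in-time restore: the thing that saves you when corruption
-- has already replicated to the other site. All names and times
-- here are hypothetical.
RESTORE DATABASE SalesDB
    FROM DISK = N'\\backupserver\sql\SalesDB_full.bak'
    WITH NORECOVERY, REPLACE;   -- leave the database restoring

RESTORE LOG SalesDB
    FROM DISK = N'\\backupserver\sql\SalesDB_log.trn'
    WITH STOPAT = '2011-04-18 09:45:00',  -- just before the corruption hit
         RECOVERY;              -- bring the database online
```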

msdb

Now that you know the difference between HA and DR, it is time for you to know the difference between the many features of SQL Server that help you achieve either HA or DR. Depending upon your version of SQL Server you will be able to use clustering, database mirroring, log shipping, and even replication. And, COMING SOON in the next version of SQL Server is something shiny called AlwaysOn.

You want to know enough about these features so that you can help make an informed decision regarding the architecture needed for your shop, either for one system or even for all of them. For a great summary of the HA options available in SQL Server go here. While there I want you to notice how they only discuss those options in terms of HA. Do you know why? Because none of those options alone will help you in terms of DR. Know why? Because HA <> DR, that’s why.

Did I make that point clear yet?

model

Now that you have an idea about all the different features of SQL Server you need to start using them. No, you don’t need to deploy each of them in your shop (despite what some of the worst job descriptions might have you believe), but you should at least try to get your hands dirty here. Find a way to practice with each of them in a test environment, even if that means building some VMs somewhere. You need to configure them in order to have an idea about what it takes to get them up and running as well as to remain stable.

Believe me, if you are sitting in a meeting and someone insists that you need to implement merge replication you had better have an idea about what it takes to get that beast up and running, what steps to take *when* merge replication fails you, and all the additional overhead that goes with merge replication (additional agents, transaction logs, network utilization, etc.). If database mirroring is a better solution for your situation AND you know you can have that up and running (and keep it running) with little administrative overhead then you will want to suggest mirroring and not let yourself be talked into merge replication simply because someone else in that meeting happens to know one buzzword.

But you can’t have that discussion unless you (1) know the differences and (2) have actually tried using the features you are talking about.

master

By now you are aware of the differences between HA and DR, you are familiar with many of the features of SQL Server, and you have even tried your hand at them. To be a master though, requires something a bit more. The best word to describe what that “more” would be is this: foresight.

You don’t need to be a master at each technology, having 5,000 hours of working with everything under the SQL sun. But you *do* need to have an awareness of each, having at least put your hands on the features to understand the strengths and weaknesses of each. And with the experience comes your ability to have some foresight into possible pitfalls.

For example, your company may be leaning towards implementing a particular solution, and you might even agree to it except for one thing: it won’t scale easily. So you take a moment to ask about the expected load for the next year, three years, and beyond. Then you take the time to document the discussion. From that point forward you will be able to help raise awareness for everyone else regarding the current technological needs, make sure they are met for the time being, and also make certain you can start taking steps to build for the future.

And that is what a master does: they help plot the course of actions that will need to be taken over time. They are constantly proactive, looking ahead in order to avoid problems, as opposed to those around them who are stuck in a strictly reactive mode.

resourcedb

Here is a short list of SQL Server professionals that are well versed in all things HA/DR related.

  • Robert Davis (blog | @SQLSoldier)
  • Geoff Hiten (blog | @sqlcraftsman)
  • Allan Hirt (blog | @SQLHA)
  • Kendal Van Dyke (blog | @SQLDBA)
  • Denny Cherry (blog | @mrdenny)

And here are some links to whitepapers that I know you will find very useful as well:

High Availability with SQL Server 2008

SQL Server 2008 R2 High Availability Architecture White Paper

See you next week!

The post SQL University – HA/DR Week appeared first on Thomas LaRock.

SQL University – Administration VLDB
https://thomaslarock.com/2011/04/sql-university-%e2%80%93-administration-vldb/
Thu, 14 Apr 2011 11:31:10 +0000

Welcome back to SQL University! Today we talk about Very Large DataBases (VLDBs). I hope this blog post helps get you pointed in the right direction with regards to architecting and building your very own VLDB someday. (Chances are you already have one and maybe don’t know it yet!)

Let’s get it started.

tempdb

First up, a question for you: are you tall? How do you define yourself as tall? Do you compare your height to those around you? Or do you compare it to some statistical average? Is that average for all humans? Just adults? Adults in your part of the world?

OK, enough stalling, let’s get to the real question. Do you have a large database? How about a very large database? How do you know if it is large, or very large? Who decides this?

Nobody does. Well, technically you do, so I guess that really isn’t “nobody”. But the point here is that typically nobody knows that they have a large (or very large) database until they have hit some type of tipping point. Good tipping points to consider would be the following:

  • backup time
  • restore time
  • batch load time
  • database size
  • server memory needed
  • number of CPUs needed

If you find yourself administering a database that requires special hardware purchases (disk, CPU, memory) or the time it takes to perform data operations (batch loads, backups, restores) is taking too long then there is a good chance you would classify that database as being “very large”.
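If you keep backup history in msdb (the default), you can watch those tipping points creep up on you; a minimal sketch using `msdb.dbo.backupset`:

```sql
-- Trend backup sizes and durations from the backup history in msdb.
-- When these numbers creep past what your recovery window allows,
-- you have crossed the VLDB tipping point whether you planned to or not.
SELECT database_name,
       backup_finish_date,
       backup_size / 1048576.0 AS backup_size_mb,
       DATEDIFF(MINUTE, backup_start_date, backup_finish_date) AS duration_minutes
FROM msdb.dbo.backupset
WHERE type = 'D'                 -- full database backups only
ORDER BY backup_finish_date DESC;
```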

(At some point I feel the need to tell you that size doesn’t matter, at least I am always told that is the case.)

msdb

How do you decide that tipping point anyway?

Whose concerns are more important? Yours? Your boss? The end user? The business? (HINT: The answer is ‘All of the above’)

My tipping point was centered on recovery. Whatever was built needed to be recovered in an acceptable amount of time. Who decided what would be acceptable? Everyone, that’s who.

The managers wanted the batch load to be completed as quickly as possible. Their tipping point was focused on the amount of time it would take to shove terabytes of data into the database. The end users’ tipping point was reporting. They wanted their reports generated as quickly as possible, no matter what parameters were being used to generate the reports.

Sense any problems yet? Well, you should. Just those three things (backups, writes, reads) are not always playing nice together in the sandbox at recess. Once you decide that you have hit that tipping point, and that you have a very large database, make sure that everyone understands the expected performance. If report generation is going to take five hours, then make sure the end users know that. If the batch load takes four hours, make sure the managers know that. If the backups take three hours, make sure everyone knows that fact (as well as how long it will take for you to recover, if recovery is to be needed).

model

Now that everyone has their tipping point identified, as well as an expectation for their area of need, we can talk about the actual design. What? You normally design the database before everyone gets together to discuss their needs? Then you’re doing it wrong. If you don’t have a list of (at least general) requirements then how do you expect to design something that is going to satisfy anyone?

Let’s assume you have talked to everyone at this point. Now you need to get down to the details. Know the options you have with things like partitioning, filegroups, and piecemeal backups. Understand how you can help to architect a database that can help those who need to write, those who need to read, and those who need to recover.
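To give one concrete example of those options, filegroups are what make piecemeal backups (and piecemeal restores) possible; a sketch of the pattern, with hypothetical database and path names:

```sql
-- Filegroups let you back up a VLDB in pieces instead of all at once.
-- Names are hypothetical; the pattern is what matters.
BACKUP DATABASE SalesDW
    FILEGROUP = N'PRIMARY'
    TO DISK = N'\\backupserver\sql\SalesDW_primary.bak';

-- A piecemeal restore brings PRIMARY online first, so critical data
-- is available while the remaining filegroups are still restoring.
RESTORE DATABASE SalesDW
    FILEGROUP = N'PRIMARY'
    FROM DISK = N'\\backupserver\sql\SalesDW_primary.bak'
    WITH PARTIAL, NORECOVERY;
```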

master

At this level people will come seek out your knowledge. You will have been able to not only help architect a viable solution for the requirements given at the onset, but you will have planned for future growth as well. If one word could describe someone at the master level it would be this: scalability.

Make certain that whatever you spend time building is flexible enough to be moved around whenever necessary. And I don’t mean a lift and load from one server to another. What I mean is making certain that your design can be shifted as business needs change. Today the reports are fine, tomorrow they need to run 50% faster, and don’t be stuck saying “it will take six days to rebuild that array onto faster disks”. Nobody wants to hear that from a master. What they want to hear is “We can get it done, this is what the plan is, and if we start today we can have the new array up and running by tomorrow and here is the cost.”

Thinking two steps ahead and having a plan for whatever needs arise is the sign of a master. And don’t forget to mention the cost, because that usually makes everyone stop and rethink whether their needs are true. You’ll be surprised at how often things go from “WE NEED THIS NOW” to “damn, we can’t afford that”. But if you don’t have the details then you are not seen as a master, just a roadblock to progress.

resourcedb

Surprisingly, there are not a whole lot of people who write or present on working with VLDBs specifically. I will see items about scalability and the like, but I rarely see anyone saying “hey, check out what I am doing with this VLDB”. At any rate, here are a few links I believe you will find useful.

The post SQL University – Administration VLDB appeared first on Thomas LaRock.

SQL University – Performance Tuning Week
https://thomaslarock.com/2011/04/sql-university-performance-tuning-week/
Tue, 05 Apr 2011 18:21:34 +0000

Welcome back to another week of SQL University where the topic is Performance Tuning.

Show of hands: how many people think performance tuning is hard? OK, put your hands down, you are sitting at a computer and it looks really weird. Anyway, many folks think performance tuning is hard and here are the main reasons:

  • You need to know a lot of different things (network, SAN, hardware, IIS, AD, etc.)
  • You are almost always in a reactive mode, so time is a factor and most folks don’t like being rushed for answers
  • Proactive tuning is often a low priority (don’t bother working on that Johnny, nobody is complaining about it yet)
  • Even if you have the time and the knowledge, you don’t necessarily know where to begin

My goal today is to help you find a way to make your life a little bit easier. I have a talk titled “Performance Tuning Made Easy” that seems to be fairly popular these days, and I will try to break it down for you in this post. Go ahead and read that post and I’ll wait. OK, ready now? Let’s begin.

tempdb

The first thing you need to do is to have awareness. There are many, many things going on in your shop right now. Some of them you know about, others you know nothing about. You don’t have a chance when it comes to performance tuning unless you have accomplished these two items: define and measure.

You need to define performance problems. The easiest way to do this is to head over to MSDN and write down the acceptable thresholds for performance. A good starting point is the 2005 Waits and Queues Whitepaper (I hope this gets updated for Denali). Another good starting point is to talk to your end users about the performance they expect for the various systems that they use. The point here is for you to define what is good and what is not good. That way the next time somebody stops by and says “hey, this is bad” you can refer back to your definitions and say “hey, not according to what we agreed on previously”.

You also need to be measuring performance. You can collect data by using native tools, or 3rd party vendor tools (and I can recommend one), or a combination of both. Whatever you choose is up to you, but if you are not measuring for performance on a regular basis then you are going to have trouble answering a very simple question: “is this a problem?” The trick here is to make certain you are measuring against your definitions. It is easy to overlook things but if you are not measuring against a definition then you are going to have trouble helping to identify problems (other than having your phone ring, of course).
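As one example of a native measure, the aggregate wait statistics DMV gives you a baseline to compare against; a minimal sketch (the list of benign waits excluded here is just a sample, not exhaustive):

```sql
-- Measure what the instance waits on most, so "is this a problem?"
-- has a baseline answer. Snapshot this regularly and compare.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms          -- time spent runnable (CPU pressure)
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH')  -- sample of benign waits
ORDER BY wait_time_ms DESC;
```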

msdb

Now we need to help you get the job done. Once you have your definitions and measures in place you can go about analyzing the details. Most of the time this will be done in a “reactive” mode, but that is OK, because having those definitions and measures lets you arrive at an answer faster than you would without them.

The trick here is this: When performing your analysis make certain you refer back to your definitions. This is how you decide if something is a problem or not. If a developer opens up Task Manager and decides that the server is paging all of its memory to disk you need to be able to quickly show to them that the measures you have in place indicate there is not a memory problem.
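As one example of such a measure, SQL Server exposes its own memory counters, such as Page Life Expectancy, which make a far better argument than Task Manager; a minimal sketch:

```sql
-- One concrete counter to answer the Task Manager argument:
-- Page Life Expectancy, in seconds. A consistently high value
-- suggests the buffer pool is not under memory pressure.
SELECT object_name,
       counter_name,
       cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND object_name LIKE N'%Buffer Manager%';
```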

model

Here is where you need to lead by example. You want to be able to suggest improvements and need to do so in a way that makes people feel GOOD about your suggestions. So instead of saying “I see what’s wrong with your code”, try saying things like “I think we might see an improvement if we could make a change here”. You are often going to be handed issues with code and queries that you have never seen before and how you react to these situations will dictate if people will want to seek out your advice or just avoid you at all costs.

master

This is where you enter a zen-like state of enhanced consciousness. How do you get there? One word: proactive. You have your definitions in place. You have your measures. You are able to analyze and suggest improvements. And you are able to do all of this before your phone rings.

That’s when you know you are a master at performance tuning. When you fix problems before they become problems.

resourcedb

Here is a list of people that I consider to be at the top of the game right now with regards to performance tuning. They are in no particular order, just people that know what they are doing and are willing to help you understand as well:

The post SQL University – Performance Tuning Week appeared first on Thomas LaRock.
