I was fortunate to take part in a two-day lab at the Microsoft Tech Center in Waltham last week. The lab was set up to assist people in migrating their SQL 2000 or SQL 2005 databases to SQL 2008. If you are running a version of MS SQL earlier than SQL 2000, you would need to migrate your system twice. More importantly, if you are running an earlier version you should step into the 21st century. Don’t be shy.
The lab required me to bring some files, so I had some work to do before I even set foot in the center. I used a tool called the Upgrade Assistant, which works in conjunction with the Upgrade Advisor. When we started doing migrations to SQL 2005, we simply pointed the Upgrade Advisor at a SQL 2000 database, ran it, and examined the results. Now, we were not so blind as to believe that the advisor would catch every potential problem; in fact, if you have ever used the advisor you are familiar with the warning that says roughly “there could be some problems we didn’t find, so just be aware”. Um, thanks.
The SQL 2008 Upgrade Assistant aims to reduce these potential problems by walking you through configuring trace activity against the databases you are going to migrate. This extra step is not something we had considered before. The Upgrade Assistant tries to bring the entire application into focus, as opposed to just the database itself. Not a bad idea. Too bad it didn’t work.
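For context, the workload capture the assistant automates is essentially a server-side trace. Here is a minimal hand-rolled sketch of the idea in plain T-SQL; the path and the tiny event list are illustrative only (a real replay capture needs the much richer event set from Profiler’s TSQL_Replay template, which is what the assistant configures for you):

-- A rough, hand-rolled version of the workload capture (illustrative only).
DECLARE @traceid INT, @maxfilesize BIGINT, @on BIT;
SELECT @maxfilesize = 500, @on = 1;  -- 500 MB per rollover file

-- Create the trace; option 2 enables file rollover. Do not include the .trc extension in the path.
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\traces\upgrade_workload', @maxfilesize, NULL;

-- Capture SQL:BatchCompleted (event 12) with TextData (1), SPID (12), and Duration (13).
EXEC sp_trace_setevent @traceid, 12, 1, @on;
EXEC sp_trace_setevent @traceid, 12, 12, @on;
EXEC sp_trace_setevent @traceid, 12, 13, @on;

-- Start the trace (status 1 = start, 0 = stop, 2 = close and delete the definition).
EXEC sp_trace_setstatus @traceid, 1;
SELECT @traceid AS TraceID;  -- note this so you can stop the trace later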
Oh sure, someone with some mad DBA skillz would have had everything humming along in no time. Unfortunately, I was the only one in the room. I tried to get the Upgrade Assistant to work in the manner in which it was intended but hit a few snags. The Upgrade Assistant will allow you to take native dumps of all databases on a target instance. Now, that would be cool, except it only allows native dumps, and only to a local drive. For me, that was a sign of things to come.
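A “native dump” here is just an ordinary full backup written with BACKUP DATABASE. A minimal sketch of the equivalent T-SQL, with a placeholder database name and local path:

-- Native full backup of a user database to a local drive (name and path are placeholders).
BACKUP DATABASE [MyUserDatabase]
TO DISK = N'C:\UpgradeAssistant\MyUserDatabase.bak'
WITH INIT, STATS = 10;  -- INIT overwrites any existing backup set; STATS reports progress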
So, I showed up at the lab with dumps from one user database as well as master, msdb, and model, and a trace file for all activity hitting that instance until I hit the button to stop the trace. The idea was to take these files, load them onto the lab server, and hit the button to let the tool work.
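For completeness, the system database dumps and the “stop the trace” button boil down to roughly this in plain T-SQL (the paths and the trace ID are placeholders):

-- Back up the system databases to go with the user database.
BACKUP DATABASE master TO DISK = N'C:\UpgradeAssistant\master.bak' WITH INIT;
BACKUP DATABASE model  TO DISK = N'C:\UpgradeAssistant\model.bak'  WITH INIT;
BACKUP DATABASE msdb   TO DISK = N'C:\UpgradeAssistant\msdb.bak'   WITH INIT;

-- Stop and close the workload trace, using the trace ID returned when it was created
-- (2 here is a placeholder; check sys.traces if you have lost track of it).
EXEC sp_trace_setstatus @traceid = 2, @status = 0;  -- stop
EXEC sp_trace_setstatus @traceid = 2, @status = 2;  -- close and delete the definition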
The first error seemed to indicate that the Upgrade Assistant did not like the fact that the system dumps I provided were not dumps of databases on the C: drive. Because we put our data on a non-default drive, the tool appeared to have an issue. So, we tried to restore master manually, after creating the necessary drives. By the way, we were using instances loaded to a Hyper-V server and we used a lot of snapshots. A lot of snapshots.
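For anyone who has not had the pleasure, a manual restore of master looks roughly like this; the backup path is a placeholder, and the exact steps we ran in the lab varied a bit:

-- master can only be restored when the instance is started in single-user mode,
-- e.g. from an elevated command prompt:  sqlservr.exe -m   (or: net start MSSQLSERVER /m)
-- Then, from the one allowed connection:
RESTORE DATABASE master
FROM DISK = N'C:\UpgradeAssistant\master.bak'
WITH REPLACE;
-- The instance shuts itself down as soon as master is restored; restart it normally afterwards.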
I suppose restoring master is one of those things we should all be able to do, but never really practice. I managed to get it restored manually once, and I know this because I was able to connect to the instance using the sa password. But I was determined to get the tool working, and the tool wants to restore the system databases, so I kept trying. The tool would restore them just fine, but do you know what happens when you restore master? You reset the logins, of course. And it just so happens that the master database in my shop does not contain a login for the Windows account I was using to connect to the lab instance. In short, once I pushed the button on the tool to restore everything, I would be locked out. So, just use the sa password, right? Sure, but for whatever reason the sa password would not work. No, I don’t know why, and I grew tired of trying.
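If you do end up locked out of an instance after restoring someone else’s master, one documented escape hatch is single-user mode: members of the local Administrators group can connect and grant themselves sysadmin again. A sketch with a made-up login name (the sa reset assumes mixed-mode authentication is enabled):

-- Start the instance with -m, connect as a local Administrator, then:
CREATE LOGIN [LABDOMAIN\MyAccount] FROM WINDOWS;  -- placeholder login name
EXEC sp_addsrvrolemember @loginame = N'LABDOMAIN\MyAccount', @rolename = N'sysadmin';

-- Or, if mixed-mode authentication is on, reset and enable sa:
ALTER LOGIN sa WITH PASSWORD = N'SomeNewStrongPassword!1';
ALTER LOGIN sa ENABLE;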
So, we pressed forward with just restoring the user database and replaying the trace file. The output generated by the replay was saved, and then we did an in-place upgrade. After that was complete we did another trace replay and compared the two output files. I had 157 differences. Not good. Many of them seemed benign, and the reason for the large number was the duration of my trace (about an hour). We kept seeing the same errors repeated; I probably had only four distinct errors, and none of them was anything to be concerned about.
Then we went to a new instance, did a fresh install of SQL 2008, and ran through the process again. I decided to compare the trace file output from the fresh install against the SQL 2008 in-place upgrade. Why? Well, if I compared it to the original output it might say 157 errors. That’s good? Well, not to me, no… how would I know if they were the same 157 errors? So, I compared the two SQL 2008 replay outputs and found them to match.
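If you would rather eyeball the differences yourself than trust a raw count, the replay outputs are just trace files, so you can load them with sys.fn_trace_gettable and diff them in T-SQL. A sketch with placeholder file names and a deliberately small column list (durations are left out since they will never match exactly between replays):

-- Load each replay output into a temp table (paths are placeholders).
SELECT EventClass, CAST(TextData AS NVARCHAR(MAX)) AS TextData, Error
INTO #replay_inplace
FROM sys.fn_trace_gettable(N'C:\UpgradeAssistant\replay_inplace.trc', DEFAULT);

SELECT EventClass, CAST(TextData AS NVARCHAR(MAX)) AS TextData, Error
INTO #replay_fresh
FROM sys.fn_trace_gettable(N'C:\UpgradeAssistant\replay_fresh.trc', DEFAULT);

-- Rows in one output but not the other; an empty result means the two replays match.
SELECT EventClass, TextData, Error FROM #replay_inplace
EXCEPT
SELECT EventClass, TextData, Error FROM #replay_fresh;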
Last step: I put the database into the SQL 2008 compatibility level (100, or “ten-oh” mode) and ran the process again. The output files matched again, and at that point I was satisfied that the database was in very good shape to be migrated to SQL 2008. I say database because I did not bring any application code with me, so I cannot comment on that. I was done with the lab by the end of the first day.
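Flipping a migrated database to the new compatibility level is a one-liner; the database name below is a placeholder:

-- Move the database to the SQL Server 2008 ("ten-oh") compatibility level.
ALTER DATABASE [MyUserDatabase] SET COMPATIBILITY_LEVEL = 100;

-- Verify:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyUserDatabase';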
My cover was blown on the second day when this was posted on the board in my lab room. Since I was done with the Upgrade Assistant, I spent the day picking the brains of the Microsoft guys about a handful of new features in SQL 2008. The Big Three for me were Resource Governor, Performance Data Collection, and Policy-Based Management. I also got some extra info on a host of other little things, like Central Management Servers. I hope to put together some talks on some of these subjects in the very near future, so perhaps I will blog about them as well.
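To give a flavor of the Big Three, Resource Governor is the easiest to show in a few lines. A minimal sketch, with made-up pool, group, login, and function names (the classifier function lives in master, and none of this is what we actually configured in the lab):

-- Cap a hypothetical reporting workload at 30% CPU.
CREATE RESOURCE POOL ReportingPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO

-- Classifier function: route sessions from a particular login into the group.
CREATE FUNCTION dbo.rg_classifier() RETURNS SYSNAME WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'reporting_user'
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;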
The other upside to my second day was the tour of the facility, especially the “Envisioning Center”. You can see a handful of photos here. The room is nice, and I had seen it before, but the coolest demo was the one for the Surface. It was fairly impressive and I could not stop thinking about how many applications such a device would have, beyond serving as an expensive coffee table.
Overall it was a great experience and I recommend that others ask their local Microsoft reps if they can take part in such labs as well.
As the SQL performance-&-scalability liaison for a number of MTC engagements, I can vouch for the value of these labs. Even for organizations with robust testing environments & very smart people (indeed, especially for such orgs), the dedicated hardware & personnel, combined with the ability to focus exclusively on the project at hand with minimal distraction, provide unparalleled opportunities for optimization.
Dittos re: Mr. Batman’s recommendation that other organizations avail themselves of the experience.
PS Re: Not being able to restore master off the top of your noggin, well, that’s why we should always have the latest-&-greatest versions of BOL, eh?