Joe the DBA sent me the usual morning email detailing the failed custom collections in Operations Manager. I had asked Joe to do this routinely so that we could monitor our monitoring system, as any good DBA with OCD would do. With our recent push for server consolidation I have been spending more time in Operations Manager, as I plan to use it for any and all metrics our project will need.
“Tokyo failed again,” Joe said over my cube wall. Sure enough, there it was: in the midst of a handful of failures, one stood out. It was the SQL box that resides in Tokyo, Japan. I started to dig through the VB script, hoping I could reproduce the error, to no avail. The process is quite simple. Every morning at 2:20 AM a script selects all the syslogins on the server and inserts them into our central repository; then, for each syslogin row, it takes the login name, executes sp_helplogins, and pushes that result to the central repository as well.
Or not. It would seem that Tokyo was not pushing the second result set. Ever, in fact. The logins were coming through, but nothing else.
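For reference, here is a rough T-SQL approximation of what the VB script is supposed to do each night. The repository table and column names are hypothetical, and the real script reads the results in VB rather than in T-SQL:

    -- Step 1: collect the logins, stamped with GETDATE() on the monitored server.
    INSERT INTO CentralRepo.dbo.SysLoginAudit (LoginName, CollectedAt)
    SELECT name, GETDATE()
    FROM master.dbo.syslogins;

    -- Step 2: for each collected login, gather the detail rows and push them
    -- to the repository. Note that sp_helplogins returns two result sets,
    -- which is why the real script handles them in VB rather than with a
    -- single INSERT ... EXEC.
    EXEC sp_helplogins 'sa';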
“Weird,” I thought to myself, and put down my bacon sandwich to focus on the matter.
I rooted around and found that the process ran last night, as expected. I saw the entries for August 29th at 2:20 AM. Only I did not see any August 29th entries for the second set of records. I did, however, see entries for August 28th at 1:20 PM.
“I must have inserted those records myself when I was trying to debug,” I muttered, and took another look at the data. Then another. Then I looked at my phone. Then my BlackBerry. Then my computer’s time. Then I did a SELECT getdate() on the Tokyo server. It came back with August 29th, 2:45 AM.
Or roughly thirteen hours into the future.
Somehow, I was getting data from the future inserted into our central repository. Well, not somehow, because Tokyo is literally and figuratively on the other side of the planet. The Operations Manager agent was working perfectly and the script was running at the right time; it was just that, since it was already tomorrow in Tokyo, the job ran during our afternoon. My code called getdate() when selecting the syslogins, picking up Tokyo time, but when it inserted the results of sp_helplogins back to the central repository it called getdate() again, and that one picked up the central server’s local time (in this case, EDT).
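In other words, the two timestamps came from two different clocks. A quick way to see the gap, assuming a linked server named TOKYO (the name is purely illustrative):

    -- Run on the central (EDT) server:
    SELECT GETDATE() AS CentralTime;   -- e.g. August 28th, 1:45 PM

    -- Ask the Tokyo box for its clock over the linked server:
    SELECT * FROM OPENQUERY(TOKYO, 'SELECT GETDATE() AS TokyoTime');   -- e.g. August 29th, 2:45 AM

    -- The syslogins rows carried TokyoTime; the sp_helplogins rows carried
    -- CentralTime, roughly thirteen hours apart.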
I had clearly opened up a rift in the fabric of space-time, but the effect was only temporary.
But what to do about it? Do I change the code to always insert according to local time? Perhaps. But we also have a purge process that runs in EDT, so we would want the time to be in EDT; otherwise we may purge something incorrectly. And what about our ad-hoc scripts and reports? We use datediff() frequently, so shouldn’t we agree upon a standard?
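To make the risk concrete, consider a hypothetical purge along these lines. A Tokyo row stamped thirteen hours ahead looks thirteen hours younger than it really is against the EDT clock, so it outlives the retention window it should have been judged by:

    -- Hypothetical purge, judged on the central server's EDT clock.
    DELETE FROM CentralRepo.dbo.SysLoginAudit
    WHERE DATEDIFF(day, CollectedAt, GETDATE()) > 30;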
With three servers (Operations Manager, our central repository, and Tokyo) we have the potential for three different time zones. How do you keep it straight? Who wins?
I have no idea right now what the solution should be, but I am leaning towards having two columns in the tables: one for the local time on the server, the other for when the data gets inserted. Of course, we know the local time on the server, because that is when the Operations Manager agent starts the script, and that would keep everything consistent. Well, if you consider having data from the future to be consistent, then sure.
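Sketched as a table definition (the names are hypothetical), that two-column idea would look something like this:

    CREATE TABLE CentralRepo.dbo.SysLoginAudit (
        LoginName       sysname  NOT NULL,
        ServerLocalTime datetime NOT NULL,                   -- GETDATE() on the monitored server
        InsertedAt      datetime NOT NULL DEFAULT GETDATE()  -- the central server's (EDT) clock
    );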
When designing a system that spans multiple time zones you simply must consider what is meant by “time”.
Why not use UTC, which should be standard across all three?
I suppose we could use GETUTCDATE(), but will that work against SQL 2000? If not, then we would need to split our script into two versions: one for the SQL 2000 boxes and the other for servers built in this century.
Even with UTC, the OpsMgr agent is still going to run based on the local time. So, with 24 servers around the world, one would be kicked off every hour, like…clockwork! The question becomes: what value do you store in the repository? Even with UTC, you would still have 24 different values.
We store all dates and times as GMT/UTC and then adjust on the local server (if required).
We are quite lucky down here (New Zealand) in that we are generally +12 hours from GMT, so our users can easily translate the times if we throw a (GMT) into the display.
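A sketch of that approach, with the offset hard-coded purely for illustration (+12 for New Zealand; DST handling is omitted, which the next comment gets at):

    -- Store UTC on the way in...
    INSERT INTO CentralRepo.dbo.SysLoginAudit (LoginName, CollectedAtUTC)
    SELECT name, GETUTCDATE()
    FROM master.dbo.syslogins;

    -- ...and adjust for display on the way out.
    SELECT LoginName,
           DATEADD(hour, 12, CollectedAtUTC) AS CollectedAtNZ
    FROM CentralRepo.dbo.SysLoginAudit;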
Great! Don’t forget about DST. That’s a lot of fun on environmental and manufacturing systems, especially in the Fall when you hit 1 and 2 AM twice.
Good point, and something to consider: some locations may not observe DST, and those that do may not start it on the same day.
How come Star Trek never worries about such details when they time travel?
I don’t use Operations Manager, but it does make logical sense to use UTC for your timestamps when dealing with time data. 2:00 PM EST is NOT 2:00 PM CST; that’s one of the hardest things to convince my boss of when dealing with reports for clients.
As noted, DST is fun to deal with; try also building a cube for data across time zones when you want a dimension by day (what day? UTC? Client local time? Server local time?).
I’ve long since advocated that we all abandon the concept of local time and each determine our days based on UTC. If I work 12 noon to 8 PM UTC, so be it.
Good points, thanks Stuart. Will I see you at PASS again this year?
I think we will need to switch to using UTC timestamps for everything we do. I also believe we should keep the local time of the server, so perhaps just the two columns will suffice. I do hate to store something twice, though, so maybe just the UTC will be enough.