The Future For Database Transaction Units

It was over three years ago that Microsoft introduced the concept of Database Transaction Units (DTUs). Those of us familiar with SQL Server and SQL Database all had the same question: what the hell is a DTU?

The DTU is a blended measure of the resources your workload consumes, and Microsoft uses that number to help guarantee performance for all customers. Exceed the DTUs you pay for and you get throttled. It does not matter which hardware resource your workload consumes most; the end result is a single DTU number that gets compared against your service level.
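If you want to see the blending for yourself, here's a minimal sketch (Azure SQL Database only, using the documented sys.dm_db_resource_stats DMV) that lists recent resource usage. Each value is a percentage of your service tier's limit, and the overall DTU percentage is driven by whichever resource is hottest:

```sql
-- Minimal sketch for Azure SQL Database. sys.dm_db_resource_stats keeps
-- roughly an hour of history in 15-second slices, with each value
-- expressed as a percentage of the service-tier limit.
SELECT TOP (20)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    -- The blended DTU number follows the hottest resource.
    (SELECT MAX(pct)
     FROM (VALUES (avg_cpu_percent),
                  (avg_data_io_percent),
                  (avg_log_write_percent)) AS v(pct)) AS approx_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```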

For three years we have adjusted to the idea that DTUs are a thing. We have even taken the time to write scripts so we could estimate the DTU cost of specific queries.
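There is no per-query DTU counter, so those scripts approximate it from the raw resource numbers. Here's a rough sketch of the idea, ranking queries by the work they do with the standard sys.dm_exec_query_stats DMV (the mapping from these numbers to DTUs is the approximation; Microsoft does not publish one per query):

```sql
-- Rough sketch: find the queries consuming the most CPU and IO, the raw
-- ingredients of the DTU blend, since there is no per-query DTU counter.
SELECT TOP (10)
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text,
    qs.total_worker_time / 1000 AS total_cpu_ms,  -- worker time is in microseconds
    qs.total_logical_reads,
    qs.total_logical_writes,
    qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```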

At Microsoft Build we heard about MySQL and PostgreSQL coming to Azure. Earlier this week I looked at the billing for those services and found something interesting: there is no DTU. They use Compute Units (CUs) combined with IOPS instead.

But these are Platform-as-a-Service offerings, the same as SQL Database. Why the difference in pricing? I tweeted about it and got a reply the next day.

I agree with that reply: the combination of CUs and IOPS gives more flexibility for both customers and Microsoft.

So that leaves me with one question: why are we still using DTUs? It seems to me that the idea of a DTU, while once relevant, isn't anymore. DTUs still sit inside the pricing for data products such as SQL Database and SQL Data Warehouse, but I am not convinced they are as useful as CUs and IOPS. I like the current pricing model for MySQL and PostgreSQL, and I'm hoping SQL Database starts using CUs and IOPS for pricing soon.

I write this post with a smile, thinking about how fast things change in the Cloud-first world of tech these days. It's clear to me that if DTUs were the future, then MySQL and PostgreSQL would be using them. Since they are not, I suspect that DTUs will be going away. Customers care more about performance than billing, but it makes no sense to impose complicated billing on customers who use a mix of these platforms.

4 thoughts on “The Future For Database Transaction Units”

  1. I believe it is Database Throughput Units. All Azure PaaS offerings have the concept of “throughput units”, which blend CPU, memory, and IO. SQL DW uses DWUs, Data Lake uses ADLAUs, etc. The theory is that you should think in terms of your performance needs, not your hardware. I agree with this approach, in theory. As a lead data architect on HUGE software systems, I’ve always told my DBAs what I needed in terms of throughput, then let the DBA decide what that translated into in terms of SAN, licensing, etc.

    • Thanks for the comment! The first link in this post brings you to a page at MSDN that says ‘transaction’, but I have seen ‘throughput’ used as well. I’m not sure which one is official, or that it matters. As you stated, the idea is to think about performance needs, not hardware. But for database workloads it makes more sense to unbundle your storage needs from your compute needs, IMO.

  2. I tend to think about DTUs as mainly compute units anyway. On a call with Microsoft, the SQL Azure team basically confirmed that it is mainly a CPU and memory metric. A great example is the S3 and P1 performance levels: both are rated at 100 DTUs (and have the same memory and CPU allocation) but have massively different IO performance. That is the big step change (as well as price!) between the standard and premium tiers; in P1 you get ten times the IO that you do in S3, according to the tests I did a couple of years ago. So perhaps DTUs won't go away, but will merely be rebranded into just a CPU and memory measure, with a new and separate measure introduced for IO, which is missing today in SQL Azure.
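A rough sketch of the kind of test described above, assuming an Azure SQL Database at each tier: run the same IO-heavy workload on an S3 and a P1 database and watch the data IO percentage in sys.dm_db_resource_stats. If the 10x gap holds, the S3 database should pin at 100 (throttled) long before the same workload stresses the P1:

```sql
-- Rough sketch: after running the same IO-heavy workload on an S3 and a
-- P1 database, compare how often each one hit its data IO ceiling.
-- avg_data_io_percent is relative to the tier's own limit, so values
-- near 100 on S3 but not on P1 are the IO gap showing up as throttling.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent
FROM sys.dm_db_resource_stats
WHERE avg_data_io_percent >= 90  -- intervals where IO was the bottleneck
ORDER BY end_time DESC;
```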


