For only $5 a month, plus data transfer costs, you can have a Basic-level database.
Standard plans start at $15 a month for up to 250 GB of data, 10 DTUs, and 28 days of database recovery.
Of course, more space, DTUs and other features are available (along with pooled resources) as you scale and pay more.
What level you choose depends on how much your database stores and how intensely it's used.
One of the features of SQL Server databases on Azure is the "Overview":
From the Resource Graph, you can click the pin to open the chart in a larger view. You can set a time range to view the demand on your database over a past period (hour, day, week, month) or over a specific date/time range. By default, the past hour is shown.
By default, the DTU Percentage metric is shown, but we've found it clearer to uncheck that and display DTU Used and DTU Limit instead, especially when we're adjusting the DTU limit.
This is set on the list of available metrics:
We can see the DTU usage against the maximum DTU for this database. The cyan line across the top is the DTU limit. The dark blue line is the DTU used. The limit is what we pay for, so it's important to scale it to what the application actually needs.
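To make that comparison concrete, here's a short Python sketch of the arithmetic behind it. The sample readings and the limit are made up for illustration; they don't come from the Azure API:

```python
# Hypothetical DTU-used readings sampled over a period (not real Azure data).
dtu_used_samples = [12, 35, 48, 20, 95, 40, 10, 0, 0, 55]
dtu_limit = 100  # the provisioned DTU limit we pay for

peak = max(dtu_used_samples)
average = sum(dtu_used_samples) / len(dtu_used_samples)
headroom = dtu_limit - peak  # how far the worst spike stayed below the limit

print(f"peak={peak}, average={average:.1f}, headroom={headroom}")
# If headroom is consistently large, the limit (and the bill) could be lowered;
# if peaks regularly reach the limit, a higher tier may be warranted.
```

The same peak/average/headroom reasoning is what we apply visually when reading the DTU Used line against the DTU Limit line on the chart.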
Looking at the activity for the past week, there are times when the usage is zero (non-business hours) with expected spikes of usage that are well below the maximum capacity.
It hit the max one day, but overall, the usage level and limits seem reasonable.
One could increase the DTU level to avoid hitting the maximum as it did that one day, accept the current settings, or even lower the limit. It depends on the urgency and value of whatever is running when the maximum is reached. In some cases, users may not even notice, because they've launched something they wouldn't expect to finish immediately. In other cases, it could be critical that more resources are available.
It's a classic capacity utilization question of how many power plants need to be built to make sure there's enough electricity on the hottest summer day, knowing that much of that investment sits idle the rest of the year.
While everything seems fine at the weekly level, looking at the hourly graph gave us a shock:
That looks bad: the database appears to be maxed out for most of the hour, which seems like conclusive evidence that we need to increase our DTU level.
However, given that the weekly graph looks reasonable, we wanted to better understand what was happening. How often were we hitting the limit and for how long?
We reduced the time window from a week to a few days, then to one day, and even to a few hours, and didn't see any period of high utilization like the one in the one-hour graph.
We finally narrowed it down to 65 minutes and saw this:
These are completely different displays of the same period of time. The 65-minute graph never hits the maximum DTU. What's going on?
With help from the Microsoft Azure support team, we learned that the data is calculated and displayed differently for time ranges of 60 minutes or less versus more than 60 minutes.
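Our working interpretation (an assumption on our part, not confirmed documentation) is that shorter ranges are plotted at a finer time grain, so brief spikes to the limit stay visible, while longer ranges are averaged over coarser intervals that smooth those spikes away. This Python sketch, using synthetic per-minute data, shows how the same series can look maxed out at one granularity and comfortably under the limit at another:

```python
# Synthetic per-minute DTU usage: a steady ~40 DTUs with brief 1-minute
# bursts that hit the 100-DTU limit. Not real Azure data.
minutes = 120
usage = [40] * minutes
for spike_at in (10, 37, 64, 91):
    usage[spike_at] = 100  # short bursts that touch the DTU limit

def aggregate(series, bin_size):
    """Average the series over consecutive bins of bin_size minutes."""
    return [sum(series[i:i + bin_size]) / bin_size
            for i in range(0, len(series), bin_size)]

fine = aggregate(usage, 1)    # 1-minute grain: the spikes to 100 survive
coarse = aggregate(usage, 5)  # 5-minute grain: each spike is averaged away

print(max(fine))    # 100.0 -- chart looks maxed out
print(max(coarse))  # 52.0  -- chart never appears to reach the limit
```

A one-minute spike averaged with four quiet minutes (4 × 40 + 100) / 5 = 52, which is why the coarser view never shows the limit being reached even though it briefly was.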
Based on that, we concluded that we should focus on time ranges longer than 60 minutes when monitoring and setting our resource limits. Our resources are set at the right level for our application, and we will continue to monitor them to ensure they stay adequate over time.
We hope this information helps you avoid the confusion we experienced; we didn't find any documentation explaining this distinction at the 60-minute mark.
Have any suggestions or comments? Visit our Blog and leave us a reply!