Sunday, March 8, 2015

Do You Know If Your Database Is Slow?

Going back to the basics of how database performance can be described, we can simply say that database performance is either the response time of the operations the end user performs or the amount of work the database instance does in a given time period – its throughput.
We can easily find these metrics in the v$sysmetric dynamic view:

SQL> select to_char(begin_time,'hh24:mi') time, round(value * 10, 2) "Response Time (ms)"
     from v$sysmetric
     where metric_name='SQL Service Response Time'
/

So this is the last-minute response time for user calls (converted here to milliseconds). We can check the throughput by looking at the number of logical blocks being read per second (which already includes the physical blocks), adding the direct-path reads on top (the output covers the last minute and the last several seconds; the query assumes a database with an 8 KB block size):

SQL> select a.begin_time, a.end_time,
            round(((a.value + b.value)/131072), 2) "GB per sec"
     from v$sysmetric a, v$sysmetric b
     where a.metric_name = 'Logical Reads Per Sec'
     and   b.metric_name = 'Physical Reads Direct Per Sec'
     and   a.begin_time = b.begin_time
/

We can check more historical values through v$sysmetric_summary, v$sysmetric_history and dba_hist_sysmetric_summary.
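For example, a quick look at how the response time has been moving across the retained AWR snapshots (again converted to ms) could look something like this:

SQL> select to_char(begin_time,'dd-mon hh24:mi') time,
            round(average * 10, 2) "Avg Response Time (ms)",
            round(maxval * 10, 2)  "Max Response Time (ms)"
     from dba_hist_sysmetric_summary
     where metric_name='SQL Service Response Time'
     order by begin_time
/
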
So did these queries answer the basic question "Do we have bad performance?" 100 MB/sec of throughput and 0.32 ms per user call? We have seen better performance, but is it bad enough that we should alert the on-call DBA to investigate in more detail and look for the reason behind these values? We cannot say. We need something to compare these values against so that we can decide whether they are too low or too high.

It is somewhat like being on a train that passes another moving train going in the same direction but at a different speed. We don't know the speed of our train and we don't know the speed of the other train, so we cannot answer the question "Are we going very fast?". If we turn to the other side and see a tree passing by, we can estimate the speed of our train (also drawing on our experience of what is very fast for a train…). So we need something that has an absolute value. In the case of the tree, we know that its speed is 0 (OK, it is not completely absolute, but let's simplify for now :) ).
So we understand that we need an absolute value, or baseline, which we know represents "bad", "normal" or "good" performance. How do we find these values?

Bad, normal and good

One way to establish these absolutes is simply to experiment: find out when the database instance provides acceptable performance by going to the applications that use the database and checking their response times, or by running the queries the application runs directly and verifying that they complete in acceptable time (as defined by the business requirements). Once you reach that point, check the database instance response time and the current throughput, and carve those values in stone as the absolutes against which future measurements are compared.
The approach above may sometimes work, but once you start measuring the response time you will notice that it can swing up and down wildly. You will need to define some bounds around the value you consider a "normal" response time, so that a response time above the upper bound can be called "bad" and we can raise an alert that performance has degraded.
To define this point more accurately, I would suggest another strategy: make an "educated" guess by analyzing the values historically from the DBA_HIST_SYSMETRIC_SUMMARY view. We just need to have enough history in there.
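
If you are not sure how much history is actually available, the AWR snapshot interval and retention can be checked first:

SQL> select snap_interval, retention
     from dba_hist_wr_control
/

With enough snapshots in place, the baselines themselves can be derived like this: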

SQL> with epsilon
as
(select avg(average - STANDARD_DEVIATION ) m1,
        avg(average +  STANDARD_DEVIATION ) m2
from dba_hist_sysmetric_summary
where metric_name='User Calls Per Sec')
select avg(a.average -  a.STANDARD_DEVIATION) "A - Good",
       avg(a.average) "Average",
       avg(a.average + a.STANDARD_DEVIATION)  "B - Bad"
from dba_hist_sysmetric_summary a,
dba_hist_sysmetric_summary b,
epsilon e
where a.metric_name='SQL Service Response Time'
and b.metric_name='User Calls Per Sec'
and a.snap_id = b.snap_id
and b.average between e.m1 and e.m2
/
 

Please note the subquery called epsilon. I have used it here to limit the history we learn from to the subset of AWR snapshots during which meaningful work was done on the database. It leaves out periods of very low activity and periods of abnormally high activity, which do not necessarily represent the load from which we want to extract our "normal" response time behavior.
So now, when we check the current response time, we have the "Good" and "Bad" baselines to compare it against:

SQL> select to_char(begin_time,'hh24:mi') time, value "Response Time"
     from v$sysmetric
     where metric_name='SQL Service Response Time'
/
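
To make the comparison easier, the baselines and the current value can also be put side by side. The following is just a sketch that combines the two queries above and classifies the current value:

SQL> with epsilon as
      -- snapshots with "meaningful" load only
      (select avg(average - STANDARD_DEVIATION) m1,
              avg(average + STANDARD_DEVIATION) m2
       from dba_hist_sysmetric_summary
       where metric_name='User Calls Per Sec'),
     baseline as
      -- "Good" and "Bad" response time baselines from AWR history
      (select avg(a.average - a.STANDARD_DEVIATION) good,
              avg(a.average + a.STANDARD_DEVIATION) bad
       from dba_hist_sysmetric_summary a,
            dba_hist_sysmetric_summary b,
            epsilon e
       where a.metric_name='SQL Service Response Time'
       and   b.metric_name='User Calls Per Sec'
       and   a.snap_id = b.snap_id
       and   b.average between e.m1 and e.m2)
     select s.value "Current",
            bl.good "A - Good",
            bl.bad  "B - Bad",
            case
              when s.value <= bl.good then 'GOOD'
              when s.value >= bl.bad  then 'BAD'
              else 'NORMAL'
            end "Status"
     from v$sysmetric s, baseline bl
     where s.metric_name='SQL Service Response Time'
/

(v$sysmetric keeps one row per metric for each of its two intervals, so the query returns a row for the last minute and one for the last several seconds.)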

Specific Response Time

The response time alone does not tell us how much work the database was doing while delivering it, so we can normalize the response time by the throughput and follow one combined measure, the specific response time:

sRT = Response Time (in ms) / Throughput (in GB/sec)
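
The current specific response time can be read directly from v$sysmetric by combining the same three metrics used earlier (again assuming an 8 KB block size; note that on a completely idle instance the reads can be zero and the division would fail):

SQL> select to_char(a.begin_time,'hh24:mi') time,
            round( (a.value * 10) / ((b.value + c.value) / 131072), 2) "sRT (ms per GB/sec)"
     from v$sysmetric a, v$sysmetric b, v$sysmetric c
     where a.metric_name='SQL Service Response Time'
     and   b.metric_name='Logical Reads Per Sec'
     and   c.metric_name='Physical Reads Direct Per Sec'
     and   a.begin_time = b.begin_time
     and   a.begin_time = c.begin_time
/

And, just as before, we can derive "Good", "Average" and "Bad" baselines for the specific response time from the AWR history: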

SQL> with epsilon
     as
     (select avg(average - STANDARD_DEVIATION) m1,
             avg(average + STANDARD_DEVIATION) m2
      from dba_hist_sysmetric_summary
      where metric_name='User Calls Per Sec')
     select avg( ((a.average - a.standard_deviation)*10)
                 / (((c.average - c.standard_deviation) + (d.average - d.standard_deviation))/131072) ) A,
            avg( (a.average*10)
                 / ((c.average + d.average)/131072) ) "Average",
            avg( ((a.average + a.standard_deviation)*10)
                 / (((c.average + c.standard_deviation) + (d.average + d.standard_deviation))/131072) ) B
     from dba_hist_sysmetric_summary a,
          dba_hist_sysmetric_summary b,
          dba_hist_sysmetric_summary c,
          dba_hist_sysmetric_summary d,
          epsilon e
     where a.metric_name='SQL Service Response Time'
     and   b.metric_name='User Calls Per Sec'
     and   c.metric_name='Logical Reads Per Sec'
     and   d.metric_name='Physical Reads Direct Per Sec'
     and   a.snap_id = b.snap_id
     and   a.snap_id = c.snap_id
     and   a.snap_id = d.snap_id
     and   b.average between e.m1 and e.m2
     order by 1
/
 

Again, we can establish some reference values (for example, taking the standard ranking for efficiency shown in the image above and going by those values). Or we can create moving baselines, as before, from the usage history of the particular database instance with a query like the one below (though not with that many ranks):

SQL> with epsilon
     as
     (select avg(average - STANDARD_DEVIATION) m1,
             avg(average + STANDARD_DEVIATION) m2
      from dba_hist_sysmetric_summary
      where metric_name='User Calls Per Sec')
     select avg(round(a.average + a.STANDARD_DEVIATION)) + stddev(round(a.average + a.STANDARD_DEVIATION)) A,
            avg(round(a.average + (a.STANDARD_DEVIATION/2))) + stddev(round(a.average + (a.STANDARD_DEVIATION/2))) B,
            avg(round(a.average)) C,
            avg(round(a.average - (a.STANDARD_DEVIATION/2))) - stddev(round(a.average - (a.STANDARD_DEVIATION/2))) D,
            avg(round(a.average - a.STANDARD_DEVIATION)) - stddev(round(a.average - a.STANDARD_DEVIATION)) E
     from dba_hist_sysmetric_summary a,
          dba_hist_sysmetric_summary b,
          epsilon e
     where a.metric_name='Database CPU Time Ratio'
     and   b.metric_name='User Calls Per Sec'
     and   a.snap_id = b.snap_id
     and   b.average between e.m1 and e.m2
/
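
To see where the instance currently sits against these ranks, the same metric can be read from v$sysmetric:

SQL> select to_char(begin_time,'hh24:mi') time, round(value, 2) "Database CPU Time Ratio (%)"
     from v$sysmetric
     where metric_name='Database CPU Time Ratio'
/

If the current value drops down toward the lower ranks, that is again a signal worth investigating.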
 




