how to disable oracle cache for performance tests

Tags: oracle, caching, oracle10g, performance

I'm trying to test the utility of a new summary table for my data.

So I've created two procedures that fetch the data for a given interval, each one reading from a different table source, and my C# console application calls one or the other. The problem starts when I repeat the call several times to get a good picture of the response time.

I got something like this: 1199,84,81,81,81,81,82,80,80,81,81,80,81,91,80,80,81,80

My Oracle 10g is probably doing some inappropriate caching.

How can I solve this?

asked Oct 13, 2015 by ajit.chavhan

3 Answers


EDIT: See this thread on asktom, which describes how and why not to do this.

If you are in a test environment, you can take your tablespace offline and bring it back online:
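The commands were omitted from the original answer; a likely sketch, assuming a tablespace named `users` (substitute your own):

```sql
-- Taking a tablespace offline and back online forces its blocks out of
-- the buffer cache; 'users' is a placeholder tablespace name.
ALTER TABLESPACE users OFFLINE;
ALTER TABLESPACE users ONLINE;
```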


Or you can try
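The command here was also lost; given the context (and the commands quoted in the next answer), it was most likely the buffer cache flush:

```sql
-- Flush the buffer cache (syntax available from Oracle 10g onward).
ALTER SYSTEM FLUSH BUFFER_CACHE;
```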


but again, only in a test environment.

When you test on your "real" system, the times you get after the first call (those using cached data) might be the more interesting ones, since in production the data will usually be cached as well. Call the procedure twice, and only consider the results from the subsequent executions.
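That warm-up-and-discard pattern can be sketched in Python; the `run_query` callable here is a stand-in for whatever executes your stored procedure (e.g. a cx_Oracle call):

```python
import statistics
import time

def benchmark(run_query, runs=10):
    """Time run_query over several executions, discarding the first
    (cold-cache) run so only warm-cache timings are reported."""
    timings = []
    for _ in range(runs + 1):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    warm = timings[1:]  # drop the cold first execution
    return {
        "cold_ms": timings[0] * 1000,
        "warm_mean_ms": statistics.mean(warm) * 1000,
        "warm_stdev_ms": statistics.stdev(warm) * 1000,
    }

# Dummy workload standing in for the real procedure call:
result = benchmark(lambda: sum(range(10000)), runs=5)
```

The large first value in the question's series (1199 vs. ~80) is exactly the cold run this helper discards.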

answered Oct 13, 2015 by patelnikul321

An Oracle database is a complex entity. It has all sorts of mechanisms to improve and optimize performance: caching results, creating and caching execution plans, caching dictionaries, and so on. When measuring query times, the first attempt often takes several seconds while subsequent attempts take only a few milliseconds, because Oracle caches almost everything during that first attempt. In the real world, though, where databases are under heavy use, caches expire; real performance is a blend of the first (cold) and subsequent (warm) attempts. This means that the more diversified the queries, the less advantage you get from caching.

But what about Oracle’s Explain Plan? It gives information about the execution plan and estimates performance. Yes, explain plans are great; unfortunately, they are only theoretical and don’t provide real-world figures. An Oracle Explain Plan is simply an estimate of the execution plan, and the real execution plan may very well differ from it.

Worst case testing

To get consistent time measurements without being affected by caches, you’ll have to clear all caches before executing a query under test:

Clear Oracle’s buffer_cache:
alter system flush buffer_cache;
Clear Oracle’s shared_pool:
alter system flush shared_pool;
Clear disk cache (e.g. for CentOS):
echo 3 > /proc/sys/vm/drop_caches
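The three steps above are often combined in a small script. A sketch, assuming `sqlplus` is on the PATH, you can connect as SYSDBA, and you run the script as root on Linux (all of these are assumptions about your environment):

```shell
#!/bin/sh
# Flush Oracle's caches, then the OS page cache (Linux-only, needs root).
sqlplus -s / as sysdba <<'EOF'
alter system flush buffer_cache;
alter system flush shared_pool;
exit
EOF
sync                               # write dirty pages before dropping caches
echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes
```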

The buffer_cache is part of Oracle’s SGA (System Global Area) and buffers blocks read from disk to minimize disk IO. The shared_pool is another part of the SGA and is used for caching the library, dictionary and session information. The OS disk cache is used by the OS to buffer blocks read from disk.

The disadvantage of this approach is that it also flushes the dictionary cache, which is part of Oracle’s shared_pool. Reading the dictionary information for a table can take a long time, especially when the table has many (sub)partitions, so comparing query times between differently partitioned tables may give unrealistic results. One may assume that the dictionaries of frequently used tables stay in the dictionary cache almost permanently.

Realistic worst case testing

We had this test setup: we optimized a query using Oracle’s Explain Plan, which predicted that the improved version of the query would perform better. But in the worst-case test environment described in the previous paragraph, the optimized query actually performed worse than the original. After generating a trace file, we saw that the actual query time was better for the optimized query; however, we had altered the partitioning layout of the table for the optimized query, and reading its dictionary added an extra delay.

This time we want the dictionary to be cached already before running the query under test (see the list above for the exact commands):

Clear Oracle’s buffer_cache
Clear Oracle’s shared_pool
Clear os disk cache
Run warmup queries
Clear buffer_cache
Clear os disk cache

A warmup query should select from the same tables as the query under test, but with a different execution plan (e.g. a different where clause), and should return an empty result set. For example, for a partitioned table, use a where clause that does not match any partition. This loads the dictionary, but no index or row data. Because the shared_pool is not cleared after the warmup queries have run, the cached dictionary is still available when the query under test executes.
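As a concrete illustration (the table name, partition key, and cutoff date are all made up): suppose `sales` is range-partitioned by `sale_date`, with no partition covering dates before 1900. A warmup query could then look like:

```sql
-- Loads the table's dictionary information (partition metadata etc.)
-- without reading any data blocks: partition pruning eliminates every
-- partition, so the result set is empty.
SELECT *
  FROM sales
 WHERE sale_date < DATE '1900-01-01';
```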

Now we get results that match our expectations and can be correlated with explain plans. We can finally prove that the optimized query performs better.

So it is possible to get consistent performance test results with Oracle!

answered Oct 13, 2015 by rajesh

Probably my Oracle 10g is doing some inappropriate caching.

Actually, it seems like Oracle is doing some entirely appropriate caching. If these tables are going to be used a lot, then you would hope to have them in cache most of the time.


In a comment on Peter's answer, Luis said:

flushing before the call I got some interesting results like: 1370,354,391,375,352,511,390,375,326,335,435,334,334,328,337,314,417,377,384,367,393.

These findings are "interesting" because flushing means the calls take a bit longer than when the rows are in the DB cache, but not as long as the first call. This is almost certainly because the server has stored the physical blocks in the OS file cache. The only way to avoid that, and truly run against an empty cache, is to reboot the server before every test.

Alternatively, learn to tune queries properly. Understanding how the database works is a good start, and EXPLAIN PLAN is a better tuning aid than the wall clock.
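For reference, the standard way to generate and view an execution plan estimate (the query itself is a placeholder):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM your_table WHERE id = 42;

-- Display the most recently explained plan from PLAN_TABLE:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```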

answered Oct 13, 2015 by android_master