Group Column Statistics (Part 2)

As described in Part 1, Oracle gathers extended statistics on column groups automatically. This was also available in 11g, but there you had to add the column group statistics manually.

How can this become a performance issue in our 12c database? If you read my post about our performance issues related to this, you know that we spent a lot of time hard parsing and that we saw a specific SQL at the top of the parsing list.


The SQL was (sql_id 47r1y8yn34jmj):

select default$ from col$ where rowid=:1

Searching Oracle Support for this SQL pointed us to an interesting document:
Higher CPU and Slow Performance after Migration to 12c with Top Sql as SELECT DEFAULT$ FROM COL$ WHERE ROWID=:1 (Doc ID 2013851.1)

This document tells us that if the length of default$ in the SQL above is greater than 32 bytes, the result will not go into the row cache; instead it has to be fetched for every related hard parse. Remember that extended statistics are not tied to one specific SQL: they belong to a group of columns. Many different SQLs can reference the same columns, and each of them will use the extended statistics. The most common cause of frequent parsing is SQLs using literals.
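A quick way to get a feel for how the dictionary (row) cache is behaving is to query v$rowcache; entries where getmisses is high relative to gets are not being cached effectively. A sketch:

```sql
-- Rough health check of the dictionary (row) cache: a high GETMISSES
-- count relative to GETS means lookups keep missing the cache.
select parameter, gets, getmisses,
       round(getmisses / nullif(gets, 0) * 100, 1) as miss_pct
  from v$rowcache
 where gets > 0
 order by getmisses desc;
```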

Let’s see what happens if we run a couple of similar SQLs using the columns I used in my previous post (part 1). I will enable a 10046 trace for these executions.

So I ran 3 SQLs with different sql_ids; they all use the columns listed in part 1.
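The test could have looked something like this. This is a sketch, not my exact statements: the literal values are made up, and it assumes the column group from part 1 covers (cust_city, cust_state_province, country_id) on sh.customers. Each statement differs only in its literals, so each gets its own sql_id and must be hard parsed.

```sql
-- Level 12 = trace with bind values and waits
alter session set events '10046 trace name context forever, level 12';

select count(*) from sh.customers
 where cust_city = 'Los Angeles'
   and cust_state_province = 'CA'
   and country_id = 52790;

select count(*) from sh.customers
 where cust_city = 'San Francisco'
   and cust_state_province = 'CA'
   and country_id = 52790;

select count(*) from sh.customers
 where cust_city = 'Sacramento'
   and cust_state_province = 'CA'
   and country_id = 52790;

alter session set events '10046 trace name context off';
```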

Looking at the raw trace file we can see:


Here we have our SQL related to the extended stats; it is hard parsed. The bind variable lists the rowid. We can see from the parse line, where dep=1, that this is a recursive SQL. This means it is not executed by us directly but on behalf of our main SQL.


Next we see that my SQL is executed and hard parsed; this is our main SQL, so dep=0.
After our main SQL has executed, the cursor for select default$ from col$ where rowid=:1 is hard parsed and executed again, see cursor #140737284232128. If you compare the bind value for each execution of cursor #140737284232128, you will see it is always the same: 00009F13.0033.0001. The cursor is executed again because I run another SQL that requires a hard parse.


Another new SQL is parsed and executed. Again cursor #140737284232128 is executed, because I run my last SQL that requires a hard parse.


After I run my last statement (I said 3 statements), cursor #140737284232128 is not executed again. But it would be if I ran another SQL that used the same group of columns. In fact, the optimizer can use this extended statistic even for a SQL where the column group covers only a subset of its predicates. An example would be

select *
 from sh.customers
 where cust_city_region='Hollywood'
 and cust_city = 'Los Angeles'
 and cust_state_province = 'CA'
 and country_id= 52790;

The optimizer can use the extended statistics for the three-column group and multiply by the selectivity of the additional column cust_city_region.

Okay, the last step is to verify that my default$ is longer than 32 bytes. In the trace we could see that the bind variable had value=00009F13.0033.0001. The first step is to convert the components from hex to decimal; I will then use the dbms_rowid package to get the information we are looking for.
The bind value 00009F13.0033.0001 is block number . row number . file number:

00009F13 => 40723
0033 => 51
0001 => 1
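The conversion itself can be done straight in SQL with to_number and a hex format mask:

```sql
-- Convert the restricted-rowid components from hex to decimal
select to_number('00009F13', 'XXXXXXXX') as block_number,
       to_number('0033', 'XXXX')         as row_number,
       to_number('0001', 'XXXX')         as file_number
  from dual;
-- => 40723, 51, 1
```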

Then I use dbms_rowid.rowid_create, which takes:

dbms_rowid.rowid_create(rowid_type, object_number, relative_fno, block_number, row_number)

rowid_create needs some more information beyond the three values from the trace: rowid_type should be 1 (an extended rowid), and object_number is the data_object_id of col$, which we can get from dba_objects.

SQL> select object_id, data_object_id
       from dba_objects
      where owner = 'SYS' and object_name = 'COL$';

 OBJECT_ID DATA_OBJECT_ID
---------- --------------
        21              2

With the details in place we can run our SQL:
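The lookup would have looked roughly like this, using the values derived above (object_number 2 is the data_object_id of COL$ from the dba_objects query):

```sql
select name, default$
  from sys.col$
 where rowid = dbms_rowid.rowid_create(
                 rowid_type    => 1,      -- 1 = extended rowid
                 object_number => 2,      -- data_object_id of COL$
                 relative_fno  => 1,
                 block_number  => 40723,
                 row_number    => 51);
```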


The name we recognize from dba_stat_extensions and the column statistics. Default$ is the three columns combined, along with what I think are optimizer instructions.
This string is longer than 32 bytes, and that is the reason the result of this SQL is not stored in the row cache. As note 2013851.1 mentioned, if default$ is longer than 32 bytes the statement will be hard parsed for each SQL that needs to use the extended stats.

We have a few options to fix this:

1. Set "_optimizer_enable_extended_stats"=FALSE.
2. Drop the extended stats, as suggested in note 2013851.1.
3. Set cursor_sharing=FORCE.
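Sketches of the three options. The extension name passed to dbms_stats.drop_extended_stats comes from dba_stat_extensions; the owner, table, and column list here are assumptions based on the example above, not my actual statements.

```sql
-- Option 1: disable the optimizer's use of extended stats.
-- This is a hidden (underscore) parameter; check with Oracle Support first.
alter system set "_optimizer_enable_extended_stats" = FALSE;

-- Option 2: drop the extended statistics themselves.
begin
  dbms_stats.drop_extended_stats(
    ownname   => 'SH',
    tabname   => 'CUSTOMERS',
    extension => '("CUST_CITY","CUST_STATE_PROVINCE","COUNTRY_ID")');
end;
/

-- Option 3: let literal-only SQLs share cursors.
alter system set cursor_sharing = FORCE;
```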

I started by disabling the use of extended stats. In my next batch run, the high hard parse rate we saw for select default$ from col$ where rowid=:1 was gone.

We have more issues on this database, so I will deal with them one by one. We do have application code using literals contributing to the high hard parse rate, so I might end up using cursor_sharing=FORCE.

One more thing to mention: in the AWR we saw hard parsing related to invalidations. There are several notes about extended statistics causing unnecessary invalidations; one I found is Doc ID 2110955.1, and another source is a blog post by Christian Antognini. I do believe that the problem with extended stats is also contributing to our problems with invalidations.

I hope this information is useful for you. Feel free to follow me on Twitter if you want more information about analyzing and solving issues I come across during my work; there is a link on the front page of the blog.

