All_source is slow

A customer is preparing to move an application from 11g to 12c. A key function in the application detects new tasks; I do not have the details of how it works, but the function is executed every minute. The SQL listed below is part of that function. In 11g this SQL is very fast, less than a second. In 12.2 it takes much longer; I have seen runs from 17 to 46 seconds.
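The actual statement is only visible in the screenshots below, so as a point of reference here is a sketch of the general shape of such a query. Only the owner = 'IFSAPP' predicate comes from the real SQL; the column list, package name and text filter are placeholders of mine.

    -- Illustrative only: a lookup against ALL_SOURCE restricted to one schema.
    -- 'IFSAPP' comes from the real query; the other values are placeholders.
    SELECT name, type, line, text
    FROM   all_source
    WHERE  owner = 'IFSAPP'
    AND    name  = 'SOME_PACKAGE'
    AND    text LIKE '%some marker%';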

My first step is to verify the issue, and I can reproduce the same numbers the customer reports.

[Image: all1.png – the query against ALL_SOURCE, 1 row returned in 46 seconds]

The query returns 1 row and takes 46 seconds this time. The ALL_SOURCE view lists the source of all stored objects (procedures, triggers, etc.) accessible to the current user, which means the objects the user owns plus those it has privileges to see. I traced the session running the SQL.
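For reference, a minimal sketch of how such a session can be traced with DBMS_MONITOR and formatted with tkprof; the sid/serial# values and file names are placeholders, not the actual ones from this case.

    -- Enable extended SQL trace (waits and binds) for the session running
    -- the function; 123/4567 are placeholder sid/serial# values.
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,
                                        serial_num => 4567,
                                        waits      => TRUE,
                                        binds      => TRUE);
    END;
    /
    -- ... let the function execute at least once ...
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
    END;
    /
    -- Format the raw trace file with tkprof, for example:
    --   tkprof <instance>_ora_<spid>.trc all_source_trace.txt sys=no sort=exeela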

[Image: all2.png – tkprof output for the ALL_SOURCE query]

In the trace we see that the elapsed time is 3.42 seconds; CPU time is higher, but I think that is just a rounding artifact in tkprof. One row is returned, and we can see that we have an extended data link towards INT$DBA_SOURCE. What is an extended data link?

In 12c we can use the multitenant architecture, which means we have a container database (CDB) with a root, and plugged into it one or more pluggable databases (PDBs). The benefit is that we can share common infrastructure such as memory, the data dictionary, redo logs and so on. In this case some dictionary data is shared between the CDB root and the PDBs: common Oracle-supplied objects are stored at root level, while application objects are stored at PDB level. This saves space if you have many PDBs in your CDB. Since ALL_SOURCE lists objects owned by the user as well as objects owned by others (like SYS) that the current user has privileges on, we also need to read the source table in the root. The extended data link combines the source from both the root and the PDB. You can read more about it here.
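One way to see how the underlying object is shared between containers is the SHARING column of DBA_OBJECTS; a quick sketch (run as a suitably privileged user):

    -- In 12.2 DBA_OBJECTS shows how an object is shared across containers.
    -- For INT$DBA_SOURCE we expect the value EXTENDED DATA LINK.
    SELECT owner, object_name, object_type, sharing
    FROM   dba_objects
    WHERE  object_name = 'INT$DBA_SOURCE';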

The execution I traced took around 17 seconds to run, and here we have 3.42 seconds for our SQL. Where is the rest of the time spent? Looking further down the trace file I found the following.

[Image: all3.png – the recursive SQL against INT$DBA_SOURCE from the trace file]

This is a recursive SQL, with recursive depth 1. It is the query behind the view INT$DBA_SOURCE, and it returns 10 million rows. These 10 million rows from the view are then passed back to our main execution and filtered down to 1 row. The execution plan of our main SQL shows that INT$DBA_SOURCE is accessed as a view.

[Image: all7.png – execution plan of the main SQL]

Accessing this view consumes the rest of our 17 seconds; its elapsed time is 14 seconds. Comparing this time to 11g is not really fair, since there is a lot of new functionality in 12c. Still, I think an execution time of 17 seconds or even higher is suspiciously high for this SQL. I start to think it cannot be correct. How can I dig deeper?

Looking at the SQL I notice that it specifies owner = 'IFSAPP' in the predicate list. Doesn't that mean we will only see objects owned by IFSAPP? I think so, which should mean we are not interested in objects owned by others. Does this mean we can use one of the other views we have? There are also USER_SOURCE, DBA_SOURCE and, in 12c, CDB_SOURCE. USER_SOURCE would be sufficient, but since I do not control the SQL I cannot use it: the OWNER column does not exist in that view, so the SQL would fail on the owner = 'IFSAPP' clause, as shown in the sketch below. I can use DBA_SOURCE, which lists all stored objects in the database.
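To illustrate (a quick sketch, not taken from the customer's code):

    -- USER_SOURCE has no OWNER column (the owner is implicitly the current user),
    -- so the unchanged statement would fail:
    SELECT name FROM user_source WHERE owner = 'IFSAPP';
    -- ORA-00904: "OWNER": invalid identifier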

If I run the same SQL, just switching ALL_SOURCE for DBA_SOURCE, will that improve my execution time?
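Keeping the illustrative shape from earlier, the only change is the view name:

    -- Same illustrative predicates as before, now against DBA_SOURCE.
    SELECT name, type, line, text
    FROM   dba_source
    WHERE  owner = 'IFSAPP'
    AND    name  = 'SOME_PACKAGE'
    AND    text LIKE '%some marker%';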

[Image: all4.png – the query against DBA_SOURCE, 1 row returned in 0.16 seconds]

Execution time is now 0.16 seconds, still returning the same row. Now I start to think that the timing we get from using ALL_SOURCE cannot be correct. Something weird has to be going on.

Tracing the good execution showed me these details.

[Image: all5.png – tkprof output for the DBA_SOURCE query]

Elapsed time is 0.00 seconds, which is so fast that the timing isn't shown by tkprof. We still have the extended data link to INT$DBA_SOURCE. Most interesting is the recursive select that we also had in the slow execution.

[Image: all6.png – tkprof output for the recursive SQL in the fast execution]

Comparing the recursive select between ALL_SOURCE and DBA_SOURCE, we can see that it differs a lot in execution time and rows returned: 1443 rows instead of 10 million. The reason is that the predicates from our SQL are now pushed down into this recursive SQL (the non-merged view INT$DBA_SOURCE). Predicate pushdown means we push predicates from the main query into views that cannot be merged, which improves performance by removing unwanted data earlier in the execution. It makes a big difference whether the view returns 1443 rows or the 10 million rows of the bad execution. You can read more about pushing predicates here if you like.
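If you want to check for yourself whether the optimizer pushes the filters into a non-merged view, the predicate section of the plan shows where they are applied; a minimal sketch (the name value is a placeholder of mine). In this particular case, though, the telling evidence was the recursive SQL in the trace file.

    -- With pushdown the owner/name filters appear on operations inside the view;
    -- without it they are applied on the VIEW row source afterwards.
    EXPLAIN PLAN FOR
      SELECT name, line, text
      FROM   all_source
      WHERE  owner = 'IFSAPP'
      AND    name  = 'SOME_PACKAGE';

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(format => 'BASIC +PREDICATE'));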

Since we have the predicate owner = 'IFSAPP' in our query, I believe it is possible to use DBA_SOURCE instead of ALL_SOURCE. The data returned will always be the same.

Comparing the two executions raises one question: why isn't that pushdown done on my ALL_SOURCE view? I found this odd and suspected a bug, so I logged a case with Oracle Support and was pointed to an unpublished bug.

Bug 25192044 – cross container queries may not push down predicates due to alias mishandling. According to the bug it is fixed in:

  • Future Version
  • 12.2.0.1.171017 (Oct 2017) Database Release Update (DB RU)
  • 12.2.0.1.171017 (Oct 2017) Bundle Patch for Windows Platforms

Not much information has been released about this bug, but this is what is listed:

Rediscovery Notes
Predicate pushdown for cross-container fixed tables do not work 
when alias is different from object name.
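To check whether a release update containing the fix is already installed in a given database, the patch registry can be queried; a sketch using the view name as I know it in 12c:

    -- Lists database patches (RUs, bundle patches, one-offs) applied via datapatch.
    SELECT patch_id, description, status, action_time
    FROM   dba_registry_sqlpatch
    ORDER  BY action_time;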

I would avoid using dictionary views in my application code; the main reason is that I do not control them. When discussing this with the vendor, they said they do not support a multitenant setup. Now it is up to my customer to decide how to move on: after patching, the query against ALL_SOURCE was as fast as the one against DBA_SOURCE. Maybe they have to reinstall the database as a non-CDB setup after all.

Thanks for reading!

 
