Around 9 months ago we upgraded a Demantra system to Oracle 12c. Now the customer has started to complain that batch jobs on the weekend are taking a very long time. I was asked to have a look. These are my findings.
A colleague asked me to have a look at an AWR report for an older application.
Looking at the top SQLs I noticed that the application code was accessing the dictionary view user_tables around 40,000-50,000 times per hour. Even though it wasn't using a huge amount of CPU, I started to think about how I could tune this statement.
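A quick way to spot this kind of pattern outside an AWR report is to look in the shared pool directly. A minimal sketch, assuming the application text actually mentions user_tables (the LIKE filter is a guess about how the statements look; the columns themselves are standard V$SQL columns):

```sql
-- Sketch: find cursors referencing USER_TABLES and how often they run.
-- The text filter is an assumption about the application's SQL.
SELECT sql_id, executions, cpu_time, sql_text
FROM   v$sql
WHERE  UPPER(sql_text) LIKE '%USER_TABLES%'
ORDER  BY executions DESC;
```

Sorting by EXECUTIONS rather than CPU_TIME matches the symptom here: a cheap statement run tens of thousands of times per hour.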
(read Part1 here)
Why do I need a part two? Well, in part 1 I said that “I’m (almost) sure we suffered from Very Long Parse Time for Queries in In-Memory Database (Doc ID 2102106.1).”
So glad I added (almost), because it now seems we still had the same issue during our last batch run. I did a few tests after setting inmemory_query=disable and everything worked fine, but my testing was obviously not good enough.
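For reference, the test itself is a one-liner. A session-level sketch (INMEMORY_QUERY is a documented parameter; using ALTER SESSION keeps the experiment confined to the testing session instead of the whole instance):

```sql
-- Disable in-memory query execution for this session only, to test
-- whether the long parse times go away without affecting other users.
ALTER SESSION SET inmemory_query = DISABLE;
```

The catch, as this post shows, is that a test that "works fine" in one session does not prove the batch workload will behave the same way.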
This problem was experienced in a production database, and started when we saw an increased number of sessions waiting on enq: TX - row lock contention.
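When this wait event shows up, the first question is usually who is blocking whom. A minimal sketch using standard V$SESSION columns (no application-specific assumptions):

```sql
-- Sketch: list sessions currently stuck on row-lock contention,
-- together with the session holding the lock they are waiting for.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  event = 'enq: TX - row lock contention';
```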
I frequently check my customers’ databases, and during one of these checks I found that we had many versions of the same SQLs with reason ROLL_INVALID_MISMATCH. I am not experiencing an issue with high version counts: hard parsing is not a problem and the shared pool is not using a lot of memory. I just wanted to understand what’s behind the figures. This is what I found.
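The reason a child cursor could not be shared is exposed per cursor in V$SQL_SHARED_CURSOR, which has a Y/N column for each mismatch reason. A sketch of the kind of check described above:

```sql
-- Sketch: count child cursors whose non-sharing reason is a rolling
-- invalidation (ROLL_INVALID_MISMATCH = 'Y'), worst offenders first.
SELECT sql_id, COUNT(*) AS child_cursors
FROM   v$sql_shared_cursor
WHERE  roll_invalid_mismatch = 'Y'
GROUP  BY sql_id
ORDER  BY child_cursors DESC;
```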
After we moved a production database from our US office to the office in Sweden (Gothenburg), I started to see a remote query spending a lot of time on the wait event SQL*Net message from dblink. I wanted to analyze that further and see if I could fix it.
A colleague complained that a query was much slower in QA than in PROD, even though it had the same plan and returned the same number of rows. QA took around 12 seconds and PROD 2 seconds. He also noticed many more physical reads on the QA system.
I helped a customer archive some data from a big table to a backup table.
When he verified the data he said he found some oddities; he is running an update script to synchronize the data. These are his comments.
In my previous posts 1 and 2 I analyzed high wait times for Log File Sync. This is the last part, and we will see what happens now that we have added really fast Fusion-io disk cards to our new system. This is also a 12c database.