I was asked to have a look at a batch job whose run time had started to increase; we are close to spending more time than we can allow. Looking at the AWR report from the time of the batch, I noticed some internal SQL on the top list.
I received a case where a developer had a session that hung waiting on enq: TX – row lock contention. Several things puzzled me: he claimed he was the only one running on this database, and the row lock was related to an insert.
insert into PRODUCT (ID, VIN, CHASSISSERIES, CHASSISNO, MODIFICATIONTIME, CREATIONTIME) values (:1 , :2 , :3 , :4 , :5 , :6 )
I found this a bit odd at first. I associate enq: TX – row lock contention with updates and deletes, not inserts, so I wanted to analyze this further.
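The most common way I know of for an insert to end up waiting on enq: TX – row lock contention is a uniqueness conflict: a second session inserts the same primary or unique key value that an uncommitted transaction has already inserted, and it has to wait for that transaction to commit or roll back. A minimal reproduction sketch, assuming ID is the primary key of PRODUCT and using made-up values:

-- Session 1: insert a row but do not commit yet
insert into PRODUCT (ID, VIN, CHASSISSERIES, CHASSISNO, MODIFICATIONTIME, CREATIONTIME)
values (1001, 'VIN-TEST', 'A', '123', sysdate, sysdate);

-- Session 2: insert the same primary key value.
-- This session now waits on "enq: TX - row lock contention" until
-- session 1 commits (session 2 then gets ORA-00001) or rolls back.
insert into PRODUCT (ID, VIN, CHASSISSERIES, CHASSISNO, MODIFICATIONTIME, CREATIONTIME)
values (1001, 'VIN-OTHER', 'B', '456', sysdate, sysdate);

Whether that is what happened in this particular case is exactly what I wanted to find out.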
As described in Part 1, Oracle gathers extended statistics on column groups automatically. This was also available in 11g, but there you had to add it manually.
How can this become a performance issue in our 12c database? If you read my post about our performance issues related to this, you know that we spent a lot of time hard parsing and that we saw a specific SQL on the top list for parsing.
So what are column group statistics? They are information about the relationship between columns in the same table. There are numerous blogs about this; one I read is the Oracle Optimizer Blog post explaining the details. That post is from 11g, when you had to create them manually.
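For reference, this is roughly how a column group was added by hand in 11g using DBMS_STATS.CREATE_EXTENDED_STATS; the table and column names are just borrowed from the insert statement above for illustration:

-- Create a column group on two related columns, then gather statistics on the table
-- so the new extension gets its own statistics
select dbms_stats.create_extended_stats(user, 'PRODUCT', '(CHASSISSERIES, CHASSISNO)') from dual;
exec dbms_stats.gather_table_stats(user, 'PRODUCT');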
From 12c it is done on the fly, which can cause issues. See my previous post about my experiences; this post is a spin-off from that one, explaining the details a bit more.
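One way to see which column groups were created on the fly and which were added by hand is to check the CREATOR column in user_stat_extensions (a simple sketch):

-- Automatically created column groups show up with CREATOR = 'SYSTEM',
-- manually created ones with CREATOR = 'USER'
select table_name, extension_name, extension, creator, droppable
from   user_stat_extensions
order  by table_name, extension_name;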
Around 9 months ago we upgraded a Demantra system to Oracle 12c. Now users started to complain that the weekend batch jobs were taking a very long time. I was asked to have a look; these are my findings.
A colleague asked me to have a look at an AWR report for an older application.
Looking at the top SQLs, I noticed that the application code was querying the dictionary view user_tables around 40,000-50,000 times per hour. Even though it wasn't using a huge amount of CPU, I started to think about how I could tune this statement.
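To pin down exactly which statement it was and how often it really executed, a query against v$sql along these lines can help (just a sketch; the LIKE filter is a rough way to catch statements touching user_tables):

select sql_id, executions, parse_calls, cpu_time/1e6 as cpu_seconds, sql_text
from   v$sql
where  upper(sql_text) like '%USER_TABLES%'
order  by executions desc;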
(read Part 1 here)
Why do I need a part two? Well, in Part 1 I said that “I’m (almost) sure we suffered from Very Long Parse Time for Queries in In-Memory Database (Doc ID 2102106.1).”
I'm so glad I added (almost), because it now seems we still had the same issue during our last batch run. I did a few tests after setting inmemory_query=disable and they worked fine, but my testing was obviously not good enough.
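For completeness, this is the standard syntax for turning the parameter off, either just for a test session or for the whole instance:

-- Disable in-memory query for the current session only (useful for testing)
alter session set inmemory_query = disable;

-- Or disable it instance-wide
alter system set inmemory_query = disable;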
This problem was experienced in a production database, and started when we saw an increasing number of sessions waiting on enq: TX – row lock contention.
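When sessions start queuing up on this event, a quick way to see who is waiting and who is blocking them is to query v$session (a basic sketch; on RAC you would query gv$session instead):

select sid, serial#, blocking_session, event, sql_id,
       row_wait_obj#, seconds_in_wait
from   v$session
where  event = 'enq: TX - row lock contention';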