I received a case where a developer had a session hanging on enq: TX - row lock contention. Several things puzzled me: he claimed he was the only one running on this database, and the row lock was related to an insert.
insert into PRODUCT (ID, VIN, CHASSISSERIES, CHASSISNO, MODIFICATIONTIME, CREATIONTIME) values (:1 , :2 , :3 , :4 , :5 , :6 )
I found this a bit odd at first. I associate enq: TX row lock contention with updates and deletes, not inserts, so I wanted to analyze it further.
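One way an insert can end up waiting on enq: TX - row lock contention is a unique-key collision between two uncommitted transactions. A minimal sketch (the table and values here are hypothetical, not the actual PRODUCT table from the case):

```sql
-- Hypothetical setup: a table with a primary key
create table product_demo (id number primary key);

-- Session 1: insert a row but do not commit yet
insert into product_demo (id) values (42);

-- Session 2: insert the same key value.
-- This session now hangs on "enq: TX - row lock contention"
-- until session 1 commits (session 2 then gets ORA-00001)
-- or rolls back (session 2's insert then succeeds).
insert into product_demo (id) values (42);
```

The second session cannot know whether the first transaction will commit or roll back, so it has to wait on the first transaction's TX enqueue even though no existing row is being updated.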
In my last post I discussed issues seen in a Demantra system after an upgrade to Oracle 12c. We have had many issues (see the earlier post); this is about another one. It was not the major contributor to our problems, but I still think it is interesting to know about.
As described in Part 1, Oracle 12c gathers extended statistics on column groups automatically. Extended statistics were also available in 11g, but there you had to add them manually.
How can this become a performance issue in our 12c database? If you read my post about our performance issues related to this, you know that we spent a lot of time hard parsing, and that we saw a specific SQL statement at the top of the parsing list.
So what are column group statistics? They are information about the relationship between columns in the same table. There are numerous blog posts about this; one I read is the Oracle Optimizer Blog, which explains the details. That post is from 11g, when you had to create them manually.
From 12c it is done on the fly, which can cause issues. See my previous post about my experiences; this post is a spin-off from that one, explaining the details a bit more.
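For reference, this is roughly what the manual 11g approach looks like with `DBMS_STATS.CREATE_EXTENDED_STATS` (the table and column names below are illustrative, not taken from our system):

```sql
-- 11g: create a column group extension manually
select dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'PRODUCT',
         extension => '(chassisseries, chassisno)')
from   dual;

-- Gather statistics so the new column group gets populated
begin
  dbms_stats.gather_table_stats(user, 'PRODUCT');
end;
/

-- Check which extensions exist on the table
select extension_name, extension
from   user_stat_extensions
where  table_name = 'PRODUCT';
```

In 12c the optimizer can create such column groups automatically, which is exactly the behavior that surprised us.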
Around 9 months ago we upgraded a Demantra system to Oracle 12c. Now users started to complain that batch jobs on the weekend were taking a very long time. I was asked to have a look; these are my findings.
A colleague asked me to have a look at an AWR report for an older application.
Looking at the top SQL statements, I noticed that the application code was accessing the dictionary view user_tables around 40,000-50,000 times per hour. Even though it wasn't using a huge amount of CPU, I started to think about how I could tune this statement.
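A quick way to quantify this outside the AWR report is to look the statement up in V$SQL. A sketch (the LIKE filter is just an assumption about how the statement references the view):

```sql
-- Find cursors touching user_tables, with execution,
-- parse and CPU figures from the shared pool:
select sql_id, executions, parse_calls, cpu_time
from   v$sql
where  lower(sql_text) like '%user_tables%'
order  by executions desc;
```

Comparing EXECUTIONS with PARSE_CALLS also tells you whether the cost is in running the query or in parsing it over and over.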
(read Part 1 here)
Why do I need a part two? Well, in Part 1 I said that "I'm (almost) sure we suffered from Very Long Parse Time for Queries in In-Memory Database (Doc ID 2102106.1)."
I'm so glad I added "(almost)", because now it seems we still had the same issue during our last batch run. I did a few tests after setting inmemory_query=disable and it worked fine, but my testing was obviously not good enough.
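For completeness, this is how the INMEMORY_QUERY parameter can be switched off, either instance-wide or just for a test session (standard parameter syntax, shown here as a sketch):

```sql
-- Disable In-Memory query execution for the whole instance:
alter system set inmemory_query = disable;

-- ...or only for the current session, to compare parse times:
alter session set inmemory_query = disable;
```

Testing at session level first is the safer route, since it leaves everyone else's behavior unchanged while you compare parse times.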
We experienced an issue with a spinning CPU which required us to apply a one-off patch; at the same time we applied the latest PSU (April 2016). Three days later, when a batch was running, we got performance problems: the batch was taking much longer than before.
The problem was experienced in a production database, and it started when we saw an increased number of sessions waiting on enq: TX - row lock contention.
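When you see this wait piling up, V$SESSION can show who is waiting and who is blocking them. A minimal sketch using standard columns:

```sql
-- Sessions currently stuck on the TX enqueue,
-- together with the session blocking each of them:
select sid, serial#, event, blocking_session,
       sql_id, seconds_in_wait
from   v$session
where  event = 'enq: TX - row lock contention';
```

BLOCKING_SESSION points straight at the holder of the lock, which is usually the fastest way to find the offending transaction.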
I frequently check my customers' databases, and during one of these checks I found that we had many versions of the same SQL statements with reason ROLL_INVALID_MISMATCH. I am not experiencing an issue with high version counts: hard parsing is not a problem and the shared pool is not using a lot of memory. I just want to understand what is behind the figures. This is what I found.
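The reason codes per child cursor live in V$SQL_SHARED_CURSOR, so a query like the following sketch shows which statements have many versions and how many of those children carry the ROLL_INVALID_MISMATCH flag (the `> 10` threshold is an arbitrary choice for illustration):

```sql
-- Child cursors per SQL, and how many were created
-- because of a rolling-invalidation mismatch:
select s.sql_id,
       count(*) as versions,
       sum(case when c.roll_invalid_mismatch = 'Y'
                then 1 else 0 end) as roll_invalid
from   v$sql s
join   v$sql_shared_cursor c
       on  c.sql_id       = s.sql_id
       and c.child_number = s.child_number
group  by s.sql_id
having count(*) > 10
order  by versions desc;
```

ROLL_INVALID_MISMATCH children come from cursors invalidated with a rolling window (for example by DBMS_STATS with auto invalidation), so by themselves they are not alarming, which matches what I was seeing.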