We experienced a spinning CPU issue which required us to apply a one-off patch;
at the same time we applied the latest PSU patch (April 2016).
Three days later, when a batch was running, we got performance problems:
the batch was taking much longer than before.
This problem was experienced in a production database, and started when we saw an increased number of sessions waiting on enq: TX - row lock contention.
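As a rough illustration (not the exact query from the investigation), something like the following against v$session shows the sessions currently stuck on that wait and who is blocking them:

-- Sessions currently waiting on row lock contention and their blockers.
select sid, serial#, sql_id, blocking_session, seconds_in_wait
from   v$session
where  event = 'enq: TX - row lock contention'
order  by seconds_in_wait desc;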
I frequently check my customers' databases, and during one of these checks I found that we had many versions of the same SQL statements with reason ROLL_INVALID_MISMATCH. I am not experiencing an issue around high version counts: hard parsing is not a problem and the shared pool is not using a lot of memory. I just want to understand what's behind the figures. This is what I found.
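For context, a quick way to see the kind of figures I am talking about is a sketch like the one below against v$sql_shared_cursor; it is only an illustration, not the full analysis:

-- Count child cursors per statement and how many of them were created
-- because of rolling invalidation (ROLL_INVALID_MISMATCH = 'Y').
select s.sql_id,
       count(*) child_cursors,
       sum(decode(c.roll_invalid_mismatch, 'Y', 1, 0)) roll_invalid
from   v$sql s
       join v$sql_shared_cursor c
            on c.sql_id = s.sql_id
           and c.child_number = s.child_number
group by s.sql_id
having sum(decode(c.roll_invalid_mismatch, 'Y', 1, 0)) > 0
order by child_cursors desc;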
After we moved a production database from our US office to our office in Sweden (Gothenburg), I started to see a remote query spending a lot of time waiting on SQL*Net message from dblink. I wanted to analyze that further and see if I could fix it.
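One simple starting point (a sketch, assuming you know the SID of the session running the remote query; 123 below is a placeholder) is to check how much of its time goes to the db link round trips:

-- Time spent on db link related waits for one session; 123 is a placeholder SID.
select event,
       total_waits,
       round(time_waited_micro / 1e6, 1) seconds_waited
from   v$session_event
where  sid = 123
and    event like 'SQL*Net%dblink'
order  by seconds_waited desc;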
A colleague complained that a query was much slower in QA compared to PROD even though it had the same plan and returned the same number of rows. QA took around 12 seconds, PROD about 2 seconds. He also noticed many more physical reads on the QA system.
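To put numbers on that kind of comparison, a sketch like this, run on both systems for the same statement (the sql_id below is a substitution placeholder), shows the work done per execution:

-- Work per execution for one statement; '&sql_id' is a placeholder.
select sql_id,
       plan_hash_value,
       executions,
       round(buffer_gets / nullif(executions, 0)) gets_per_exec,
       round(disk_reads  / nullif(executions, 0)) reads_per_exec,
       round(elapsed_time / 1e6 / nullif(executions, 0), 1) seconds_per_exec
from   v$sql
where  sql_id = '&sql_id';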
I have helped a customer archive some data from a big table to a backup table.
When he verified the data he said he found some oddities; he is running an update script to synchronize the data. These are his comments.
In my previous posts 1 and 2 I analyzed high wait time for Log File Sync. This is the last part, and we will see what happens when we add really fast Fusion-io disk cards to our new system. This is also a release 12c database.
After enabling a logon trigger that started tracing a named user, I ended up with many trace files. Analyzing each one of them with tkprof is time consuming.
This problem started some weeks ago: users were complaining that certain search functions didn't perform well. They were either very slow or the application froze.
I noticed different plans over time.
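A sketch of how one can see this from AWR (assuming the Diagnostics Pack is licensed; '&sql_id' is a placeholder for the statement in question):

-- Plan hash values and elapsed time per AWR snapshot for one sql_id.
select sn.begin_interval_time,
       st.plan_hash_value,
       st.executions_delta,
       round(st.elapsed_time_delta / 1e6, 1) elapsed_seconds
from   dba_hist_sqlstat st
       join dba_hist_snapshot sn
            on sn.snap_id = st.snap_id
           and sn.dbid = st.dbid
           and sn.instance_number = st.instance_number
where  st.sql_id = '&sql_id'
order  by sn.begin_interval_time;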
During a batch job we noticed we had several sessions suffering from buffer busy waits.
Many sessions were running the same insert; one was related to auditing user logons.
We could see the same pattern going back several days.
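As a rough sketch (assuming ASH/Diagnostics Pack is available), a query like this shows which statements and objects the buffer busy waits concentrated on over the last few hours:

-- Buffer busy waits by statement and object over the last four hours.
select sql_id,
       current_obj#,
       count(*) samples
from   v$active_session_history
where  event = 'buffer busy waits'
and    sample_time > sysdate - 4/24
group by sql_id, current_obj#
order by samples desc;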