I frequently check my customers' databases, and during one of these checks I found many versions of the same SQL statements with reason ROLL_INVALID_MISMATCH. I was not experiencing an issue with high version counts: hard parsing was not a problem and the shared pool was not using a lot of memory. I just wanted to understand what was behind the figures. This is what I found.
After we moved a production database from our US office to the office in Sweden (Gothenburg), I started to see a remote query spending a lot of time on the wait event SQL*Net message from dblink. I wanted to analyze that further and see if I could fix it.
A colleague complained that a query was much slower in QA than in PROD even though it had the same plan and returned the same number of rows: around 12 seconds in QA versus 2 seconds in PROD. He also noticed far more physical reads on the QA system.
I helped a customer archive some data from a big table to a backup table.
When he verified the data, he said he found some oddities while running an update script to synchronize the data. These are his comments.
In my previous posts 1 and 2 I analyzed high wait times for Log File Sync. This is the last part, and we will see what happens after adding really fast Fusion-io disk cards to our new system. This is also a 12c database.
After enabling a logon trigger that traced a named user, I ended up with many trace files. Analyzing each one of them with tkprof is time consuming.
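Processing a whole directory of trace files can be scripted instead of running tkprof by hand. Below is a minimal sketch assuming all the trace files sit in one directory; the function name `tkprof_all`, the `.prf` output extension, and the choice of sort keys are my own conventions, not anything the trigger produces.

```shell
# Sketch: build one tkprof command per .trc file in a directory.
# sys=no drops recursive SYS statements; sort=exeela,fchela puts the
# statements with the highest execute/fetch elapsed time first.
tkprof_all() {
  dir=${1:-.}
  for f in "$dir"/*.trc; do
    [ -e "$f" ] || continue   # glob matched nothing: no trace files
    # print the command; pipe the output to sh to actually run tkprof
    echo tkprof "$f" "${f%.trc}.prf" sys=no sort=exeela,fchela
  done
}
```

Running `tkprof_all /path/to/trace_dir | sh` then produces one formatted report per trace file, each sorted so the most expensive statements appear at the top.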
This problem started some weeks ago: users were complaining that certain search functions did not perform well. They were either very slow or the application froze.
I noticed different plans over time.
During a batch job we noticed we had several sessions suffering from buffer busy waits.
Many sessions were running the same insert; one was related to auditing user logons.
We could see the same pattern going back several days.
A customer has lately experienced ORA-1652: unable to extend temp segment once or twice a week. When I started analyzing, I expected to find a bad plan with huge sorts or large hash joins.