I was contacted by a developer who experienced ORA-00932: inconsistent datatypes: expected – got BLOB in a production system. This was seen after some changes to memory parameters and a reorganization of the table. Continue reading
Looking at a production system, I noticed that every morning around 6 am a SQL statement was consuming CPU. From the SQL it seemed to be related to auditing; it was executed from the CDB, and the program is perl@. This seems to come from Grid Control. This is an Oracle 12.1 database, and we are planning to upgrade to 12.2 before the end of 2018. This is a JDE system, so we cannot go to 18c.
Looking at an 11g production database, I noticed the following picture. I also checked previous days, and the pattern is the same.
A customer is preparing to move an application from 11g to 12c. A key function in the application is to detect new tasks; I do not have the details of how it works, but the function is executed every minute. The SQL listed below is part of that function. In 11g this SQL is very fast, less than a second. In 12.2 it takes much longer: I have seen it run for anywhere from 17 to 46 seconds.
I’m helping a customer that experienced a scenario where some functions in the application stopped responding. The situation lasted for 48 hours. It has also been seen fairly frequently in the past months.
When the vendor of the application was approached, the response was that the session was not receiving data from the database, which implied it was a database issue. A reference to a blog by Oracle was used to confirm it as a database issue. Looking at the details and reading the blog, I agree with the conclusion that the JDBC session is stuck waiting for data from the database, but the reason for not receiving data is not database related. I think it is clearly a design issue in the application. Here is my analysis…
I have been working on an application for some time now; the plan is to move it to Oracle 12c. We started with 12.1, but as soon as 12.2 was released we focused on using 12.2. We also applied the latest bundle patch. This is a test system that uses the same data over and over, so no new data is added.
A little background: the application has a job running every hour calculating prices. In 12.1, when I was first involved, the execution time was around 90 minutes.
In 11.2, which is the current version, each calculation took around 10 minutes.
My first step was to revert all parameter changes used in 11g, basically starting with default values.
I was asked to have a look at a batch job whose run time had started to increase. We are close to spending more time than we can allow. Looking at the AWR report from the time of the batch, I noticed some internal SQL at the top of the list.
I received a case where a developer had a session that hung waiting on enq: TX – row lock contention. Several things puzzled me: he claims he is the only one running on this database, and the row lock was related to an insert.
insert into PRODUCT (ID, VIN, CHASSISSERIES, CHASSISNO, MODIFICATIONTIME, CREATIONTIME) values (:1 , :2 , :3 , :4 , :5 , :6 )
I found this a bit odd at first. I associate enq: TX – row lock contention with updates and deletes, not inserts. I want to analyze this further.
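One classic way an insert can end up waiting on enq: TX – row lock contention is a unique key collision with another, uncommitted transaction. A minimal sketch of that scenario (the demo table and value are hypothetical, not taken from the case), requiring two sessions:

```sql
-- Session 1: insert a row with a unique key value, but do not commit
create table product_demo (id number primary key);
insert into product_demo (id) values (42);

-- Session 2: an insert of the same key value can neither succeed nor
-- fail until session 1 ends its transaction, so it blocks and waits
-- on enq: TX - row lock contention
insert into product_demo (id) values (42);

-- If session 1 commits, session 2 receives ORA-00001 (unique
-- constraint violated); if session 1 rolls back, session 2's
-- insert completes.
```

Note that only one session is actively "running" work here; the blocker may simply be an idle session holding an open transaction, which would fit the developer's claim of being alone on the database.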
As described in Part 1, Oracle gathers extended statistics on column groups automatically. This was also available in 11g, but there you had to add it manually.
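For reference, the manual 11g approach might look like the sketch below, using DBMS_STATS.CREATE_EXTENDED_STATS to define the column group and then regathering statistics (the PRODUCT table and its columns are borrowed from the insert above purely for illustration):

```sql
-- 11g: define a column group manually; the function returns the
-- generated extension name (a SYS_STU... virtual column)
select dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'PRODUCT',
         extension => '(CHASSISSERIES, CHASSISNO)') as extension_name
from dual;

-- Gather table statistics so the new column group gets populated
begin
  dbms_stats.gather_table_stats(
    ownname => user,
    tabname => 'PRODUCT');
end;
/
```

In 12c the optimizer can propose and create such column groups itself, which is what makes the automatic behavior interesting from a performance perspective.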
How can this become a performance issue in our 12c database? If you read my post about our performance issues related to this, you know that we spent a lot of time hard parsing and that we saw a specific SQL at the top of the parsing list.