I frequently check my customers' databases, and during one of these checks I found that we had many versions of the same SQL statements with the reason ROLL_INVALID_MISMATCH. I am not experiencing an issue with high version counts: hard parsing is not a problem and the shared pool is not using a lot of memory. I just wanted to understand what is behind the figures. This is what I found.
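A query along these lines can list such statements (the version_count threshold of 10 is just an arbitrary cut-off for this example):

```sql
-- List SQLs with many child cursors where at least one child was not
-- shared because of ROLL_INVALID_MISMATCH.
select s.sql_id, s.version_count
from   v$sqlarea s
where  s.version_count > 10
and    exists (select 1
               from   v$sql_shared_cursor c
               where  c.sql_id = s.sql_id
               and    c.roll_invalid_mismatch = 'Y')
order  by s.version_count desc;
```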
I checked a few of the SQLs above, and the reason for not sharing the cursor was:
SQL_ID                : 34vjfyq7jj38r
ADDRESS               : 0000000A554E9260
CHILD_ADDRESS         : 00000009F486DB98
CHILD_NUMBER          : 3
ROLL_INVALID_MISMATCH : Y
REASON                : <ChildNode><ChildNumber>3</ChildNumber><ID>34</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><size>0x0</size><details>already_processed</details></ChildNode>
-----------------
SQL_ID                : 34vjfyq7jj38r
ADDRESS               : 0000000A554E9260
CHILD_ADDRESS         : 00000009F6646208
CHILD_NUMBER          : 4
ROLL_INVALID_MISMATCH : Y
REASON                : <ChildNode><ChildNumber>4</ChildNumber><ID>34</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><size>0x0</size><details>already_processed</details></ChildNode>
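Output like the above can be produced with a query such as this against v$sql_shared_cursor (the REASON column is only available in recent versions, from 11.2 onward as far as I know):

```sql
select sql_id, address, child_address, child_number,
       roll_invalid_mismatch, reason
from   v$sql_shared_cursor
where  sql_id = '34vjfyq7jj38r'
and    roll_invalid_mismatch = 'Y';
```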
For this SQL we had 24 different cursors, all not shared due to ROLL_INVALID_MISMATCH.
It was the same for several of the SQLs listed.
What does ROLL_INVALID_MISMATCH mean?
This is related to statistics gathering. Prior to version 10g, when statistics were changed for an object, all cursors referencing that object were invalidated and reparsed. This could cause a spike in hard parsing, leading to performance issues.
From 10g onward we have an option to control how the invalidation of cursors is done. The parameter is named no_invalidate and has the default value AUTO_INVALIDATE. You can manage the default value using the dbms_stats package.
To get the current value you can run:

SQL> select DBMS_STATS.GET_PARAM('NO_INVALIDATE') from dual;

DBMS_STATS.GET_PARAM('NO_INVALIDATE')
--------------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE
You can change the default value using set_param in dbms_stats, or specify the value when you gather new statistics.
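For example, to change the database-wide default (a sketch; from 11g, set_global_prefs is the preferred interface over the older set_param):

```sql
-- Change the default for all future statistics gathering.
exec dbms_stats.set_param('NO_INVALIDATE', 'FALSE');

-- From 11g, the preferred equivalent:
exec dbms_stats.set_global_prefs('NO_INVALIDATE', 'FALSE');
```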
The values available for NO_INVALIDATE are:
- TRUE: Does not invalidate the cursors.
- FALSE: Invalidates the cursors immediately, the same behaviour as in version <= 9i.
- AUTO_INVALIDATE: The default value, which means Oracle will invalidate the cursors over time.
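The same values can also be given per gather through the no_invalidate argument, which accepts a boolean or the dbms_stats.auto_invalidate constant (SCOTT/EMP below are placeholder names for this example):

```sql
-- Invalidate dependent cursors immediately for this one gather.
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', no_invalidate => FALSE);

-- Use rolling invalidation for this gather (the default behaviour).
exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', no_invalidate => dbms_stats.auto_invalidate);
```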
How does AUTO_INVALIDATE work ?
Let's say we collected new statistics at 6am this morning. All current cursors referencing the objects with new statistics are marked for rolling invalidation. The next time we parse one of these cursors marked for rolling invalidation, a timestamp is set; assume the next execution is at 8am. The timestamp can be any point in time between the parse time (8am) and the parse time plus the value of the hidden parameter _optimizer_invalidation_period, whose default is 18000 seconds (5 hours).
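You can check the current value of the hidden parameter as SYS with the usual x$ksppi/x$ksppcv query (hidden parameters are undocumented and unsupported, so treat this as a sketch):

```sql
select i.ksppinm  parameter,
       v.ksppstvl value
from   x$ksppi  i,
       x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_optimizer_invalidation_period';
```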
Continuing the example, let's say the generated timestamp is 9:35am.
Note that the parse we did at 8am is a soft parse; we do not use the new statistics yet!
This is also true for any parse occurring before the 9:35am timestamp: it will reuse the current cursor.
An execution after 9:35am will invalidate the old cursor and hard parse a new cursor using the new statistics. This new cursor will have the reason set to ROLL_INVALID_MISMATCH to explain why the old cursor could not be shared.
If the old cursor is aged out of the shared pool before the timestamp is reached, the next execution is an ordinary hard parse with the new statistics, and the scenario above does not apply.
Can we reduce version counts related to ROLL_INVALID_MISMATCH ?
Unless you have clear indications that hard parsing is a problem for you, I do not think this is an issue you need to address. It is the result of Oracle fixing an old problem with spikes of hard parsing after statistics changes.
If you do have a hard parse issue, I doubt that ROLL_INVALID_MISMATCH will be the main reason for the hard parses.