Oracle DBA – A lifelong learning experience


ORAchk is released – what does it offer?

Posted by John Hallas on December 29, 2015

I have looked at the ORAchk tool before and always meant to do something with it, and I have now had a more detailed review of what it offers.

Firstly the history: the tool started off as RACcheck and then morphed into ORAchk, with a sibling called EXAchk. The most recent release is available from MoS Doc ID 1268927.2.

It can perform an ever-growing list of checks against the database, the OS and specific applications (EBS and PeopleSoft being two), and various permutations of all that. It is very customisable – the usage command output runs to 281 lines!
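For orientation, a few of the more common invocations look like this (a sketch only; -b is the option used later in this post, and the exact flag list should be confirmed against the usage text, which may vary between releases):

```shell
./orachk -a    # run all checks
./orachk -b    # best-practice checks only
./orachk -h    # the full (281-line) usage text
```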

There is now the ability to feed runs into an APEX-based collection so that all reports can be centralised, and you can then compare the differences between one run and the previous one.

Let me give an example of the ./orachk -b output. This shows the best-practice checks, and I have selected some of the output that seemed worthy of comment.

FAIL =>   Database parameter DB_BLOCK_CHECKSUM is NOT set to recommended value on DBNAME

FAIL =>   Database parameter DB_LOST_WRITE_PROTECT is NOT set to recommended value on DBNAME

FAIL =>   RECYCLEBIN on PRIMARY should be set to the recommended value on DBNAME instance

The first three FAIL messages are all correct, and all reflect choices we have made. It is still valid to point them out, and the RECYCLEBIN one, which was set to address a possible bug we had, is perhaps worth revisiting.
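For reference, the current settings behind those messages can be confirmed directly (a sketch; the parameter names are taken from the FAIL messages above):

```sql
-- show the values ORAchk compared against its recommendations
select name, value, isdefault
from   v$parameter
where  name in ('db_block_checksum', 'db_lost_write_protect', 'recyclebin');
```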


WARNING => Consider increasing the value of the session_cached_cursors database parameter for DBNAME

I must admit I had never heard of AWR_FLUSH_EMERGENCY_COUNT, which also appeared in the output, but it is something I will look at shortly.

The session_cached_cursors warning is very interesting; looking at it in a bit of detail, it appears to be an accurate warning that needs some action. More on that shortly.

WARNING => RAC Application Cluster is not being used for database high availability on DBNAME instance

FAIL =>   Active Data Guard is not configured for DBNAME

Whilst Oracle may consider that RAC is fantastic, even they will not claim it is suitable for every database installation, so warning a customer that it is not in use is a bit over the top. Perhaps there should be another warning stating that you should really be running an Exadata stack – I am sure the sales teams would be in favour of that. The same argument goes for the Active Data Guard failure: that is not a FAIL, it is a WARNING at best.

So far then, I think it has produced some very interesting ideas and suggestions on what could do with checking out, and the concept of being able to run it centrally and co-ordinate results – the Collection Manager tool – is very good.

Where I start to worry is the new ability to write user-defined checks and the potential for the tool to become a behemoth. If I wanted a query to run regularly against a number of databases I would use OEM and a UDM (user-defined metric) – although now we call them metric extensions and they sit in the central repository. I would not want them running out of yet another tool.

However, in opposition to that argument, I can see a very good use of this by external auditors. They come on site wanting to run a set of scripts and report on the health (and security) of a selected system. The ability to put their scripts into a standard tool provided by Oracle could be very helpful to them.

As a side-thought, I had presumed that RDA, which incorporates the Health Check Verification Engine (HCVE) scripts, was now on its way out, but perhaps not. Whilst there is certainly a degree of similarity between the two tools, RDA is written in a very modular form (mostly in Perl) and I think it is here to stay. Indeed, the MoS overview of HCVE states: “The purpose of HCVE is to provide a generic engine to execute assessment checks for customers and on the Oracle server side environment using very abstract rules provided by the product experts. These abstract rules are visible to the global support community and customers via our knowledge content repository.”

The scripts provided within ORAchk are all accessible and provide a lot of good content. Two examples: the GoldenGate check scripts (stored in .cgrep/ogghc_11204.sql or ogghc_12101.sql) are absolutely excellent and give lots of ideas of what can be monitored. The second example is the warning about a low value for session_cached_cursors mentioned earlier. Looking at the scripts that had been run, I ferreted out the following SQL:

```sql
select 'percent_cursors_cached = '||round(to_number(decode(value, 0, 0, (100 * used / value)))) usage
from
  ( select max(s.value) used
    from   v$statname n,
           v$sesstat  s
    where  n.name = 'session cursor cache count'
    and    s.statistic# = n.statistic# ),
  ( select value
    from   v$parameter
    where  name = 'session_cached_cursors' );

percent_cursors_cached = 942

select 471/50*100 from dual;
```

This tells me that my parameter is only about 900% too small – although I will do more validation work on that.
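If the figure holds up after validation, the fix is a one-line parameter change (a sketch; the value of 1000 is an assumption to be sized against the 471 cached cursors actually observed):

```sql
-- hypothetical new value; session_cached_cursors is not dynamically
-- modifiable at system level, so this takes effect on the next restart
alter system set session_cached_cursors = 1000 scope = spfile;
```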

So overall I think that ORAchk is excellent, and it cannot harm to run it against selected production systems and review the output.

