Oracle DBA – A lifelong learning experience

Archive for the ‘11g new features’ Category

11.2.0.4 new features and an OPTIMIZER_DYNAMIC_SAMPLING change

Posted by John Hallas on December 9, 2013

On 27th August 2013, 11.2.0.4, the final release of 11GR2, was made available, along with a new features document. I will give a quick bullet list of the new features and then discuss one very important change that is not mentioned there.

  •  Oracle Data Redaction – provides a new ASO (cost £££) option to redact specified information, even at runtime, from within the database
  •  Trace File Analyzer and Collector – a diagnostic collection utility to simplify diagnostic data collection on Oracle Clusterware and Oracle Grid Infrastructure – looks well worth investigating
  •  RACcheck – The Oracle RAC Configuration Audit Tool – performs regular health checks as well as pre- and post-upgrade best practice assessments. Only written for Linux so it needed adapting for HPUX, and it needs the root password – an issue for many DBAs
  •  Database Replay Support for Database Consolidation – run lots of scenarios – batch, online, monthly processing – all at the same time, even though they were captured at different periods.
  •  Optimization for Flashback Data Archive History Tables – use the OPTIMIZE DATA clause when creating or altering a flashback data archive.

So the one that has not appeared in that list, probably because it is not a new feature as such, is an additional value for the init.ora parameter OPTIMIZER_DYNAMIC_SAMPLING. This comes into play when a table does not have any statistics and the parameter is enabled. The previous default setting of 2 meant dynamic statistics were used if at least one table in the statement had no statistics, with the sample based on 64 blocks; the value could range between 2 and 10, each step doubling the number of blocks sampled. The new value of 11 means that the optimizer will gather dynamic statistics automatically whenever it deems it necessary, based on the number of blocks it thinks appropriate.
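As a quick illustration (a minimal sketch – test the impact in your own environment before changing anything that matters), the parameter can be inspected and set at session level:

SQL> show parameter optimizer_dynamic_sampling

-- Let the optimizer decide when and how much to sample (11.2.0.4 and 12.1.0.1 onwards)
SQL> alter session set optimizer_dynamic_sampling = 11;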

My testing has shown a couple of anomalies between the two versions it exists on ( 11.2.0.4, 12.1.0.1). Read the rest of this entry »

Posted in 11g new features, 12c new features, Oracle | Leave a Comment »

Large CLOBs and attention to detail needed while reading Oracle Support notes.

Posted by John Hallas on October 3, 2013

This post was forwarded to me by Vitaly Kaminsky, who used to work with me but has now bettered himself elsewhere. He writes:

I have recently been involved with performance tuning of the database layer for a major Dutch website which was preparing for the “crazy days” of sales on the 2nd to 5th of October.

The general setup was as follows:
2-node physical RAC cluster with fast storage layer running ORACLE 11gR2 SE and ASM.

The important bit above is the “SE” which stands for Standard Edition and implies no performance packs, no AWR, no SQL tuning and no partitioning.

The shop application makes heavy use of Oracle Text functionality and the purpose of the tuning was to ensure we could get 12,000 orders per hour through the system. Each order creates a single insert and numerous updates of a CLOB in XML format. That was, in fact, all the information we managed to get from the vendor about where the application performance tuning should be focused.

As expected, after the first test runs, it became apparent that the stumbling block was literally a block of CLOB storage.  When there was enough padding space, the application ran fine and then, suddenly, the database would grind to a halt with “enq: HW – contention” waits. Read the rest of this entry »
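As an aside (a hypothetical sketch – the table and LOB column names are purely illustrative), the waits can be spotted in v$session_wait and, on Standard Edition where few other options exist, one common mitigation is to pre-allocate space to the LOB segment so the high-water mark moves in larger steps:

-- Who is currently stuck on high-water-mark enqueue waits?
SQL> select sid, event, seconds_in_wait
     from v$session_wait
     where event = 'enq: HW - contention';

-- Illustrative only: pre-extend the LOB segment behind ORDERS_XML.ORDER_DOC
SQL> alter table orders_xml modify lob (order_doc) (allocate extent (size 1G));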

Posted in 11g new features, Oracle | Leave a Comment »

SQL Plan Management – how to really utilize and not just implement

Posted by John Hallas on August 28, 2013

The title of this post is intentionally slightly unclear and hopefully it has intrigued people into viewing the post and, even better, adding their comments.

SQL Plan Management has been around since 11G came out, back in 2007. It does not require a tuning pack, so the package DBMS_SPM can be used without additional licensing, but if the SQL Tuning Advisor is used to generate new profiles via a tuning task then that does require a tuning pack license.
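As a reminder of the basic mechanics (a minimal sketch – the sql_id shown is just a placeholder), plans can be captured automatically or loaded from the cursor cache with DBMS_SPM:

-- Capture a baseline for every repeatable statement in this session
SQL> alter session set optimizer_capture_sql_plan_baselines = true;

-- Or load the plan of one specific statement from the cursor cache
SQL> variable n number
SQL> exec :n := dbms_spm.load_plans_from_cursor_cache(sql_id => '0abc123def456');
SQL> select sql_handle, plan_name, enabled, accepted from dba_sql_plan_baselines;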

There are plenty of articles available showing how to use SPM to manage a SQL statement and ensure it has the best execution plan. What intrigues me is how to use it on an ongoing basis, across a number of databases and across all standard (non ad-hoc) code – in other words, how to implement a site standard for the use of SPM that works both for the databases that have two dedicated DBAs monitoring activity continuously (type 1), for the databases we all have that pretty much run by themselves and need little maintenance (type 3), and for the multiple systems that lie in between those two types (type 2).

A colleague, Steve Senior, has produced a flow chart of how we might deliver SPM across all systems, but the stumbling block is how we manage changes on an ongoing basis: plans that must change because changing statistics (derived from data changes) require new execution plans; new plans that will probably need to be evolved after code changes; and the inclusion of totally new SQL statements, perhaps based on new tables which have been added to the schema. Read the rest of this entry »

Posted in 11g new features, Oracle | 2 Comments »

Rebuild of standby using incremental backup of primary

Posted by John Hallas on March 18, 2013

I have a long to-do list of things I want to test out and one is rebuilding a standby using an incremental backup from the primary. Then along comes a note from my ex-colleague Vitaly Kaminsky, who had recently been faced with exactly that problem when a customer relocated two primary 2-node RACs and their single-node standby databases to a new location and just happened to start the standby databases in read-only mode. Vitaly tells the story:

As you may know, read-only mode will prevent any redo logs being applied to the standby database, but on the surface everything looks OK – no errors, and the MRP0 process is running and showing “applying log” in v$managed_standby.

The only problem is – the recovery is “stuck” on the last log the database was trying to apply before it was opened in read-only mode.
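(A quick check that exposes this state – a minimal sketch, run on the standby:)

SQL> select process, status, thread#, sequence#
     from v$managed_standby
     where process like 'MRP%';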

Unfortunately, the customer did not notice the omission for over 2 weeks and by the time I had a chance to look at the environments there were about 50G of redo logs accumulated for each, and some of them were missing and/or corrupt, which ruled out SCPing the logs over to the standby server and registering them with the standby databases.

Another factor which allowed the standby databases to fall behind unnoticed was the absence of any error messages in the alert logs – every single log was shown as shipped and received.

In a case like this, the only option is to rebuild the standby database, and in the past I have done that using the traditional RMAN duplicate for standby routine. However, in this particular case I had two databases to rebuild – one small and one large – and the network between the primary cluster and the standby was slow as well.

For the small database I decided to use the Grid Control GUI-based wizard for creating the standby database; this process is quite straightforward and described in the documentation. For the large one, however, duplicating the database using RMAN would have been too slow, there might have been performance degradation during the run, and the maintenance window was too short for an out-of-hours run.

This was a perfect case to try the “incremental backup” approach. This method is described in a number of sources (if you Google it) but none of the “vanilla” cases worked for me.

I will not be listing the names and detailed output due to the production nature of the work – just the list of steps.

So, this is what I did at the end of the day:

PRE-REQUISITES:

Primary database can be single node or RAC and running OK.
No downtime of Primary is required.
All Dataguard settings are intact

Step-by-step:

1. Get the latest SCN from standby:

select to_char(current_scn) from v$database;

10615562421
2. Create an incremental backup on the Primary for all the changes since the SCN on the standby:

[oracle@primary backup]$ rman target /
connected to target database: PRIMARY (DBID=720063942)

RMAN> run
2> {
3> allocate channel d1 type disk;
4> allocate channel d2 type disk;
5> backup incremental from scn 10615562421 database format
6> '/tmp/backup/primary_%U';
7> release channel d1;
8> release channel d2;
9> }
3. Create a copy of the control file on the Primary:

alter database create standby controlfile as '/tmp/backup/stby.ctl';

4. SCP the backup files and standby control file to the standby server. A little tip: if you copy the backup files to a directory with the same name (like /tmp/backup here), your controlfile will know about them and you can bypass the registration step later on.

5. The next step is to replace the standby control file with the new one. This may sound simple, but it proved to be the trickiest part because the standby controlfile is OMF and in ASM. You will need to use RMAN for the restore operation:

– Switch the database to nomount, then:

restore controlfile from '/tmp/backup/stby.ctl';

– Mount the database.

At this point you have the controlfile with the information about the files as they are on the Primary side, so the next step is to register everything we have on the Standby side:

catalog start with '+data/standby/';

Check the output and answer YES to register any reported standby files.

– Shutdown immediate your standby instance.

RMAN> switch database to copy;

RMAN> report schema;

At this stage you should have a nice and clean list of actual standby files.

 

Now we are ready to apply our incremental backup to bring the standby in line with the Primary:

RMAN> recover database noredo;

 Because the online redo logs are lost, you must specify the NOREDO option in the RECOVER command.

You must also specify NOREDO if the online logs are available but the redo cannot be applied to the incrementals.

If you do not specify NOREDO, then RMAN searches for redo logs after applying the incremental backup, and issues an error message when it does not find them.

When the recovery completes, you may start the managed recovery process again:

SQL> alter database recover managed standby database using current logfile disconnect;

Provided all FAL settings are correct, your managed recovery will pick up all the logs generated on the primary since the incremental backup and you will have a fully synchronised standby again.
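A quick way to confirm the standby has caught up afterwards (a sketch – run these on the standby):

-- Apply and transport lag as reported by the standby itself
SQL> select name, value, time_computed
     from v$dataguard_stats
     where name in ('transport lag', 'apply lag');

-- Or check the highest applied log sequence per thread
SQL> select thread#, max(sequence#) last_applied
     from v$archived_log
     where applied = 'YES'
     group by thread#;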

Posted in 11g new features, Oracle | 4 Comments »

Using Datapump from SQLDeveloper

Posted by John Hallas on November 8, 2012

One problem that we all have is with exporting/importing between different versions of the datapump client. The following error is not uncommon

UDI-00018: Data Pump client is incompatible with database version 11.01.00.07.00

Coincidentally, while having some problems with this myself, a colleague, Vitaly Kaminsky, worked out a method of overcoming it by using SQLDeveloper, and below is his document describing how to do it. Any kudos to me please; any problems, contact him at uk.linkedin.com/pub/vitaly-kaminsky/20/434/244/

The “quick and dirty” way to save time while copying data, DDL, schemas and objects between Oracle databases using SQL Developer vs traditional Exp/Imp routines.

As every DBA knows, small daily tasks and requests like “will you please copy…” or “please refresh…” can quickly consume a considerable amount of time and effort, leaving you wondering where the day has gone. One of the most convenient ways to satisfy those requests is to use free tools like SQL Developer (and yes, there are many others, like Toad, but then you have to pay license fees).

Most of us have a considerable estate to look after, often consisting of some large production clusters and dozens, or hundreds, or thousands of test, development, integration, UAT and other databases, running on VMs or physical boxes.

When using traditional expdp/impdp routines, copying data and DDL between those small DBs may require more time for the setup than for the actual process of moving data. I would estimate the time required to check the filesystem and permissions, create directory objects and so on to be in the region of 20 to 30 minutes per request. That is the time you actually save by using the SQL Developer Database Copy feature, because the other bits, like creating the script and specifying schemas, will take about the same time.
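For comparison, this is the kind of one-off server-side setup the SQL Developer route avoids (a sketch – the directory path and schema name are purely illustrative):

SQL> create directory dp_copy as '/u01/app/oracle/dp_copy';
SQL> grant read, write on directory dp_copy to scott;
-- ...followed by expdp/impdp runs from the OS referencing DIRECTORY=dp_copy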

The actual movement of data by Database Copy is performed by SQL Developer running DDL and DML on the target system and pumping the I/O via the initiating workstation. This is the limiting factor and I would suggest using this method only if the actual volume of data does not exceed 500GB; otherwise the setup time-saving will be lost on I/O.

Using SQL Developer 3.1.06, the process itself is wonderfully simple; you just need to make sure your connections have been set up properly with sufficient privileges:

1. Navigate to Database Copy in Tools menu:       

2. Select Source and Target databases and whether you want to copy schemas or objects:

3. Select required objects (ALL for full schemas):

4. Select source schema(s):

5. Select objects to copy (this example shows the selection of system objects which is sufficient for the demo):

6. On the next step you can walk through the objects to apply any required filtering:

7. Proceed to summary:

8. Once you press FINISH, the process will start and you can monitor the progress in the log window (just an example of text here):

That’s it, all done.

The limitation of the above is that you can’t save the specs for subsequent reuse, but the whole purpose of this exercise is to save time for one-off requests.

 

 

Posted in 11g new features, Oracle | 1 Comment »

Permissions problem with 11.2.0.3 and tnsnames.ora

Posted by John Hallas on November 5, 2012

There is a bug documented in MoS regarding the setting of permissions by the root.sh script (which calls roothas.pl). This causes the grid home to be owned by root with group permissions given to the oinstall group:

app/gridsoft/11.2.0.3 $ls -ld

drwxr-x---  65 root       oinstall      2048 Feb 27  2012 . 

This means that any user who is not in the oinstall group is unable to run any programs, such as sqlplus, from that home. The bug reference and title is Bug 13789909 : SIHA 11.2.0.3 INSTALL CHANGES THE GRID HOME PERMISSION TO 750.

The bug is dismissed as not being a problem because nobody should be running executables from the grid home – they should be running them from the RDBMS home. A fair point, until you consider the location of the tnsnames.ora file. Any user owning a dblink needs access to the tnsnames file, and even if you link the entry in Grid/network/admin to RDBMS/network/admin the user still does not have access to the tnsnames.ora file.

This has only happened in 11.2.0.3 and only on standalone (SIHA) grid installs. It applies to HPUX and OEL5 as far as I am aware, although it was only reported against OEL. The resolution is easy enough – in our case it would be

chmod 755 /app/gridsoft/11.2.0.3

However I do think Oracle should address this as the bug it is and not ignore it.

Posted in 11g new features, security | 3 Comments »

Excellent Optimizer Statistics articles

Posted by John Hallas on April 12, 2012

For anybody who is interested in reading about optimizer statistics and gaining a clear understanding on what they can do and how they can be managed then I suggest reading the following two white papers

Part 1 – Understanding Optimizer Statistics

Part 2 – Best Practices for Gathering Optimizer Statistics

Part 2 contains the best, most easily understood explanation of the problems with bind variable peeking, and how they were addressed by adaptive cursor sharing, that I have seen.

Overall both documents are well written with good explanations and diagrams and I think anybody who has any interest in the Oracle Database engine and the tuning of databases for both consistency and performance should make these articles a must read. 

 

 

Posted in 11g new features, Oracle | Leave a Comment »

Speeding up the gathering of incremental stats on partitioned tables

Posted by John Hallas on January 4, 2012

11G introduced incremental  global stats and the table WRI$_OPTSTAT_SYNOPSIS$ contains synopsis data for use in maintaining the global statistics. This table can grow very large and Robin Moffat has produced a good blog  post about  the space issues  – note we both worked at the same site so it is the same DW being discussed by both of us.
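For reference (a minimal sketch – the owner and table names are illustrative), incremental global stats are switched on per table via a DBMS_STATS preference:

SQL> exec dbms_stats.set_table_prefs('DW', 'SALES_FACT', 'INCREMENTAL', 'TRUE');
SQL> select dbms_stats.get_prefs('INCREMENTAL', 'DW', 'SALES_FACT') from dual;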

Apart from the space usage that Robin refers to, another worrying aspect is the time taken when gathering stats on a partitioned table, most of which is spent running a delete statement:

DELETE FROM SYS.WRI$_OPTSTAT_SYNOPSIS$
WHERE SYNOPSIS# IN
      (SELECT H.SYNOPSIS#
       FROM   SYS.WRI$_OPTSTAT_SYNOPSIS_HEAD$ H
       WHERE  H.BO# = :B1
       AND    H.GROUP# NOT IN
              (SELECT T.OBJ# * 2
               FROM   SYS.TABPART$ T
               WHERE  T.BO# = :B1
               UNION ALL
               SELECT T.OBJ# * 2
               FROM   SYS.TABCOMPART$ T
               WHERE  T.BO# = :B1))

I will demonstrate the problem and a simple solution and you will be able to see the significant performance improvements achieved. Read the rest of this entry »

Posted in 11g new features, Oracle | 13 Comments »

UKOUG 2011 – day 3

Posted by John Hallas on December 9, 2011

The final day of the 2011 UKOUG conference and it was straight in at the deep end with Joel Goodman talking about automatic parallelism in 11GR2. The talk was full of information, as Joel’s talks normally are. He also had time to cover Parallel Bulk Update, which groups sets of rows into chunks. Each chunk can succeed or fail independently of the other chunks, which removes the ‘all or nothing’ approach normally seen with PDML. He has a good blog entry on this which is well worth perusing if you are interested. http://dbatrain.wordpress.com/2011/07/01/add-bulk-to-your-parallel-updates/

My site is just going down the road with Goldengate so the talk by Pythian’s Marc Fielding on a real-life Goldengate migration was very useful. This was a large financial institution where the system was crucial to business continuity and GG was to be used to provide a rapid fallback facility if things went wrong. The main thing I took away from the talk was how small-minded they must be not to provide adequate testing facilities for such a large project. Not being able to use full data sets and similar-sized hardware (OK, it was a 14TB database) does add a lot of risk and no small amount of frustration for the technicians involved in the migration. Some of the diagnostics that Marc talked about will be very useful to us and I was interested in the alternatives to supplemental logging which may be required if there is no primary key and it is difficult to identify a row specifically.

I did start to listen to another talk but after around 10 people had left I plucked up courage and made a hasty exit myself. It was just not for me.

The best presentation I saw at the conference was Connor Macdonald with a fresh approach to optimizer statistics. Connor is a real showman and his easy on-stage manner belies the degree of effort he must spend preparing his numerous slides. The set of slides associated with the ITIL process deserved a round of applause by itself, and indeed it received one. This was the second session I went to that mentioned the value of index key compression and the way it can be calculated by using ‘analyze index validate structure’. A very good presentation that provided food for thought.
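(For reference, a minimal sketch of that technique – the index name is hypothetical, and index_stats is only populated for your own session immediately after the analyze:)

SQL> analyze index scott.emp_name_ix validate structure;
SQL> select name, opt_cmpr_count, opt_cmpr_pctsave from index_stats;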

My final session was Mike Swing talking about database tuning for packaged apps. He had way too much content and rushed through it much too fast. As several people said to me afterwards, all he really recommended was getting faster disk and more memory.  I liked his presentation style and easy manner but it was a bit light on useful content.

So here endeth day 3. I think this was the conference I have enjoyed the most and got the most from. The presentations were of a top standard and even though I was only interested in the RDBMS stream I had plenty of choice for most time slots. I know that cancellations and changes are hard to avoid but there did seem to be a lot and that made planning harder than it should have been. I think my only constructive critique would be that there were a number of presentations repeated from last year (and some from other SIGs almost 2 years ago). I fully understand that a good presentation is  still a good presentation a year later and not everyone has the chance to have seen it but personally  I am not in favour of too much repeated material.

 

Posted in 11g new features, Oracle, UKOUG | 1 Comment »

UKOUG 2011 Part Deux

Posted by John Hallas on December 7, 2011

Day 2 of the UKOUG conference at the ICC in Birmingham and back into the fray.

First up was Thomas Presslie talking about Dataguard fast start failover. How he managed to demonstrate transactions and network connectivity using whisky and toilet paper could not be done full justice in a blog – it had to be seen to be believed.

It did make me want to do more with FSFO, especially noting how easy the setup was using OEM. However, the database is only part of the end solution, and my concern remains that failing it over to a second datacentre after a network flicker may leave the application stack in a mess. Coincidentally, I have a requirement to set up a second standby configuration cascaded from a physical standby, but keeping the third database perhaps one hour behind whilst the standby is in real-time apply mode with no lag. That might give us a chance to determine the status of the data before a logical corruption (user error) had occurred. Much more likely to be of value is flashback query, but we are going to look at both avenues. It is highly unlikely we would ever be in a position to flash back the database.

Julian Dyke then talked for an hour about RAC troubleshooting (mostly 11.2.0.2) and the time flew past. I made quite a few notes of things to think about. The pros and cons of putting the SCAN addresses in /etc/hosts (HPUX) to be used in the event of a DNS failure was one thought. Looking at the exectask function and the scripts used to call various functions was another action I took for myself. Another was a big list of asmcmd commands, some of which I did not recognise. I think they must have come in with 11GR2, which I have not really used myself although we are using it on site.

Tanel Poder’s biggest ever problem was next up. I had seen this presentation last year and knew the answer but how he got there was still interesting. The use of the HPUX command kitrace (similar to dtrace on Solaris – see reply below for more details) reminded me that I was going to look at that in some detail but have never got around to it. As my site is likely to be moving away from HPUX sooner rather than later perhaps there is not much point now.

After lunch John Beresniewicz was talking about ASH outliers. Quite mathematically based, which is always a challenge for me but he will be posting a script (possibly via Doug Burn’s blog) which he has developed as another means of dissecting and analysing ASH data.

Michael Salt’s talk on indexes was full of real-world examples and there were lots of nice little hints and tips, none of which were earth-shattering but all of which were good practice, and I found it a useful reminder of what I should be doing when looking at code. On the same theme, two slots later Tony Hasler was presenting a beginners’ guide to SQL tuning. I have never seen Tony present before but I really liked both his style and the content. A lot of information thrown in and good explanations of various autotrace outputs. I will definitely be downloading his presentation to run through it and see what I can put to further use. Whilst I do not think I am an expert in the field of SQL tuning, indeed far from it, I do like to think I know what to look for. Sometimes, listening to others, you realise in the same lecture both how much you already know and how little you actually follow best practices. There is no real substitute for looking at code and trying to improve performance. For a lot of us who have a very wide-ranging DBA role, that opportunity to practise does not appear often enough, which is why it is good to review and refresh your approach now and then.

At every conference I like to try and hear something new or touch on an area that is outside my day job. John King’s talk on Edition Based Redefinition was just that. I am not really in a position to take advantage of the ability to let users run differing sets of code and then migrate them across to a new release in a seamless manner, all without any outages or interruption to service. However I could see how useful it could be, especially in the world of the Apps DBA, say for EBS. Apparently no less a person than Tom Kyte referred to EBR as the ‘killer feature’ within 11GR2.  John had an easy, comfortable manner  and the time flew past, so much so that he had to be dragged kicking and screaming from the stage by the next presenter.

All in all another good day, rounded off with a couple of beers with work colleagues and a few presenters, all with plenty of Oracle chat included.

Posted in 11g new features, Oracle, UKOUG | 6 Comments »

 