The first day of the UKOUG conference at the ICC in Birmingham, and there were a lot of familiar faces around. Due to heavy traffic I missed the first presentation I wanted to see, which was from Kyle Hailey on SQL tuning; I will have to download the presentation later. I did make it to Greg Rahn on the SQL Monitoring report and that was well worth the time spent. Whilst I am familiar with the functionality, he opened my eyes by providing a number of examples of what he would look at first to try to determine a better resolution. His presentation style was comfortable and he had a small number of examples which covered quite a lot of scenarios. He did point out it is only to be used if you have paid for the Tuning and Diagnostics packs, but as it is turned on by default I did wonder how many use it without any further consideration.
Archive for the ‘11g new features’ Category
Posted by John Hallas on December 6, 2011
Posted by John Hallas on October 19, 2011
Whilst ensuring that Resource Manager was working properly I noticed a problem: the database seemed to be dropping out of the plan I wanted to run (DW_PLAN) and returning to the default plan during the automatic maintenance windows.
The fix for that is to set the parameter resource_manager_plan='FORCE:DW_PLAN', and the DW_PLAN is then retained. The reason the default plan is switched in is so that the scheduler knows it has sufficient resources to get its jobs done and will not be artificially constrained. If your plan does not limit the resources the scheduler requires, then there is no harm in making your normal plan the plan for 24x7.
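The fix above can be applied dynamically; the parameter's full name is resource_manager_plan, and the FORCE: prefix is what stops the maintenance windows switching the plan out. A minimal sketch (DW_PLAN is the site-specific plan name from this post):

```sql
-- Pin the plan so the automatic maintenance windows cannot switch it out
ALTER SYSTEM SET resource_manager_plan = 'FORCE:DW_PLAN' SCOPE=BOTH;

-- To allow window-based plan switching again, drop the FORCE: prefix
ALTER SYSTEM SET resource_manager_plan = 'DW_PLAN' SCOPE=BOTH;
```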
I had noticed that the plan was defaulting from a couple of entries in the alert log, but wanted to get an exact listing of what was happening. I used the XML logs that come with the ADR framework and the X$DBGALERTEXT view.
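A query along these lines can pull the plan-switch history out of the XML alert log (a sketch only: X$DBGALERTEXT requires a SYS connection, and the LIKE filter is an assumption about the message wording):

```sql
-- Search the XML alert log for Resource Manager plan switch messages
SELECT originating_timestamp, message_text
FROM   x$dbgalertext
WHERE  message_text LIKE '%Resource Manager plan%'
ORDER  BY originating_timestamp;
```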
Posted in 11g new features, Oracle | Tagged: automatic maintenance window, dbms_scheduler.set_attribute, DEFAULT_MAINTENANCE_PLAN, Resource Manager, resource_manager_plan='FORCE:', X$DBGALERTEXT | Leave a Comment »
Posted by John Hallas on September 23, 2011
I was pleased with the presentation on ADR I gave yesterday at the Unix SIG in Reading. The timing was bang on at 60 minutes and the audience had not seen or used many of the features I discussed, so that was a bonus. I had the benefit of being able to give the same talk two days earlier to a team of DBAs at work, and I learned a lot from the comments I received, which made for a better presentation yesterday.
One question I was asked afterwards was 'Are Health Monitor and Support Workbench Enterprise Edition features, or are they also available in Standard Edition?'. As we do not use Standard Edition I could only hazard a guess that these are tools that are advantageous to Oracle Support as well as to users, and therefore I figured they would be included in both editions. I have since looked at the list of features for 11gR1 and can see nothing to suggest that the features are separately licensable or differ between editions. If anybody knows different then please feel free to correct me.
The presentation is available in PDF format from the UKOUG site, provided you have member access, that is.
Posted by John Hallas on September 16, 2011
I am presenting a talk on the use of the Automatic Diagnostic Repository at the UKOUG SIG in Reading on 22nd September 2011. I will be covering, amongst other things, the management of files, the Health Monitor, incidents and problems, and the Support Workbench utility. I am hoping that, whilst everyone will probably already be aware of ADR, some of the things I mention might be new or not have been fully looked at before. The Health Monitor was new to me, and the ease of use of the Support Workbench when raising an SR is certainly the way forward.
I will also be quite critical of how ADR has been delivered, particularly in respect of the management of diagnostic data and the trace and alert logs that are generated. ADR currently lacks features such as management of the alert log, which still needs external housekeeping using commands such as logrotate on Unix. The standard alert log has now been replaced by an alert log in XML format (log.xml); the old text alert log is only created for backward compatibility and is not guaranteed to be available in future releases. Listener logs are not purged, and there are ongoing problems in removing core dumps (on HP-UX at least). Overall I will be suggesting that ADR is not as automatic as it could be, but some of the additional features beyond file management are well worth investigating.
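The external housekeeping mentioned above might look something like this logrotate fragment (hypothetical: the path must match your own diagnostic_dest layout, and the retention values are illustrative; copytruncate matters because the database keeps the alert log open):

```
# Hypothetical /etc/logrotate.d/oracle_alert entry
/u01/app/oracle/diag/rdbms/*/*/trace/alert_*.log {
    weekly
    rotate 8
    compress
    copytruncate
    missingok
}
```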
I look forward to meeting fellow UKOUG members and hopefully I may learn a few new things about ADR myself if I get good audience interaction.
Posted by John Hallas on September 6, 2010
After applying PSU 4 to the Oracle RDBMS home, we saw the error below whilst extending a datafile:
SQL> ALTER DATABASE DATAFILE '+DATA/sid/datafile/system.276.723752265' RESIZE 1524816K;
ALTER DATABASE DATAFILE '+DATA/sid/datafile/system.276.723752265' RESIZE 1524816K
*
ERROR at line 1:
ORA-01237: cannot extend datafile 2
ORA-01110: data file 2: '+DATA/sid/datafile/system.276.723752265'
ORA-17505: ksfdrsz:1 Failed to resize file to size 190602 blocks
ORA-15061: ASM operation not supported
Solution: the problem is that we are using separate installations for the ASM and RDBMS binaries, and there is a conflict between the two if PSU 4 has not been applied to both sets of binaries. Therefore the PSU needs to be applied to the ASM binaries as well.
Not a major issue but a trap that is easy enough to fall into.
a) Update OPatch to the latest version
b) Apply PSU 4 patch (9654987)
SQL> ALTER DATABASE DATAFILE '+DATA/sid/datafile/system.283.723752313' RESIZE 1524816K;
Database altered.
Metalink Id: ORA-15061 reported while doing a file operation with 11.1 or 11.2 ASM after PSU applied in database home [ID 1070880.1]
Posted by John Hallas on September 2, 2010
I recently posted on the oracle-l mailing list about how to stop a denial of service attack. My message is below.
We had an application that repeatedly connects to the database via a Java connection pool fail because the account had become locked. The application kept trying, the database did not allow the connection, and we ended up with thousands of 'dead' processes causing the Unix server to hang as all memory was used up.
The obvious thing to fix in our case was some form of application logic to recognise that failed connections had been made and stop the repeated connection attempts.
However, this could also be used as a denial of service attack. What steps could we take to reduce that risk? The problem as I see it is that the database has reacted correctly and there is not much more we can do at the database level. However, I am always open to suggestions.
I received two responses, both of which were valuable. Freek D'Hooge suggested enabling dead connection detection by using the SQLNET.EXPIRE_TIME setting, and another mail from Grzegorz Goryszewski directed me to the 11g new listener connection rate limiter feature. I set up a test to use both features and here are the results.
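For reference, the two suggestions translate into configuration fragments along these lines (a sketch only: the host name and the numeric values are illustrative and need tuning for your environment):

```
# sqlnet.ora on the server: probe sessions every 10 minutes (value in
# minutes) so dead client connections are detected and cleaned up
SQLNET.EXPIRE_TIME = 10

# listener.ora: the 11g connection rate limiter caps inbound
# connection attempts per second on rate-limited endpoints
CONNECTION_RATE_LISTENER = 10
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521)(RATE_LIMIT = yes))
  )
```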
Posted by John Hallas on June 16, 2010
Our site policy is to enable flashback logging on our production databases unless there is a good reason not to. Two examples are the data warehouse, where the volume is prohibitive, and a highly active PeopleSoft database where performance is a primary consideration and we suffer from the "flashback buf free by RVWR" wait event as the database cannot write the flashback logs quickly enough.
On the same system we had experimented with disabling flashback against two tablespaces which contained objects of a transient nature that could be rebuilt if necessary. We finally agreed that the overhead of flashback was too heavy balanced against the likelihood of us ever using it, and so we disabled it at the database level.
However, when we are applying application code changes during an agreed outage we bounce the database and restart it with flashback enabled. Then, when the changes have been applied and basic testing has taken place, we can quickly flash the database back to a clean point with a minimum of fuss in the event that the changes have not produced the desired results. If the bundle is OK we disable flashback logging dynamically without a further database bounce. We find this an excellent belt-and-braces method which has little overhead or manageability cost.
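The bounce-and-enable flow described above can be sketched as follows (assumptions: pre-11gR2, where FLASHBACK ON requires the database in MOUNT state, which is why the bounce is needed; the flashback command itself is shown commented as its target would be site-specific):

```sql
-- After shutting down for the outage window:
STARTUP MOUNT;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;

-- If testing fails, rewind to the clean point and open with RESETLOGS:
-- FLASHBACK DATABASE TO TIMESTAMP ...;

-- If the bundle is OK, disable logging dynamically - no further bounce:
ALTER DATABASE FLASHBACK OFF;
```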
During a recent bundle release we did want to flash back, but had some problems.
Posted by John Hallas on January 28, 2010
The ASM version has to be equal to or higher than the highest version of the databases that are using it, and the compatibility settings have to be correct.
PSU 1 (Oct 2009) did not enforce that requirement. PSU 2 (Jan 2010) does check.
We determined this because we do not always apply the latest PSU against the ASM binaries but we do against the RDBMS code. Today the following sequence of events took place along with the associated error message.
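A quick way to check the compatibility settings mentioned above is to query the disk group attributes from the ASM instance (standard V$ASM_DISKGROUP columns; the aliases are just for readability):

```sql
-- Compare the disk group compatibility attributes against the
-- RDBMS version before patching or resizing
SELECT name,
       compatibility          AS asm_compat,
       database_compatibility AS db_compat
FROM   v$asm_diskgroup;
```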
Posted by John Hallas on January 22, 2010
My first public presentation is over now and whilst I was very nervous beforehand I felt quite comfortable once I started. To anybody who was there, thanks for putting up with me.
I promised to upload the contents of a .profile we use for the oracle account as that includes a number of useful functions and aliases. This is rolled out to every database server to ensure that we have a similar feel to every server.
We also have the oratab files set up so that the primary database is first (if there is more than one), ASM is next and the Grid agent home is next. That way the default SID setup when logging in is the main database we are likely to be using.
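A hypothetical sketch of one such .profile helper: because the primary database is first in oratab, the first non-comment entry can become the default ORACLE_SID at login. The function name, paths and SIDs here are all illustrative, not the actual .profile contents:

```shell
# Default to the real oratab; overridable for the demo below
ORATAB=${ORATAB:-/etc/oratab}

first_sid() {
  # First field of the first non-comment, non-blank oratab line
  awk -F: '!/^#/ && NF { print $1; exit }' "$ORATAB"
}

# Demo against a sample oratab laid out as described above
ORATAB=$(mktemp)
cat > "$ORATAB" <<'EOF'
# oratab: primary database first, then ASM, then the Grid agent home
PROD1:/u01/app/oracle/product/11.2.0/db_1:Y
+ASM:/u01/app/oracle/product/11.2.0/grid:Y
agent:/u01/app/oracle/agent11g:N
EOF
export ORACLE_SID=$(first_sid)
echo "Default SID: $ORACLE_SID"
rm -f "$ORATAB"
```

Running it prints `Default SID: PROD1`, i.e. the primary database is picked up without the DBA having to choose a SID by hand.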
Posted by John Hallas on January 6, 2010
I posted the mail below on the Oracle-L mailing list yesterday and was struck by the response given by many.
I believe that there is a fundamental flaw in how flashback is managed in a database.
If I make the decision, based on business requirements and technical reasons, that I want flashback logging to be enabled for a database then I would expect that to remain the situation.
However, Oracle can disable flashback without really informing the user at all. Yes, there is a message in the alert log.
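Rather than relying on spotting the alert log message, the current state can be checked directly:

```sql
-- Spot-check whether flashback logging is still enabled
-- (returns YES, NO or, from 11gR2, RESTORE POINT ONLY)
SELECT flashback_on FROM v$database;
```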