Oracle DBA – A lifelong learning experience

Xmas day – 150 hits. What is wrong with the world?

Posted by John Hallas on December 30, 2016

Yes, very tongue in cheek. I know not everyone celebrates Xmas.

I was still surprised though. This is what was viewed:

Best wishes for 2017 to all my readers.

[screenshot: Xmas Day page views]

Posted in Oracle | Leave a Comment »

RMAN checksyntax function

Posted by John Hallas on December 29, 2016

I was looking at the RMAN DEBUG options and came across the CHECKSYNTAX function, which I had not used before.

Firstly a quick recap on the DEBUG option.

This can be called using the following syntax:

rman target / catalog rman12g1/xxxx@rmancat debug trace=rmantrace.log cmdfile=backup.rcv

or

rman target / catalog rman12g1/xxxx@rmancat debug trace=rmantrace.log

and then at the prompt run RMAN> @backup.rcv (or just type in your run block of commands)

There are a number of options to DEBUG and one of the error messages lists them out quite neatly

RMAN-01009: syntax error: found "integer": expecting one of: "all, duplicate, recover, restore, resync, sql"

To be honest, if I were tracing I would just stick with the DEBUG=ALL format. DEBUG=SQL gives all the internal commands that RMAN calls and could be interesting if you were doing a deep dive into RMAN functionality.

Anyway, back to the CHECKSYNTAX option.

I run it against an edited version of the command file used above, into which I have introduced a deliberate error:

rman target / catalog rman12g1/xxxx@rmancat checksyntax cmdfile=backup.rcv 

Recovery Manager: Release 12.1.0.2.0 - Production on Wed Dec 28 10:22:20 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: T12TEST (DBID=1543168240)
connected to recovery catalog database
RMAN> run {
2> sql "alter session set nls_date_format=''YYYY-MM-DD:HH24:MI:SS''";
3> allocate channel c1 device type disk format '/app/oracle/backup/backup_db_%d_S_%s_P_%p_T_%t';
4> allocate channel c2 device type disk format '/app/oracle/backup/backup_db_%d_S_%s_P_%p_T_%t';
5> backup database INCLUDE CURRENT CONTROLFILEs;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "controlfile"
RMAN-01008: the bad identifier was: CONTROLFILEs
RMAN-01007: at line 6 column 33 file: backup.rcv

Note the file backup.rcv has a blank line as its first line, which confuses the line numbering (RMAN reports the error at line 6, while the echoed commands only go up to 5>).

Pretty neat. I edit the file and put a different error in – a much more common missing semi-colon:

RMAN> run {
2> sql "alter session set nls_date_format=''YYYY-MM-DD:HH24:MI:SS''";
3> allocate channel c1 device type disk format '/app/oracle/backup/backup_db_%d_S_%s_P_%p_T_%t';
4> allocate channel c2 device type disk format '/app/oracle/backup/backup_db_%d_S_%s_P_%p_T_%t'
5> backup
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "backup": expecting one of: "auxiliary, connect, format, maxopenfiles, maxpiecesize, parms, rate, send, to, comma, ;"
RMAN-01007: at line 6 column 1 file: backup.rcv

Overall not a mind-shatteringly exciting find, but something that might be of use one day.
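
One obvious use is as a pre-flight check in a backup wrapper. A minimal sketch, assuming the connect string and file names used above, and relying on RMAN returning a non-zero exit code when the parse fails:

#!/bin/ksh
# validate the command file before submitting the real backup
rman target / catalog rman12g1/xxxx@rmancat checksyntax cmdfile=backup.rcv
if [ $? -ne 0 ]; then
   echo "backup.rcv failed CHECKSYNTAX - aborting" >&2
   exit 1
fi
rman target / catalog rman12g1/xxxx@rmancat cmdfile=backup.rcv log=backup.log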

Posted in Oracle, RMAN | 6 Comments »

Performance problems with OEM AWR warehouse

Posted by John Hallas on December 20, 2016

The Enterprise Manager AWR Warehouse is designed to hold performance data from multiple databases for long-term analysis. It promises to save storage and improve performance on your production systems, and in that it is indeed correct. However, the warehouse itself does not seem to be performant when taking in multiple sources and retaining them long-term – 400 days in our case. Why 400 days is an obvious question. Primarily because we are a retail organisation and Easter falls on a different date each year, so year-on-year comparisons need more than a year of history.


The AWR repository database is performing poorly during the insert stage of the upload process.
Just to quickly summarise the process:
  • A dmp file is extracted on the source database and transferred across to the AWR server
  • The dmp file is then imported into a temporary schema called AWR$XXXXXX (this loads quickly)
  • This data is then inserted into the main AWR tables inside the SYS schema. It is this stage that is slow (a quick way to watch it is sketched below).
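
A minimal sketch of how that slow stage can be watched while a load runs – my own illustration rather than anything from the AWR Warehouse itself, and it assumes the inserts run under the AWR$ temporary schemas (adjust the filter if yours run as SYS):

select ash.sql_id, ash.event, count(*) as samples
from   v$active_session_history ash
       join dba_users u on u.user_id = ash.user_id
where  u.username like 'AWR$%'            -- the temporary load schemas
and    ash.sample_time > sysdate - 1/24   -- last hour of samples
group by ash.sql_id, ash.event
order by samples desc;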

In order to completely isolate the issue, we altered a parameter so only one AWR file gets loaded at once, cutting any contention/locking issues out of the equation.


Posted in 12c new features, Oracle | 3 Comments »

Reorganising data within tablespaces

Posted by John Hallas on December 19, 2016

I am currently interested in managing storage on our Exadata platform which is primarily used for running a data warehouse.

The ongoing challenge is that most of the large tablespaces use a bigfile datafile and we insert data using direct-path (APPEND) loads. Because direct-path inserts always load above the high water mark, space freed lower down is never reused, leaving gaps within the datafile while it constantly grows. A recent example showed a 7Tb datafile containing only 2Tb of data, and yet the ASM diskgroups were reporting we were running out of space.

What I intend to share here is how we are reorganising objects to reclaim some of that space, and I will include the scripts I am using. I am fully aware of the DBMS_REDEFINITION package and also of the capabilities within Oracle Enterprise Manager, but I decided to stick to a method where it was all under my control and I knew exactly where I was. Daily batch on this system can run for up to 24 hours and is normally between 16 and 20 hours long, so I had to be aware of when certain tablespaces were not being used.

I have been working through a tablespace at a time so that is the process I will demonstrate – using a tablespace DW_BIA_RTL_TBS_BKP as an example

From a standard script – of which there are many versions around on the net – to show tablespace usage, I select the tablespace I am interested in.
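
Each pass then boils down to the same shape: move the segments, rebuild their indexes, and hand the reclaimed space back. A hedged sketch with illustrative object names (not the real scripts):

alter table dw_owner.big_fact_table move tablespace DW_BIA_RTL_TBS_NEW parallel 8;
-- a move leaves the table's indexes UNUSABLE, so rebuild them straight after
alter index dw_owner.big_fact_pk rebuild tablespace DW_BIA_RTL_TBS_NEW parallel 8;

-- once the old bigfile tablespace is empty, shrink it back
-- (bigfile tablespaces can be resized directly)
alter tablespace DW_BIA_RTL_TBS_BKP resize 2T;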

Posted in Oracle | Leave a Comment »

Resolving two errors on Datapump export – parameter cluster=N/Y

Posted by John Hallas on December 5, 2016

I attempt to export a table with 76,000 rows using a relatively standard datapump command

expdp schemas=ODI_TEMP cluster=y directory=MCB content=ALL dumpfile=OEC_%U.dmp include=table:"IN('ORDM_ERROR_COUNTS')" parallel=8 logfile=OEC.log

It fails with a fatal error and also produces a warning

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management and OLAP options
Starting "ODI_TEMP"."SYS_EXPORT_SCHEMA_03":  odi_temp/******** schemas=ODI_TEMP cluster=y directory=MCB content=ALL dumpfile=OEC_%U.dmp include=table:IN('ORDM_ERROR_COUNTS') parallel=8 logfile=OEC.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 41 MB
>>> DBMS_AW_EXP: Ignoring BIA_RTL.MRDW_SLS_MRGN_AW
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-31693: Table data object "ODI_TEMP"."ORDM_ERROR_COUNTS" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "/zfs/exports2/mcb/dp/OEC_01.dmp" for write
ORA-19505: failed to identify file "/zfs/exports2/mcb/dp/OEC_01.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
>>> DBMS_AW_EXP: BIN$IrRPsbepeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbereYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeteYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeveYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbexeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbezeYXgU60IBArnYQ==$0 not AW$
ORA-31694: master table "ODI_TEMP"."SYS_EXPORT_SCHEMA_03" failed to load/unload
ORA-31617: unable to open dump file "/zfs/exports2/mcb/dp/OEC_01.dmp" for write
ORA-19505: failed to identify file "/zfs/exports2/mcb/dp/OEC_01.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-19505: failed to identify file "/zfs/exports2/mcb/dp/OEC_01.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3


The fatal error is obviously a problem that needs resolving, and I attempt that with all sorts of things: changing the directory permissions, ensuring I can write a file into the directory, trying a variety of directories – all ending with the same failure message. What is even more odd is that 2 of the 8 dump files I want are being created.

-rw-r-----+ 1 oracle dba            4096 Dec  3 14:58 OEC_01.dmp
-rw-r-----+ 1 oracle dba           24576 Dec  3 14:59 OEC_02.dmp
-rw-r--r--+ 1 oracle dba            2200 Dec  3 14:59 OEC.log

I must mention at this point that I am using an 8 node RAC cluster. You may have noticed the cluster=y parameter in my expdp command. I would not normally enter that, as it is the default anyway; I just used it to demonstrate the issue.

The answer is of course that the directory I am trying to write into does not exist across all 8 nodes. In fact I have jumped onto node 1, as I normally do, and the directory does not actually exist on any of the other 7 nodes. So the failure message is entirely accurate but quite hard to track down. I re-run the command with cluster=N, meaning it will still attempt to write 8 parallel files, but all on the current node.
Master table "ODI_TEMP"."SYS_EXPORT_SCHEMA_04" successfully loaded/unloaded
******************************************************************************
Dump file set for ODI_TEMP.SYS_EXPORT_SCHEMA_04 is:
/zfs/exports2/mcb/dp/OEC_01.dmp
/zfs/exports2/mcb/dp/OEC_02.dmp
Job "ODI_TEMP"."SYS_EXPORT_SCHEMA_04" successfully completed at Sat Dec 3 15:12:46 2016

Excellent – the job has worked. But it still only created 2 dump files; how can that be? I mentioned the table only had 76K rows in it, so Oracle knows it is overkill to run 8 streams and tones it all down.

The warning >>> DBMS_AW_EXP: BIN$IrRPsbepeYXgU60IBArnYQ==$0 not AW$ is normally caused by objects in the DBA recycle bin.

Starting "SYS"."SYS_EXPORT_SCHEMA_01":  /******** AS SYSDBA schemas=ODI_TEMP_TST1 cluster=N directory= content=ALL dumpfile=JH%U.dmp include=table:IN('E$_DWB_POINTS_TRANSACTION') parallel=4 logfile=jh.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 800 MB
>>> DBMS_AW_EXP: BIN$IrRPsbepeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbereYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeteYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeveYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbexeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbezeYXgU60IBArnYQ==$0 not AW$
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
>>> DBMS_AW_EXP: Ignoring BIA_RTL_MCU1.MRDW_SLS_MRGN_AW
>>> DBMS_AW_EXP: BIN$IrRPsbepeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbereYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeteYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbeveYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbexeYXgU60IBArnYQ==$0 not AW$
>>> DBMS_AW_EXP: BIN$IrRPsbezeYXgU60IBArnYQ==$0 not AW$
. . exported "ODI_TEMP_TST1"."E$_DWB_POINTS_TRANSACTION"  696.8 MB 3213090 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

select count(*) from dba_recyclebin;

--640 objects

purge dba_recyclebin;

Re-run the export. (Note the datapump command is different from the one at the beginning of the post; I ran this on a test environment just to prove the point and to negate the need to raise a change.)


Starting "SYS"."SYS_EXPORT_SCHEMA_01":  /******** AS SYSDBA schemas=ODI_TEMP_TST1 cluster=N directory= content=ALL dumpfile=JH%U.dmp include=table:IN('E$_DWB_POINTS_TRANSACTION') parallel=4 logfile=jh.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 800 MB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ODI_TEMP_TST1"."E$_DWB_POINTS_TRANSACTION"  696.8 MB 3213090 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Mon Dec 5 08:37:04 2016


All completed successfully. Having spent so much time trying to fix the parallel issue, and having written this blog entry up as an aide-memoire to myself as much as anything else, I am unlikely to forget how to address ORA-31617 next time I see it.
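
For next time, a ten-second pre-flight check would have saved all of that. A sketch with made-up node names – the path is the one from the failing export:

# confirm the dump directory exists on every node before using cluster=Y
for node in rac01 rac02 rac03 rac04 rac05 rac06 rac07 rac08; do
   ssh $node "ls -ld /zfs/exports2/mcb/dp"
done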


Posted in Oracle | 1 Comment »

Flashback database – don't forget the standby

Posted by John Hallas on September 21, 2016

Today an application team had a problem with a production system and they wanted the database restored to a point a couple of days ago. It is a configuration management system and they figured they could lose any changes made since then. Their call.

I did all the good things

  • Checked we had a change in place
  • Blacked out primary and standby in OEM
  • Checked flashback was on –
select flashback_on from v$database;
  • Checked how far back we could flash back –
set lines 120
col oldest_flashback_scn for 99999999999999999999
col oldest_flashback_time for a30
select
oldest_flashback_scn,
to_char(oldest_flashback_time, 'DD-MM-YYYY HH24:MI:SS')
from v$flashback_database_log;
  • Checked that standby had flashback on.

I stopped managed recovery on the standby, shut down that database, then stopped the primary, mounted it and flashed back to my desired time. Everything was fine.

I went to standby, mounted it and ran

flashback standby database to timestamp
to_timestamp ('19/09/2016 12:00:00','dd/mm/yyyy hh24:mi:ss');
ERROR at line 1:
ORA-38729: Not enough flashback database log data to do FLASHBACK.


That stumped me. Obviously I knew what the message meant, and I knew I had not run the script to see how far we could flash back to on the standby – but why did we have a problem?

Flashback retention was set to 1440 minutes – 24 hours – on both primary and standby, but there had been no space pressure on the primary, so it had retained sufficient flashback logs to go back 6 days if required.

It did not take a lot of digging around to find the explanation. Normally we have one database per server (physical or virtual) in production, mapped to a dedicated standby server. In this case we had 3 databases running. Due to performance considerations we run 2 primaries and a standby on one side, and 2 standbys and a primary on the other. This database was not the key/busiest database, so its standby was sharing a server with another standby and the primary of the biggest database. That primary had generated a lot of logfiles, which consumed space, and the flashback logs had been tidied away to meet the 1440-minute parameter setting.

A rebuild of standby is under way currently.

I always try to learn lessons from incidents like this. In this case there were two unusual requirements – flashback production and go back 48 hours.

My lessons learned

Set db_recovery_file_dest_size individually for each database where we have more than one on a server, sized appropriately to each database.

Could we have some sort of automated routine that creates a guaranteed restore point every hour and then drops it 48 hours later? That needs thinking about, as we could risk running out of space. A rough sketch of the idea follows.
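
For the record, the shape such a routine might take – an untested sketch, with the naming convention and the 48-hour window mine, and the FRA space risk exactly as noted above:

begin
   -- create this hour's guaranteed restore point...
   execute immediate 'create restore point GRP_' ||
       to_char(sysdate, 'YYYYMMDDHH24') || ' guarantee flashback database';
   -- ...and drop any of ours older than 48 hours
   for rp in (select name
              from   v$restore_point
              where  name like 'GRP\_%' escape '\'
              and    guarantee_flashback_database = 'YES'
              and    time < systimestamp - interval '48' hour)
   loop
      execute immediate 'drop restore point ' || rp.name;
   end loop;
end;
/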


Posted in Oracle | 3 Comments »

Commas at the beginning or end of a sql code line

Posted by John Hallas on August 18, 2016

Back in Nov 2015 I commented on an Oracle-L discussion, 'What happened to SQL*Developer SQL Formatter', asking why people liked to see a comma at the beginning of the line in SQL code. There was a fair bit of input on how it was easier to use and amend – 'think of it like bullet points' was one comment.

Fair enough, I thought, and I have used that technique, when I remember, ever since.

Then I saw a post by Connor McDonald on how the syntax formatter – or your own hand-formatted code – might give the wrong results if you are not careful about the placing of commas.

Looks like it is back to the drawing board for me then, and commas on the right.
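
For anyone who has not seen the failure mode, this is the classic shape of it (my own illustration, not Connor's example). Lose a comma while shuffling lines and Oracle does not complain – the next column simply becomes an alias:

-- intended to select three columns; the comma after LAST_NAME was lost in an edit
select first_name,
       last_name
       salary        -- no error: LAST_NAME is now aliased as SALARY
from   employees;

The query runs happily, and the column headed SALARY silently contains surnames.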


Posted in Oracle | Leave a Comment »

ASM disks – lsdg compared with the v$asm_diskgroup view

Posted by John Hallas on July 8, 2016

What is the difference between the summaries of disk space on these two systems? Look at the Usable_file_MB column from lsdg and the free-space figures from the query.

System 1

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU   Total_MB   Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  4194304   72769536   2031032           433152          798940              0             N  DATA_PEX0/
MOUNTED  NORMAL  N         512   4096  4194304  128286720  20004028          2672640         8665694              0             N  DATA_PEX1/

                                                     Mirror                      Percent Minimum Maximum
           Group                 Diskgroup     Total  Req'd     Free   Percent Disk Size Percent Percent  Disk
     Group Name       State      Redundancy       GB     GB       GB Imbalance  Variance    Free    Free Count
---------- ---------- ---------- ---------- -------- ------ -------- --------- --------- ------- ------- -----
         1 DATA_PEX0  MOUNTED    NORMAL       71,064    423    1,560       1.4        .0     2.2     3.5   168
         2 DATA_PEX1  MOUNTED    NORMAL      125,280  2,610   16,925        .1        .0    15.6    15.6    48

System 2

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576   2621412   595744                0          595744              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576   1048568   700029                0          700029              0             N  FRA/

                                                     Mirror                      Percent Minimum Maximum
           Group                 Diskgroup     Total  Req'd     Free   Percent Disk Size Percent Percent  Disk
     Group Name       State      Redundancy       GB     GB       GB Imbalance  Variance    Free    Free Count
---------- ---------- ---------- ---------- -------- ------ -------- --------- --------- ------- ------- -----
         1 DATA       MOUNTED    EXTERN        2,560      0      582        .2        .0    22.7    22.8    20
         2 FRA        MOUNTED    EXTERN        1,024      0      684        .0        .0    66.8    66.8     8

System 1 is an Exadata stack with NORMAL redundancy, whereas System 2 uses EXTERNAL redundancy. All of our systems bar Exadata are configured with dynamic multi-pathing and external redundancy to ensure high availability – we allow the SAN to manage redundancy.

In System 1, when using the second query to interrogate the diskgroups, it would appear that we have 1.5Tb of free space in DATA_PEX0, and yet the lsdg command indicates we only have 800Gb free. Quite a significant difference when we have a weekly growth rate of ~250Gb in that diskgroup.

SELECT g.group_number  "Group"
,      g.name          "Group Name"
,      g.state         "State"
,      g.type          "Type"
,      g.total_mb/1024 "Total GB"
,      g.free_mb/1024  "Free GB"
,      100*(max((d.total_mb-d.free_mb)/d.total_mb)-min((d.total_mb-d.free_mb)/d.total_mb))
         /max((d.total_mb-d.free_mb)/d.total_mb) "Imbalance"
,      100*(max(d.total_mb)-min(d.total_mb))/max(d.total_mb) "Variance"
,      100*(min(d.free_mb/d.total_mb)) "MinFree"
,      100*(max(d.free_mb/d.total_mb)) "MaxFree"
,      count(*)        "DiskCnt"
FROM   v$asm_disk d, v$asm_diskgroup g
WHERE  d.group_number = g.group_number
AND    d.group_number <> 0
AND    d.state = 'NORMAL'
AND    d.mount_status = 'CACHED'
GROUP BY g.group_number, g.name, g.state, g.type, g.total_mb, g.free_mb
ORDER BY 1;

PS – the code comes from a very good ASM script from this very blog.
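
The arithmetic behind the smaller lsdg figure is simple once laid out. For a NORMAL redundancy diskgroup, usable space is the free space minus the amount held back to re-mirror after a disk failure, halved because every new file must be written twice:

Usable_file_MB = (Free_MB - Req_mir_free_MB) / 2
               = (2031032 - 433152) / 2
               = 798940 MB        -- the ~800Gb lsdg shows for DATA_PEX0

The query's Free GB column is simply free_mb/1024, with no adjustment for mirroring or the re-mirror reserve, hence the much larger figure it reports.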

This blog by Harald van Breederode explains, in a much better way than I could, why the mirroring in normal redundancy uses additional space.

As we are quite space challenged on the Exadata storage at the moment I have been asked several times to explain the different values of free space that are being reported – now I can just point them to this blog entry which will re-direct them to Harald’s very good explanation.

Job done


Posted in ASM, Blogroll, Oracle | 6 Comments »

Impact of BREXIT on Oracle price list in the UK

Posted by John Hallas on July 7, 2016

I saw this from an Oracle account manager who I deal with.

Oracle’s regional price lists are based on a constant Dollar based price list, and these are usually recalculated at the beginning of each financial quarter. Given the recent sharp decline in the £ against the $, I’m expecting an increase of roughly 15% for Oracle’s next financial quarter, beginning September.

For illustration, a single Oracle DB EE license has a current GBP list price of £31,597. Based on the constant Dollar price list and today’s exchange rates, this price should be £36,862.

I am sure the rate will stabilise soon, but it is yet another indicator of the impact of allowing the masses to decide on national matters. Needless to say I voted REMAIN. I also suspect that if we had the vote again it would be 60% in favour of staying – but alas, too late now.


Posted in Oracle | Leave a Comment »

SQLplus command line password no longer shown on screen

Posted by John Hallas on July 6, 2016

I was showing some basic sqlplus connection tips to a new DBA starter today and my normal warning about putting the password on the line when calling sqlplus didn’t produce what I expected.

For ever and a day, if you entered a line like this within a server session (using PuTTY or similar):

sqlplus john/N0tverysafe 

and then 'host' out to the Unix shell and run ps -ef|grep sqlplus, you will see the password – and so will anybody else who looks:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

With the Partitioning, Automatic Storage Management and OLAP options

SQL> host

$ps -ef|grep sqlplus
oracle   21519 21165  0  Jun 27  ?        00:00:00 sqlplus -s /nolog
oracle   19165 18798  0  Jun 25  ?        00:00:00 sqlplus -s /nolog
oracle  8516  8514  0 18:33:42 pts/0    00:00:00 grep sqlplus
oracle   24027 23642  0  Jul  2  ?        00:00:00 sqlplus -s /nolog
oracle  8511  7910  0 18:33:28 pts/0    00:00:00 sqlplus john/N0tVERYsafe

I performed more experimentation and realised it was still insecure at 11.2.0.3; however, at 11.2.0.4 it had become secure.

create user john identified by "N0tVERYsafe"  profile m_std_profile;
grant create session to john;
[sysopr65@teora01x][TST11204]/home/sysopr65 $sqlplus john/N0tVERYsafe

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jul 6 19:03:53 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management and OLAP options

SQL> host
[TST11204]/home/sysopr65 $ps -ef|grep john
sysopr65 28309 28306  0 19:04 pts/0    00:00:00 grep john
[TST11204]/home/sysopr65 $ps -ef|grep sqlplus
sysopr65 28256 22815  0 19:03 pts/0    00:00:00 sqlplus
[TST11204]/home/sysopr65 $exit

I will still try to instil good practice by ensuring people do not put username/password as a command string, but the loophole does appear to be fixed now. The patterns I encourage instead are sketched below.
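
Both of these keep the password out of the process list on any version, since sqlplus prompts for it silently:

# sqlplus prompts for the password itself - nothing to see in ps -ef
sqlplus john

# or start with no connection at all and connect once inside
sqlplus /nolog
SQL> connect john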

I must check whether SQL*Loader has had the security modification as well, as the issue used to be seen when calling a controlfile with an embedded username and password. Although I must admit I tend to run controlfiles interactively from the Unix account without a userid in them, and just use / as sysdba when asked for a password. That probably isn't best practice, but do as I say, not as I do.

I have just finished typing this up and I do feel that I have written something like this up before. It might be worth me checking my own blog history.


Posted in Oracle, security | Leave a Comment »