Wednesday, 27 April 2011

Oracle Management Pack

Kerry Osborne's Oracle Blog: Oracle Management Packs

Disabling ADDM in 10g and 11g

You may not be licensed to use ADDM and may want to stop the automatic statistics collection that is enabled by default.

The method differs between 10g and 11g.


In 10g, use the dbms_workload_repository package to set the snapshot interval to 0:

SQL> execute dbms_workload_repository.modify_snapshot_settings (retention=>20160, interval=>0);


In 11g, the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter must be set to DIAGNOSTIC+TUNING (the default) or DIAGNOSTIC for automatic database diagnostic monitoring to run. Setting CONTROL_MANAGEMENT_PACK_ACCESS to NONE disables many Oracle Database features, including ADDM, and is strongly discouraged, but it seems to be the only way to disable the automatic monitoring.

The Tuning pack relies on having the Diagnostic license, which explains why the only options are NONE, DIAGNOSTIC and DIAGNOSTIC+TUNING.
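On 11g the parameter can be changed without a restart. A minimal sketch (assumes you are running from an spfile, hence scope=both):

```sql
-- 11g only: disable Diagnostics and Tuning pack features
-- (a sketch; assumes an spfile, hence scope=both)
alter system set control_management_pack_access='NONE' scope=both;

-- verify the new setting
show parameter control_management_pack_access
```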

Sunday, 24 April 2011

Update OEL 5.4 to OEL 5.6

Update OEL 5.4 (Enterprise Kernel) x86 to OEL 5.6 (Oracle Kernel) x86

# cd /etc/yum.repos.d/
# wget
# vi public-yum-el5.repo
-> set enabled=1 under [el5_u4_base], [el5_u5_base] and [ol5_u6_base]
# yum update
# reboot
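The enable step can also be scripted instead of done in vi. A minimal sketch, run here against a throwaway copy of the file; the stanza name and layout are assumed to match public-yum-el5.repo:

```shell
# Write a sample repo stanza (illustrative; the real file comes from the wget above)
cat > /tmp/public-yum-el5.repo <<'EOF'
[ol5_u6_base]
name=Oracle Linux 5u6 base
enabled=0
EOF

# Flip enabled=0 to enabled=1, but only inside the [ol5_u6_base] stanza
sed -i '/^\[ol5_u6_base\]/,/^\[/ s/^enabled=0/enabled=1/' /tmp/public-yum-el5.repo

grep '^enabled=' /tmp/public-yum-el5.repo
```

Repeat the sed line with each stanza name you want enabled.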

good luck !!

Saturday, 16 April 2011


Here is an extract from Ask Tom....

We are working on an application whose users will be uploading about 250 10MB files per day into our
database. We read the first few lines of each file when it is uploaded, but other than that we
don't use them. All we are doing is storing these files and distributing them to other users. Then
after 45 days the files are archived. We had originally decided to use BLOBs, but we got some
advice to use BFILEs. Here is that advice, making the case for BLOBs versus BFILEs:

> It is always better (if you can) to use BLOBs because of
> following reasons:
> 1. Faster access for smaller files (few hundred KBs).
> 2. Database security. For BFILEs, OS admin has to maintain
> the security of files. BFILEs are stored outside oracle
> database and it becomes OS admin's responsibility to make
> sure that the files are not overwritten or modified by someone else.
> For BLOBs, this is not a problem as files are bound by
> database user access security.
> 3. Backup and Recovery is easy in case of BLOBs. BFILEs are not
> automatically backed up when you backup the database. It is
> an extra step of backup and admins have to make sure that
> location (pointer) integrity is maintained while restoring
> such an application with BFILEs.
> 4. There are more features which can be used in case of BLOBs.
> You can easily modify the content of a file in a BLOB. I do not
> think this applies to you - if you are looking for store and
> retrieve kind of application.
> So, when do we use BFILEs?? - if you have following considerations:
> 1. BFILEs keep the transaction size smaller - hence smaller redo logs
> hence better overall database performance.
> So, for files larger in size (I think this applies to you) it is
> advisable to use BFILEs. A BLOB with large size will cause lot of
> log activity - slowing the transaction performance.
> 2. Continuing on the first point - since the database size is smaller
> (only the file pointer is stored), it helps in faster backup and
> recovery of the database. Please do not confuse this point with
> point#3 mentioned above. That is about "easier" backup and recovery
> for admins, this one is about "faster" backup and recovery.
> If you plan to use BLOBs in your case, I would recommend that you
> increase the "log_buffer" size to a few MBs.

If we use BLOBs, it sounds like the issue with the redo log would create a performance problem for
us. What do you think?

Followup April 11, 2003 - 8am Central time zone:

it depends on your needs.

It is day 1. You load up your files. On day 2, your disk crashes.

What do you do? do you care you lost the files? if not, file system may be ok for you. If the
lost data causes you a problem -- sounds like you need a database.

o security
o backup, recovery
o single point of control (no chance someone will "accidentally" erase the files)
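To make the two designs concrete, here is a hedged sketch of what each approach might look like; the table, directory and column names are made up for illustration:

```sql
-- BLOB approach: file contents live inside the database, covered by
-- normal backup, recovery and database user security
create table uploads_blob (
  id    number primary key,
  fname varchar2(200),
  doc   blob
);

-- BFILE approach: the database stores only a pointer; the bytes stay
-- in an OS directory that the sysadmin must secure and back up
create directory upload_dir as '/my/db/files';

create table uploads_bfile (
  id    number primary key,
  doc   bfile
);

insert into uploads_bfile values (1, bfilename('UPLOAD_DIR', 'report1.pdf'));
```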



Tuesday, 5 April 2011

Manual Standby Database Under Standard Edition

Oracle's standby technology has been rebranded as DataGuard in recent versions of Oracle, and Oracle has added a layer of automation on top of it, making standby management seamless. But what about the folks on Standard Edition Oracle? Are they left out in the cold?

Well, it turns out that it is still possible to create a *manual* standby database on Oracle SE. Here's how you do it.

1. First you need to create the initial standby database. Here are the steps to do that:

a. Put the primary database in archivelog mode, if it is not already, and add at least LOG_ARCHIVE_DEST to your init.ora (on releases before 10g you also need LOG_ARCHIVE_START; from 10g on it is deprecated, as the archiver starts automatically).


b. Next, create a hotbackup of the primary database. Although you can do this with RMAN, it is probably easiest to just do it manually so you know what is going on. For each tablespace do:

SQL> alter tablespace EXAMPLE begin backup;
SQL> !cp example01.dbf /my/db/backup/
SQL> !cp example02.dbf /my/db/backup/
SQL> !cp example03.dbf /my/db/backup/
SQL> alter tablespace EXAMPLE end backup;

In the above example, the '!' symbol tells sqlplus to run the command from the shell, so we're using the Unix 'cp' command to make copies of those files (which are now frozen in backup mode) in another location.
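If a tablespace has many datafiles, you don't have to type the cp commands by hand; a small query against dba_data_files can generate them. A sketch, reusing the EXAMPLE tablespace and backup path from above:

```sql
set heading off feedback off
select 'cp ' || file_name || ' /my/db/backup/'
from   dba_data_files
where  tablespace_name = 'EXAMPLE';
```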

c. Now, create a standby controlfile from the primary database:

SQL> alter database create standby controlfile as '/my/path/to/standby.ctl';

d. At this point, you want to copy everything over to the standby server including datafiles, standby controlfile & config files:

$ scp /my/db/backup/*.dbf oracle@

e. From the standby machine, edit the standby init.ora file. The DB_FILE_NAME_CONVERT parameter tells Oracle where files from the primary database will be located on the standby. For example, if you had files in /ora/oracle on primary, and they are moved to /export/home/oracle on standby, this would work for you:

db_file_name_convert=('/ora/oracle','/export/home/oracle')
Note that you can use MULTIPLE pairs of values here, if you have files in different locations. Alternatively, you can startup and mount the standby database then issue:

SQL> alter database rename file '/ora/oracle/myfile.dbf' to '/export/home/oracle/myfile.dbf';

Now you're also likely to have a new location for your archived redo log files, and that's where the parameter LOG_FILE_NAME_CONVERT comes into play.

Important note: neither of these two parameters works for the ONLINE redo log files. Those you will have to rename yourself. If you do not, you will get an error at the time you try to SWITCHOVER your standby database; such errors are easily remedied by running the rename command.
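As a sketch of that manual rename (file names here are illustrative; the standby must be mounted, not open, and since RENAME FILE only updates the controlfile, the file must already exist at the new path):

```sql
-- repeat for each online redo log member
alter database rename file '/ora/oracle/redo01.log'
  to '/export/home/oracle/redo01.log';
```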

f. Now, it's time to start the standby instance and mount it.

SQL> startup nomount pfile=/export/home/oracle/admin/SEAN/pfile/initSEAN.standby
SQL> alter database mount standby database;

g. Almost there. Lastly, we need to recover the standby database using the AUTO option. You should build a simple shell script that starts sqlplus and runs these commands, and run it periodically, say every half hour, from cron to apply any new archived redo log files that have shown up, via the command below.

SQL> recover standby database;
AUTO

h. Now, of course, you'll want to test your standby database. You do this by starting up in read-only mode.

SQL> alter database open read only;

i. Don't forget to put it back in standby mode so that when your script runs from cron, it won't return errors.

SQL> shutdown immediate;
SQL> startup nomount pfile=/export/home/oracle/admin/SEAN/pfile/initSEAN.standby
SQL> alter database mount standby database;

2. What scripts should run via cron on the primary and standby database?

As we mentioned earlier, a simple apply script works well on the standby database. This script applies new archived redo logs that have arrived from the production system. Run it every half hour and see how that works for you. The database must be mounted in standby mode (not read-only) or this script will fail.
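The apply script itself can be as small as this. A sketch: the path, script name and sysdba login are assumptions you will need to adapt; here we just write the script out and make it executable.

```shell
# Write the apply script (path, name and login are illustrative assumptions)
cat > /tmp/apply_standby_logs.sh <<'EOF'
#!/bin/sh
# Apply any newly arrived archived redo logs to the mounted standby.
# Assumes the standby is mounted (not open read-only) and OS authentication.
sqlplus -s "/ as sysdba" <<SQL
recover standby database;
AUTO
exit
SQL
EOF
chmod +x /tmp/apply_standby_logs.sh
```

Point your standby-side cron entry at wherever you put this script.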

You'll also want a script on the production server; run it every thirty minutes to start with. It can use rsync to move redo log files from production to standby. A command like this would work:

$ rsync -e ssh -Pazv /ora/oracle/arch/ oracle@remote:/export/home/oracle/arch/

Note that you may want to adjust the ssh options to your needs. In addition, this presumes you have ssh autologin configured; read up on the ssh-keygen command. The .ssh directory contains a public key, which is shipped over to the standby machine and put in the "authorized_keys" file; ssh will then log in without a password. Rsync uses ssh as the transport mechanism, so it also runs without a password. Rsync is very smart and only copies the blocks and pieces of files that are different, so it is very fast, and it also does checksums to guarantee consistency.

3. Is the standby database behind the production database?

Yes, keep in mind we are creating a manual standby database. The standby database will tend to be behind production by about half the size of a redolog file. So if those files are 100M, and you generate 100M of transactions in 30 minutes, then on average standby will be fifteen minutes behind.

4. What types of changes and statements on production will not be automatically applied to standby?

In database parlance: any PHYSICAL changes to the db, plus any commands issued with the NOLOGGING option. Physical changes include creating new tablespaces, adding new datafiles, renaming datafiles, autoextending datafiles, altering redo log files, altering controlfiles and so on. In addition, primary database processes or commands using the UNRECOVERABLE option will not be propagated to the standby database.

There are specific and detailed instructions for making some of these physical changes on the standby db manually; however, in many cases recreating the entire standby database per the instructions above might be the best option.

5. How can we verify that the standby database is up to date?

If you already have the script running from cron, disable it.

Then login with sqlplus and issue:

SQL> alter database open read only;

Now that you have the database open read-only, run whatever SQL commands you want in order to verify some change which you know about on production. When you are done, shut down and start up in standby mode again. Don't forget to re-enable the apply script in the crontab.

6. What happens if the standby system restarts?

You could have it automatically start the standby database. In that case, be sure to just check the logfiles. If you want to do it manually in those instances, fire up sqlplus and then issue:

SQL> startup nomount pfile=/export/home/oracle/admin/SEAN/pfile/initSEAN.standby SQL> alter database mount standby database; 

7. What kind of messages can I expect to see in the standby alert.log?

The alert.log is going to have a lot of extra messages since we are repeatedly trying to RECOVER when there may or may not be new transaction logs. When it does this it will say, "looking for archived logfile 1_356.dbf, not found". On the other hand, if it finds it, it will say that it is applying it. You can use unix commands "grep" and "less" to scan through the alert.log file quickly.
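A quick way to pull just the errors out of that noise, sketched here against a fabricated two-line alert.log (the file name and contents are made up for illustration):

```shell
# Fabricated sample alert.log (illustrative only)
cat > /tmp/alert_SEAN.log <<'EOF'
Media Recovery Log /export/home/oracle/arch/1_356.dbf
ORA-00308: cannot open archived log '/export/home/oracle/arch/1_357.dbf'
EOF

# Show only the ORA- error lines
grep '^ORA-' /tmp/alert_SEAN.log
```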

8. What other scripts should be put in place?

a. a script to cleanup old archived redo logs on primary.

b. a script to cleanup old archived redo logs on standby

c. a script to rotate and archive the alert.log file when it gets large

d. a script to watch the alert.log file for ORA-xxxxx errors and report them to nagios if it finds any (on both primary and standby)

e. a script to login (via ssh autologin) and check what the latest archived redolog file is, and then also login to the standby and check the alert.log file to verify that those transactions have been applied.

9. How do we switchover in the event of a failure of the primary?

Switchover *can* be done with a script; however, with our manual standby database I recommend that you (a) monitor for emergencies on production and (b) manually perform the failover if necessary. This avoids false positives. It also allows you to ship additional redo log data if you have it available from production.

The switchover is a two-step process.

a. Apply any remaining redo with the recover commands we used before.

b. Startup the database normally, in a read-write mode.

10. What network changes need to happen to failover?

The listener.ora file should be already configured. You can use the same config as primary with a different IP, or you can give this db a different tnsname. For instance, you could call primary SEANA and standby SEANB. Then in your application server configs, when you failover, your database connection configurations need to be updated to point to SEANB. The app servers will probably also need to be restarted at this point.

11. Why can't the primary ship redo logs and changes synchronously?

Basically, they call it a *manual* standby database for a reason. DataGuard on Enterprise Edition supports options that look like the following:
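For comparison, a hedged sketch of the Enterprise Edition DataGuard syntax; the service name and attribute list are illustrative, and none of this works on SE:

```sql
-- Enterprise Edition only: automatic redo shipping to a standby service
-- (SYNC/AFFIRM for synchronous shipping, ASYNC for asynchronous)
alter system set log_archive_dest_2='SERVICE=standby_tns LGWR SYNC AFFIRM'
  scope=both;
```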


Again, these are not available in Oracle SE.

12. Once we've failed over, how do we switch back to the primary?

Switching back to the primary database involves these steps:

a. Follow the steps in item 1 above to create a standby database on what was the primary system.

b. If you want the final sync to be perfectly clean, do the following:

SQL> shutdown immediate
SQL> startup restrict
SQL> alter system switch logfile;
SQL> shutdown immediate

c. Copy over the last archived redolog files

d. Apply them and switchover as described in item 9 above.

13. Are there special init.ora parameters? What makes our standby database special?

The main two things that make it a standby database are:

a. The standby control file (created from primary)

- alter database create standby controlfile as '/my/path/to/standby.ctl';

b. The process of mounting as a standby database

- startup nomount pfile=standby.ora

- alter database mount standby database;

There are of course some init.ora parameters which are special for the standby database as well:
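As a sketch of those standby-side parameters, consistent with the example paths used earlier (the values are illustrative):

```
# standby init.ora additions (illustrative values)
db_file_name_convert=('/ora/oracle','/export/home/oracle')
log_file_name_convert=('/ora/oracle/arch','/export/home/oracle/arch')
standby_archive_dest=/export/home/oracle/arch
```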


So if you do a "shutdown immediate" on the standby, you would start again with:

SQL> startup nomount pfile=/export/home/oracle/admin/SEAN/pfile/initSEAN.standby
SQL> alter database mount standby database;


Standby database technology in Oracle is a powerful high availability solution. Even if you're using Oracle SE, you can still take advantage of these features built into Oracle, with just a little scripting, hand holding, and ample monitoring. Do your research, test, test, and test again on a development server. And don't forget to monitor all your logfiles for errors. Following these guidelines, you should be in very good shape, at a much lower cost.