Nov 5, 2007

Migration work throughout October

Lots of things were going on for the whole of October. We just shipped production release version 7.1 of our web application. I passed another stressful day on that 7.1 judgment day :P

Let me keep some notes here about the release. In a point release, we usually add new features and improve existing ones. These changes require database schema changes and also data migration. For the data migration, we run DDL, DML and some java programs which basically run DML against the database.

We have multiple sites geographically distributed across the United States, and we need to migrate those databases as well. We use SharePlex (hoping that it will be replaced by GoldenGate replication software or by Oracle Streams soon!) for data replication. We turned off the DDL replication feature in our Splex configuration due to some bad experiences with it. What we do in each migration-

1. Turn off the Web Application
2. Run the DDL statements on each database machine (since DDL replication is turned off in Splex)
3. Activate the new Splex configuration file for that release (which includes new tables, sequences and stuff like that)
4. Run the DMLs on the primary site, which replicates the data to the other db sites.
5. Run the java programs on the primary site; their changes are replicated the same way.

The database migration was estimated at 7 hours, and the total application downtime was estimated at about 8 hours.

Everything was going well on that day. But then we suddenly found a new problem: one java migration was taking much longer than estimated. It was supposed to complete within 30 minutes, but I found it only 50% done after 4 hours! It was very alarming.

Usually, a java migration is written by the respective module developer, tested locally and then tested again in the Beta release. It ran without any delay in both test cases because the db was on the same network. But in production the scenario was different - we have machines on different networks.

The developer was opening an extra connection and issuing an unnecessary query to fetch each sequence value from the database before performing the insert, and each of those fetches cost a considerable amount of time over the WAN. Anyway, I had to look into it in the middle of our migration process (with a hot head) and rewrite the code to make it faster. Finally I managed to finish it within 30 minutes and met the downtime deadline.
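Roughly, the fix boiled down to this idea, sketched here in SQL with hypothetical table and sequence names (the real migration code is java):

-- Before: a separate round trip per row just to fetch the next sequence value
-- SELECT some_seq.NEXTVAL FROM dual;
-- INSERT INTO some_table (id, name) VALUES (:fetched_id, :name);

-- After: resolve the sequence inside the insert itself - one round trip per row
INSERT INTO some_table (id, name) VALUES (some_seq.NEXTVAL, :name);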

Now, for the last 3 days, production has been running fine :)

Oct 3, 2007

Setting up SQL*Plus Environment

When I connect to multiple databases on multiple machines, I need to know who I am and where I am connected, to prevent any kind of accidental damage.

I took the following "login.sql" script from Tom Kyte's book (Expert Oracle) and modified it according to my needs-

#login.sql
define _editor=vi
set serverout on size 1000000
set trimspool on
set long 5000
set timing on
set linesize 200
set pagesize 1000
column plan_plus_exp format a80
column global_name new_value gname
set termout off

select lower(user) || '@' || UPPER(SUBSTR(global_name, 1, INSTR(global_name,'.')-1)) global_name from global_name;
set sqlprompt &gname>
set termout on

When I log in (connect) to a database, the prompt shows the user and database information, like -
prod7@TERMINUS>

Cool!

Oct 1, 2007

Spool with auto file name and current date-time

When the scheduled job runs the manual cleanup in the database, I need to spool the output for any future need. To do so, I have to generate the spool file name dynamically so I can keep track of the runs kicked off by crontab. I modified my cleanup.sql file to generate the spool file name-

#cleanup.sql
col sysdt noprint new_value sysdt
SELECT TO_CHAR(SYSDATE, 'yyyymmdd') sysdt FROM DUAL;
col systm noprint new_value systm
SELECT TO_CHAR(SYSDATE, 'hhmiam') systm FROM DUAL;

SPOOL /u/shawon/manual-cleanup/db-manual-cleanup-&sysdt-&systm..log
....
....

COMMIT;
SPOOL OFF

When this file is called by the scheduler, the spool file name is generated accordingly - e.g. "db-manual-cleanup-20071001-0354am.log"

Sep 30, 2007

Cron job to run programs on a schedule

I was planning to run some scripts on a daily basis to perform cleanup jobs in the database. With that in mind, I reviewed the cron scheduler.

cron is a Linux system process that will execute a program at a preset time. To use cron, I have to prepare a text file that describes the programs I want to execute and their schedules. Then I have to use the crontab program to load that text file into the cron process.

The scheduler format of a cron job is -

[min] [hour] [day of month] [month] [day of week] [program to be run]
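For example, an entry to run the nightly db cleanup at 3:50 AM every day would look like this (the script path here is hypothetical):

50 3 * * * /home/shawon/manual-cleanup/run-cleanup.sh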


I created the cron rule text file (number.cron, which describes how often the shell script will run) and set the rule to execute another program (a small shell script, print-number.run) every minute. See below -

#file: number.cron
* * * * * /home/shawon/cron-test/print-number.run

Note: To execute a command/script, the full path should be provided

I wrote the sample program to test the cron -

#file: print-number.run
seq 1 100 > /tmp/cron-test.log
sleep 10
rm /tmp/cron-test.log

To load my number.cron I issued-
# crontab number.cron
# crontab -l
* * * * * /home/shawon/cron-test/print-number.run

Now I watched /tmp/cron-test.log, where the log file should be created and then deleted after 10 seconds. Cron was doing its job as expected.


To see the cron activities -
# tail -f /var/log/cron

To see whether cron was running (as the root user) -
# ps aux | grep crond

To start cron (if not running) -
# crond

To see the list currently loaded in crontab -
# crontab -l

To add/edit the current crontab I used -
# export EDITOR=vi
# crontab -e

One helpful link related to cron scheduler-

http://www.scrounge.org/linux/cron.html

Bug in Oracle SQL Developer!

We use the lightweight Oracle SQL Developer, over a vpn client tunnel, to format query output as excel or csv files and generate reports from remote database machines.

We are using-
SQL Developer version: 1.1.2.25
Build MAIN: 25.79
For Linux x86 machine

I was working with a query result and got an error "ORA-00918: column ambiguously defined" while exporting data to excel (XLS) file.

The query, in its simplest form, was-

SELECT p.name as name, p.phone as phone, p.emerg1_name as emergency_name, p.emerg1_telno as phone
FROM provider p;

This query executed fine in SQL Developer, but after getting the result, when I wanted to export it, it showed the error!

The problem is that if two output columns have the same alias (in my case, two phone columns), the export mechanism cannot distinguish between them. I found some confusing discussions on OTN - nobody pointed out the main reason!

http://forums.oracle.com/forums/thread.jspa?messageID=1206736&#1206736
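The workaround, in any case, is simply to give every output column a unique alias before exporting, e.g. -

SELECT p.name as name, p.phone as phone, p.emerg1_name as emergency_name, p.emerg1_telno as emergency_phone
FROM provider p;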

Sep 23, 2007

TLA Arch: Finding changes from patch

Arch is a revision control, source code management, and configuration management tool.

Today I needed to see the changes made in patch-1052 and patch-1044 by other developers.

I issued the command to get the changeset first by patch number.

# tla get-changeset projectname--devel--7.1--patch-1052 patch-1052

get-changeset retrieves the changeset from the archive and, in this case, stores
it in a directory called patch-1052

Now I want to see changes made in that patch. I wrote-

# tla show-changeset --diffs patch-1052

Hummmm... things are clear. What I am looking for - is there!

Sep 18, 2007

Undo vs Rollback Segment

There can be confusion because the terms undo and rollback segment are used interchangeably in the db world. This is due to Oracle's backward compatibility history.

Undo

Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo.

Undo records are used to -

* Roll back transactions when a ROLLBACK statement is issued
* Recover the database
* Provide read consistency
* Analyze data as of an earlier point in time by using Flashback Query

When a ROLLBACK command is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.

Undo vs Rollback

Earlier releases of Oracle Database used rollback segments to store undo. Oracle9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management. Oracle strongly recommends (from Oracle9i onwards) using an undo tablespace (automatic undo management) rather than rollback segments to manage undo.

To see the undo management mode and other undo related information of database-

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1

Since Oracle9i, the less time consuming and recommended way is to use Automatic Undo Management, in which Oracle Database creates and manages rollback segments (now called "undo segments") in a special-purpose undo tablespace.

Unlike with rollback segments, we don't need to create or manage individual undo segments—Oracle Database does that for us when we create the undo tablespace. All transactions in an instance share a single undo tablespace. Any executing transaction can consume free space in the undo tablespace, and when the transaction completes, its undo space is freed (depending on how it's been sized and a few other factors, like undo retention). Thus, space for undo segments is dynamically allocated, consumed, freed, and reused—all under the control of Oracle Database, rather than manual management by someone.
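Even under automatic undo management, the system-generated undo segments are still visible in the data dictionary; a quick peek (just a sanity check I like to run) -

SQL> SELECT segment_name, tablespace_name, status FROM dba_rollback_segs;

The automatically created segments show up with names like _SYSSMUn$.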

Switching Rollback to Undo

1. We have to create an undo tablespace. Oracle provides a function that suggests how to size the new undo tablespace based on the configuration and usage of the rollback segments in the system.

DECLARE
  utbsiz_in_MB NUMBER;
BEGIN
  utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
  -- print the suggested undo tablespace size in MB
  DBMS_OUTPUT.PUT_LINE('Suggested undo tablespace size (MB): ' || utbsiz_in_MB);
END;
/

CREATE UNDO TABLESPACE UNDOTBS
DATAFILE '/oradata/dbf/undotbs_1.dbf'
SIZE 100M AUTOEXTEND ON NEXT 10M
MAXSIZE UNLIMITED RETENTION NOGUARANTEE;

Note: "SEGMENT SPACE MANAGEMENT AUTO" cannot be specified when creating an undo tablespace

2. Change system parameters

SQL> alter system set undo_retention=900 scope=both;
SQL> alter system set undo_tablespace=UNDOTBS scope=both;
SQL> alter system set undo_management=AUTO scope=spfile;
SQL> shutdown immediate
SQL> startup

UNDO_MANAGEMENT is a static parameter. So database needs to be restarted.

One related article -
http://www.oracle.com/technology/oramag/oracle/05-jul/o45tuning.html

Sep 17, 2007

Keeping objects in KEEP cache

As a part of Oracle instance tuning, I was looking for potential candidates for the database KEEP pool. I looked at the statspack report and sorted out the segments with the most physical read operations and their impact. I planned to put those very frequently accessed tables into the KEEP cache to avoid the physical reads.

Now, I had to know the size (physical size of data on disk) of the candidate tables.
I found the size related information using the following 2 queries-

--To see all the object size for a given user
SELECT segment_type, SUBSTR (segment_name, 1, 30) segment_name,
SUBSTR (tablespace_name, 1, 30) tablespace_name,
SUM (TRUNC (BYTES / (1024 * 1024 ), 2)) "Size in MB"
FROM dba_segments
WHERE segment_type IN ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION')
AND owner ='DEV7'
GROUP BY segment_type,
SUBSTR (segment_name, 1, 30),
SUBSTR (tablespace_name, 1, 30);

-- To find the size of a particular table
select (x.blocks *8)/(1024) "Size in MB" from user_tables x where table_name = 'DASH_COUNT_CACHE';

To see any existing table in keep pool- I issued

SQL> select x.table_name from user_tables x where x.buffer_pool = 'KEEP';
and found nothing.

Then I looked at the system parameter to know the current status of "db_keep_cache_size"

SQL> show parameter db_keep_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_keep_cache_size                   big integer 0

I set the cache value

SQL> alter system set db_keep_cache_size=100M scope=both;

Now, to put in keep, I issued
SQL> alter table DASH_COUNT_CACHE storage (buffer_pool KEEP);

--To remove from keep
--alter table DASH_COUNT_CACHE storage (buffer_pool DEFAULT);

*** This ends the caching process ***
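Later on, to see whether the KEEP pool actually pays off, something like this against V$BUFFER_POOL_STATISTICS should give a per-pool hit ratio (my own follow-up check, not part of the steps above) -

SQL> SELECT name,
            ROUND(1 - (physical_reads / (db_block_gets + consistent_gets)), 4) "Hit Ratio"
       FROM v$buffer_pool_statistics
      WHERE db_block_gets + consistent_gets > 0;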

One more thing: there is another parameter called "buffer_pool_keep" which is quite confusing. Basically, it is deprecated. Here is a note from Oracle -

BUFFER_POOL_KEEP - This parameter is deprecated in favor of the DB_KEEP_CACHE_SIZE parameter. Oracle recommends that you use DB_KEEP_CACHE_SIZE instead. Also, BUFFER_POOL_KEEP cannot be combined with the new dynamic DB_KEEP_CACHE_SIZE parameter; combining these parameters in the same parameter file will produce an error. BUFFER_POOL_KEEP is retained for backward compatibility only.

Beta Migration of database

Every 3-4 months, we deliver a new version release of the application.

Prior to the production release, we do a Beta for users. It is done on a different machine. The important task is that I have to set up the database environment as in the production box. Then I have to do the migration (DDL and DML operations) needed for the new release. For one small project's Beta release, I did the following things this morning -

In Beta box -

1. I created the user tablespace

SQL> CREATE TABLESPACE "COLUMBIA01" DATAFILE '/oracle/oradata/tdevdb/columbia_01.dbf' SIZE 200M AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

2. Created the user and granted access

SQL> create user columbia1 identified by ****** default tablespace columbia01;

SQL> grant connect, resource, create view to columbia1;

SQL> grant read, write on directory DBDUMP to columbia1;

3. Imported the production dump using datapump of 10g

# impdp columbia1/****** directory=DBDUMP dumpfile=db101-columbia-20070910.dmp

4. I ran the migration scripts

Migration scripts for a new release are usually developed by the application developers and the database team members. They include the DDL, DML and small java programs to be run for the new version of the application.

Sep 16, 2007

ORA-22924: snapshot too old

We have a few pet projects along with our mainstream development project. Yesterday we got this error from one of them, a project that loads huge amounts of data and performs biometric search operations. The investigation showed that a modification in the database was required.

I instantly checked whether the undo tablespace was undersized. It was set to "autoextend on" as expected. Then I checked the undo retention parameter-

SQL> show parameter undo_retention

It was set to the default 900. I was thinking of modifying it but was not so sure, as the Oracle support documents were not at hand.

Later, when I got Internet access, I found that the error comes from the LOB segment's own undo (versioning), not the undo tablespace. When LOB data is modified, Oracle keeps the old version of the data in the LOB segment itself for read consistency, controlled by PCTVERSION or RETENTION.

I had two options-

1. Increase PCTVERSION in LOB table columns
2. Increase RETENTION in LOB table columns

1. PCTVERSION is the percentage of all used LOB data space that can be occupied by old versions of LOB data pages. As soon as old versions of LOB data pages start to occupy more than the PCTVERSION amount of used LOB space, Oracle Database tries to reclaim the old versions and reuse them. In other words, PCTVERSION is the percent of used LOB data blocks that is available for versioning old LOB data.

PCTVERSION has a default of 10 (%), a minimum of 0, and a maximum of 100. Setting PCTVERSION to twice the default value allows more free pages to be used for old versions of data pages. Because large queries may require consistent reads of LOB columns, it may be useful to retain old versions of LOB pages. In this case, LOB storage may grow because the database will not reuse free pages aggressively.
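For example, bumping PCTVERSION to 20% on a LOB column would look like this (table and column names are placeholders) -

ALTER TABLE some_table MODIFY LOB (some_lob_col) (PCTVERSION 20);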

2. As an alternative to the PCTVERSION parameter, one can specify RETENTION, a parameter in the LOB storage clause, in the CREATE TABLE or ALTER TABLE statement. Doing so configures the LOB column to store old versions of LOB data for a period of time, rather than using a percentage of the tablespace.

I have planned to go with option 2 -

ALTER SYSTEM SET UNDO_RETENTION=2700 scope=both;
ALTER TABLE TABLE_NAME MODIFY LOB (COLUMN_NAME) (RETENTION);

The LOB RETENTION will take its value from the UNDO_RETENTION parameter.
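To check what the LOB columns are currently using, USER_LOBS shows both settings -

SQL> SELECT table_name, column_name, pctversion, retention FROM user_lobs;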

Summary -
Cause: The version of the LOB value needed for the consistent read was already overwritten by another writer.
Action: Use a larger version pool/retention time.

One very important article of Tom Kyte -
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:275215756923

Sep 13, 2007

Customer says, "It's slow for us"- Monitoring Database Health

Sometimes we get complaints from customers about the application performance. Yesterday I got an AFID saying that during peak hours the application seems a bit slower, but still quite reasonable. As she is a good customer, she did not forget to send an AFID.

Well, I investigated it. Firstly, I took the db load statistics and generated graphs for three consecutive working days to see the load average on the database. Those days' graphs were almost identical. Then I went back one month earlier, and then one more month, to see the previous load graphs. It seems that in 2 months the db load increased about 10% - not stunning, as the average db load is still around 1 during peak hours.


Secondly, I took Statspack reports to see the overall database health. The report showed that we have fantastic instance efficiency.

Instance Efficiency Percentages (Target 100%)

Buffer Nowait %:              100.00    Redo NoWait %:        100.00
Buffer Hit %:                  99.64    In-memory Sort %:     100.00
Library Hit %:                 98.94    Soft Parse %:          99.01
Execute to Parse %:            80.40    Latch Hit %:           99.99
Parse CPU to Parse Elapsd %:  100.05    % Non-Parse CPU:       98.50


Impressive! Nothing to do with instance tuning right at this moment. The small amount of hard parsing is well known to us and unavoidable from a development perspective. In the load profile, Redo size and Logical reads were noticeable.

Load Profile


                        Per Second    Per Transaction
Redo size:               38,515.67           3,663.59
Logical reads:           26,862.97           2,555.19


We have huge numbers of updates and inserts in the database. With 400+ concurrent users doing a lot of work in the application, Oracle generates a big amount of redo. When we integrate an in-memory caching solution for the very frequently updated tables, we will get rid of much of it.

But the logical reads figure is alarming, and here we have an option to reduce it. We plan to move to a larger database block size (currently the default 8K) in the near future, which should reduce logical reads and thus elapsed time, keeping the db healthy.
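To see which statements are actually driving those logical reads, a quick check against v$sql usually helps (my own ad-hoc query, not from the Statspack report itself) -

SQL> SELECT buffer_gets, executions, SUBSTR(sql_text, 1, 60) sql_text
       FROM (SELECT * FROM v$sql ORDER BY buffer_gets DESC)
      WHERE ROWNUM <= 10;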

I was also looking for performance related articles and found a very interesting one - Oracle Myths

http://www.dba-oracle.com/t_oracle_myths_performance_testing.htm

Sep 11, 2007

Streams continuing......

Setting up Streams

Today I was experimenting with the replication process. If we can finally adopt Oracle Streams, it will save a good amount of money ($10K - $20K) as we won't have to buy another third party product (like SharePlex) for replication.

I was working with the following two databases as created earlier.

Source database: SRCSWN
Destination database: DSTSWN

I followed the following steps-

1. Set up ARCHIVELOG mode
The source database must run in ARCHIVELOG mode.

SQL> archive log list
SQL> ALTER SYSTEM SET log_archive_dest ='/export/home/shawon/oradata/archivelog' SCOPE=both;
SQL> show parameter log_archive_dest
SQL> shutdown immediate
SQL> startup mount
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> archive log list

2. Create tablespace for stream admin

The Streams administrator creates some tables in its default tablespace. We should specify a tablespace other than
SYSTEM for this purpose.

SQL> CREATE TABLESPACE streams_tbs
DATAFILE '/export/home/shawon/oradata/srcswn/streams_tbs01.dbf' SIZE 20M;

3. Create the Streams administrator user

The Streams environment is managed by an administrative database user.

CREATE USER streamadmin IDENTIFIED BY streamadmin DEFAULT TABLESPACE streams_tbs TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON streams_tbs;

GRANT CONNECT, RESOURCE, DBA TO streamadmin;

BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'streamadmin',
grant_privileges => true);
END;
/

-- Not required for Streams, but might be helpful for administration

GRANT SELECT_CATALOG_ROLE TO streamadmin;
GRANT SELECT ANY DICTIONARY TO streamadmin;

I created the Streams admin account in the destination database using the same process.

4. Set up some important parameters

GLOBAL_NAMES - This parameter must be set to TRUE in both the source and the destination databases. This influences the database link creation.
JOB_QUEUE_PROCESSES - This parameter must be set to 2 or higher in the source database.
STREAMS_POOL_SIZE - The default value for this parameter is 0, which means that Streams instead uses memory from the shared pool. If I leave STREAMS_POOL_SIZE at its default value 0, Streams will plan to use up to 10 percent of the SHARED_POOL_SIZE. Oracle recommends setting the STREAMS_POOL_SIZE parameter to at least 200MB.

SQL> show parameter global_names
SQL> show parameter job_queue_processes
SQL> show parameter streams_pool_size

SQL> alter system set global_names=true scope=both;
SQL> alter system set JOB_QUEUE_PROCESSES=2 scope=both;

I left streams_pool_size at its default since this was just a test run.

5. Create a database link

I had to create a database link from the source database to the destination database. I connected to the destination database to see the value of the service names parameter -

SQL> show parameter service_name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      DSTSWN

In Source database:

SQL> conn streamadmin/streamadmin
SQL> CREATE DATABASE LINK DSTSWN CONNECT TO streamadmin IDENTIFIED BY streamadmin USING 'DSTSWN_CON_STR';

6. Put connection string to tnsnames.ora

I used the following command from the shell to locate the tnsnames.ora file -
-bash-3.00$ lsnrctl status
It showed that the file is in this location -
/oracle/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
I edited the file and added an entry for the destination database -

DSTSWN_CON_STR =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = terminus)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DSTSWN)
)
)

Now I checked from source database to make sure that the db link was working properly-

SQL> conn streamadmin/streamadmin@DSTSWN_CON_STR

7. Set up source and destination queues

The data moves from the source to the destination database through queues. I used SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to set up the queues. By default, this procedure creates a queue table named streams_queue_table and a queue named streams_queue. We can override these names by specifying the queue_table and queue_name parameters of the procedure SET_UP_QUEUE.

SQL> conn streamadmin/streamadmin
SQL> EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
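To double-check that the default queue and queue table were created, a quick query against DBA_QUEUES (my own verification, not part of the setup steps) -

SQL> SELECT name, queue_table FROM dba_queues WHERE owner = 'STREAMADMIN';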

8. Create schema and objects to test replication

In source database-
SQL> create user testsrc identified by testsrc default tablespace users;
SQL> grant connect, resource to testsrc;

SQL> conn testsrc/testsrc

SQL> create table emp (
id number(10),
name varchar2(64),
comments varchar2(255),
primary key (id)
);

9. Set up supplemental logging at the source database

Before I started capturing changes at the source database, I had to add supplemental logging on the tables being changed. Supplemental logging puts additional information in the redo logs that helps during the apply process.

SQL> conn testsrc/testsrc
SQL> ALTER TABLE emp ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

10. Configure the capture process at the source database

Create a capture process to extract the changes from the redo logs.
While creating a capture process, you add rules to specify which changes to capture and which ones to discard. The rules are combined into rule sets. The capture process can have a positive rule set and a negative rule set.

SQL> conn streamadmin/streamadmin
SQL> BEGIN DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'testsrc.emp',
streams_type => 'capture',
streams_name => 'capture_stream',
queue_name => 'streamadmin.streams_queue',
include_dml => true,
include_ddl => true,
inclusion_rule => true);
END;
/

11. Configure the propagation process

I had to create a propagation process and associate the source queue with the destination queue.

BEGIN DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'testsrc.emp',
streams_name => 'SRCSWN_TO_DSTSWN',
source_queue_name => 'streamadmin.streams_queue',
destination_queue_name => 'streamadmin.streams_queue@DSTSWN',
include_dml => true,
include_ddl => true,
source_database => 'SRCSWN',
inclusion_rule => true);
END;
/
--To drop a propagation process
EXEC DBMS_PROPAGATION_ADM.DROP_PROPAGATION('SATURN_to_TERMINUS');

12. Create the destination schema and objects

SQL> CREATE TABLESPACE users
DATAFILE '/export/home/shawon/oradata/dstswn/users_tbs01.dbf' SIZE 20M;

SQL> create user testsrc identified by testdst default tablespace users;
SQL> grant connect, resource to testsrc;
SQL> create table emp (
id number(10),
name varchar2(64),
comments varchar2(255),
primary key (id)
);

On destination database-

SQL> conn / as sysdba
SQL> GRANT ALL ON testsrc.emp TO streamadmin;

Note: If destination schema name is different from source schema- it does not work!

13. Set the instantiation System Change Number (SCN)

I needed to set the instantiation SCN for the table from the source database that I want to replicate. This ensures that changes captured in the source table before the instantiation SCN is set will not be applied at the destination database.

DECLARE
source_scn NUMBER;
BEGIN
source_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@DSTSWN(
source_object_name => 'testsrc.emp',
source_database_name =>'SRCSWN' ,
instantiation_scn => source_scn);
END;
/

14. Configure the apply process at the destination database

I created an apply process and associated the destination queue with it. I also added rules for the apply process.

In destination database-
SQL> conn streamadmin/streamadmin
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'testsrc.emp',
streams_type => 'apply',
streams_name => 'apply_stream',
queue_name => 'streamadmin.streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'SRCSWN',
inclusion_rule => true);
END;
/

15. Start the capture and apply processes

In source database-

BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_stream');
END;
/

In destination database-

To start the apply process, connect to the destination database. However, before doing that, I set the disable_on_error parameter of the apply process to n, so that the apply process continues even if it encounters some errors (as best practices recommend).

BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'apply_stream',
parameter => 'disable_on_error',
value => 'n');
END;
/

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_stream');
END;
/

**Done**

To test replication from source database to destination database - I did

In source database-

SQL> insert into emp values (1,'Shawon','Employee');
1 row created.
SQL> insert into emp values (2,'Anik','Boss');
1 row created.
SQL> commit;
Commit complete.

From destination database-

SQL> select * from emp;

        ID NAME                 COMMENTS
---------- -------------------- --------------------
         1 Shawon               Employee
         2 Anik                 Boss

YES!! REPLICATED!!!
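
To keep an eye on the processes afterwards, these dictionary views are handy (status checks I run by hand, not part of the setup itself) -

-- On the source database
SQL> SELECT capture_name, status FROM dba_capture;

-- On the destination database
SQL> SELECT apply_name, status FROM dba_apply;
SQL> SELECT apply_name, error_message FROM dba_apply_error;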


Sep 10, 2007

Oracle INSERT Performance

Yesterday, we had an issue with the speed of insert statements. We had to insert about 500K records into the database as quickly as possible, but we were inserting at a rate of only 10 records/sec. Well, I was considering the following approaches to gain speed-

1. Use a larger data block size: By defining large (i.e. 16k or 32k) blocksizes for the target tables, we can reduce I/O because more rows fit onto a block before a "block full" condition (as set by PCTFREE) unlinks the block from the freelist.

SQL> DROP TABLESPACE web_key_tbs INCLUDING CONTENTS AND DATAFILES;

SQL> CREATE TABLESPACE web_key_tbs DATAFILE '/mnt/extra/test1.dbf' SIZE 1024M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED blocksize 16K;

2. Increase the size of UNDO tablespace -

SQL> CREATE UNDO TABLESPACE UNDOTBS DATAFILE '/mnt/extra/test_undo1.dbf' SIZE 200M BLOCKSIZE 16K;

SQL> alter system set undo_tablespace='UNDOTBS' scope=both;

SQL> DROP TABLESPACE UNDOTBSP1 INCLUDING CONTENTS AND DATAFILES;

3. APPEND into tables: Using the APPEND hint ensures that Oracle always grabs fresh data blocks by raising the high-water-mark for the table. (Note that the hint only takes effect for direct-path INSERT ... SELECT statements; a single-row VALUES insert like the one below silently ignores it.)

SQL> insert /*+ append */ into customer values ('hello','there');

4. Put the table into NOLOGGING mode: If possible, take the database to NOARCHIVELOG mode. Putting the table into NOLOGGING mode allows Oracle to avoid almost all redo logging - but only for direct-path (APPEND) operations; conventional inserts still generate redo.

SQL> SELECT logging FROM user_tables WHERE table_name = 'LOGIN';

SQL> ALTER TABLE login NOLOGGING;

Again, to enable logging -
SQL> ALTER TABLE login LOGGING;

5. Disable/drop indexes: It's far faster to rebuild indexes after the data load, all at once. The indexes will also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.

6. Parallelize the load: We can invoke parallel DML (i.e. using the PARALLEL and APPEND hints) to have multiple inserts running against the same table; see the sketch below. For this INSERT optimization, make sure to define multiple freelists and use the SQL "APPEND" option.
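
A minimal sketch of what that would look like, assuming the rows come from a staging table (staging_customer is a placeholder name) -

SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> INSERT /*+ APPEND PARALLEL(c, 4) */ INTO customer c
     SELECT * FROM staging_customer;
SQL> COMMIT;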

Unfortunately, I was not able to apply most of these tricks because the insertions were done by a third party application. I only tried the first two options - which did not help :-(

I could not even manage to analyze the wait events to find the exact reason for the waiting. That third party software, which did those inserts, uses the Oracle Call Interface (OCI) - so the statements were not available to us in the database.

One more thing - the database was running in NOARCHIVELOG mode.

One helpful url -
http://www.dba-oracle.com/t_optimize_insert_sql_performance.htm

Sep 6, 2007

Creating Oracle database with scripts

I took the manual approach (without DBCA) to creating the database. Steps followed-

1. Setup ORACLE_HOME and ORACLE_SID

I edited my .profile file and put accordingly

export ORACLE_HOME=/oracle/app/oracle/product/10.2.0/db_1
export ORACLE_SID=SRCSWN

To use sqlplus

export PATH
export PATH=$PATH:/usr/sfw/bin:$ORACLE_HOME/bin

To activate my profile without relogin - I used

# . .profile

2. Create a minimal pfile

$ORACLE_HOME/dbs/initSRCSWN.ora

*.control_files = '/export/home/shawon/oradata/srcswn/control01.ctl','/export/home/shawon/oradata/srcswn/control02.ctl','/export/home/shawon/oradata/srcswn/control03.ctl'
*.undo_management = AUTO
*.undo_tablespace = UNDOTBS1
*.db_name = SRCSWN
*.db_block_size = 8192
*.sga_max_size = 262144000
*.sga_target = 262144000


3. Start the instance

# sqlplus / as sysdba
SQL> startup nomount

4. Create Database

create database SRCSWN
logfile group 1 ('/export/home/shawon/oradata/srcswn/redo1.log') size 100M,
group 2 ('/export/home/shawon/oradata/srcswn/redo2.log') size 100M,
group 3 ('/export/home/shawon/oradata/srcswn/redo3.log') size 100M
character set WE8ISO8859P1
national character set utf8
datafile '/export/home/shawon/oradata/srcswn/system.dbf' size 500M autoextend on next 10M maxsize unlimited extent management local
sysaux datafile '/export/home/shawon/oradata/srcswn/sysaux.dbf' size 100M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '/export/home/shawon/oradata/srcswn/undotbs1.dbf' size 100M
default temporary tablespace temp tempfile '/export/home/shawon/oradata/srcswn/temp01.dbf' size 100M;


5. Run catalog and catproc scripts

@$ORACLE_HOME/rdbms/admin/catalog.sql
@$ORACLE_HOME/rdbms/admin/catproc.sql

6. Change sys and system password

alter user sys identified by *****;
alter user system identified by *****;

One helpful URL here -
http://www.dba-oracle.com/oracle_create_database.htm#extreme_minimal_create_database

Creating User Account in Linux

I needed to create an OS user so that I could create a separate database to experiment with Oracle Streams.

# useradd -g dba -d /export/home/shawon -m -k /etc/skel -s /bin/bash shawon
# passwd shawon

Parameters
-g primary user group
-d home directory
-m create home directory
-k skeleton information (such as .profile) that can be copied into a new user's home directory.
-s user shell

To modify user
# usermod -G oinstall shawon

Sep 5, 2007

Oracle Streams

We have production databases in multiple geographical locations, and we have to keep our system available 24x7; that's why real-time, up-to-date data is necessary across the multiple databases. Also, for load balancing, we have multiple web servers and each web server points to a different database - for seamless user transitions, we need all the databases synchronized.

We started data replication with Quest SharePlex software. As our requirements changed and we started using Oracle Enterprise Edition, it's time to migrate to Oracle's own technologies like Oracle Streams.

Oracle Streams captures database changes at a source database, stages the changes, propagates the changes to one or more destination databases, and then applies the changes at the destination database(s).

Oracle Streams begins by capturing changes. The changes that take place in a database are recorded in the database redo log files. The Streams capture process extracts these changes from the redo log files and formats each change into a logical change record (LCR). The LCRs are then stored in a queue (staged). Next, Streams propagates LCRs from one queue (the producer queue) to another (the consumer queue) and can then apply the LCRs from the consumer queue to the destination database.

Over the next few days, I will be learning about and experimenting with Oracle Streams. Some related articles are-

http://download.oracle.com/docs/cd/B19306_01/server.102/b14229/title.htm
http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html

Why do people write blogs?

There may be several reasons for writing blogs. For me, I want to keep track of what I have learnt. To make things stick in memory, I think blogging might help. For future reference and searching, it could be a good way. And it definitely enriches language skills.
There may be several reasons for writing blogs. For me, I want to keep track what I have learnt. To make things permanent in memory, I think blogging might help. For future reference and searching, it could be a good way. And definitely it enriches language skills.