Sep 11, 2007

Streams continuing......

Setting up Streams

Today I was experimenting with the replication process. If we can finally adopt Oracle Streams, it will save a huge amount of money ($10K - $20K) since we won't have to buy a third-party product (like SharePlex) for replication.

I was working with the following two databases as created earlier.

Source database: SRCSWN
Destination database: DSTSWN

I followed these steps-

1. Set up ARCHIVELOG mode
The source database must run in ARCHIVELOG mode.

SQL> archive log list
SQL> ALTER SYSTEM SET log_archive_dest ='/export/home/shawon/oradata/archivelog' SCOPE=both;
SQL> show parameter log_archive_dest
SQL> shutdown immediate
SQL> startup mount
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> archive log list
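
A quick sanity check that the mode switch really took (beyond archive log list) is to query v$database and force a log switch; this is optional and not part of the Streams setup itself:

SQL> SELECT log_mode FROM v$database;
SQL> -- force a switch so an archived log gets written to the new destination
SQL> ALTER SYSTEM SWITCH LOGFILE;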

2. Create tablespace for stream admin

The Streams administrator creates some tables in its default tablespace. We should specify a tablespace other than
SYSTEM for this purpose.

SQL> CREATE TABLESPACE streams_tbs
DATAFILE '/export/home/shawon/oradata/srcswn/streams_tbs01.dbf' SIZE 20M;

3. Create the Streams administrator user

The Streams environment is managed by an administrative database user.

CREATE USER streamadmin IDENTIFIED BY streamadmin DEFAULT TABLESPACE streams_tbs TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON streams_tbs;

GRANT CONNECT, RESOURCE, DBA TO streamadmin;

BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'streamadmin',
grant_privileges => true);
END;
/

-- NOT REQUIRED FOR STREAMING BUT MIGHT BE HELPFUL FOR ADMINISTRATION

GRANT SELECT_CATALOG_ROLE TO streamadmin;
GRANT SELECT ANY DICTIONARY TO streamadmin;
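
To see what GRANT_ADMIN_PRIVILEGE actually recorded, there is a dictionary view for it (in 10.2 at least); purely optional:

SQL> -- users registered as Streams administrators
SQL> SELECT username, local_privileges, access_from_remote FROM dba_streams_administrator;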

I created the Streams admin account in the destination database using the same process.

4. Set up some important parameters

GLOBAL_NAMES - This parameter must be set to TRUE in both the source and the destination databases. With GLOBAL_NAMES set to TRUE, a database link name must match the global name of the database it points to, which affects how the link is created later.
JOB_QUEUE_PROCESSES - This parameter must be set to 2 or higher in the source database.
STREAMS_POOL_SIZE - The default value for this parameter is 0, which means Streams takes its memory from the shared pool instead. If STREAMS_POOL_SIZE is left at its default value of 0, Streams can use up to 10 percent of SHARED_POOL_SIZE. Oracle recommends setting STREAMS_POOL_SIZE to at least 200MB.

SQL> show parameter global_names
SQL> show parameter job_queue_processes
SQL> show parameter streams_pool_size

SQL> alter system set global_names=true scope=both;
SQL> alter system set JOB_QUEUE_PROCESSES=2 scope=both;

I left streams_pool_size at its default since this was just a test run.
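
If this were going to production I would follow the 200MB recommendation instead; something along these lines should do it (the exact size depends on how much SGA is available on the box):

SQL> -- give Streams its own pool instead of borrowing from the shared pool
SQL> ALTER SYSTEM SET streams_pool_size=200M SCOPE=both;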

5. Create a database link

I had to create a database link from the source database to the destination database. I connected to the destination database to see the value of the service_names parameter:

SQL> show parameter service_name
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      DSTSWN

In Source database:

SQL> conn streamadmin/streamadmin
SQL> CREATE DATABASE LINK DSTSWN CONNECT TO streamadmin IDENTIFIED BY streamadmin USING 'DSTSWN_CON_STR';

6. Put the connection string in tnsnames.ora

I used the following command from the shell to find the network admin directory where tnsnames.ora lives
-bash-3.00$ lsnrctl status
It shows that the file is in this location
/oracle/app/oracle/product/10.2.0/db_1/network/admin/tnsnames.ora
I edited the file and added an entry for the destination database

DSTSWN_CON_STR =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = terminus)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DSTSWN)
)
)

Now I checked from the source database to make sure the db link was working properly-

SQL> conn streamadmin/streamadmin@DSTSWN_CON_STR
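
The connect test above is enough, but running a query over the link is another quick confirmation. Since GLOBAL_NAMES is TRUE, the link name has to resolve to the remote global name, so this checks that too:

SQL> -- should come back as DSTSWN (possibly with a domain suffix)
SQL> SELECT * FROM global_name@DSTSWN;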

7. Set up source and destination queues

The data moves from the source to the destination database through queues. I used the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to set up the queues. By default, this procedure creates a queue table named streams_queue_table and a queue named streams_queue. We can override these names through the queue_table and queue_name parameters of SET_UP_QUEUE.

SQL> conn streamadmin/streamadmin
SQL> EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
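
I took the defaults, but spelled out explicitly (the same defaults, just written down) the call would look something like this:

SQL> BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'streamadmin.streams_queue_table',
queue_name => 'streamadmin.streams_queue',
queue_user => 'streamadmin');
END;
/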

8. Create schema and objects to test replication

In source database-
SQL> create user testsrc identified by testsrc default tablespace users;
SQL> grant connect, resource to testsrc;

SQL> conn testsrc/testsrc

SQL> create table emp (
id number(10),
name varchar2(64),
comments varchar2(255),
primary key (id)
);

9. Set up supplemental logging at the source database

Before I started capturing changes at the source database, I had to add supplemental logging on the tables being changed. Supplemental logging puts additional information in the redo logs that helps during the apply process.

SQL> conn testsrc/testsrc
SQL> ALTER TABLE emp ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
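
As a check, the new supplemental log groups show up in the dictionary. I looked as streamadmin since it has SELECT ANY DICTIONARY:

SQL> conn streamadmin/streamadmin
SQL> SELECT log_group_name, log_group_type FROM dba_log_groups WHERE owner = 'TESTSRC' AND table_name = 'EMP';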

10. Configure the capture process at the source database

Create a capture process to extract the changes from the redo logs.
While creating a capture process, you add rules to specify which changes to capture and which ones to discard. The rules are combined into rule sets. The capture process can have a positive rule set and a negative rule set.

SQL> conn streamadmin/streamadmin
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'testsrc.emp',
streams_type => 'capture',
streams_name => 'capture_stream',
queue_name => 'streamadmin.streams_queue',
include_dml => true,
include_ddl => true,
inclusion_rule => true);
END;
/
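
To confirm the capture process got created (it stays DISABLED until it is started in step 15), the dictionary shows it:

SQL> SELECT capture_name, queue_name, status FROM dba_capture;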

11. Configure the propagation process

I had to create a propagation process and associate the source queue with the destination queue.

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'testsrc.emp',
streams_name => 'SRCSWN_TO_DSTSWN',
source_queue_name => 'streamadmin.streams_queue',
destination_queue_name => 'streamadmin.streams_queue@DSTSWN',
include_dml => true,
include_ddl => true,
source_database => 'SRCSWN',
inclusion_rule => true);
END;
/
--To drop a propagation process (the name must match the one created above)
EXEC DBMS_PROPAGATION_ADM.DROP_PROPAGATION('SRCSWN_TO_DSTSWN');
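
The propagation itself can also be checked from the dictionary (if I remember right the STATUS column is there in 10.2):

SQL> SELECT propagation_name, destination_dblink, status FROM dba_propagation;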

12. Create the destination schema and objects

SQL> CREATE TABLESPACE users
DATAFILE '/export/home/shawon/oradata/dstswn/users_tbs01.dbf' SIZE 20M;

SQL> create user testsrc identified by testdst default tablespace users;
SQL> grant connect, resource to testsrc;
SQL> conn testsrc/testdst
SQL> create table emp (
id number(10),
name varchar2(64),
comments varchar2(255),
primary key (id)
);

Still in the destination database, as SYSDBA-

SQL> conn / as sysdba
SQL> GRANT ALL ON testsrc.emp TO streamadmin;

Note: If the destination schema name is different from the source schema name, this setup does not work!

13. Set the instantiation System Change Number (SCN)

I needed to set the instantiation SCN for the table I want to replicate from the source database. This ensures that changes to the source table captured before the instantiation SCN was set will not be applied at the destination database.

DECLARE
source_scn NUMBER;
BEGIN
source_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@DSTSWN(
source_object_name => 'testsrc.emp',
source_database_name =>'SRCSWN' ,
instantiation_scn => source_scn);
END;
/
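
To double-check that the instantiation SCN was recorded on the other side, I could query the destination as streamadmin:

SQL> SELECT source_object_owner, source_object_name, instantiation_scn FROM dba_apply_instantiated_objects;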

14. Configure the apply process at the destination database

I created an apply process and associated the destination queue with it. I also added rules for the apply process.

In destination database-
SQL> conn streamadmin/streamadmin
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'testsrc.emp',
streams_type => 'apply',
streams_name => 'apply_stream',
queue_name => 'streamadmin.streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'SRCSWN',
inclusion_rule => true);
END;
/
15. Start the capture and apply processes

In source database-

BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_stream');
END;
/

In destination database-

To start the apply process, I connected to the destination database. However, before starting it, I set the disable_on_error parameter of the apply process to n, so that the apply process keeps running even if it hits some errors (as best practice recommends).

BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'apply_stream',
parameter => 'disable_on_error',
value => 'n');
END;
/

BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_stream');
END;
/
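
Before calling it done, a quick status check on both processes does not hurt; capture on the source, apply on the destination, and any apply errors end up in dba_apply_error:

SQL> -- on the source
SQL> SELECT capture_name, status FROM dba_capture;
SQL> -- on the destination
SQL> SELECT apply_name, status FROM dba_apply;
SQL> SELECT apply_name, error_message FROM dba_apply_error;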

**Done**

To test replication from the source database to the destination database, I did the following.

In source database-

SQL> insert into emp values (1,'Shawon','Employee');
1 row created.
SQL> insert into emp values (2,'Anik','Boss');
1 row created.
SQL> commit;
Commit complete.

From destination database-

SQL> select * from emp;

        ID NAME                 COMMENTS
---------- -------------------- --------------------
         1 Shawon               Employee
         2 Anik                 Boss

YES!! REPLICATED!!!

