Jul 21, 2008

Data pump export error ORA-39095 with FILESIZE and PARALLEL clauses

First of all, let me tell you one thing - if you are using the PARALLEL and FILESIZE clauses together and got this error, it's probably not your fault; rather, it could be because of the bug I am going to tell you about.

Oracle provides nice data pump features to move data around for large databases. Taking the whole database into a single dump causes problems in handling big files and managing disk space efficiently. With this good intention in mind, I tried to export our large data and objects in smaller chunks.

I issued the following command, which ended up with errors; I later discovered that it had triggered a bug -


expdp user/pass \
EXCLUDE=TABLE:"LIKE'ACTIVITY'" \
DUMPFILE=EXPDIR:prod-20080721_01%U.dmp,EXPDIR:prod-20080721_02%U.dmp \
FILESIZE=4G \
PARALLEL=2 \
JOB_NAME=PROD_20080721 \
LOGFILE=EXPDIR1:split-dump.log

This means - export the user's schema excluding the table ACTIVITY, create dump files of at most 4 GB each (the smaller size would help me move the data around), generate as many dump files as needed following the naming template, and perform the task in 2 parallel threads.
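By the way, EXPDIR here is an Oracle directory object, which must already exist (and be writable by the exporting user) before the job starts. As a minimal sketch - assuming a hypothetical filesystem path /u01/exports and running as a privileged account - it would be created like this:

SQL> CREATE DIRECTORY expdir AS '/u01/exports';
SQL> GRANT READ, WRITE ON DIRECTORY expdir TO user;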

It was supposed to generate files like the following -

prod-20080721_0101.dmp
prod-20080721_0201.dmp
prod-20080721_0102.dmp
prod-20080721_0202.dmp
...

and so forth as needed, because of the substitution variable %U, which indicates that multiple files (numbered 01 to 99) would be generated using the specified filename as a template.

Well, I thought that I would go and grab a cup of coffee, and by that time the export job would be completed. But the reality was different; just after a few minutes, I saw this error message on my console -

ERROR: ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Job "USER"."PROD_20080721" stopped due to fatal error at 00:37:29

What did I do?

Like others, I googled. Unfortunately, I was not satisfied with the documents about the error that I found at that time. All the available documents said that this error results only when the specified dump file size is not enough to export all the data - which was not the case for me!

Again with a fresh mind, I googled to understand the unexpected problem - but did not find any satisfactory explanation. There are lots of documents suggesting possible solutions for this error, but I did not find any case like mine. Interestingly, I noticed that in some blogs people are happy to give a solution without specifying the problem :-)

Anyway, that cause would have made sense if I had not mentioned the substitution variable %U. My dump size was about 30 GB, and since I mentioned 2 dump file templates with the substitution variable %U, the export process would have been able to generate 2 x 99 files of 4 GB each - that is, 2 x 99 x 4 GB = 792 GB of potential dump capacity for a roughly 30 GB export. So, the "ORA-39095: Dump file space has been exhausted" error did not make sense to me in this case. FYI, there were around 80 GB of free space left on the dump directory location - more than enough for the task.


Now it was time for the master attempt. I searched metalink.oracle.com for any possible bug related to this. Bingo! It was there!!

Here are a few lines from Metalink -

Symptoms

When trying to export using the PARALLEL clause with one dumpfile, or a number of dumpfiles less than the parallelism value, the export may fail with the following error:
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes

Cause


When using one dumpfile - or a number less than the parallelism value - several slave processes wait for the file locked by the other process to write, so we are not benefiting from the parallelism anyway. Sometimes the slave process locking the file does not release the lock after finishing; it is supposed to release the lock when the dump process ends, but the dump will not end because the other processes are waiting to write to the file.

Possible Solutions


1- Use a number of dumpfiles equal to, or more than the parallelism value.
2- Avoid using PARALLEL clause.
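As a side note, you can actually watch those slave (worker) sessions from the database while a job runs. A rough sketch, joining the standard views DBA_DATAPUMP_SESSIONS and V$SESSION on the session address (run it while the job is active):

SQL> SELECT s.sid, s.serial#, s.status, d.job_name
       FROM v$session s, dba_datapump_sessions d
      WHERE s.saddr = d.saddr;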


So, this was clearly the bug I was facing. I concluded the workarounds as follows -

* I have to mention more files - the number of files must be a bit more than the parallelism value. For example, if I mention parallelism 2, then I should mention 4 files with the substitution variable %U (see the sketch after this list)

* When the problem occurs, I could attach to the job, add dump files, and continue it

* I should not use the PARALLEL clause at all
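For the first workaround, here is a sketch of what the corrected command could look like (same hypothetical schema and EXPDIR directory as above; note the 4 %U templates against PARALLEL=2, and a new JOB_NAME since the old job may still exist):

expdp user/pass \
EXCLUDE=TABLE:\"LIKE \'ACTIVITY\'\" \
DUMPFILE=EXPDIR:prod-20080721_01%U.dmp,EXPDIR:prod-20080721_02%U.dmp,EXPDIR:prod-20080721_03%U.dmp,EXPDIR:prod-20080721_04%U.dmp \
FILESIZE=4G \
PARALLEL=2 \
JOB_NAME=PROD_20080721_V2 \
LOGFILE=EXPDIR:split-dump-v2.log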

To accomplish the half-done task, I went for the second option - adding files to the stopped job. The data pump feature allows you to restart a job and add additional files in the middle of it - a nice one! It involves a few simple commands -

1. I queried the database to find the job status -

SQL> select job_name, state from dba_datapump_jobs;
JOB_NAME                       STATE
------------------------------ ------------------------------
PROD_20080721                  NOT RUNNING

2. I had to attach to the job with the export client

# expdp user/pass attach=PROD_20080721

3. I added an additional file and resumed the job

Export> ADD_FILE=EXPDIR:prod-20080721_03%U.dmp
Export> CONTINUE_CLIENT
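By the way, while attached to a job you can check its state and per-worker progress at any time with the interactive STATUS command -

Export> STATUS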

However, I would recommend using the following simpler version of the export (without the PARALLEL clause) to avoid trouble. Since the single %U template can expand to at most 99 files of 3 GB each (99 x 3 GB = 297 GB), this would work for databases up to around 290 GB in size -

expdp user/pass \
exclude=TABLE:\"like\'ACTIVITY\'\" \
DUMPFILE=EXPDIR1:prod-20080721_%U.dmp \
FILESIZE=3G \
JOB_NAME=EXPORT_0721 \
LOGFILE=EXPDIR1:split-dump.log
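Once the job finishes, it is worth a quick sanity check from the OS shell that the dump files and log landed where expected. Assuming EXPDIR1 maps to a hypothetical path /u01/exports:

ls -lh /u01/exports/prod-20080721_*.dmp
tail -20 /u01/exports/split-dump.log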

That was the story of using Oracle expdp with the FILESIZE and PARALLEL clauses! Please let me know if this helps.

4 comments:

Anonymous said...

i'm facing the same problem without setting PARALLEL.. (e.g parallel=1).

Roger Serfling said...

Thank you so much for this article....I have been trying to figure this out all day but I wasn't able to find the metalink article that you had found.

Omar Faruq said...

Good to know that it is helpful for others!

Walter Nirenberg said...

Great blog Omar. I like your approach - understanding what the root cause of a problem is, rather looking for a quick fix. Keep up the great work.