We suspected some corruption in a data file. Here are a few quick things to check to make sure everything is alright -
1. Using Oracle DBVERIFY
# dbv file=/oradata2/DATAFILES_INDX_20100318/indx01_01.dbf
The output looks like this -
DBVERIFY - Verification complete
Total Pages Examined : 281344
Total Pages Processed (Data) : 217134
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 44048
Total Pages Failing (Index): 0
Total Pages Processed (Other): 11358
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 8804
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
Highest block SCN : 204553469 (0.204553469)
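dbv also takes START and END block numbers and a LOGFILE parameter, which helps when verifying only part of a very large file or when you want a record of the run (BLOCKSIZE is needed if the file belongs to a non-default block size tablespace). The block range and log file name below are only illustrative -
# dbv file=/oradata2/DATAFILES_INDX_20100318/indx01_01.dbf start=1 end=100000 logfile=dbv_indx01.log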
2. Using dictionary view -
SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;
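This view is populated mainly by RMAN checks (BACKUP ... VALIDATE, or VALIDATE DATABASE in 11g), so an empty result may simply mean nothing has validated the blocks yet. If it does return rows, the corrupt blocks can be mapped back to their owning segments - a sketch, assuming access to the DBA views -
SELECT e.owner, e.segment_name, e.segment_type, c.file#, c.block#
FROM v$database_block_corruption c, dba_extents e
WHERE e.file_id = c.file#
AND c.block# BETWEEN e.block_id AND e.block_id + e.blocks - 1;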
3. Running queries against the data file (tablespace) -
I analyzed the schema whose tables all sit on the single tablespace (and data file) we suspected. If there is any corrupted block, Oracle should fail while reading it during the process -
execute dbms_stats.gather_schema_stats(ownname => 'ASSISTDB', cascade=>true);
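Another option worth noting (not what I used here) is ANALYZE ... VALIDATE STRUCTURE, which reads and verifies every block of a table and, with CASCADE, of its indexes too. The table name below is just a placeholder -
ANALYZE TABLE assistdb.some_table VALIDATE STRUCTURE CASCADE;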
***
Oct 17, 2010
MySQL to Oracle data migration - resolving the mystery of 3 times more space usage in Oracle!
We have been migrating one big MySQL table (about 10 TB) into Oracle 11g R2. The table contains biometric data in 18 BLOB columns - essentially a storage table holding all biometric information as BLOBs. The average row length is about 140 KB and the table contains about 85 million records. Our rough estimate was that it might take about 14 TB of disk space after migration into Oracle, and we planned to put the table in a 32K block size tablespace.
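For what it is worth, the back-of-the-envelope arithmetic behind that estimate: 85,000,000 rows x ~140 KB per row comes to roughly 12 TB of raw data, so about 14 TB seemed reasonable once some headroom for segment overhead was allowed.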
After loading about 30 million records, we discovered that they had already consumed more than 11 TB of disk space! At that rate, we would end up consuming about 33 TB of disk space by the time all 85 million records were loaded!
When I came back from a nice vacation, I was assigned to solve the mystery. Ahh.. the situation was not good! I was fairly clueless at the beginning and looking for the right approach to debug the problem.
After playing with some real data, I found that, across the 18 BLOB columns (a sketch of a query that gives this kind of breakdown follows the list) -
* Some columns are < 4K
* Some of the BLOBs are > 4K but < 7K
* Some are > 8K but < 14K
* Some are > 20K
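A query along the lines of the following can produce that kind of breakdown; the wsq column is one of the real BLOB columns, but the size bands and the SAMPLE clause are only illustrative -
SELECT size_band, COUNT(*) AS num_lobs
FROM (SELECT CASE
             WHEN DBMS_LOB.GETLENGTH(wsq) < 4096 THEN 'under 4K'
             WHEN DBMS_LOB.GETLENGTH(wsq) < 8192 THEN '4K to 8K'
             WHEN DBMS_LOB.GETLENGTH(wsq) < 16384 THEN '8K to 16K'
             ELSE 'over 16K'
             END AS size_band
      FROM biometric SAMPLE (1)
      WHERE wsq IS NOT NULL)
GROUP BY size_band
ORDER BY size_band;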
Then I reorganized the table and moved the BLOB columns into tablespaces of different block sizes (8K, 16K, 32K) based on their data size. Out-of-line LOB data is allocated in whole chunks, and a chunk is a multiple of the tablespace block size, so a 5K BLOB in a 32K-block LOB segment still occupies a full 32K - that is where the space was being wasted, while BLOBs under roughly 4K can stay inline with the row and were not the problem. Earlier, the entire table was in the 32K block size tablespace. Before the reorganization the size was over 11.5 TB; it is now 6.6 TB! It took about 7 days (nights as well :-) ) to move all the LOB columns to their proper tablespaces.
Now the total size of the database should come in within 19 TB - not 33 TB!
Some important scripts I used -
SELECT l.column_name, s.bytes/1024 KB, s.blocks, s.extents
FROM user_segments s, user_lobs l
WHERE l.segment_name = s.segment_name
AND l.table_name = 'BIOMETRIC';
ALTER TABLE biometric MOVE LOB (wsq) STORE AS (tablespace BMDATA_16K CHUNK 16K pctversion 3);
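One operational note - since ALTER TABLE ... MOVE relocates the table segment and changes rowids, the table's indexes typically go UNUSABLE and have to be rebuilt afterwards. The index name below is just a placeholder -
ALTER INDEX biometric_pk_idx REBUILD;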
Good lessons here -
* We should have tested with some real data before the actual data loading
* We should have checked disk usage within a day of starting the data load
---