Mar 29, 2009

Oracle Technology Solutions - go beyond the database server!

Oracle changed its identity!

For years, there was a perception that the words "Oracle" and "Oracle Database" could be used interchangeably. This is not the case anymore - we should remember that :-)

Today, I attended a whole-day seminar organized for the Oracle partners in Bangladesh, where three people from Oracle Corporation conducted five back-to-back sessions.

The sessions were based on the following five main product lines -

1. Oracle Technology Infrastructure
2. Oracle Fusion Middleware
3. Enterprise Security
4. Business Process Management
5. Oracle Enterprise Management Solutions

Mr. Jonathan Tan, Senior Solutions Manager, Oracle Fusion Middleware, conducted the session on middleware architectures and the various product lines Oracle provides. He also covered how the Business Process Management products fit into the enterprise.

Mr. Mazhar Ali, Solutions Manager, SAGE West, covered a number of topics on Oracle Technology Infrastructure, Enterprise Security and Oracle Enterprise Management Solutions. The sessions were very informative and gave a very good sense of the variety of product lines for enterprise solutions.

I found a number of interesting things to look at further -

* Information Lifecycle Management (ILM)
* Information Rights Management (IRM)
* Application Diagnostics for Java (AD4J)
* Application Testing Suite (ATS)
* SQL Performance Analyzer (SPA)
* Oracle Real Application Testing
* Oracle Coherence

And a lot more! So, Oracle is not just a database provider - it has products from the bottom to the top of the enterprise stack.

More information is available on Oracle's site -

http://www.oracle.com/

Mar 21, 2009

Oracle User Profile and Resource Management

Sometimes we don't notice that there is something called a 'user profile' that is used to manage user resources and other important things. In simple words, a user profile is a named set of resource and password limits assigned to a user.

A good use of profiles is password management for users.

What is the profile assigned to a user?

When we create a database user, the default profile is assigned automatically.

SQL> SELECT x.username, x.profile
2 FROM dba_users x
3 WHERE username='SCOTT'
4 ORDER BY x.username;

USERNAME PROFILE
------------- -----------------------------------------------
SCOTT DEFAULT

For example, the profile controls how many times a database user/schema can attempt to log in with an incorrect password before the account gets locked.
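
As a quick sketch of that idea (the limit of 3 attempts below is just an illustration, not a recommendation) -

SQL> ALTER PROFILE DEFAULT LIMIT FAILED_LOGIN_ATTEMPTS 3;

--if the account gets locked after too many failed attempts, unlock it like this
SQL> ALTER USER SCOTT ACCOUNT UNLOCK;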

What is in the default profile?

SQL> SELECT p.resource_name, p.limit
2 FROM dba_users u, dba_profiles p
3 WHERE u.profile=p.profile
4 and u.username = 'SCOTT'
5 ORDER BY p.resource_name;


RESOURCE_NAME LIMIT
--------------------- ---------------------------
COMPOSITE_LIMIT UNLIMITED
CONNECT_TIME UNLIMITED
CPU_PER_CALL UNLIMITED
CPU_PER_SESSION UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
IDLE_TIME UNLIMITED
LOGICAL_READS_PER_CALL UNLIMITED
LOGICAL_READS_PER_SESSION UNLIMITED
PASSWORD_GRACE_TIME UNLIMITED
PASSWORD_LIFE_TIME UNLIMITED
PASSWORD_LOCK_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_VERIFY_FUNCTION NULL
PRIVATE_SGA UNLIMITED
SESSIONS_PER_USER UNLIMITED


How to create a profile?

SQL> CREATE PROFILE TEMP_10_DAYS_USER LIMIT PASSWORD_LIFE_TIME 10 SESSIONS_PER_USER 5;

If this new profile is assigned to the 'SCOTT' user, the password will expire after 10 days and the user will be able to open at most 5 concurrent sessions.

SQL> ALTER USER SCOTT PROFILE TEMP_10_DAYS_USER;

How to change a profile?

SQL> ALTER PROFILE TEMP_10_DAYS_USER LIMIT PASSWORD_LIFE_TIME 10 SESSIONS_PER_USER 10;

The detailed Oracle documentation can be found here -

http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6010.htm
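
One caveat worth remembering: the kernel resource limits in a profile (such as SESSIONS_PER_USER) are only enforced when the RESOURCE_LIMIT initialization parameter is TRUE; the password limits are always enforced. A minimal sketch -

--resource limits in profiles take effect only after this
SQL> ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;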

Mar 17, 2009

Oracle Exadata Servers - for high-performance database computing

Last week I had an opportunity to meet and talk with one of the Oracle gurus, a top-level Oracle professional: Mr. Christopher G. Chelliah, Senior Director & Chief Architect, Asia Pacific Strategic Customers. He was visiting Dhaka on a business trip, I think.

Mr. Christopher gave a nice overview of Exadata - the high-performance storage servers. He described the architectural design and some fringe benefits of the new product. I went there on behalf of another company (a sister concern of my present company) - we work on a national project, and we were evaluating high-performance servers for the probable National ID Databases.

Exadata is a complete hardware and software solution that Oracle targets at high-performance data warehouses. It covers both the server and the storage.

There are basically two components of Exadata.

1. The Server
2. The Storage

The server side is a clustered RAC configured with multiple nodes - 8 nodes in our case, I think. The storage consists primarily of 16 cells; together these build the storage system, which is basically another 16 database servers altogether!

The cool feature is that the database intelligence is distributed across both the server side and the storage. Most of the data filtering is done on the storage, and only the actual results are sent to the server. This offloads unnecessary data transfer overhead from the network and also means less processing for the actual server.

Some fringe benefits/features I noted are -

* Query predicate offloading feature
* Smart scan processing on storage server
* Reduced load on database servers - processing is distributed to the storage side
* Optimized I/O and disk functionality
* Reliable hardware
* RAC with 8 nodes cluster for high performance and reliability
* High speed up and down link - 1 Gbps I think
* ASM for transparent storage management

And a lot more!

The technical details can be found on Oracle's site -

http://www.oracle.com/database/exadata.html

Mar 16, 2009

LogMiner to analyze online or archived redo logs

We had a situation - yes, without a situation, why would I dig through archive logs! Log mining is not my hobby :-)

I had to find out what had been going on in the database with the nightly cron scheduler on a particular day. I looked at the archive logs to get answers to all my questions - LogMiner made my life easy!


What is LogMiner?

Anyone can easily guess: it is a tool, or rather one of the capabilities provided by Oracle, to look at online redo logs or old archive logs. These logs keep a history of the activities performed in the database. We can find all the activities that happened to the database, whether by the application users or by the system itself - amazing! After initiating LogMiner, we can simply query some database views through the SQL interface to find the desired information.

It can be used as a data audit tool or for any other sophisticated data analysis.

It's nice to go through the LogMiner benefits -

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm#i1005606


Prerequisites

Supplemental logging must be enabled before the redo/archive logs are generated - this option puts additional information into those logs, which LogMiner analyzes later.

SQL>ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL>SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

The second query should return YES once minimal supplemental logging is enabled.

How do I do that?

We need to do all this as the SYS user. Otherwise, some special grants are required explicitly - the EXECUTE_CATALOG_ROLE role and the SELECT ANY TRANSACTION privilege.

Step-1: Add logfile(s)

This procedure adds an archive log file as a log mining candidate from which we might extract information.

SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE('C:\oracle\product\10.2.0\flash_recovery_area\SHAWON_D\ARCHIVELOG\2009_03_15\O1_MF_1_10_4VRZ1DT4_.ARC', DBMS_LOGMNR.ADDFILE);

We could add as many as we want in the above way.
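
A minimal sketch of adding more than one file (the paths here are hypothetical; DBMS_LOGMNR.NEW starts a fresh file list, while DBMS_LOGMNR.ADDFILE appends to it) -

SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE('C:\archivelogs\log_1.arc', DBMS_LOGMNR.NEW);
SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE('C:\archivelogs\log_2.arc', DBMS_LOGMNR.ADDFILE);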

Step-2: Start LogMiner with data dictionary information

LogMiner requires data dictionary information to translate the object IDs (kept in redo/archive logs) into object names when it returns data as part of the analysis.

The dictionary options are -

1. Using the Online Catalog
2. Extracting a LogMiner Dictionary to the Redo Log Files
3. Extracting the LogMiner Dictionary to a Flat File

I used the online catalog option, as I could use the database during off-peak hours for log analysis.

SQL>EXECUTE DBMS_LOGMNR.START_LOGMNR( -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
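
If we only care about a window of time, START_LOGMNR also accepts STARTTIME and ENDTIME parameters (the timestamps below are placeholders for illustration) -

SQL>EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTTIME => TO_DATE('15-Mar-2009 09:00:00', 'DD-Mon-YYYY HH24:MI:SS'), -
ENDTIME => TO_DATE('15-Mar-2009 10:00:00', 'DD-Mon-YYYY HH24:MI:SS'), -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);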

Step-3: Query LogMiner view to retrieve desired information

The main source of data is V$LOGMNR_CONTENTS. I just described the view and queried it as I wanted.

For example, I wanted to know all the operations performed by a user within a specified period of time -

SELECT OPERATION, SQL_REDO, SQL_UNDO, TIMESTAMP
FROM V$LOGMNR_CONTENTS
WHERE USERNAME = 'TEST'
AND TIMESTAMP
BETWEEN TO_DATE('03-15-2009 09:40:00 am','mm-dd-yyyy hh:mi:ss am')
AND TO_DATE('03-15-2009 09:50:00 am','mm-dd-yyyy hh:mi:ss am')
ORDER BY TIMESTAMP;
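
V$LOGMNR_CONTENTS has other useful columns too, such as SEG_OWNER and SEG_NAME. For instance, a quick summary of which segments a schema touched and how (a sketch; 'TEST' is just the schema from the example above) -

SELECT SEG_OWNER, SEG_NAME, OPERATION, COUNT(*) CNT
FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER = 'TEST'
GROUP BY SEG_OWNER, SEG_NAME, OPERATION
ORDER BY CNT DESC;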


Step-4: Close LogMiner

SQL>EXECUTE DBMS_LOGMNR.END_LOGMNR;

That's it! You may or may not close it - without closing the previous one, we can start another LogMiner session.


I used this detailed Oracle doc during my activities -

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm

Mar 12, 2009

Oracle: SQL_TRACE & TKPROF Usage

I keep seeing the fringe benefits of writing blogs. In my childhood days, teachers used to say that writing something once is better than reading it ten times! That is the kind of valuable statement I realize again and again.

I did use SQL_TRACE and TKPROF earlier - maybe three years ago. But last week, I had to look at the Oracle docs again to refresh my memory! This time I am putting down what I did, so that next time, if needed, I won't have to spend much time on it :-)

What is SQL_TRACE?

Simply put, it is a dynamic parameter that can be set at the session or system level to enable detailed tracing of database activities.

What is TKPROF?

Ha! This was a common question I asked many times in interviews in my earlier days. It is a command-line tool supplied by Oracle to format tracing output. That's it!

Why do SQL_TRACE & TKPROF come together?

TKPROF has made our life easier in understanding the contents of a trace file. When we enable tracing - I mean when we set SQL_TRACE=TRUE - Oracle collects good stuff for us, but it is not formatted well for retrieving useful information. TKPROF makes the output readable.

The SQL Trace facility and TKPROF are two basic performance diagnostic tools used to analyze query and server performance. Nowadays, I use other tools (AWR, ADDM etc.) more than these for performance analysis. Actually, I used SQL_TRACE this time to diagnose a BLOB problem - I was interested in seeing some information when BLOBs got inserted into the database using JDBC, anyway.


Considerations before using SQL_TRACE

We need to check the following 3 parameters before enabling SQL tracing.

1. TIMED_STATISTICS -> Enables and disables the collection of timed statistics, such as CPU and elapsed times.

2. MAX_DUMP_FILE_SIZE -> When the SQL Trace facility is enabled at the instance level, every call to the server produces a text line in a file in the operating system's file format. This parameter controls the maximum size of that file.

3. USER_DUMP_DEST -> Determines the destination directory for the trace files.
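
A minimal sketch of checking and setting these from SQL*Plus (SHOW PARAMETER and ALTER SESSION are standard syntax; enable timed statistics before tracing so the reports include timings) -

SQL> SHOW PARAMETER timed_statistics
SQL> SHOW PARAMETER user_dump_dest

SQL> ALTER SESSION SET TIMED_STATISTICS = TRUE;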

Coming to the point - how do we enable this?

--session level
ALTER SESSION SET SQL_TRACE = TRUE;

--system level
ALTER SYSTEM SET SQL_TRACE = TRUE;
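
The supplied packages can be used instead of the parameter - DBMS_SESSION for your own session and, in 10g, DBMS_MONITOR for another session (the SID and serial# below are placeholders) -

--trace my own session
SQL> EXECUTE DBMS_SESSION.SET_SQL_TRACE(TRUE);

--trace another session by SID and serial# (10g)
SQL> EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456);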

How to use TKPROF?

In its simplest form, TKPROF takes the source trace file and then the name of the output file it will generate.

# tkprof trace_file output_file_name

On Windows, the command would be (running from the command prompt) -

C:\Users\shawon>tkprof C:\oracle\product\10.2.0\admin\shawon_db\udump\shawon_ora_4288.trc C:\oracle\product\10.2.0\admin\shawon_db\udump\tmp.out
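
TKPROF also takes a few handy options - a sketch using real TKPROF flags (sys=no skips recursive SYS statements, and this sort order lists the statements with the highest parse/execute/fetch elapsed times first) -

# tkprof shawon_ora_4288.trc tmp.out sys=no sort=prsela,exeela,fchela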


The detailed documentation can be found here -

http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/sqltrace.htm#8723

Mar 11, 2009

Date conversion according to timezone

We often need to generate reports from the database. Usually dates are stored in the database in a specific timezone - preferably the server timezone. When we retrieve that date-related data, we have to convert it to the user's timezone to make the date information meaningful.

We can easily do that using the date function NEW_TIME(). We need the source/base timezone and the desired timezone we want to convert to.

For example, if I want to convert a date which is stored in 'US/Eastern' timezone to user's timezone 'US/Mountain' - I could do it easily in the following ways -

--To look up timezone names and abbreviations (note that NEW_TIME accepts only a fixed set of abbreviations such as EST, CST, MST and PST).
SELECT * FROM v$timezone_names;

Example - 1:

--US/Eastern to US/Mountain
SELECT TO_CHAR(REPORT_DATE, 'DD-MON-YY HH:MI:SS AM') ORIGINAL_EST,
       TO_CHAR(NEW_TIME(REPORT_DATE, 'EST','MST'), 'DD-MON-YY HH:MI:SS AM') "Eastern to Mountain"
FROM USER_REPORTS
WHERE rownum <4;

Example - 2:

--sysdate conversion, Central to Pacific
SELECT TO_CHAR(NEW_TIME(sysdate, 'CDT','PDT'), 'DD-MON-YY HH:MI:SS AM') "Central to Pacific"
FROM DUAL;
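
As an aside, for region-based conversions using the full 'US/Eastern'-style names rather than abbreviations, the TIMESTAMP machinery can be used instead - a sketch combining FROM_TZ, CAST and AT TIME ZONE, which are standard Oracle SQL -

--sysdate interpreted as US/Central, converted to US/Pacific
SELECT FROM_TZ(CAST(SYSDATE AS TIMESTAMP), 'US/Central')
       AT TIME ZONE 'US/Pacific' "Central to Pacific"
FROM DUAL;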