DBMS_REDEFINITION, clustering and how an outline helps to make it completely ONLINE

At last a new post from my hand after months of silence. Well, silence in writing then: in the meantime we had our third child and I participated in the World Championships Double Decathlon in Delft. Those two things were enough to handle next to my regular work 😉

In this post I want to share the good experience I had using DBMS_REDEFINITION to speed up the Siebel OLTP application I am responsible for (at least, for the performance part of it). It is about clustering data, rebuilding tables as ONLINE as can be, and how I used a trick with a stored outline to overcome a performance problem during the rebuild of one table.

In every application there are certain tables that are queried heavily. Some applications have tables that are used in almost every query. A lot of the time the access path into these tables is concentrated around one or two columns, or at least a few “important” columns. In Siebel, for example, there is a table called S_POSTN_CON, which stores links between Contacts (Professionals) and Positions. A Sales Representative for a certain product can be a “position” and a District Manager is another “position”. The Contacts they are allowed to ‘see’ (visibility) are defined by assignment rules, and these assignment rules are the driving force behind the contents of the S_POSTN_CON table and its counterpart for Organizations (Accounts), S_ACCNT_POSTN (don’t you love such “standardization”; why not S_POSTN_ACCNT?). A great many queries join S_POSTN_CON or S_ACCNT_POSTN with other tables to restrict the Contacts and Accounts the logged-in user is allowed to see. These tables are always accessed via the POSTN_ID column, which holds the Siebel ID of the logged-in user’s position.

A lot has been written already on the topic of data clustering in Oracle and why it is beneficial from a performance perspective (for example Jonathan Lewis: http://jonathanlewis.wordpress.com/2007/05/17/data-cluster/). It touches on things like the clustering factor of indexes and the db file sequential read wait event. In short: it really helps if the data you are after is clustered in a few Oracle blocks instead of spread over many blocks, especially when an index is used to get the data from the table. That is because the most expensive part of using an index is usually not the index access itself, but visiting the table to get the data blocks containing the rows identified by the index (provided that the data needed cannot be retrieved from the index alone, which is another good thing to look at). The more distinct blocks that need to be retrieved (using single block reads), the longer the execution time.
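As an aside (a sketch of my own, not from the original analysis): a quick way to gauge how well table data is clustered around an index is to compare the index’s clustering factor to the number of table blocks and rows.

-- A clustering factor close to the number of table blocks is good;
-- one close to the number of rows means the rows are scattered.
SELECT i.index_name
,      i.clustering_factor
,      t.blocks
,      t.num_rows
FROM   all_indexes i
JOIN   all_tables  t ON t.owner = i.table_owner AND t.table_name = i.table_name
WHERE  i.table_owner = 'SIEBEL'
AND    i.table_name  = 'S_POSTN_CON';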

So with that in mind, we started looking at the ‘key’ tables (the heavily used tables) in the Siebel OLTP application and at their most common access paths. As said, for S_POSTN_CON and S_ACCNT_POSTN it was definitely the POSTN_ID column. And, for example, the S_CONTACT_XM table was accessed mostly via the PAR_ROW_ID column (the link between the S_CONTACT_XM records and the S_CONTACT table, which stores the Contacts/Professionals) and the TYPE column.

What we did was rebuild these tables, pre-sorted on the access path columns. That way Oracle stores the table data sorted (initially at least; the order is not maintained when new records enter the table), which means related data is stored in as few Oracle blocks as possible. The effect: when S_POSTN_CON is queried for all the records for a given POSTN_ID, Oracle uses the index on S_POSTN_CON to get the rowids for the records with that POSTN_ID and only needs to fetch one, or at most two or three, table blocks for all the records, instead of the 20 or more blocks it needed before the rebuild.

The results were significant. Many queries became 50% faster, or even a lot more, after the rebuild of in total 7 key tables. It sometimes led to different execution plans, because the clustering factor of certain indexes became much more attractive. It was mainly after this action that the entire system constantly stayed at a performance level of more than 99% of screens within 3 seconds; before that, we hardly ever reached 99%. And even though the ordering is not maintained, a lot of data is stable yet queried over and over again, so those parts of the table more or less stay ‘ordered’. For some tables we will repeat the ordering process periodically. There are other ways to maintain clustering within a table, like partitioning, IOTs, sorted hash clusters, etc., but none of these apply to a Siebel OLTP application: partitioning requires expensive licenses on a system with 16 CPUs, and you have to involve Siebel support to get partitioning enabled in OLTP.

But how do you rebuild a table and pre-sort its data in an always up-and-running application? This is where DBMS_REDEFINITION proves very useful. This package, available since 9i but much more powerful in 10g, makes it possible to completely redefine tables online, with only a brief locking moment at the end. I have come to love this built-in package.

With DBMS_REDEFINITION, you can:

• rebuild a table because you want to reorganize it (without using ALTER TABLE MOVE, which is not entirely online; in particular, indexes are left unusable after the MOVE and need a rebuild)
• alter the structure of a table: add columns, drop columns, change data types
• alter its physical organization: convert a heap organized table into an IOT, convert to a partitioned table, etc.
• change the order of the records in the table

Here is an example of the code I executed to rebuild the Siebel S_ADDR_PER table:

-- Check if the table can be rebuilt at all:
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SIEBEL', 'S_ADDR_PER',
                     DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/

-- Create an intermediate table with the same structure, or
-- (if required) with structure changes

CREATE TABLE "SIEBEL"."BCK_S_S_ADDR_PER"
( "ROW_ID" VARCHAR2(15 CHAR) NOT NULL ENABLE
,"CREATED" DATE DEFAULT sysdate NOT NULL ENABLE
,"CREATED_BY" VARCHAR2(15 CHAR) NOT NULL ENABLE
,"LAST_UPD" DATE DEFAULT sysdate NOT NULL ENABLE
,"LAST_UPD_BY" VARCHAR2(15 CHAR) NOT NULL ENABLE
,"DCKING_NUM" NUMBER(22,7) DEFAULT 0
,"MODIFICATION_NUM" NUMBER(10,0) DEFAULT 0 NOT NULL ENABLE
,"CONFLICT_ID" VARCHAR2(15 CHAR) DEFAULT '0' NOT NULL ENABLE
,"ADDR_NAME" VARCHAR2(100 CHAR) DEFAULT 'x' NOT NULL ENABLE,
[...]
all remaining columns exactly matching the original S_ADDR_PER table
[...]
)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
TABLESPACE "DATA_TS";

-- The actual work: start the online redefinition

DECLARE
  l_num_errors NUMBER;
BEGIN
  -- This will build a materialized view log on the table to be rebuilt,
  -- to capture changes during the rebuild process.
  -- It will also copy all table data initially, ordered by "orderby_cols".
  SYS.DBMS_REDEFINITION.START_REDEF_TABLE('SIEBEL', 'S_ADDR_PER', 'BCK_S_S_ADDR_PER'
  , col_mapping => null
  , options_flag => SYS.DBMS_REDEFINITION.CONS_USE_ROWID
  , orderby_cols => 'x_bu_id, country, x_brick_id, zipcode, city');
  -- Copy all triggers/indexes/privileges/optionally constraints and statistics
  SYS.DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SIEBEL', 'S_ADDR_PER'
  , 'BCK_S_S_ADDR_PER'
  , copy_triggers=>TRUE
  , copy_constraints=>FALSE
  , copy_privileges=>TRUE
  , copy_indexes=>dbms_redefinition.cons_orig_params
  , copy_statistics=>FALSE
  , num_errors=>l_num_errors);
  SYS.DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SIEBEL', 'S_ADDR_PER'
  , 'BCK_S_S_ADDR_PER');
EXCEPTION
  WHEN OTHERS
  THEN
    DBMS_OUTPUT.put_line ('# of errors while copying dependent objects: '
                          || l_num_errors);
    RAISE;
END;
/

BEGIN
  DBMS_STATS.gather_table_stats (ownname => 'SIEBEL'
  , tabname => 'BCK_S_S_ADDR_PER'
  , estimate_percent => NULL
  , degree => 5
  , method_opt => 'FOR ALL COLUMNS SIZE 1'
  , cascade => TRUE);
END;
/

-- Finish the redefinition: switch the two tables, drop the materialized view log
BEGIN
  SYS.DBMS_REDEFINITION.FINISH_REDEF_TABLE('SIEBEL', 'S_ADDR_PER', 'BCK_S_S_ADDR_PER');
END;
/

-- For some reason, after redefinition, the indexes were set to NOLOGGING

ALTER INDEX SIEBEL.S_ADDR_PER_EI LOGGING;
[...]
--set all remaining indexes to LOGGING again
[...]
ALTER INDEX SIEBEL.S_ADDR_PER_F1_X LOGGING;

spool off
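Rather than hard-coding each index, a dictionary query can generate the ALTER statements (a small sketch of my own, not part of the original script):

SELECT 'ALTER INDEX SIEBEL.' || index_name || ' LOGGING;'
FROM   all_indexes
WHERE  table_owner = 'SIEBEL'
AND    table_name  = 'S_ADDR_PER'
AND    logging     = 'NO';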

The end result is an exact copy of S_ADDR_PER, but with data ‘clustered’ around the columns x_bu_id, country, x_brick_id, zipcode and city. The original table has now been renamed to the intermediate table’s name, and so have its indexes, triggers and other dependent objects.
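To verify the clustering effect, one can count the distinct table blocks holding the rows for a single value of the leading order-by column. A hypothetical check of my own (DBMS_ROWID is a standard package; the bind value is illustrative):

-- Few distinct blocks per X_BU_ID value means the data is well clustered.
-- The relative file number is included because block numbers are only
-- unique within a datafile.
SELECT COUNT(DISTINCT DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) || '.'
                   || DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)) AS blocks_used
FROM   siebel.s_addr_per
WHERE  x_bu_id = :some_bu_id;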

Internal SQL

During the execution of DBMS_REDEFINITION.START_REDEF_TABLE, an INSERT statement is executed that inserts the original table’s data into the intermediate table (with an ORDER BY, in our case). In one case (the rebuild of table S_CONTACT_XM) the execution plan Oracle used for this INSERT AS SELECT took an existing index as its driving index, because that index was defined on the very columns to order by. I could not drop that index or set it to UNUSABLE, because that would violate the “100% online” goal of the exercise. But the difference between a FULL table scan and the index-driven plan was 1 hour versus 7 hours, so I wanted to force a FULL table scan. That is worth tuning, but how? I tried optimizer parameters at the session level, but could not find one that eliminated the index. Whether that is down to my lack of knowledge or there really is no magic optimizer parameter to set here, I ended up using a trick with a stored outline to force the FULL table scan.
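As an aside: a stored outline is matched against the statement text, so the internal INSERT is needed verbatim. One way to capture it (a sketch of my own; it assumes a trial redefinition that is subsequently cancelled with ABORT_REDEF_TABLE) is to look it up in the shared pool:

SELECT sql_id, sql_fulltext
FROM   v$sql
WHERE  sql_text LIKE 'INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND%'
AND    sql_text LIKE '%BCK_S_CONTACT_XM%';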

A few challenges there:

1. DBMS_REDEFINITION adds a column to the intermediate table, called m_row$$, of type VARCHAR2(15), which is needed to store the Oracle ROWID value of the original table’s rows. It uses this during the SYNC_INTERIM_TABLE calls to quickly find the records to update or delete. The internal INSERT statement inserts the ROWID into the intermediate table’s m_row$$ column. And that is where creating an outline on that internal statement fails: the statement cannot be parsed because the m_row$$ column does not exist yet!

2. It is a challenge to create an outline and add your own hints to it. I use a technique learned from colleagues to do that easily, even though it does not seem to be documented and I doubt whether it is supported. It works perfectly, however.

So here are the steps:

I temporarily add the m_row$$ column to the intermediate table myself:

ALTER TABLE "SIEBEL".bck_s_contact_xm ADD m_row$$ VARCHAR2(15);

And then I can create an outline:

DECLARE
  bq VARCHAR2 (32767); -- SQL text of the bad query (unhinted)
  gq VARCHAR2 (32767); -- SQL text of the good query (with FULL hint)
  outln_name_bq VARCHAR2 (30); -- Final outline name (for the corrected bad query)
  outln_name_gq VARCHAR2 (30); -- Temporary outline name
BEGIN
  outln_name_bq := 'REDEF_CONTACT_XM_ORDERED'; -- Final name
  outln_name_gq := 'REDEF_CONTACT_XM_ORDERED_GD'; -- Temporary name
  bq :=
    'INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND */
    INTO "SIEBEL"."BCK_S_CONTACT_XM"(M_ROW$$,"ROW_ID","CREATED",
    "CREATED_BY","LAST_UPD","LAST_UPD_BY", [...]
    SELECT "S_CONTACT_XM"."ROWID","S_CONTACT_XM"."ROW_ID",
    "S_CONTACT_XM"."CREATED",
    "S_CONTACT_XM"."CREATED_BY","S_CONTACT_XM"."LAST_UPD", [...]
    FROM "SIEBEL"."S_CONTACT_XM" "S_CONTACT_XM" ORDER BY par_row_id, type';

  gq :=
    'INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND */
    INTO "SIEBEL"."BCK_S_CONTACT_XM"(M_ROW$$,"ROW_ID","CREATED",
    "CREATED_BY","LAST_UPD","LAST_UPD_BY", [...]
    SELECT /*+ FULL(S_CONTACT_XM) */ "S_CONTACT_XM"."ROWID",
   "S_CONTACT_XM"."ROW_ID", "S_CONTACT_XM"."CREATED",
    "S_CONTACT_XM"."CREATED_BY","S_CONTACT_XM"."LAST_UPD", [...]
    FROM "SIEBEL"."S_CONTACT_XM" "S_CONTACT_XM" ORDER BY par_row_id, type';

  EXECUTE IMMEDIATE 'create outline "' || outln_name_bq || '" on ' || bq;
  EXECUTE IMMEDIATE 'create outline "' || outln_name_gq || '" on ' || gq;

  -- delete the hints from the bad outline
  DELETE FROM outln.ol$hints
  WHERE ol_name = outln_name_bq;

  -- rename the good hints so they apply to the bad outline
  UPDATE outln.ol$hints
     SET ol_name = outln_name_bq
   WHERE ol_name = outln_name_gq;

  -- update count of hints on the bad outline to that of the good one
  UPDATE outln.ol$
  SET hintcount = (SELECT hintcount
                     FROM outln.ol$
                    WHERE ol_name = outln_name_gq)
  WHERE ol_name = outln_name_bq;

  COMMIT;

  EXECUTE IMMEDIATE 'drop outline "' || outln_name_gq || '"';
END;
/
 

Now the outline is created, because the query can be parsed. And the ‘lucky’ thing is: I can drop that m_row$$ column again without the outline becoming invalid:

ALTER TABLE bck_s_contact_xm DROP COLUMN m_row$$;

Now the intermediate table is in the state it should be in, and I can continue with DBMS_REDEFINITION.START_REDEF_TABLE, which will add the m_row$$ column to the intermediate table again; the outline will then be used for the internal INSERT statement.
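One prerequisite the above relies on (worth verifying in your own environment): the session executing the redefinition must have outline usage enabled, otherwise the stored outline is silently ignored:

-- USE_STORED_OUTLINES is not an initialization parameter; it can only be
-- set at session or system level. TRUE activates outlines in the DEFAULT
-- category, which is where CREATE OUTLINE stores them unless told otherwise.
ALTER SESSION SET use_stored_outlines = TRUE;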

2 thoughts on “DBMS_REDEFINITION, clustering and how an outline helps to make it completely ONLINE”

  1. This is a very nice post. We have also used dbms_redefinition with Siebel. I wanted to add an update: the orderby_cols parameter is useful for ordering the data (for performance), but another way to avoid the index scan during the initial sync is to order by an additional column which doesn’t have an index (avoiding the need to set up an outline or a baseline).

    Given that LAST_UPD isn’t indexed: if you wanted to order by MEMBER_ID, you could specify the orderby_cols as MEMBER_ID, LAST_UPD.
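    A minimal sketch of that suggestion (MEMBER_ID and the table names are illustrative, not actual Siebel objects): with a non-indexed column appended to orderby_cols, no single index covers the complete ORDER BY, which steers the optimizer away from an index-driven copy.

    BEGIN
      SYS.DBMS_REDEFINITION.START_REDEF_TABLE('SIEBEL', 'S_CONTACT_XM', 'BCK_S_CONTACT_XM'
      , col_mapping  => NULL
      , options_flag => SYS.DBMS_REDEFINITION.CONS_USE_ROWID
      , orderby_cols => 'MEMBER_ID, LAST_UPD');
    END;
    /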
