Monday, June 25, 2012

Beware of (fast) Materialized Views created with DBLinks

Today I was looking at a particular instance's MLOGs (logs created to support Materialized Views with REFRESH FAST ON DEMAND) and noticed that the MVs consuming them were not located on this server at all. These MVs were built on another instance, referencing tables on this server by use of DBLinks.

Recap of MLOGs (or snapshot logs):
When DML changes are made to master table data, Oracle Database stores rows describing those changes in the materialized view log and then uses the materialized view log to refresh materialized views based on the master table. This process is called incremental or fast refresh. Without a materialized view log, Oracle Database must re-execute the materialized view query to refresh the materialized view. This process is called a complete refresh. Usually, a fast refresh takes less time than a complete refresh.
A materialized view log is located in the master database in the same schema as the master table. A master table can have only one materialized view log defined on it. Oracle Database can use this materialized view log to perform fast refreshes for all fast-refreshable materialized views based on the master table.
To fast refresh a materialized join view, you must create a materialized view log for each of the tables referenced by the materialized view.
When I used this query to look at the registered MVs, none of them had been refreshed since 2009:

SELECT m.owner, m.name, m.mview_site, a.mview_last_refresh_time
  FROM dba_base_table_mviews a, dba_registered_mviews m
 WHERE a.mview_id = m.mview_id
 ORDER BY a.mview_last_refresh_time;

Since these MLOG tables had never been analyzed, I couldn't initially tell how much data they were holding, so I analyzed them; together they held about 80 million rows of data.

With that information, and the knowledge that these MVs were no longer needed, I dropped the MVs on the remote instance and reanalyzed the MLOGs. Almost all the rows were removed! Great, right? Well, now some cleanup on these tables needed to take place to give this wasted space back to the Tablespace.

Being on 10g, I used the "shrink" commands:

alter materialized view log on < table > enable row movement;
alter materialized view log on < table > shrink space compact;
alter materialized view log on < table > shrink space;
alter materialized view log on < table > disable row movement;
alter materialized view log on < table > deallocate unused;

Now, on to finding which remaining MVs still require the MLOGs.

After verifying that no MVs existed referencing these Master tables, I dropped the snapshot logs (which were otherwise still being written to) using:

drop materialized view log on < table >;

Monday, June 18, 2012

SOA Composites Dashboard

So I wanted to create a simple SSRS report, emailed on a regular basis, that showed some basic information about our running Composites instead of clicking into each Group/Composite in SOA Enterprise Manager (EM).

Here is the query I initially came up with that seems to work pretty well:

  SELECT SOAGroup, SOAComposite, SOAVersion,
         max(id) as composite_id,
         decode(min(state), 0, 'Running',
         1, 'Completed',
         2, 'Running with faults',         
         3, 'Completed with faults',
         4, 'Running with recovery required',
         5, 'Completed with recovery required',
         6, 'Running with faults and recovery required',
         7, 'Completed with faults and recovery required',
         8, 'Running with suspended',
         9, 'Completed with suspended',
         10,'Running with faults and suspended',
         11,'Completed with faults and suspended',
         12,'Running with recovery required and suspended',
         13,'Completed with recovery required and suspended',
         14,'Running with faults, recovery required, and suspended',
         15,'Completed with faults, recovery required, and suspended',
         16,'Running with terminated',
         17,'Completed with terminated',
         18,'Running with faults and terminated',
         19,'Completed with faults and terminated',
         20,'Running with recovery required and terminated',
         21,'Completed with recovery required and terminated',
         22,'Running with faults, recovery required, and terminated',
         23,'Completed with faults, recovery required, and terminated',
         24,'Running with suspended and terminated',
         25,'Completed with suspended and terminated',
         26,'Running with faults, suspended, and terminated',
         27,'Completed with faults, suspended, and terminated',
         28,'Running with recovery required, suspended, and terminated',
         29,'Completed with recovery required, suspended, and terminated',
         30,'Running with faults, recovery required, suspended, and terminated',
         31,'Completed with faults, recovery required, suspended, and terminated',
         64,'???????') run_state, 
         MAX (Created_time) AS last_run_time,
         SYSDATE - MAX (Created_time) AS Elapsed,
         || SOAGROUP ||'/' || SOACOMPOSITE || '%20[' || SOAVERSION || ']&type=oracle_soa_composite&soaContext='
         || SOAGROUP || '/' || SOACOMPOSITE || '!' || SOAVERSION || '/' || max(id) AS URL       
    FROM (SELECT SUBSTR (composite_dn, 1, INSTR (composite_dn, '/') - 1) AS SOAGroup,
                 SUBSTR (composite_dn, INSTR (composite_dn, '/') + 1, (INSTR (composite_dn, '!') - INSTR (composite_dn, '/') - 1)) AS SOAComposite,
                 SUBSTR (composite_dn, INSTR (composite_dn, '!') + 1, (INSTR (composite_dn, '*') - INSTR (composite_dn, '!') - 1)) AS SOAVersion,
                 id, state, created_time
            FROM prd_soainfra.composite_instance)
GROUP BY SOAGroup, SOAComposite, SOAVersion
ORDER BY SOAGroup, SOAComposite, SOAVersion
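
Incidentally, the 32 DECODE branches above follow a bit-flag pattern (bit 0 = completed, bit 1 = faults, bit 2 = recovery required, bit 3 = suspended, bit 4 = terminated). Here is a quick Python sketch of the same decoding logic; note the bit interpretation is inferred from the value list itself, not taken from SOA documentation:

```python
def decode_state(state: int) -> str:
    """Decode a composite_instance state value (0-31) into the label
    used in the DECODE above. Bit 0 = completed vs running; the other
    bits add faults / recovery required / suspended / terminated."""
    base = "Completed" if state & 1 else "Running"
    flags = [name for bit, name in ((2, "faults"),
                                    (4, "recovery required"),
                                    (8, "suspended"),
                                    (16, "terminated")) if state & bit]
    if not flags:
        return base
    if len(flags) == 1:
        return f"{base} with {flags[0]}"
    if len(flags) == 2:
        return f"{base} with {flags[0]} and {flags[1]}"
    return f"{base} with {', '.join(flags[:-1])}, and {flags[-1]}"

print(decode_state(14))  # -> Running with faults, recovery required, and suspended
```

This generates all 32 labels from four flag names instead of hand-typing each branch, which is how I spotted the pattern in the first place.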

I then created a Data Source to the SOA (PRD_SOAINFRA user) Server and scheduled a report with email delivery. The Composite ID column was hyperlinked using the URL column (not displayed) in SSRS. This allows you to click and be taken directly to the Composite in EM once logged in.

Thursday, June 14, 2012

Database Resource Manager - CREATE_SIMPLE_PLAN

How to use Oracle's CREATE_SIMPLE_PLAN to implement a Database Resource Manager (DRM) in 10g.

You can use the following as an easy way to segregate certain user connections based on CLIENT_PROGRAM. For example: Oracle Discoverer (client and web-based) and Oracle SQL Developer.

This plan will demonstrate how to put sessions using these programs in a lower priority queue while maintaining the rest of your users in the "default" queue. SYS and SYSTEM users are in the highest queue.

Step 1: Create a pending area, then map the client programs to the LOW_GROUP consumer group:

exec dbms_resource_manager.create_pending_area();

BEGIN
  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.client_program,
    value          => 'DIS51USR.EXE',
    consumer_group => 'LOW_GROUP');

  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.client_program,
    value          => 'DIS51WS@OCTEST.OCINT.COM (TNS V1-V3)',
    consumer_group => 'LOW_GROUP');

  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.client_program,
    value          => 'SQL DEVELOPER',
    consumer_group => 'LOW_GROUP');
END;
/
Step 2: Create the plan itself with CREATE_SIMPLE_PLAN (the CPU percentage here is only an example - adjust to your needs):

BEGIN
  DBMS_RESOURCE_MANAGER.create_simple_plan (
    simple_plan     => 'XXOC_SIMPLE_PLAN1',
    consumer_group1 => 'LOW_GROUP',
    group1_percent  => 20);
END;
/

Step 3:
exec dbms_resource_manager.submit_pending_area();

Now let's validate what was created using the following SQL.

Validate the Plan creation:

SELECT plan, num_plan_directives, status
  FROM dba_rsrc_plans;

Groups within a Plan:

SELECT plan, group_or_subplan, type, cpu_p1, cpu_p2, cpu_p3
  FROM dba_rsrc_plan_directives;

User Mappings:

SELECT initial_rsrc_consumer_group, username
  FROM dba_users
 ORDER BY 1,2;

Special User Mappings (showing client_program):

SELECT attribute, value, consumer_group, status
  FROM dba_rsrc_group_mappings;

Ready to Activate your Plan?

alter system set resource_manager_plan = 'XXOC_SIMPLE_PLAN1';

(If RAC - run on each node. This means you can have different plans on different Nodes.)

View active plans:

SELECT name, is_top_plan
  FROM v$rsrc_plan;

Monitor your Plans activity:

SELECT name, active_sessions, consumed_cpu_time, cpu_waits, cpu_wait_time
  FROM v$rsrc_consumer_group;

To allow for Group Switching between Default_Consumer_Group and Low_Group, you can do the following:

BEGIN
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.update_plan_directive(
    plan => 'XXOC_SIMPLE_PLAN1',
    group_or_subplan => 'DEFAULT_CONSUMER_GROUP',
    new_comment => ' ',
    new_cpu_p1 => 0, new_cpu_p2 => 80, new_cpu_p3 => 0, new_cpu_p4 => 0,
    new_cpu_p5 => 0, new_cpu_p6 => 0, new_cpu_p7 => 0, new_cpu_p8 => 0,
    new_parallel_degree_limit_p1 => -1,
    new_active_sess_pool_p1 => -1,
    new_queueing_p1 => -1,
    new_switch_group => 'LOW_GROUP',
    new_switch_time => -1,
    new_switch_estimate => true,
    new_max_est_exec_time => -1,
    new_undo_pool => -1);
  dbms_resource_manager.submit_pending_area();
END;
/

Tuesday, June 12, 2012

A Creative Use of Lookups in EBS 12

I recently worked on a SOA Middleware project that interfaced our EBS item-related data (including inventory levels) with various third-party websites. Since it was the same inventory being sent to multiple websites, they wanted a way to manage (throttle back) available inventory sent to each site.

Previously, everything was coded using a combination of C#.NET and PL/SQL and any change or adjustment required IT intervention since everything was hard-coded. I was asked to come up with a way to use Standard EBS screens to allow the Users to manage these inventory feeds.

In the following example, I will demonstrate how to use standard lookup functionality within EBS to derive a set of Trading Partner (TP) specific rules to allow the Business Units to manage how inventory is sent at these three levels:
  1. Item Level
  2. Type Level
  3. All Level (this is a generic level)
Here is a sample Lookup screen in EBS:

Sample EBS Lookup Screen Example

Since Code and Meaning values must be unique, I used "_1_", "_2_", and so on to differentiate similar values. You would set up this screen for each TP. It is not always pretty, but it works.

Here is how the hierarchy is implemented: 

If an item is listed by SKU, then the SKU_<ordered_item> mapping is chosen first. The quantity range in the Description column determines which rule applies, and the multiplier in the Tag column recalculates the amount of inventory to send. If there is no SKU match, it then checks the item's Type (TYP_<n>_<type>). If there is still no match, it defaults to the GEN_<n> values. Only the "GEN_" setup is required.

For Tag values less than 1, the amount in the screen is used as a multiplier. For values of 1 or greater, it is an absolute number.

These entries in the above Lookup screen:

Code     Description    Tag
-----    -----------    ---
GEN_1    <=3             1
GEN_2    >3 and <=30    .5
GEN_3    >30            .99

translate to:

If our current inventory is 3 or less, send 1 item. 
If our current inventory is between 4 and 30, send half. So if 10 – send 5. If 20 – send 10.
If our current inventory is more than 30 – send 99% of it. That will be either the original number or slightly less, depending on the rounding used in the Function below.
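
Before getting to the PL/SQL, the tier logic can be modeled in a few lines of Python. This is only a sketch - the rule table and helper name are made up here, hard-coded to the three GEN entries above, with the "Tag >= 1 is absolute, Tag < 1 is a multiplier" rule and FLOOR rounding:

```python
import math

# The three GEN rules from the Lookup screen:
# (min_exclusive, max_inclusive, tag) - None means unbounded on that side.
GEN_RULES = [
    (None, 3, 1),      # GEN_1: <=3          -> send 1 (absolute)
    (3, 30, 0.5),      # GEN_2: >3 and <=30  -> send half (multiplier)
    (30, None, 0.99),  # GEN_3: >30          -> send 99% (multiplier)
]

def inv_to_send(quantity: int) -> int:
    """Apply the first GEN rule whose range contains `quantity`.
    Tag >= 1 is an absolute amount; Tag < 1 is a multiplier.
    Rounded down (FLOOR), as in the PL/SQL Function."""
    for rmin, rmax, tag in GEN_RULES:
        if (rmin is None or quantity > rmin) and (rmax is None or quantity <= rmax):
            return math.floor(tag if tag >= 1 else tag * quantity)
    raise ValueError("no matching rule")

print(inv_to_send(3))   # -> 1
print(inv_to_send(10))  # -> 5
print(inv_to_send(32))  # -> 31 (floor of 31.68)
```

The real Function adds the SKU and Type tiers on top of this, with the same range/multiplier evaluation per tier.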

Here is the Function that was created:

CREATE OR REPLACE FUNCTION APPS.xxoc_tp_inv_send (p_partner  IN VARCHAR2,
                                                  p_sku      IN VARCHAR2,
                                                  p_type     IN VARCHAR2,
                                                  p_quantity IN NUMBER)
    RETURN NUMBER
IS
    v_result NUMBER;
    p_lookup VARCHAR2(50);
BEGIN
    p_lookup := 'OC_' || p_partner || '_INVENTORY_RULES';

    SELECT CASE WHEN to_number(tag, '999.99') >= 1 THEN to_number(tag, '999.99') ELSE to_number(tag, '999.99') * p_quantity END
      INTO v_result
      FROM (SELECT *
              FROM (SELECT sku, typ, gen, gt, lt, tag,
                           -- rmin: the number after '>' (first number when a '>' is present)
                           TO_NUMBER(NVL2(gt, REGEXP_SUBSTR(description, '[0-9]+'), NULL)) rmin,
                           -- rmax: the number after '<' (second number when a '>' precedes it)
                           TO_NUMBER(NVL2(lt, REGEXP_SUBSTR(description, '[0-9]+', 1, NVL2(gt, 2, 1)), NULL)) rmax
                      FROM (SELECT meaning, description, tag,
                                   CASE WHEN meaning LIKE 'SKU%' THEN SUBSTR (meaning, 5) END sku,
                                   CASE WHEN meaning LIKE 'TYP%' THEN SUBSTR (meaning, 7) END typ,
                                   CASE WHEN meaning LIKE 'GEN%' THEN 'GEN' END gen,
                                   REGEXP_SUBSTR (description, '>=?') gt,
                                   REGEXP_SUBSTR (description, '<=?') lt
                              FROM apps.fnd_lookup_values flv
                             WHERE lookup_type = p_lookup AND enabled_flag = 'Y'
                               AND SYSDATE BETWEEN flv.start_date_active AND NVL (flv.end_date_active, SYSDATE + 1)))
             WHERE (sku = p_sku OR typ = p_type OR gen = 'GEN')
               AND (CASE
                        WHEN gt = '>'  AND p_quantity >  rmin THEN 1
                        WHEN gt = '>=' AND p_quantity >= rmin THEN 1
                        WHEN gt IS NULL THEN 1
                    END = 1
                AND CASE
                        WHEN lt = '<'  AND p_quantity <  rmax THEN 1
                        WHEN lt = '<=' AND p_quantity <= rmax THEN 1
                        WHEN lt IS NULL THEN 1
                    END = 1)
             ORDER BY CASE WHEN sku = p_sku THEN 1 WHEN typ = p_type THEN 2 ELSE 3 END) x
     WHERE ROWNUM = 1;

    -- v_result := CEIL(v_result);  -- Round the result up. If round down - use FLOOR
    v_result := FLOOR(v_result);  -- Round the result down. If round up - use CEIL
    RETURN v_result;
END;
/

Sample uses – test of the Generic translation:

21:16:10 GOLD1> select xxoc_tp_inv_send('BUY','mysku123','mytype321',3) from dual;
21:17:12 GOLD1> select xxoc_tp_inv_send('BUY','mysku123','mytype321',6) from dual;
21:17:51 GOLD1> select xxoc_tp_inv_send('BUY','mysku123','mytype321',32) from dual;

Test of the Type translation:

21:18:08 GOLD1> select xxoc_tp_inv_send('BUY','mysku123','OC_KIT_AB_OPENBOX',10) from dual;

21:22:54 GOLD1> select xxoc_tp_inv_send('BUY','mysku123','OC_KIT_AB_OPENBOX',40) from dual;
Test of the SKU translation:

21:41:23 GOLD1> select xxoc_tp_inv_send('BUY','074101011265','mytype321',100) from dual;
21:41:50 GOLD1> select xxoc_tp_inv_send('BUY','074101011265','mytype321',20000) from dual;

In the above example, since we went over the SKU Rule's limit of 10,000, the GEN Rule was applied, using a multiplier of .99 and then rounding down (FLOOR).

Thursday, June 7, 2012

AutoExtend Unlimited

I used to create uniform 2GB Data Files without AutoExtend. The reason being, I liked to know what was creating rows in my database; the "set it and forget it" attitude was OK for Test Systems. But if you don't keep on top of things, you might find yourself with a Table holding 175 million rows of unneeded data.

Let's talk about AutoExtend - Unlimited with a simple command:

alter database datafile 'c:\oradata\mysid\XYZ.dbf' autoextend on next 10m maxsize unlimited;

It turns out Data Files are not exactly unlimited in size (even though you may check the box or specify it as in the SQL above). The term "Unlimited" refers to the maximum size your datafile is allowed to reach, and this depends on the Oracle Block Size. Oracle 10g does have an absolute (smallfile) Data File limit of 128GB, by the way - reached with a 32k block size.

To find your real maximum file size, multiply the block size by 4194303 (2^22 - 1). This is the actual maximum size:

Maximum datafile size = db_block_size * maximum number of blocks (which is 4194303)

A datafile cannot be oversized; otherwise it could become corrupted. If your database uses 8k blocks (as most do), one file cannot exceed approximately 34GB (34,359,730,176 bytes) without risking database corruption.
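
The arithmetic is easy to sanity-check (a quick sketch; `max_datafile_bytes` is just an illustrative name):

```python
MAX_BLOCKS = 2**22 - 1  # 4194303: a smallfile datafile is limited to 2^22 - 1 blocks

def max_datafile_bytes(db_block_size: int) -> int:
    """Maximum smallfile datafile size = db_block_size * maximum block count."""
    return db_block_size * MAX_BLOCKS

for bs in (2048, 4096, 8192, 16384, 32768):
    print(f"{bs:>5}-byte blocks -> {max_datafile_bytes(bs):,} bytes")
```

The 8k row reproduces the 34,359,730,176 bytes above, and the 32k row (137,438,920,704 bytes, i.e. just under 128GB binary) matches the 128GB absolute limit mentioned earlier.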

Setting your Data Files to AUTOEXTEND with UNLIMITED growth can give you a false sense of well-being as your Tablespace approaches a hard ceiling. Periodically check whether your Data Files are approaching this hard limit.

Tuesday, June 5, 2012

How to change your db_block_size - NOT

Was reading this white paper from Virident today:

Accelerating Oracle Databases and Reducing Storage Complexity and Costs. Virident FlashMAX SCM as Primary Storage.

Offered at:  (as of 6/5/2012)

They are making a point that switching from 8k block size to 4k would reap performance benefits:

"When Oracle data is stored on FlashMAX devices, reducing Oracle database block size from the default value of 8192 bytes (8KB) to 4096 bytes (4KB) can provide substantial performance benefits in many applications. With HDDs, reading/writing 4KB takes essentially the same amount of time as 8KB as most of the time is spent on moving heads. In contrast, FlashMAX can perform 2x the amount of IOPS with 4KB block size compared to 8KB block size, or the same amount of IOPS at lower latencies."

They offered the steps on how to change your block size:

"You can set this parameter in several different ways:
  1. By adding it to initORACLE_SID.ora file (or changing if the parameter already exists)
  2. By setting the parameter in the SPFILE:
    1. SQL>alter system set db_block_size=4096 scope=spfile;
    2. SQL>shutdown immediate
    3. SQL>startup
  3. By setting it on Initialization Parameters -> Sizing tab of the DBCA"

Last I checked, you could only set the default block size at DB creation. Sure, you can support multiple block sizes at the Tablespace level (you must also configure the matching cache sizes), but it's not as simple as changing an init parameter and bouncing your instance – otherwise, you will see:

ORA-00209: control file blocksize mismatch, check alert log for more info

Hard to take this white paper seriously or did I miss something?

Vendor response received 6/5/2012:

Hello Anthony,

You are bringing up a valid point. The block size change needs to be done before creating the DB . We will fix the whitepaper to clarify this.

Thanks a lot you for your feedback!