Thursday, August 23, 2012

Mystery Tables (Part I)

Recently, I have been trying to figure out the use of (and need for) three mystery tables in our EBS 12 system. These tables were archiving Global Temporary Table (GTT) data generated by Advanced Pricing tables such as QP.QP_PREQ_LINE_ATTRS_TMP_T, via a Trigger. There were no comments in the Trigger to indicate why this was done.

Usually, the first place I look is for custom code within the database that accesses this table in any way. One of my favorite statements is:

SELECT *
  FROM dba_source
 WHERE UPPER (text) LIKE '%XXOC_QP_PREQ_LINE_ATTRS_TMP_T%';

However, only the Trigger appeared - no other Procedures or Packages. I also checked our other Production systems to see whether a DB Link was being used to reach the table. Nothing.
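
For the DB Link check, this is roughly what I ran on each remote system - the same dba_source search as above, plus a look at the defined links:

SELECT owner, db_link, host
  FROM dba_db_links;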

I also checked for any SQL History:

SELECT * FROM DBA_HIST_ACTIVE_SESS_HISTORY WHERE SQL_ID IN (
        select SQL_ID from DBA_HIST_SQL_PLAN
        where OBJECT_NAME = 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T');

Since we use Oracle Discoverer, I checked there next using something like:

   SELECT Obj.Obj_Name,
         Obj.Obj_Ba_Id,
         Seg.Seg_Chunk1,
         Seg.Seg_Chunk2,
         Seg.Seg_Chunk3,
         Seg.Seg_Chunk4
    FROM Eul_Us.Eul5_Segments Seg, Eul_Us.Eul5_Objs Obj
   WHERE     Seg.Seg_Seg_Type = 5
         AND Seg.Seg_Cuo_Id = Obj.Obj_Id
         AND (   upper(Seg.Seg_Chunk1) LIKE '%XXOC_QP_PREQ_LDETS_TMP_T%'
              OR upper(Seg.Seg_Chunk2) LIKE '%XXOC_QP_PREQ_LDETS_TMP_T%'
              OR upper(Seg.Seg_Chunk3) LIKE '%XXOC_QP_PREQ_LDETS_TMP_T%'
              OR upper(Seg.Seg_Chunk4) LIKE '%XXOC_QP_PREQ_LDETS_TMP_T%')
ORDER BY Obj.Obj_Id, Seg.Seg_Sequence;

which showed me a report that was using two of the smaller tables. Great - I can purge the unwanted data according to the report users' retention requirements using the following methodology (sketched in code after the list):
  1. Create a new table via CTAS containing the records to keep from each table
  2. Truncate the original table
  3. Perform a 10g SHRINK on the table
  4. Insert the records from the table created in step 1 back into the original table
  5. Re-analyze the table
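
A minimal sketch of those steps against one of the tables - note the keep predicate on line_index is purely illustrative; substitute the report users' actual criteria:

-- 1. CTAS holding only the records to keep (hypothetical predicate)
CREATE TABLE xxoc.preq_ldets_keep AS
   SELECT * FROM xxoc.xxoc_qp_preq_ldets_tmp_t
    WHERE line_index > 500000;

-- 2. Truncate the original table
TRUNCATE TABLE xxoc.xxoc_qp_preq_ldets_tmp_t;

-- 3. 10g shrink (requires row movement)
ALTER TABLE xxoc.xxoc_qp_preq_ldets_tmp_t ENABLE ROW MOVEMENT;
ALTER TABLE xxoc.xxoc_qp_preq_ldets_tmp_t SHRINK SPACE;

-- 4. Put the keepers back
INSERT /*+ APPEND */ INTO xxoc.xxoc_qp_preq_ldets_tmp_t
   SELECT * FROM xxoc.preq_ldets_keep;
COMMIT;

-- 5. Re-analyze
EXEC DBMS_STATS.GATHER_TABLE_STATS('XXOC', 'XXOC_QP_PREQ_LDETS_TMP_T');
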
However, I'm still left with one last table - and it contains about 190m rows (20GB) of data. Another interesting fact about this table: it has no indexes on it - and no date columns. Whatever could be using it? Any predicate against it would force a Full Table Scan of 190m rows.
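
Verifying that is straightforward (num_rows is only as current as the table's statistics):

SELECT index_name
  FROM dba_indexes
 WHERE table_owner = 'XXOC'
   AND table_name  = 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T';

SELECT t.num_rows, ROUND (s.bytes / 1024 / 1024 / 1024, 1) AS gb
  FROM dba_tables t, dba_segments s
 WHERE s.owner = t.owner
   AND s.segment_name = t.table_name
   AND t.owner = 'XXOC'
   AND t.table_name = 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T';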

Instead of turning on standard Auditing in our Production EBS system, I decided to use Fine Grained Auditing (FGA), which has been available since Oracle 9i.

To start, I created this little Table to capture some of this audit information:

create table XXOC.PREQ_audit
( whodidit varchar2(40)
, whenwasit timestamp
, sql_executed varchar2(4000)
)
/

Next, I created a Handler:

create or replace
package XXOC.AUDIT_HANDLER
is
PROCEDURE HANDLE_PREQ_ACCESS
( object_schema VARCHAR2
, object_name VARCHAR2
, policy_name VARCHAR2
);
end;
/

create or replace
package body XXOC.AUDIT_HANDLER
is
PROCEDURE HANDLE_PREQ_ACCESS
( object_schema VARCHAR2
, object_name VARCHAR2
, policy_name VARCHAR2
) is
  PRAGMA AUTONOMOUS_TRANSACTION;  -- allows the commit below without affecting the audited session's transaction
begin
  insert into XXOC.PREQ_audit
  ( whodidit, whenwasit, sql_executed)
  values
  ( user, systimestamp, sys_context('userenv','current_sql'))
  ;
  commit;
end HANDLE_PREQ_ACCESS;
end;
/

Lastly, I created this FGA Policy:

begin
  dbms_fga.add_policy
  ( object_schema=>'XXOC'
  , object_name=>'XXOC_QP_PREQ_LINE_ATTRS_TMP_T'
  , policy_name=>'PREQ_ACCESS_HANDLED'
  , handler_schema => 'XXOC'
  , handler_module => 'AUDIT_HANDLER.HANDLE_PREQ_ACCESS'
  );
end;
/
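
With the policy in place, you can confirm it via the data dictionary:

SELECT object_schema, object_name, policy_name, enabled
  FROM dba_audit_policies
 WHERE policy_name = 'PREQ_ACCESS_HANDLED';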

To change or remove the policy, you can simply use:

begin
DBMS_FGA.DROP_POLICY(
   object_schema  =>'XXOC', 
   object_name    =>'XXOC_QP_PREQ_LINE_ATTRS_TMP_T',    
   policy_name    =>'PREQ_ACCESS_HANDLED'
   );
end;
/

to drop it - and then recreate it if needed.

This will now capture any SELECT operation on the target table and, by default, log information to the SYS.FGA_LOG$ table. The example above illustrates that you can extend this functionality to do just about anything with the captured information - including sending an email alert.
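
To review what has been captured, query either the custom table or the supported view over the default trail:

SELECT whodidit, whenwasit, sql_executed
  FROM xxoc.preq_audit
 ORDER BY whenwasit DESC;

SELECT db_user, timestamp, sql_text
  FROM dba_fga_audit_trail
 WHERE object_name = 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T'
 ORDER BY timestamp DESC;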

Whatever comes through with FGA will determine the next steps. More later.

Tuesday, August 14, 2012

Why does TOAD prompt me to Commit or Rollback?

I have been using Toad for a while now and noticed that, from time to time, it would present this dialog when I was closing a DB connection:

[Toad's commit-or-rollback prompt appeared here as a screenshot]

Being unsure, I would always click 'Rollback', thinking I had some orphaned DML statement. I'm usually very deliberate about how and when I update rows, so I thought this was odd at the time and continued on.

It turns out this is simply caused by performing a normal SELECT statement while using a DB Link to a remote server.

The question you may be asking yourself: why does a SELECT over a DB Link require a commit after execution?

Answer: when Oracle executes a distributed SQL statement, it reserves an entry in the rollback segment area for the two-phase commit processing. This entry is held until the SQL statement is committed, even if the statement is only a query. The requirement to commit distributed SQL exists even with the automatic undo management available in version 9i and newer.

Notice that the SELECT statement is noted here:

[screenshot of the documentation excerpt]

After a review of our Developers' code, we agreed it should be a habit to always issue a COMMIT after any SELECT where a DB Link was used, releasing the entry held for Oracle's two-phase commit processing - much in the same spirit as CLOSING/DISPOSING resources.
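
For illustration (remote_prod is a made-up link name):

-- A plain query over a DB Link still opens a distributed transaction
SELECT COUNT (*)
  FROM apps.fnd_user@remote_prod;

-- releases the rollback segment entry held for two-phase commit
COMMIT;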


Wednesday, August 1, 2012

Oracle Materialized Views with Atomic Refresh

If you find yourself supporting a mature DB that makes use of Materialized Views (MVs), there are two aspects that deserve some investigation on your system: Refresh Types and Atomic Refresh.

I won't go into much detail here, but I did want to mention that if you have any MVs set to Fast refresh, check the sizes of the associated MLOG$ tables. If the MVs haven't been refreshed in a while, the MLOG$ tables will have grown rather large. I have seen a forgotten MV with a log holding 100m rows. Of course, if you are still using the MV, then after you refresh it you will have to rebuild the MLOG$ table to return any disk space to the Tablespace. For more information, see this previous post.
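
A quick way to spot bloated MV logs (adjust or add an owner filter as needed):

-- Which MV logs exist
SELECT log_owner, master, log_table
  FROM dba_mview_logs;

-- And how big they are
SELECT owner, segment_name, ROUND (bytes / 1024 / 1024) AS mb
  FROM dba_segments
 WHERE segment_name LIKE 'MLOG$%'
 ORDER BY bytes DESC;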

The other piece I wanted to mention concerns how the MV is being refreshed. If you are noticing high amounts of Redo/Arch activity and suspect that a portion of it is caused by frequent refreshes of all your MVs, explore the atomic_refresh => FALSE option for non-Fast (Complete) refreshes.

For example: 

SQL> EXEC DBMS_MVIEW.REFRESH('xx_mv', 'C', atomic_refresh=>FALSE);

This causes the refresh to use a Truncate instead of a Delete - with two obvious benefits: far less redo is generated and it is much faster for MVs of any real size. (One caveat: the MV appears empty to queries while a non-atomic refresh is running.)

"This is also why I do NOT setup a refresh schedule in the mat view creation itself (START WITH/NEXT syntax). Stick with dbms_scheduler to setup refresh schedule, calling dbms_mview.refresh" – tbone Jul 3 at 11:29