Monday, October 15, 2012

Oracle SOA 11g - the DBA Side

When a Company decides to go live with Oracle Fusion Middleware (SOA), one piece of advice I would give their DBA is to iron out the Oracle purge scripts and a routine purge procedure sooner rather than later - ideally within the first month after go-live.

Running SOA generates massive amounts of data in the *_soainfra schema. It is purely transactional in nature and, after a few weeks, provides little benefit to the Company (at least the composites we had running were like this). If left unattended, you will have a massive Tablespace to deal with in a short amount of time - with lots of LOB data too. Research this topic online and you will quickly understand why some prefer to simply truncate these Tables in their Test/Dev environments.
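A quick way to keep an eye on the growth is to check segment sizes in the soainfra Tablespace from time to time. A minimal sketch - the Tablespace name below is an assumption, so substitute your own *_SOAINFRA name:

SELECT segment_name, segment_type, ROUND(SUM(bytes)/1024/1024) AS mb
  FROM dba_segments
 WHERE tablespace_name = 'DEV_SOAINFRA'   -- assumption: your *_SOAINFRA Tablespace name
 GROUP BY segment_name, segment_type
 ORDER BY mb DESC;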

If you work with the suggested purge scripts from Oracle, you may notice that some of your older data does not get purged for one reason or another. I was seeing this on patchset 4 (PS4, 11.1.1.5), and it is a common complaint online. If this accounts for a significant amount of data in your environment, you may be faced with a manual purge process. There is a good example I would suggest trying out in your Test/Dev environment:

http://orasoa.blogspot.com/2011/07/purging-soa-suite-11g-extreme-edition.html
http://itnewscast.com/middleware/purging-soa-suite-11g-extreme-edition

After studying these Tables and their relationships, you may notice a few additional Tables/Indexes/LOBs that were left out of the above, but these should give you a good start on what is needed.

Basically, you want to follow this approach:

  1. Research "SOA Purge" online and get familiar with the issues involved.
  2. Have the Oracle Purge Scripts running daily or weekly (via cron, DBMS_SCHEDULER, or similar - see the sketch after this list).
  3. Perform a periodic (manual) cleanup of any older data that may be left behind from the above.
  4. If you find yourself having to perform significant amounts of row deletes, you will need to address the Tables and Datafiles to recover that space.
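For step 2, a sketch of what the scheduled call can look like is below. It assumes the looped purge procedure soa.delete_instances from Oracle's 11g purge scripts is installed in the soainfra schema - verify the exact procedure and parameter names against your own patchset before relying on it:

-- Assumption: run as the *_SOAINFRA owner with Oracle's 11g purge scripts installed;
-- confirm the delete_instances signature for your patchset.
BEGIN
  soa.delete_instances(
    min_creation_date           => SYSDATE - 365,
    max_creation_date           => SYSDATE - 30,   -- keep the most recent 30 days
    batch_size                  => 20000,
    max_runtime                 => 60,             -- minutes
    retention_period            => SYSDATE - 30,
    purge_partitioned_component => FALSE);
END;
/

Wrapping that block in a DBMS_SCHEDULER job, or calling it from a cron'd sqlplus script, takes care of the daily/weekly schedule.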

Item 4 - recovering the space - can be the most costly. For starters, you can do a Shrink/Compact on the Tables/LOBs and rebuild the Indexes for good measure. Then check the Datafiles to see if any space can be easily recovered. I didn't have any luck here - Objects were stored at the end of the Datafiles and I could not recover 60+ GB of wasted space. Ideally, you will want to simply create a new Tablespace, move all the Objects into it (ALTER TABLE ... MOVE TABLESPACE), and rebuild the Indexes in the new Tablespace. Then shrink the old Tablespace and move the Objects back, or simply run with the new Tablespace name. Since a move invalidates all the Indexes (the ROWIDs change), this approach will require some downtime.
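Here is what both options look like in rough form. Every object name below (some_big_table, some_lob_col, some_index, soainfra_new) is purely illustrative - generate the real statements from DBA_SEGMENTS, DBA_LOBS and DBA_INDEXES for your own *_soainfra schema:

-- Option 1: shrink in place (needs an ASSM Tablespace and row movement enabled)
ALTER TABLE dev_soainfra.some_big_table ENABLE ROW MOVEMENT;
ALTER TABLE dev_soainfra.some_big_table SHRINK SPACE CASCADE;
-- LOB segments may need their own shrink:
ALTER TABLE dev_soainfra.some_big_table MODIFY LOB (some_lob_col) (SHRINK SPACE);

-- Option 2: move everything into a fresh Tablespace, then rebuild the broken Indexes
ALTER TABLE dev_soainfra.some_big_table MOVE TABLESPACE soainfra_new
  LOB (some_lob_col) STORE AS (TABLESPACE soainfra_new);
ALTER INDEX dev_soainfra.some_index REBUILD TABLESPACE soainfra_new;   -- repeat for each Index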

In addition to purging SOA data, pay attention to BAM data as well. For that, I have purges scheduled in BAM that run daily to keep our dashboards performing well. The BAM GUI may be a little different from what you are used to. Work out the criteria in a Test/Dev environment first, of course - it's too easy to purge everything.

Here are some basics on how to set up a new ALERT in BAM Architect to delete:

[screenshots of the BAM Architect alert setup steps]
Remember to keep an eye on those Datafiles!


Friday, October 12, 2012

Mystery Tables (Part II - the end)

continued from: Mystery Tables (Part I)

Running FGA on this Table for a couple of weeks and reviewing the hits in the SYS.FGA_LOG$ table showed no access to this table other than statistics collection. Yet another place that could be checked, as pointed out by John Clarke on ORACLE-L, was our application server ($APPL_TOP directories) for any custom java code, using:

[oracle@octest appl]$ find xx* -type f|xargs strings -a|grep -i XXOC_QP_PREQ_LINE_ATTRS_TMP_T
[oracle@octest appl]$ find xx* -type f|xargs strings -a|grep -i xxoc_qp_preq_line_attrs_tmp_t
[oracle@octest appl]$ find /u01/oracle/GOLD/apps/apps_st/appl -type f -exec grep -iH 'preq_line_attrs_tmp' {} \;
[oracle@octest appl]$ find /u01/oracle/GOLD/apps/apps_st/appl/xx* -type f -exec grep -iH 'preq_line_attrs_tmp' {} \;

(nothing returned)
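On the FGA side, the hits that land in SYS.FGA_LOG$ are easier to read through the DBA_FGA_AUDIT_TRAIL view. Something along these lines is enough to see who (if anyone) touched the Table - a sketch, using the table name from Part I:

SELECT db_user, os_user, timestamp, sql_text
  FROM dba_fga_audit_trail
 WHERE object_name = 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T'
 ORDER BY timestamp;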

With these results in hand, I took this to the next steps (sketched in SQL below):

Disabled the Trigger
Renamed the Table
Recompiled Trigger to point to new Table name
Enabled Trigger
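Roughly, that boils down to the following - the Trigger name is hypothetical, and the "recompile" step is really just recreating the Trigger with its INSERT pointed at the renamed Table:

ALTER TRIGGER xxoc_qp_preq_line_attrs_trg DISABLE;        -- hypothetical Trigger name
ALTER TABLE xxoc_qp_preq_line_attrs_tmp_t RENAME TO xxoc_qp_preq_line_attrs_ren_t;
-- CREATE OR REPLACE TRIGGER xxoc_qp_preq_line_attrs_trg ... (same body, INSERT now targets the renamed Table)
ALTER TRIGGER xxoc_qp_preq_line_attrs_trg ENABLE;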

This continued for another 6 weeks. The neat thing I noticed is that FGA continued to monitor the newly-renamed Table and kept logging to SYS.FGA_LOG$ without the need for any updates/changes to the policy.

Also during this time, I noticed this error in our Alert log:

Errors in file /u05/GOLD/bdump/gold2_p000_26119.trc:
ORA-00600: internal error code, arguments: [kzaSqlTxtLob1], [10382], [], [], [], [], [], []
ORA-10382: parallel query server interrupt (reset)

The Oracle ORA-600 lookup tool led me to Bug 4581220 (OERI:kzaSqlTxtLob1 with fine grained auditing and parallel query). This was probably due to the FGA I had running on my "mystery table" when STATS gathering runs in parallel, so I turned off FGA and have not seen this error since (this is a 10.2.0.3 db, by the way).
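Turning FGA off is a one-liner with DBMS_FGA. The policy name and schema below are hypothetical (use whatever was defined back in Part I), and DROP_POLICY takes the same arguments if you want the policy gone entirely:

BEGIN
  DBMS_FGA.DISABLE_POLICY(
    object_schema => 'XXOC',                            -- assumption: the owning schema
    object_name   => 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T',   -- use the current (renamed) Table name here
    policy_name   => 'XXOC_MYSTERY_TBL_AUDIT');         -- hypothetical policy name
END;
/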

Since no failures or errors were noted across two financial month-end closes, I decided to take the final steps (sketched below):

Disable the Trigger
TRUNCATE the Table and return the space to the Tablespace
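In SQL terms (Trigger name again hypothetical), that was simply:

ALTER TRIGGER xxoc_qp_preq_line_attrs_trg DISABLE;           -- hypothetical Trigger name
TRUNCATE TABLE xxoc_qp_preq_line_attrs_ren_t DROP STORAGE;   -- DROP STORAGE (the default) hands the extents back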

The Table had grown to approximately 190 million rows - it had been growing since we basically went live on EBS.

Based on all the information gathered on this issue, and on what I've seen of other processes set up very similarly to this one, I have come up with a theory of what this was all about:

In July of 2008, our go-live consulting firm (an Oracle Partner) was creating a DISCOVERER Report for our Purchasing Department and was looking for which Temp Tables in EBS to capture into a static Table for this reporting function in DISCOVERER. Documentation either was never provided or does not exist. They created this Trigger and Table. Then they realized that there were two other Tables they needed instead and created two new Triggers/Tables to accomplish the task. They created the report in DISCOVERER using those two other Tables, then simply forgot to clean up after themselves and left the original Trigger writing away ever since (2008).