Wednesday, November 7, 2012

Row Counts using DBMS_XMLGEN.getxml

Sometimes it's handy to have a script that counts rows in various Tables (and Views) in one place. You can list individual objects or provide a SELECT statement:

SELECT owner, type, name, cnt
  FROM (  SELECT owner,
                 'TABLE' AS type,
                 table_name AS name,
                 TO_NUMBER (
                    EXTRACTVALUE (
                       xmltype (
                          DBMS_XMLGEN.getxml (
                                'select count(*) c from '
                             || owner
                             || '.'
                             || table_name)),
                       '/ROWSET/ROW/C')) AS cnt
            FROM all_tables
           WHERE table_name IN
                    ('<list Tables here or provide SELECT>')
          UNION ALL
          SELECT owner,
                 'VIEW' AS type,
                 view_name AS name,
                 TO_NUMBER (
                    EXTRACTVALUE (
                       xmltype (
                          DBMS_XMLGEN.getxml (
                                'select count(*) c from '
                             || owner
                             || '.'
                             || view_name)),
                       '/ROWSET/ROW/C')) AS cnt
            FROM all_views
           WHERE view_name IN
                    ('<list Views here or provide SELECT>'))
ORDER BY owner, name;

This script was derived from this post.

Monday, October 15, 2012

Oracle SOA 11g - the DBA Side

When a Company decides to go-live with Oracle Fusion Middleware (SOA), one piece of advice I would give their DBA is to iron out the Oracle Purge scripts and a routine purge procedure sooner rather than later - ideally within one month of go-live.

Running SOA generates massive amounts of data in the *_soainfra schema. This data is purely transactional in nature and, after a few weeks, does not provide much benefit to the Company (at least the composites we had running were like this). If left unattended, you will have a massive Tablespace to deal with in a short amount of time - with lots of LOB data too. Research this topic online and you will quickly understand why some prefer to simply truncate these Tables in their Test/Dev environments.
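To see where that growth is going, a quick size check against the SOA infrastructure schema can help (a sketch; substitute your actual *_soainfra owner):

```sql
-- largest segments (tables, LOBs, indexes) owned by the SOA schema
SELECT owner, segment_name, segment_type,
       ROUND (SUM (bytes) / 1024 / 1024) AS mb
  FROM dba_segments
 WHERE owner LIKE '%\_SOAINFRA' ESCAPE '\'
 GROUP BY owner, segment_name, segment_type
 ORDER BY mb DESC;
```

Run this weekly for the first month and the growth rate of the LOB segments in particular will make the case for a purge routine on its own.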

If you work with the suggested purge scripts from Oracle, you may notice that some of your older data does not get purged for one reason or another. This was true in patchset (PS) 4 in my environment and is a common complaint online. If this accounts for a significant amount of data, you may be faced with a manual purge process - there are good examples of this online that I would suggest trying out in a Test/Dev environment first.

After studying these Tables and their relationships, you may notice that there are a few additional Tables/Indexes/LOBs that were left out in the above but these should give you a good start on what is needed.

Basically, you want to follow this approach:

  1. Research "SOA Purge" online and get familiar with the issues involved.
  2. Have the Oracle Purge Scripts running daily or weekly (via cron or similar).
  3. Perform a periodic (manual) cleanup of any older data that may be left behind from the above.
  4. If you found yourself performing significant amounts of row deletes, you will need to address the Tables and Datafiles to recover this space.

The last item can be the most costly. For starters, you can do a Shrink/Compact on the Tables/LOBs and rebuild the Indexes for good measure. Then, check the Datafiles to see if any space can be easily recovered. I didn't have any luck here - Objects were stored at the end of the Datafiles and I could not recover 60+ GB of wasted space. Ideally, you will simply create a new Tablespace, move all the Objects (ALTER TABLE ... MOVE TABLESPACE), and rebuild the Indexes in the new Tablespace. Then shrink the old Tablespace and move the Objects back, or simply run with the new Tablespace name. Since a MOVE invalidates all the Indexes on a Table (the ROWIDs change), this approach will require some downtime.
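A sketch of that move/rebuild cycle, using hypothetical object and tablespace names (SOA table names vary by version):

```sql
-- move a table (and its LOB column) into the new tablespace;
-- the move invalidates the table's indexes
ALTER TABLE dev_soainfra.xml_document MOVE TABLESPACE soa_new
  LOB (document) STORE AS (TABLESPACE soa_new);

-- rebuild each invalidated index into the new tablespace
ALTER INDEX dev_soainfra.xml_document_pk REBUILD TABLESPACE soa_new;
```

Query DBA_INDEXES for STATUS = 'UNUSABLE' after the moves to catch every index that needs a rebuild.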

In addition to purging SOA data, pay attention to BAM data as well. For that, I have purges scheduled in BAM that run daily to keep our dashboards performing well. The BAM GUI may be a little different from what you are used to, so work out the criteria in a Test/Dev environment first - it's too easy to purge everything.

The basics of the cleanup: set up a new ALERT in BAM Architect whose action deletes the older rows matching your criteria.

Remember to keep an eye on those Datafiles!

Friday, October 12, 2012

Mystery Tables (Part II - the end)

continued from: Mystery Tables (Part I)

The results from running FGA on this Table for a couple of weeks and viewing the results in the SYS.FGA_LOG$ table showed no access to this table other than statistics collection. Yet another place that could be checked, as pointed out by John Clarke on ORACLE-L, was our application server ($APPL_TOP directories) for any custom java code using:

[oracle@octest appl]$ find xx* -type f|xargs strings -a|grep -i XXOC_QP_PREQ_LINE_ATTRS_TMP_T
[oracle@octest appl]$ find xx* -type f|xargs strings -a|grep -i xxoc_qp_preq_line_attrs_tmp_t
[oracle@octest appl]$ find /u01/oracle/GOLD/apps/apps_st/appl -type f -exec grep -H '*preq_line_attrs_tmp*' {} \;
[oracle@octest appl]$ find /u01/oracle/GOLD/apps/apps_st/appl/xx* -type f -exec grep -H '*PREQ_line_attrs_tmp*' {} \;

(nothing returned)

With these results, I took this task to the next steps:

Disabled the Trigger
Renamed the Table
Recompiled Trigger to point to new Table name
Enabled Trigger
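In SQL terms, those steps looked roughly like this (the trigger name here is hypothetical; the table name is from Part I):

```sql
ALTER TRIGGER apps.xxoc_preq_capture_trg DISABLE;

ALTER TABLE xxoc.xxoc_qp_preq_line_attrs_tmp_t
  RENAME TO xxoc_qp_preq_line_attrs_old_t;

-- edit the trigger body to reference the new table name, then:
ALTER TRIGGER apps.xxoc_preq_capture_trg COMPILE;
ALTER TRIGGER apps.xxoc_preq_capture_trg ENABLE;
```

The rename is the safety net: if anything unknown did depend on the old name, it would fail loudly rather than silently losing data.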

This continued for another 6 weeks. The neat thing I noticed is that FGA continued to monitor the newly-renamed Table and kept logging to SYS.FGA_LOG$ without the need for any updates/changes.

Also during this time, I noticed this error in our Alert log:

Errors in file /u05/GOLD/bdump/gold2_p000_26119.trc:
ORA-00600: internal error code, arguments: [kzaSqlTxtLob1], [10382], [], [], [], [], [], []
ORA-10382: parallel query server interrupt (reset)

The Oracle 600-error lookup tool led me to: Bug 4581220 OERI:kzaSqlTxtLob1 with fine grained auditing and parallel query. This was probably due to the FGA I had running on my "mystery table" while STATS collection runs (in parallel) - so I turned off FGA and have not seen this error again.

Since no failures or errors were noted across two financial month-end closes, I decided to:

Disable the Trigger
TRUNCATE the Table and return space back to the Tablespace

The Table had grown to approximately 190 million rows of data - growing basically since going live on EBS.

Based on all the information gathered on this issue, and from what I've seen on other processes set up very similarly to this one, I have come up with a theory of what this was all about:

In July of 2008, our go-live consulting firm (an Oracle Partner) was creating a DISCOVERER Report for our Purchasing Department and was looking for which Temp Tables in EBS to capture to a static Table for this reporting function. Documentation either was never provided or does not exist. They created this Trigger and Table. Then they realized that they needed two other Tables instead and created two new Triggers/Tables to accomplish the task. They created the report in DISCOVERER using those two other Tables, then simply forgot to clean up after themselves - leaving the original Trigger writing data ever since (2008).

Thursday, September 27, 2012

Mounting a diskgroup fails with error ORA-600 [kfcema02]

Just yesterday I became familiar with a Bug (6163771) that can rear its ugly head whenever you perform a SHUTDOWN ABORT. This is reported as fixed in later releases and has the following description details:

During instance recovery, mounting a diskgroup can fail with ORA-600[KFCEMA02].
There is a mismatch between the FCN recorded in the block and the FCN recorded
in the ACD.  block FCN < ACD fcn.

The top functions in the call stack are:

    kfgInitCache -> kfcMount ->kfrcrv -> kfrPass2 -> kfcema 

The trace file contains the FCN for the current block being recovered.

 kfbh_kfcbh.fcn_kfbh = 0.5538283

 BH: (0x3807959c0)  bnum=13 type=FILEDIR state=rcv chgSt=not modifying
      flags=0x00000000  pinmode=excl  lockmode=null  bf=0x38040c000
      kfbh_kfcbh.fcn_kfbh = 0.5538283  lowAba=0.0  highAba=0.0
      last kfcbInitSlot return code=null cpkt lnk is null

The ACD fcn is the second argument on the ORA-600 [KFCEMA02]

This patch does not fix a diskgroup with the error already introduced.  
It will prevent future occurrences.


The customer had a maintenance window (for something else) this morning on
this development RAC. We could not shut down cleanly. Then, after the
maintenance window, the FRA diskgroup would not mount.



Call stack:

ksedmp kgerinv kgeasnmierr kfcema kfrPass2 kfrcrv
kfcMount kfgInitCache kfgFinalizeMount 3088 kfgscFinalize kfgForEachKfgsc
kfgsoFinalize kfgFinalize kfxdrvMount kfxdrvEntry opiexe opiosq0
kpooprx kpoal8 opiodr ttcpip opitsk opiino
opiodr opidrv sou2o opimai_real...

Alert log and trace file uploaded

Development has found a bug in the way checkpoints are maintained and this 
bug is the probable cause of the kfcema02 assert these customers are seeing.  
 We have a high degree of confidence that the bug we found is the cause of 
the customer issues because of what we saw in the AMDU dumps.

The problem is that buffers on the ping queue are not sorted in any 
particular order.  The fix is for kfrbCkpt to scan the entire ping queue to 
find the oldest buffer when computing the new checkpoint.   kfcbDriver is 
also updated to scan the entire ping queue when computing the targetAba for 
kfcbCkpt, but that code change is not critical because the only effect of 
having the targetAba be higher than it should be was that DBWR would write 
more dirty buffer than it really needed to.

After reading this BLOG post from a while ago on ORACLE-L, I was not encouraged, to say the least.

Reaching out to Oracle Support helped solve the problem by employing an 11g tool (which can also run on 10g) called facp, along with AMDU. AMDU was released with 11g and is used to get the location of the ASM metadata across the disks. Like many other tools released with 11g, it can be used in 10g environments. Note 553639.1 is the placeholder for the different platforms and also includes configuration instructions. AMDU only needs to be configured (not run) for this fix, since facp calls AMDU.

Steps taken to resolve:

Transfer amdu and facp to a working directory and include that directory in LD_LIBRARY_PATH, PATH, and other relevant variables.

Download the facp script from the SR attachment.

Then, scan the ACD and generate the pertinent files:

$./facp '/dev/oracleasm/disks*' 'DG6' ALL

This generates files named facp* in the same directory.

Then try to adjust all checkpoints by 10 blocks:

./facp_adjust -10

Run facp_check after the adjustment to verify the checkpoints are valid.


If you adjusted too much, facp_check will not print "Valid Checkpoint" - try adjusting less until you get "Valid Checkpoint" for both threads.

Once facp_check reports "Valid Checkpoint" for all threads, that is the indication to proceed with the real patching - that is, updating the ACD records.

Write ASM metadata with the new data:


Then try to mount this diskgroup manually:

SQL> alter diskgroup DG6 mount; --------->> ASM sqlplus

SQL> select name,state from v$asm_diskgroup; --------->> ASM sqlplus

Everything showed MOUNTED and we were able to bring up our Production DB.

If you experience this issue, log an SR with Oracle Support to obtain these tools if they are not already on your system.

Monday, September 24, 2012

Missing Password Files

I don't usually log in as SYSDBA for normal activities; I prefer to log in as the main EBS application user most of the time. However, when trying to log in from SQLPLUSW.exe on my local machine, once I specified "as sysdba" I would receive this:

ORA-01031: insufficient privileges

At this point I would always ssh into a terminal session directly to the server and logon with:

./sqlplus / as sysdba

Later I would notice that:

SQL> show parameter password;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile            string      EXCLUSIVE

But when I would query:

SQL> select * from v$pwfile_users;

no rows selected

So, maybe I just need to add a user to the pw file?

SQL> create user my_temp identified by my_temp;

User created.

SQL> grant sysdba to my_temp;
grant sysdba to my_temp
ERROR at line 1:
ORA-01994: GRANT failed: password file missing or disabled

When looking in $ORACLE_HOME/dbs, I did not see any orapw<SID> file, so I decided to try and create one (in Test) using:

$ORACLE_HOME/bin/orapwd file=orapworadev1 password=<password> entries=5

Once I did this, I was able to:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
SYS                            TRUE  TRUE

and log into the database with the SYSDBA privilege remotely.

Wednesday, September 19, 2012

Growing MLOG$ Tables with EBS 12

There are several snapshot tables (materialized view logs) in EBS that are set to Fast Refresh on Demand and that, if not refreshed, will grow to very large sizes. It's good to take a look at what schemas these tables reside in using the following SQL:

  SELECT owner, object_name
    FROM dba_objects
   WHERE object_name LIKE 'MLOG$%' AND object_type = 'TABLE'
ORDER BY owner, object_name

Added Tip: Or even better, run this to see which MLOG$ tables you should address first:

  SELECT owner, table_name, cnt
    FROM (  SELECT owner, table_name, TO_NUMBER (EXTRACTVALUE (xmltype (DBMS_XMLGEN.getxml ('select count(*) c from ' || owner || '.' || table_name)), '/ROWSET/ROW/C')) AS cnt
              FROM all_tables
             WHERE table_name LIKE 'MLOG$%'
          ORDER BY owner, table_name)
ORDER BY cnt DESC;

If your stats are current, you can quickly get an idea of the impact on your system. In my case, I had several snapshot logs (MLOG$ tables) that were over 1m rows - as high as 17m rows.

Using Toad, it might be as simple as navigating to the underlying materialized view, right-clicking, and doing a Refresh (Complete, Atomic = False). To find out what MV this log is tied to, you can use this:

  SELECT a.owner, a.master, a.mview_last_refresh_time, m.name, m.mview_site
    FROM all_base_table_mviews a, all_registered_mviews m
   WHERE a.mview_id = m.mview_id;

To find out what a.MASTER refers to, simply browse the DDL for the MLOG$ object - you will see a comment naming the master table.


When refreshing, you may encounter the following error(s) however:

BEGIN
   DBMS_SNAPSHOT.REFRESH(
      LIST                 => 'INV.MTL_SYS_ITEMS_SN'
     ,METHOD               => 'C'
     ,PURGE_OPTION         => 1
     ,PARALLELISM          => 0
     ,NESTED               => FALSE);
END;
Error at line 2
ORA-12008: error in materialized view refresh path
ORA-08002: sequence MRP_AP_REFRESH_S.CURRVAL is not yet defined in this session
ORA-06512: at "APPS.MRP_SN_SYS_ITEMS_T1", line 7
ORA-04088: error during execution of trigger 'APPS.MRP_SN_SYS_ITEMS_T1'
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2254
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2460
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2429
ORA-06512: at line 2

At this point, it is just as easy to drop and recreate the MV. If you had very large MLOG$ tables, you will have to do the following to return space to the tablespace:

11:24:00 GOLD1> alter materialized view log on INV.MTL_MATERIAL_TRANSACTIONS_TEMP enable row movement;

Materialized view log altered.

11:30:33 GOLD1> alter materialized view log on INV.MTL_MATERIAL_TRANSACTIONS_TEMP shrink space compact;

Materialized view log altered.

11:30:53 GOLD1> alter materialized view log on INV.MTL_MATERIAL_TRANSACTIONS_TEMP shrink space;

Materialized view log altered.

11:31:53 GOLD1> alter materialized view log on INV.MTL_MATERIAL_TRANSACTIONS_TEMP disable row movement;

Materialized view log altered.

11:32:28 GOLD1> alter materialized view log on INV.MTL_MATERIAL_TRANSACTIONS_TEMP deallocate unused;

Materialized view log altered.

That should take care of this Maintenance task. Moving forward, you may want to schedule a regular job to address these, using combinations of the refresh and shrink commands above and dropping/creating the MV's that cannot be refreshed.
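A sketch of scheduling such a job with DBMS_SCHEDULER (the job name and the MV in the refresh call are hypothetical; substitute your own list):

```sql
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'XX_MV_MAINT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
                            DBMS_MVIEW.REFRESH ('INV.MTL_SYS_ITEMS_SN', 'C',
                                                atomic_refresh => FALSE);
                          END;]',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=2',
    enabled         => TRUE);
END;
/
```

Keeping the refresh on a fixed schedule prevents the MLOG$ tables from ever reaching the multi-million-row sizes described above.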

Monday, September 17, 2012

Bulk Processing with BULK COLLECT and FORALL

Here's a great article discussing using BULK COLLECT and FORALL when doing mass updates, as opposed to looping through each row and incurring a context switch between the PL/SQL and SQL engines:
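The gist of the pattern, as a minimal sketch against a hypothetical table:

```sql
DECLARE
  TYPE t_id_tab IS TABLE OF my_table.id%TYPE;   -- my_table is hypothetical
  l_ids t_id_tab;
BEGIN
  -- one context switch to fetch the whole set into a collection
  SELECT id
    BULK COLLECT INTO l_ids
    FROM my_table
   WHERE status = 'PENDING';

  -- one context switch to apply every update
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE my_table
       SET status = 'PROCESSED'
     WHERE id = l_ids(i);

  COMMIT;
END;
/
```

For very large sets, add a LIMIT clause to the fetch and loop in batches so the collection doesn't consume too much PGA.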

Thanks Steven!

11/21/2012:  Also see this video:

Thursday, August 23, 2012

Mystery Tables (Part I)

Recently, I have been trying to figure out the use/need for three mystery tables in our EBS 12 system. These tables were basically archiving Global Temporary Table (GTT) data generated by Advanced Pricing tables such as QP.QP_PREQ_LINE_ATTRS_TMP_T, via a Trigger. There were no comments in the Trigger to indicate why this was done.

Usually, the first place I look is for custom code within the database that accesses the table in any way. One of my favorite statements is:

  SELECT owner, object_name, object_type
    FROM dba_objects
   WHERE object_name LIKE '%PREQ%';

However, only the Trigger appeared - no other Procedures or Packages. I also looked on other Production systems to see if a DBLink was being used. Nothing.

I also checked for any SQL History:

        select sql_id
          from dba_hist_sql_plan
         where object_name = 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T';

Since we use Oracle Discoverer, I checked there next using something like:

   SELECT Obj.Obj_Name
    FROM Eul_Us.Eul5_Segments Seg, Eul_Us.Eul5_Objs Obj
   WHERE     Seg.Seg_Seg_Type = 5
         AND Seg.Seg_Cuo_Id = Obj.Obj_Id
         AND (   upper(Seg.Seg_Chunk1) LIKE '%XXOC_QP_PREQ_LDETS_TMP_T%'
              OR upper(Seg.Seg_Chunk2) LIKE '%XXOC_QP_PREQ_LDETS_TMP_T%'
              OR upper(Seg.Seg_Chunk3) LIKE '%XXOC_QP_PREQ_LDETS_TMP_T%'
              OR upper(Seg.Seg_Chunk4) LIKE '%XXOC_QP_PREQ_LDETS_TMP_T%')
ORDER BY Obj.Obj_Id, Seg.Seg_Sequence;

which showed me a report that was using two of the smaller tables. Great - I can purge unwanted data according to the report users' specifications using the following methodology:
  1. Create a CTAS statement containing the keep records in each table
  2. Truncate the original table
  3. Perform a 10g SHRINK on the table
  4. Insert the records from the Table created in #1 back into the original table
  5. Re-analyze the table
However, I'm still left with one last table - and it contains about 190m rows (20GB) of data. Another interesting fact about this table is that it has no indexes on it - and no date columns. Whatever could be using it? Any WHERE clause would force a Full Table Scan of 190m rows.

Instead of turning on Auditing in our Production EBS system, I decided to use Fine Grained Auditing (FGA), which has been available since Oracle 9i.

To start, I created this little Table to capture some of this audit information:

create table XXOC.PREQ_audit
( whodidit     varchar2(40)
, whenwasit    timestamp
, sql_executed varchar2(4000)
);

Next, I created a Handler:

create or replace package XXOC.audit_handler
as
  procedure handle_preq_access
  ( object_schema VARCHAR2
  , object_name   VARCHAR2
  , policy_name   VARCHAR2 );
end audit_handler;
/

create or replace package body XXOC.audit_handler
as
  procedure handle_preq_access
  ( object_schema VARCHAR2
  , object_name   VARCHAR2
  , policy_name   VARCHAR2
  ) is
  begin
    insert into XXOC.PREQ_audit
    ( whodidit, whenwasit, sql_executed )
    values
    ( user, systimestamp, sys_context('userenv','current_sql') );
  end handle_preq_access;
end audit_handler;
/

Lastly, I created this FGA Policy:

begin
  dbms_fga.add_policy
  ( object_schema  => 'XXOC'
  , object_name    => 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T'
  , policy_name    => 'PREQ_ACCESS_HANDLED'
  , handler_schema => 'XXOC'
  , handler_module => 'AUDIT_HANDLER.HANDLE_PREQ_ACCESS' );
end;
/

To change/remove the policy, you can simply use:

begin
  dbms_fga.drop_policy
  ( object_schema => 'XXOC'
  , object_name   => 'XXOC_QP_PREQ_LINE_ATTRS_TMP_T'
  , policy_name   => 'PREQ_ACCESS_HANDLED' );
end;
/

and then recreate it if needed.

This will now capture any SELECT operations on the target table and, by default, log information to the SYS.FGA_LOG$ table. The example above illustrates that you can extend this functionality to do just about anything with this information - including sending an email alert.
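For example, the handler could be extended with something like the following (a sketch; it assumes UTL_MAIL is installed and smtp_out_server is configured, and the addresses are placeholders):

```sql
begin
  -- notify the DBA with the captured SQL (addresses are hypothetical)
  utl_mail.send( sender     => 'dba@example.com'
               , recipients => 'dba@example.com'
               , subject    => 'FGA alert: PREQ table accessed'
               , message    => sys_context('userenv', 'current_sql') );
end;
/
```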

What comes through with FGA will determine the next steps. More later.

Tuesday, August 14, 2012

Why does TOAD prompt me to Commit or Rollback?

I have been using Toad for a while now and noticed that, from time to time, it would present a Commit/Rollback dialog when I was closing a DB connection.

Being unsure, I would always click 'Rollback', thinking I had some orphaned DML statement. I'm usually very deliberate about how and when I update rows, so I thought this was odd at the time and continued on.

Turns out this is simply caused by performing a normal SELECT statement while using a DB_Link to a remote server.

A question you may be asking yourself: why does a SELECT over a DB_Link require a commit after execution?

Answer: When Oracle performs a distributed SQL statement, it reserves an entry in the rollback segment area for two-phase commit processing. This entry is held until the SQL statement is committed, even if the statement is a query. The requirement to commit distributed SQL exists even with the automatic undo management available in version 9i and newer.

Notice that even the SELECT statement is covered by this behavior.

After a review of our Developers' code, we decided it should be a habit to always issue a COMMIT after any SELECT that used a DB_Link, to release Oracle's two-phase commit entry - in much the same spirit as CLOSING/DISPOSING of connections.
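In practice (the table and link names are hypothetical):

```sql
-- even a read-only query over a DB_Link opens a distributed transaction
SELECT COUNT (*) FROM some_table@remote_db;

-- release the reserved undo entry, even though nothing was changed
COMMIT;
```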

Wednesday, August 1, 2012

Oracle Materialized Views with Atomic Refresh

If you find yourself supporting a mature DB that makes use of Materialized Views (MV), there are two aspects that deserve some investigation on your system: Refresh Types and Atomic Refresh.

I won't go into much detail here, but I did want to mention that if you have any MV's set to Fast refresh, check the associated MLOG$ table sizes. If those MVs haven't been refreshed in a while, the MLOG$'s will have grown rather large - I have seen forgotten MV's with a log of 100m rows. Of course, if you are still using the MV, after you refresh it you will have to rebuild the MLOG$ to return the disk space to the Tablespace. For more information, see this previous post.

The other piece I wanted to mention concerns how the MV is being refreshed. If you are noticing high amounts of Redo/Arch activity and suspect that a portion of it is caused by the frequent refreshes of all your MV's, explore the Atomic Refresh = False option for non-Fast (Complete) refreshes.

For example: 

SQL> EXEC DBMS_MVIEW.REFRESH('xx_mv', 'C', atomic_refresh=>FALSE);

This command uses a TRUNCATE instead of a DELETE, with two obvious benefits: far less redo is generated and it is faster for MVs of any size.

"This is also why I do NOT setup a refresh schedule in the mat view creation itself (START WITH/NEXT syntax). Stick with dbms_scheduler to setup refresh schedule, calling dbms_mview.refresh" – tbone Jul 3 at 11:29

Monday, July 30, 2012

Ellison's Law

"The value of information increases exponentially as fragmentation is reduced."

Data Consolidation Creates Value

Data consolidation creates the foundation to understand your business operations.

Each type of consolidation provides increasing levels of return:
  • Hardware/System Services
  • Software/Database (don't forget Licensing and Support fees)
  • Business Processes and Functions
  • Shared Services/Process Administration

Friday, July 27, 2012

Dealing with ORA-01427 Errors Returned by a Stored Procedure

Recently, I received this error in an email from a Stored Procedure:

ORA-01427: single-row subquery returns more than one row

Usually, I would have liked to receive a line number or some other clue as to where in these 2,000 lines of code the error occurred (I did not write this procedure), but this is all I had to work with.

Thinking back - ever have a SQL statement that works fine when you run it to fetch data, only to be confronted with an error when you go to create a Table with it (via CTAS or the like)? Well, this was the path I chose to solve this ORA-01427 error.

First, I noticed that this Stored Procedure has a large SQL statement broken into two parts joined by a UNION. To begin, I created a simple CTAS statement using the first part of this SQL. It created the table without reporting any errors, which told me the problem was in the other half of the UNION. When I did the same with the second part of the SQL, TOAD returned the error, reported the line(s) causing it, and even highlighted the line in my Editor.

With this information, I was able to put the fix in place (you will either need to modify your SQL or you have bad data) to get the Stored Procedure running again.
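To illustrate with hypothetical tables - the error appears as soon as the scalar subquery can match more than one row, and one fix is to force it to a single row:

```sql
-- raises ORA-01427 when a header has more than one matching line
SELECT h.order_number,
       (SELECT l.line_id
          FROM order_lines l
         WHERE l.header_id = h.header_id) AS line_id
  FROM order_headers h;

-- one fix: make the subquery single-row (or correct the bad data)
SELECT h.order_number,
       (SELECT MAX (l.line_id)
          FROM order_lines l
         WHERE l.header_id = h.header_id) AS line_id
  FROM order_headers h;
```

Whether MAX (or MIN, or an extra predicate) is the right fix depends on which of the duplicate rows the business actually wants.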

As any system "matures" with new data, sometimes data scenarios present themselves that were not originally anticipated by the Developer. Just as likely, there are times when you simply have bad data and someone in the Business Group needs to correct it.

Tuesday, July 3, 2012

Link to Single Sales Order in EBS 12

In various SSRS reports I have created, Order Number is a popular request. Rather than just display the raw Order Number, why not provide the User the ability to hyperlink to this item in EBS?

To do so doesn't require much - all you need are the Header_ID and Order_Number fields; create a new data column using the following string concatenation in your report's SQL:

SELECT <your columns as you would>,
       'http://<FQDN>:<port>/OA_HTML/OA.jsp?akRegionCode=ORDER_DETAILS_PAGE&akRegionApplicationId=660&HeaderId=' || MY.header_id || '&Order_Id=' || MY.order_number || '&addBreadCrumb=Y&oapc=2' AS URL
  FROM ...
 WHERE ...

(<FQDN> is the fully qualified domain name of your EBS server, and <port> its port.)

In SSRS, for example, go to the TextBox Properties of the Order Number data item and choose the Navigation tab. Under 'Hyperlink Action', select 'Jump to URL' and, in the pull-down menu, select the URL column named in your SQL.

You may want to style the Order Number's display - make it BLUE and Underlined - to remind the User that the item is hyperlinked, as SSRS doesn't show this by default.

Monday, June 25, 2012

Beware of (fast) Materialized Views created with DBLinks

Today I was looking at a particular instance's MLOGs (logs created by Materialized Views with REFRESH FAST ON DEMAND) and noticed that the source MVs were not located on this server at all. These MVs were built on another instance referencing tables on this server by use of DBLinks.

Recap of MLOGs (or snapshot logs):
When DML changes are made to master table data, Oracle Database stores rows describing those changes in the materialized view log and then uses the materialized view log to refresh materialized views based on the master table. This process is called incremental or fast refresh. Without a materialized view log, Oracle Database must re-execute the materialized view query to refresh the materialized view. This process is called a complete refresh. Usually, a fast refresh takes less time than a complete refresh.
A materialized view log is located in the master database in the same schema as the master table. A master table can have only one materialized view log defined on it. Oracle Database can use this materialized view log to perform fast refreshes for all fast-refreshable materialized views based on the master table.
To fast refresh a materialized join view, you must create a materialized view log for each of the tables referenced by the materialized view.
When I used this query to look at the underlying MVs, all of them had not had a refresh since 2009:

SELECT a.owner, a.master, a.mview_last_refresh_time, m.name, m.mview_site
  FROM dba_base_table_mviews a, dba_registered_mviews m
 WHERE a.mview_id = m.mview_id;

Since these were never analyzed, I couldn't initially determine how much data they were holding so I analyzed these MLOG tables and they summed to about 80 million rows of data. 

With that information, and the knowledge that these MVs were no longer needed, I dropped the MVs on the remote instance and reanalyzed the MLOGs again. Most all the rows were removed! Great right? Well, now some cleanup on these tables needed to take place to give back this wasted space to the Tablespace.

Being on 10g, the "shrink" commands were used:

alter materialized view log on < table > enable row movement;
alter materialized view log on < table > shrink space compact;
alter materialized view log on < table > shrink space;
alter materialized view log on < table > disable row movement;
alter materialized view log on < table > deallocate unused;

Now - which remaining MVs still require these MLOGs?

After verifying that no MVs existed referencing these Master tables, I dropped the snapshot logs - since they were still being written to - using:

DROP MATERIALIZED VIEW LOG ON <table>;


Monday, June 18, 2012

SOA Composites Dashboard

So I wanted to create a simple SSRS report, emailed on a regular basis, that showed some basic information about our running Composites - instead of clicking into each Group/Composite in SOA Enterprise Manager (EM).

Here is the query I initially came up with that seems to work pretty well:

  SELECT SOAGroup,
         SOAComposite,
         SOAVersion,
         max(id) as composite_id,
         decode(min(state), 0, 'Running',
         1, 'Completed',
         2, 'Running with faults',         
         3, 'Completed with faults',
         4, 'Running with recovery required',
         5, 'Completed with recovery required',
         6, 'Running with faults and recovery required',
         7, 'Completed with faults and recovery required',
         8, 'Running with suspended',
         9, 'Completed with suspended',
         10,'Running with faults and suspended',
         11,'Completed with faults and suspended',
         12,'Running with recovery required and suspended',
         13,'Completed with recovery required and suspended',
         14,'Running with faults, recovery required, and suspended',
         15,'Completed with faults, recovery required, and suspended',
         16,'Running with terminated',
         17,'Completed with terminated',
         18,'Running with faults and terminated',
         19,'Completed with faults and terminated',
         20,'Running with recovery required and terminated',
         21,'Completed with recovery required and terminated',
         22,'Running with faults, recovery required, and terminated',
         23,'Completed with faults, recovery required, and terminated',
         24,'Running with suspended and terminated',
         25,'Completed with suspended and terminated',
         26,'Running with faulted, suspended, and terminated',
         27,'Completed with faulted, suspended, and terminated',
         28,'Running with recovery required, suspended, and terminated',
         29,'Completed with recovery required, suspended, and terminated',
         30,'Running with faulted, recovery required, suspended, and terminated',
         31,'Completed with faulted, recovery required, suspended, and terminated',
         64,'???????') run_state, 
         MAX (Created_time) AS last_run_time,
         SYSDATE - MAX (Created_time) AS Elapsed,
            '<EM console URL prefix>'
         || SOAGROUP ||'/' || SOACOMPOSITE || '%20[' || SOAVERSION || ']&type=oracle_soa_composite&soaContext='
         || SOAGROUP || '/' || SOACOMPOSITE || '!' || SOAVERSION || '/' || max(id) AS URL       
    FROM (SELECT SUBSTR (composite_dn, 1, INSTR (composite_dn, '/') - 1) AS SOAGroup,
                 SUBSTR (composite_dn, INSTR (composite_dn, '/') + 1, (INSTR (composite_dn, '!') - INSTR (composite_dn, '/') - 1)) AS SOAComposite,
                 SUBSTR (composite_dn, INSTR (composite_dn, '!') + 1, (INSTR (composite_dn, '*') - INSTR (composite_dn, '!') - 1)) AS SOAVersion,
                 id, state, created_time
            FROM prd_soainfra.composite_instance)
GROUP BY SOAGroup, SOAComposite, SOAVersion
ORDER BY SOAGroup, SOAComposite, SOAVersion

I then created a Data Source to the SOA (PRD_SOAINFRA user) Server and scheduled a report with email delivery. The Composite ID column was hyperlinked using the URL column (not displayed) in SSRS. This allows you to click and be taken directly to the Composite in EM once logged in.

Thursday, June 14, 2012

Database Resource Manager - CREATE_SIMPLE_PLAN

How to use Oracle's CREATE_SIMPLE_PLAN to implement a Database Resource Manager (DRM) in 10g.

You can use the following as an easy way to segregate certain user connections based on CLIENT_PROGRAM - for example, Oracle Discoverer (client and web-based) and Oracle SQL Developer.

This plan will demonstrate how to put sessions using these programs in a lower priority queue while maintaining the rest of your users in the "default" queue. SYS and SYSTEM users are in the highest queue.

Step 1:

exec dbms_resource_manager.create_pending_area();

BEGIN
  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.client_program,
    value          => 'DIS51USR.EXE',
    consumer_group => 'LOW_GROUP');

  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.client_program,
    value          => 'DIS51WS@OCTEST.OCINT.COM (TNS V1-V3)',
    consumer_group => 'LOW_GROUP');

  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.client_program,
    value          => 'SQL DEVELOPER',
    consumer_group => 'LOW_GROUP');
END;
/
Step 2:
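A minimal sketch of what Step 2 typically looks like - a CREATE_SIMPLE_PLAN call, assuming the plan name XXOC_SIMPLE_PLAN1 referenced later in this post and an illustrative CPU percentage:

```sql
BEGIN
  -- SYS_GROUP and OTHER_GROUPS are added to a simple plan automatically;
  -- the percentage here is illustrative only
  DBMS_RESOURCE_MANAGER.create_simple_plan(
    simple_plan     => 'XXOC_SIMPLE_PLAN1',
    consumer_group1 => 'LOW_GROUP',
    group1_percent  => 20);
END;
/
```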


Step 3:
exec dbms_resource_manager.submit_pending_area();

Now let's validate what was created using the following SQL.

Validate the Plan creation:

SELECT plan, comments, status
  FROM dba_rsrc_plans;

Groups within a Plan:

SELECT plan, group_or_subplan, cpu_p1, cpu_p2, cpu_p3
  FROM dba_rsrc_plan_directives;

User Mappings:

SELECT username, initial_rsrc_consumer_group
  FROM dba_users
 ORDER BY 1, 2;

Special User Mappings (showing client_program):

SELECT attribute, value, consumer_group
  FROM dba_rsrc_group_mappings;

Ready to Activate your Plan?

ALTER SYSTEM SET resource_manager_plan = 'XXOC_SIMPLE_PLAN1';

(If RAC, run on each node - meaning you can have different plans on different Nodes.)

View active plans:

SELECT name, is_top_plan
  FROM v$rsrc_plan;

Monitor your Plans' activity:

SELECT name, active_sessions, consumed_cpu_time, cpu_waits
  FROM v$rsrc_consumer_group;

To allow for Group Switching between DEFAULT_CONSUMER_GROUP and LOW_GROUP, you can do the following:

BEGIN
  DBMS_RESOURCE_MANAGER.create_pending_area();
  DBMS_RESOURCE_MANAGER.update_plan_directive(
    plan => 'XXOC_SIMPLE_PLAN1',
    group_or_subplan => 'DEFAULT_CONSUMER_GROUP',
    new_comment => ' ',
    new_cpu_p1 => 0, new_cpu_p2 => 80, new_cpu_p3 => 0, new_cpu_p4 => 0,
    new_cpu_p5 => 0, new_cpu_p6 => 0, new_cpu_p7 => 0, new_cpu_p8 => 0,
    new_parallel_degree_limit_p1 => -1,
    new_active_sess_pool_p1 => -1,
    new_queueing_p1 => -1,
    new_switch_group => 'LOW_GROUP',
    new_switch_time => -1,
    new_switch_estimate => true,
    new_max_est_exec_time => -1,
    new_undo_pool => -1);
  DBMS_RESOURCE_MANAGER.submit_pending_area();
END;
/