Wednesday, April 24, 2013

Proxy User in Oracle

If you find yourself with the requirement to allow a user to create objects in another user's schema, you may want to take a look at Proxy User functionality.


As a test, I created a new user (abtest) and, as expected, was not able to create an object in my regular (aballo) schema. You could perhaps grant execute on a procedure that performs the DDL (which is messy)….

I then tried logging in as “ABTEST” as a proxy user for “ABALLO”, and this DID appear to work. A test is below…  maybe this will work for your requirements? This approach is also useful when you don't want to give out a particular user's password.


16:43:11 ODST> create user abtest
16:43:31   2  identified by pwd4ab
16:43:44   3  default tablespace users
16:44:35   4  temporary tablespace temp_group1
16:45:11   5  quota 1m on users;

User created.

17:00:48 ODST> grant create session,create table to abtest;

Grant succeeded.

17:00:53 ODST> connect abtest/pwd4ab@odst;
Connected.

17:01:46 ODST> CREATE TABLE ABALLO.T2
17:01:47   2  (
17:01:47   3    N1  NUMBER,
17:01:47   4    N2  NUMBER
17:01:47   5  )
17:01:47   6  TABLESPACE USERS
17:01:47   7  RESULT_CACHE (MODE DEFAULT)
17:01:47   8  PCTUSED    0
17:01:47   9  PCTFREE    1
17:01:47  10  INITRANS   1
17:01:47  11  MAXTRANS   255
17:01:47  12  STORAGE    (
17:01:47  13              PCTINCREASE      0
17:01:47  14              BUFFER_POOL      DEFAULT
17:01:47  15              FLASH_CACHE      DEFAULT
17:01:47  16              CELL_FLASH_CACHE DEFAULT
17:01:47  17             )
17:01:47  18  LOGGING
17:01:47  19  NOCOMPRESS
17:01:47  20  NOCACHE
17:01:47  21  NOPARALLEL
17:01:47  22  MONITORING;
CREATE TABLE ABALLO.T2
*
ERROR at line 1:
ORA-01031: insufficient privileges

17:13:02 ODST> connect ###mgr/##########@odst
Connected.

17:24:39 ODST> alter user aballo grant connect through abtest;

User altered.

17:28:44 ODST> connect abtest[aballo]/pwd4ab@odst;
Connected.

17:29:22 ODST> CREATE TABLE ABALLO.T2
17:29:27   2  (
17:29:27   3    N1  NUMBER,
17:29:27   4    N2  NUMBER
17:29:27   5  )
17:29:27   6  TABLESPACE USERS
17:29:27   7  RESULT_CACHE (MODE DEFAULT)
17:29:27   8  PCTUSED    0
17:29:27   9  PCTFREE    1
17:29:27  10  INITRANS   1
17:29:27  11  MAXTRANS   255
17:29:27  12  STORAGE    (
17:29:27  13              PCTINCREASE      0
17:29:27  14              BUFFER_POOL      DEFAULT
17:29:27  15              FLASH_CACHE      DEFAULT
17:29:27  16              CELL_FLASH_CACHE DEFAULT
17:29:27  17             )
17:29:27  18  LOGGING
17:29:27  19  NOCOMPRESS
17:29:27  20  NOCACHE
17:29:27  21  NOPARALLEL
17:29:27  22  MONITORING;

Table created.

17:51:44 ODST> select user from dual;

USER
------------------------------
ABALLO

17:51:59 ODST> select sys_context('userenv','proxy_user') from dual;

SYS_CONTEXT('USERENV','PROXY_USER')
--------------------------------------------------------------------------
ABTEST

17:52:05 ODST>


Another way to view current Proxy Users logged into the instance:


SELECT *
  FROM v$session JOIN v$session_connect_info USING (sid, serial#)
 WHERE authentication_type = 'PROXY'
   AND network_service_banner = 'TCP/IP NT Protocol Adapter for Linux: Version 12.1.0.1.0 - Production';

Thursday, April 18, 2013

Capturing Elapsed Time of a SQL Statement

I'm looking to create a dashboard for our BI tool that will let me capture report run times and display a red/yellow/green light for our reporting subsystem to end users.

My first task is to create (and schedule) a sample report that calculates run time deltas. 


COLUMN start_time NEW_VALUE start
SELECT systimestamp(9) start_time FROM dual;

<your sql statement goes here>

COLUMN end_time NEW_VALUE end 
SELECT systimestamp(9) end_time FROM dual; 
COLUMN elapsed_time NEW_VALUE elapsed
SELECT TO_TIMESTAMP_TZ('&end', 'MM/DD/YYYY HH12:MI:SS.FF9 AM TZH:TZM') - TO_TIMESTAMP_TZ('&start', 'MM/DD/YYYY HH12:MI:SS.FF9 AM TZH:TZM') elapsed FROM dual;  

My time output looks like this:

START_TIME                         
-----------------------------------
4/18/2013 1:10:30.758782000 PM -06:
00                                 
                                                                                
1 row selected.


END_TIME                           
-----------------------------------
4/18/2013 1:10:30.873328000 PM -06:
00                                 
                                                                                
1 row selected.

old: SELECT TO_TIMESTAMP_TZ('&end', 'MM/DD/YYYY HH12:MI:SS.FF9 AM TZH:TZM') - TO_TIMESTAMP_TZ('&start', 'MM/DD/YYYY HH12:MI:SS.FF9 AM TZH:TZM') elapsed FROM dual

new: SELECT TO_TIMESTAMP_TZ('4/18/2013 1:10:30.873328000 PM -06:00', 'MM/DD/YYYY HH12:MI:SS.FF9 AM TZH:TZM') - TO_TIMESTAMP_TZ('4/18/2013 1:10:30.758782000 PM -06:00', 'MM/DD/YYYY HH12:MI:SS.FF9 AM TZH:TZM') elapsed FROM dual

ELAPSED                                           
--------------------------------------------------
+00 00:00:00.114546
                             
1 row selected.

Now you can capture this run time, maybe add a few other columns of useful information, and store it in a table that a performance-metric dashboard report reads from.
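For checking the delta arithmetic outside the database, the same subtraction can be sketched in Python (an illustration only; the timestamp values are the ones from the SQL*Plus output above, truncated to microseconds since Python's datetime carries no nanosecond field):

```python
from datetime import datetime, timedelta, timezone

# Timestamps from the SQL*Plus output above, truncated to microseconds.
tz = timezone(timedelta(hours=-6))
start = datetime(2013, 4, 18, 13, 10, 30, 758782, tzinfo=tz)
end = datetime(2013, 4, 18, 13, 10, 30, 873328, tzinfo=tz)

elapsed = end - start
print(elapsed.total_seconds())  # 0.114546
```

This matches the +00 00:00:00.114546 interval Oracle computed from the two TO_TIMESTAMP_TZ values.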

Monday, April 15, 2013

Index Compression


On static tables used for reporting, when an index is properly ordered, we can take advantage of index compression, which reduces disk I/O. To test this feature, I created a “regular” index and then an index with “COMPRESS 2” on FISCAL_YEAR and FISCAL_PERIOD. The results show a much smaller index (467MB -> 273MB) and even faster execution times.

The number after the COMPRESS keyword denotes how many leading columns to compress. The default is all columns in a non-unique index and all columns except the last column in a unique index.
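The space saving comes from not repeating low-cardinality leading columns in every index entry. A toy sketch of that idea in Python (illustration only; this is not Oracle's actual leaf-block format, and the byte counts are just string lengths on made-up rowid labels):

```python
# Toy model of index prefix compression (illustration only; not Oracle's
# actual leaf-block format). Each entry is (fiscal_year, fiscal_period, rowid).
entries = [("2012", "01", f"AAAR{i:02d}") for i in range(4)] + \
          [("2012", "02", f"AAAR{i:02d}") for i in range(4, 8)]

# Uncompressed: every entry stores the full key prefix.
uncompressed = sum(len(y) + len(p) + len(r) for y, p, r in entries)

# COMPRESS 2: store each distinct 2-column prefix once, plus all suffixes.
prefixes = {(y, p) for y, p, _ in entries}
compressed = sum(len(y) + len(p) for y, p in prefixes) \
           + sum(len(r) for _, _, r in entries)

print(uncompressed, compressed)  # deduplicating prefixes shrinks the segment
```

The fewer distinct values in the leading columns, the bigger the saving, which is why column order matters so much for this feature.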

With ‘Regular’ Index:

select count(*) from ODSMGR.OPERATING_LEDGER_MV
where fiscal_year='2012' and fiscal_period = '01'

--------------------------------------------------------------------------------------------
| Id  | Operation         | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                        |     1 |   256 |    75   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE   |                        |     1 |   256 |            |          |
|*  2 |   INDEX RANGE SCAN| OPERATING_LEDGER_MV_03 | 85275 |    20M|    75   (4)| 00:00:01 |
--------------------------------------------------------------------------------------------

With Index Compress = 2:

--------------------------------------------------------------------------------------------
| Id  | Operation         | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                        |     1 |   256 |    46   (7)| 00:00:01 |
|   1 |  SORT AGGREGATE   |                        |     1 |   256 |            |          |
|*  2 |   INDEX RANGE SCAN| OPERATING_LEDGER_MV_03 | 85275 |    20M|    46   (7)| 00:00:01 |
--------------------------------------------------------------------------------------------

As W. Breitling recently pointed out on ORACLE-L, index compression is included in the basic database license - even SE - and is (unfortunately) a poorly understood and rarely used feature. Test, test, test...




Monday, April 1, 2013

Why did the Optimizer not choose an index?

I'm always learning about the cost-based optimizer (CBO) and enjoy opportunities to investigate and solve real-world problems presented to me. During a recent stress test of some BI (WebFOCUS) reports, I decided to look at the top SQL statement as reported by Quest's Foglight Performance Analysis Tops report.



After cleaning the SQL up some, I had something that basically looked like:

SELECT * FROM ODSMGR.OPERATING_LEDGER_MV T1
WHERE (T1."ACCOUNT_LEVEL_4" = '8060') AND
(T1."FUND_LEVEL_3" = '3U0032') AND 
(T1."FISCAL_PERIOD" = '05') 
AND (T1."FISCAL_YEAR" = '2013')


-----------------------------------------------------------------------------------------
| Id  | Operation         | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                     |     1 | 26471 |   121K  (6)| 00:12:10 |
|*  1 |  TABLE ACCESS FULL| OPERATING_LEDGER_MV |     1 | 26471 |   121K  (6)| 00:12:10 |
-----------------------------------------------------------------------------------------


This table has an index on it - but Period and Year are at the end (columns 5 and 6 of 6):


CREATE INDEX ODSMGR.OPERATING_LEDGER_MV_02 ON ODSMGR.OPERATING_LEDGER_MV
(INDEX_CODE, ORGANIZATION_LEVEL_5, ORGANIZATION_LEVEL_3, FUND, FISCAL_PERIOD, 
FISCAL_YEAR)

After working with the SQL some, I found I could get the optimizer to utilize this index (via skip scan) by passing the hint:  /*+ INDEX_SS_DESC(T1) */ - which made sense: Period and Year were the last two columns.

------------------------------------------------------------------------------------------------------

| Id  | Operation                   | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                        |     1 | 26471 |  4837   (1)| 00:00:29 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| OPERATING_LEDGER_MV    |     1 | 26471 |  4837   (1)| 00:00:29 |
|*  2 |   INDEX SKIP SCAN DESCENDING| OPERATING_LEDGER_MV_02 |   317 |       |  4580   (1)| 00:00:28 |

------------------------------------------------------------------------------------------------------


So where to go from here? First, I would like to point out that this is a static table, refreshed nightly, and it did not have stats on it. As a quick test to see if stats would help, I gathered them - the plan still did not use the index when run without the hint. It did remove the use of dynamic sampling (level 2: 64 blocks), though.

Ok, what next? After reviewing our optimizer's init.ora settings:


optimizer_features_enable            string   11.2.0.2
optimizer_index_caching              integer  90
optimizer_index_cost_adj             integer  100
optimizer_mode                       string   ALL_ROWS
optimizer_secure_view_merging        boolean  FALSE
optimizer_use_invisible_indexes      boolean  FALSE
optimizer_use_pending_statistics     boolean  FALSE
optimizer_use_sql_plan_baselines     boolean  TRUE


I decided to explore the reasons why Oracle was not using the index for Period and Year, first by changing the value of OPTIMIZER_INDEX_COST_ADJ from 100 to 90.

Initially, a FTS was still performed, as shown above. Even with OPTIMIZER_INDEX_COST_ADJ changed from 100 to 90, the optimizer still chose a FTS. It was not until statistics were gathered that the optimizer chose the index (skip scan) without any hint. Also note that once stats were gathered, dynamic sampling went away.


OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path selection to be more or less index friendly—that is, to make the optimizer more or less prone to selecting an index access path over a full table scan.

The default for this parameter is 100 percent, at which the optimizer evaluates index access paths at their regular cost. Any other value makes the optimizer evaluate index access paths at that percentage of the regular cost. For example, a setting of 50 makes an index access path look half as expensive as normal.
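As a quick sanity check of that percentage arithmetic (a sketch only; the 4580 figure is the skip-scan cost from the hinted plan above):

```python
def adjusted_index_cost(base_cost, optimizer_index_cost_adj=100):
    # The optimizer costs index access paths at this percentage of
    # their regular cost (100 = no adjustment).
    return base_cost * optimizer_index_cost_adj / 100

print(adjusted_index_cost(4580, 50))  # 2290.0 -- half as expensive
print(adjusted_index_cost(4580, 90))  # 4122.0
```

Even at 90, the adjusted index cost (4122) is far below the FTS cost of ~121K, which is part of why the unchanged plan was surprising.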

When this test was repeated the following work day, the value at which the index was used had dropped from 90 to 87, so I looked into how this parameter should be used.

Several trusted sites recommend not changing this parameter from its default (100) at the instance level. I did find a way to estimate a value using:


select event, average_wait from v$system_event
where event like 'db_file s%read';

which returned:

EVENT                          AVERAGE_WAIT
------------------------------ ------------
db file sequential read                  .4
db file scattered read                 1.27

A single-block I/O, as typically performed by an index range scan, is measured via the “db file sequential read” wait event, while the multi-block I/O typically performed during a FTS is measured via the “db file scattered read” wait event; this suggested a value of around 70 in this case. Rather than setting it at the session or system level, I tried setting it for my query with the following hint:


SELECT /*+ opt_param('optimizer_index_cost_adj',70) */ count(*) 
FROM ODSMGR.OPERATING_LEDGER_MV T1  -- 84 rows
WHERE (T1."ACCOUNT_LEVEL_4" = '8060') AND
(T1."FUND_LEVEL_3" = '3U0032') AND 
(T1."FISCAL_PERIOD" = '05') 
AND (T1."FISCAL_YEAR" = '2013')

which selected the Index as expected.

At this point, I would like to add that I suspect a possible issue with our system stats. Given the scattered and sequential read times shown above, I queried:

select * from sys.aux_stats$
where pname like '%TIM%' or pname LIKE '%IO%';

and observed values of SREADTIM = 32.497 and MREADTIM = 5.515, which would suggest the opposite: that a multiblock read is roughly 6x faster than a single-block read?

When computed manually from the noworkload formulas, these numbers should be closer to:

SREADTIM = IOSEEKTIM + DB_BLOCK_SIZE / IOTFRSPEED = 10 + 32768 / 4096 = 18ms
MREADTIM = IOSEEKTIM + (MBRC * DB_BLOCK_SIZE) / IOTFRSPEED = 10 + (32 * 32768) / 4096 = 266ms

(ioseektim = 10ms, iotfrspeed = 4096 bytes/ms, db_block_size = 32k, and db_file_multiblock_read_count = 32)
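The same arithmetic, sketched in Python using the noworkload synthesis (assuming ioseektim = 10ms and iotfrspeed = 4096 bytes/ms, as above):

```python
def noworkload_read_times(block_size, mbrc, ioseektim=10, iotfrspeed=4096):
    # CBO noworkload synthesis: seek time plus transfer time, in ms.
    sreadtim = ioseektim + block_size / iotfrspeed
    mreadtim = ioseektim + (mbrc * block_size) / iotfrspeed
    return sreadtim, mreadtim

sreadtim, mreadtim = noworkload_read_times(block_size=32768, mbrc=32)
print(sreadtim, mreadtim)  # 18.0 266.0
```

By these formulas a multiblock read should cost more than a single-block read, not 6x less, which is why the observed aux_stats$ values look suspect.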

While setting a hint will help this particular query run much better, I suspect that the real source of the problem is our system stats.