Useful Oracle DBMS Tips and Tricks


On DW’s NT box, click on the svmgr button on the toolbar. Log in as SYSTEM. Click Storage and then the Tablespace tab. The percentages are shown for data files (name ends with a ‘D’) or index files (name ends with an ‘X’).


PAC cannot access the pay_pdt_batch_lines_s sequence on production. First we need to determine what the object is and who owns it.

select * from dba_objects where object_name like 'PAY_PDT_BATCH_LINES_S%';

Note that the object name above must be in upper case.

Next we log in as PA and create a synonym for the object.

create synonym pay_pdt_batch_lines for pay_pdt_batch_lines_s;

Then we log on as HR and grant select rights to the object.

grant select on pay_pdt_batch_lines_s to pac;
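To confirm the grant took effect, a query like the following can be run as SYSTEM (a sketch; the sequence name comes from the example above):

```sql
-- Verify the grant is in place (run as SYSTEM)
SELECT grantee, owner, privilege
FROM dba_tab_privs
WHERE table_name = 'PAY_PDT_BATCH_LINES_S';
```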


select * from v$version;


SqlDBA will give you response messages indicating when the action you initiated is complete.

(1) log on as toracle/winter97

(2) from the command line type sqldba

(3) enter connect internal

(4) then enter startup open

(5) restart the concurrent managers


SqlDBA will give you response messages indicating when the action you initiated is complete.

(1) shut down the concurrent managers

(2) log on as toracle/winter97 (for test)

(3) from the command line type sqldba

(4) enter shutdown immediate

(5) then enter startup open

Note that ‘shutdown immediate’ waits until all current processes running in the DB are completed. Hence, it could take a while for the DB to come down. If you enter ‘shutdown abort’ then all processes are immediately killed. This option is kind of messy and requires the database to clean up the next time it is brought up (hence, the next time you bring the DB up it will take much longer). Shutdown abort should only be used when absolutely necessary. Most of the time Oracle will clean up after an abort but you never know…


Sometimes the apps crash with a ‘max # extents’ error and the table name. Do the following to increase the maximum extents (in this case we raise maxextents on pa_expenditure_items from 75 to 100).

(1) Log onto sqlplus as pa/pao.

(2) enter ‘alter table pa_expenditure_items storage (maxextents 100);’
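Before raising maxextents, you can check how close the table actually is to its limit. A sketch (requires DBA access; the table name is from the example above):

```sql
-- How many extents has the table used, and what is its current limit?
SELECT segment_name, extents, max_extents
FROM dba_segments
WHERE segment_name = 'PA_EXPENDITURE_ITEMS';
```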


SQL is passed to the DBMS engine as human-readable text. The engine must parse that text in order for it to be interpreted and executed. First, the statement itself must pass lexical analysis, just to ensure that it is a correctly formed SQL statement. In addition, Oracle must evaluate and look up table names in its internal data dictionary to ensure that they are valid. Then Oracle consults its dictionary to ensure that the individual columns are valid and to ascertain the data type and constraints on each column. "Parse locks" prevent all of these parsed elements from changing during the parsing process.

Next, Oracle validates permissions by checking its internal tables to make sure that the user has permission to perform the operation. Finally, the database engine creates an execution plan, based on either the ‘rule’ or the ‘cost’ optimization method. With the cost-based optimization technique, Oracle examines internal statistical data and looks for the presence of indices and other structures to determine how best to fetch the rows (only available in Oracle 7.0 or greater). Rule-based optimization bases its execution plan on the contents and structure of the SQL statement itself rather than on statistics about the affected database objects.

Using a feature known as ‘shared SQL’, Oracle stores this information in a library cache. Oracle calculates and stores a hash value for every statement it sees. If that exact statement is seen again, and the parse information is still in memory, Oracle goes directly to execution and saves much of its own overhead.

Here’s an interesting example of a ‘bad’ SQL statement:

SELECT * FROM customers WHERE customer_number = 100;

Whenever this statement comes through with a different number in the WHERE clause, the hash value differs and hence the entire statement is parsed again. A better way is to use a bind variable:

SELECT * FROM customers WHERE customer_number = :custom
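In SQL*Plus, a bind variable can be declared and assigned like this (a sketch; the variable name is illustrative):

```sql
-- Declare a bind variable, assign it, and reuse the same parsed statement
VARIABLE custnum NUMBER
EXECUTE :custnum := 100;
SELECT * FROM customers WHERE customer_number = :custnum;
```

Re-running the SELECT with a different `:custnum` value hits the same cached parse, since the statement text is unchanged.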

PL/SQL stored procedures, functions, and triggers all reside in the library cache, along with the shared SQL, which is part of the shared pool. To use caching efficiently, consider using PL/SQL packages (a method of associating multiple procedures and functions in a common area with a common interface definition). When a function or procedure contained in a package is called, the entire package is loaded into cache, which means that related logic can be parsed and cached at one time, saving parse time.

The process of forcing a procedure to remain in memory is called ‘pinning’. You can prevent procedures from aging out of the shared pool (more specifically, the library cache) by using the DBMS_SHARED_POOL package. This is a collection of functions that, when run, keep procedures and functions from being flushed by the LRU algorithm for the life of the session. It is especially useful for situations in which logic is used repetitively and you want to ensure that there are no sudden, seemingly random slowdowns in the application. Use the DBMSPOOL.SQL script to create the DBMS_SHARED_POOL package. Then an application can call the procedure DBMS_SHARED_POOL.KEEP() to pin an object in the shared pool. Once the procedure, function, etc. is pinned, it will not age out of the shared pool for the rest of the life of the instance, or until the application releases it with the DBMS_SHARED_POOL.UNKEEP() procedure.
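For example (a sketch; the package name is hypothetical, and DBMSPOOL.SQL must have been run first):

```sql
-- Pin a package in the shared pool so the LRU algorithm cannot flush it
EXECUTE DBMS_SHARED_POOL.KEEP('PA.PA_BILLING_PKG');

-- Later, release the pin
EXECUTE DBMS_SHARED_POOL.UNKEEP('PA.PA_BILLING_PKG');
```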

Index Selectivity

Selectivity of an index refers to the number of data rows to which each index entry points. Higher selectivity means fewer rows per index entry, which is good for any transaction processing environment. The most selective index possible is the one associated with either a primary or unique index, where exactly one data row exists for each index entry. When selectivity is extremely poor, so many data rows are selected per key value that the database may decide that a full table scan is cheaper than using an index. In such cases, it is better not to bother building an index. Oracle comes with a pair of scripts for calculating and reporting index selectivity: UTLOIDXS and UTLDIDXS. Listing columns and how selective they are allows you to choose a more selective column for your index, concatenate columns to increase index selectivity, or simply eliminate the index altogether.

Remember, if a table is small, Oracle will not even bother with an index, so an index is not always a good thing.

When you build a concatenated (or compound) index, you must understand that the leftmost column must be the most selective in order for the index to work efficiently.

Another bad candidate for an index is a status code. Since there are usually only a handful of possible values, the index is not very selective, which can cause Oracle to initiate a full table scan if the optimizer decides that is the cheapest route.

A common rule of thumb is to assume that a full table scan will be performed any time a DBMS determines that it will access more than 20 percent of the rows in any table.

Oracle provides a means to override the optimizer manually and take control of the execution plan with ‘hints’, a means by which the execution plan of a SQL statement is controlled by the SQL statement itself. A hint looks like the following:

SELECT /*+ FULL(customers) */ name, customer_number FROM customers WHERE sales_rep > 100;

Notice that the hint text FULL is enclosed in comment delimiters, with a ‘+’ immediately after the opening delimiter to mark the comment as a hint.

Nested loops are typically faster when the table with the smaller set of rows is the driving (outer) table of the loop.
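If needed, the driving table can be forced with hints. A sketch using hypothetical table names (ORDERED makes Oracle join in FROM-clause order; USE_NL requests a nested loop with the named table as the inner, probed table):

```sql
-- small_table drives the loop; big_table is probed via the join condition
SELECT /*+ ORDERED USE_NL(b) */ s.id, b.details
FROM small_table s, big_table b
WHERE b.id = s.id;
```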


Lay out your large tablespaces in small, manageable sections. Some UNIX systems have problems backing up files over 1G in size. Also, smaller datafiles are much easier to move to other disks.

When allocating a tablespace, use a standardized size. This makes swapping the datafiles to other disks easier than if they were of different sizes (swapping to other disks is quite common when you notice one disk being hit much harder than another).

Keep all datafiles (*.dbf) in the same directory. This makes them much more manageable.

Set maxdatafiles parm as high as possible.

Mirror your redo logs. They are a single point of failure. If you lose them, you may lose your entire database.

Remember, if your SYSTEM tablespace fills up, your database freezes.

Each database should have a minimum of 3 control files. Place the control files on separate physical devices.


The SGA (system global area) is a segment of memory allocated to Oracle that contains data and control information particular to an Oracle instance. Sizing the SGA is partially a hit-and-miss exercise. Oracle allocates memory to the SGA when you start the instance; it is released when the instance is shut down.

The SGA consists of the database buffer cache (which holds copies of data blocks read from the datafiles – this is the cache for the database data), the redo log buffer (which holds info on insert, update, delete, create, alter, and drop operations and is used for database recovery – LGWR writes the redo log buffer to the redo log file on disk when needed), the shared pool (consisting of the library cache, which contains shared SQL areas, private SQL areas (bind variables and such), PL/SQL procedures and packages, and locks), and the dictionary cache (info about the database structure and its users).

The SGA should fit into real memory since paging to and from virtual memory defeats the purpose of this internal database caching mechanism.
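To see how the SGA is currently sized, you can query V$SGA (or use the SHOW SGA command in sqldba/SQL*Plus):

```sql
-- Current SGA component sizes in bytes
SELECT name, value FROM v$sga;
```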


SQL> set autotrace on explain

SQL> set timing on

SQL> select ename, empno, sal from emp where upper(ename) = 'KING';


ENAME           EMPNO        SAL

---------- ---------- ----------

King             7839       5000

Execution Plan


0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=40)

1 0 TABLE ACCESS (BY INDEX ROWID) OF ‘EMP’ (Cost=1 Card=1 Bytes=40)


Explain plans display the execution plan chosen by the Oracle optimizer. It’s the sequence of operations that Oracle performs to execute the statement. Before EXPLAIN PLAN is used, you must run utlxplan.sql to create the PLAN_TABLE that holds the output results.
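EXPLAIN PLAN can also be run by hand once PLAN_TABLE exists. A sketch (the statement_id is an arbitrary label, and the query reuses the emp example above):

```sql
EXPLAIN PLAN SET STATEMENT_ID = 'demo' FOR
SELECT ename, empno, sal FROM emp WHERE empno = 7839;

-- Display the plan as an indented tree
SELECT LPAD(' ', 2 * level) || operation || ' ' || options || ' ' || object_name plan
FROM plan_table
WHERE statement_id = 'demo'
START WITH id = 0
CONNECT BY PRIOR id = parent_id;
```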


You can turn sql trace on at the session level using:

alter session set SQL_TRACE = TRUE;

To enable it from forms, start forms using the -s (statistics) option.

You can also turn on sql trace for other users if you get their SID and SERIAL# from the V$SESSION table. Set sql trace on:

EXECUTE dbms_system.set_sql_trace_in_session([SID],[SERIAL#],TRUE);
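The SID and SERIAL# can be found with a query like the following (a sketch; the username filter is illustrative, and you need DBA access):

```sql
-- Find the target session so it can be passed to set_sql_trace_in_session
SELECT sid, serial#, username, program
FROM v$session
WHERE username = 'PAC';
```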

Note that there are many options available for tkprof. In the statement below the options are:

tkprof ora_15007.trc ora_15007.out explain=apps/fndo sort=exeela sys=n print=10

explain=apps/fndo – Determines the execution plan for each sql statement in the trace file. TKPROF connects to Oracle using the given username/password and issues an EXPLAIN PLAN command.

sys=n – Disables the listing of sql statements issued by the SYS user (recursive sql statements). The default is yes, so this can be used to suppress them. Recursive statements are sometimes issued by Oracle when executing a sql statement. For example, if you are executing a sql statement that inserts records into the database and there is not enough space for the insert, Oracle will issue recursive sql to dynamically extend the tablespace. Recursive calls are marked in the trace output, so you should consider how they were generated and how they affect your analysis (for instance, the recursive example above may have happened while you were running a sql trace but won’t happen under normal conditions).

sort=exeela – There are about 20 sort options that can be specified. This one sorts by elapsed time spent executing.

print=10 – Lists only the first 10 statements. Used in combination with the sort option, this allows you to list the top 10 statements that used the most elapsed time executing.

Also note that there is an INSERT parm that can be used to generate a sql script for inserting the output into the database. This may be useful for storing multiple sql trace runs that you wish to save and compare over time.

The output will list the sql statement, the numeric statistics, summary information, and the explain plan.

The numeric statistics consist of count (number of times the OCI procedure was executed), cpu (cpu time in seconds spent executing), elapsed (elapsed time in seconds spent executing), disk (number of physical reads of buffers from disk), query (number of buffers gotten for consistent read), current (number of buffers gotten in current mode, usually for update), and rows (number of rows processed by the fetch or execute call).

The statistics are broken up into parse, execute, and fetch – the 3 steps of sql statement processing. Parse – the sql is translated into an execution plan; execute – the statement is executed; fetch – rows are returned for the query (only valid for select statements).

Note that timing statistics may show 0 if the TIMED_STATISTICS database parm is turned off.

Also, timing resolution is limited to 10 milliseconds for all statistics, so fast operations may show 0. For example, you may see 0 for the parse statistic because the parsed form of the statement was already in the shared SQL area.


Object grants are stored in the dba_tab_privs table. You must be logged in as SYSTEM to see the table.


select text from dba_views where view_name =

You must adjust the settings in SQL*Plus for the view text to display properly. Use the set long 99999999999999 command.


select text from dba_source where name=

As stated above, you must use the set long 99999999999 command in sqlplus.
