Tuesday, October 6, 2009

Oracle DBA Questions

How do I find out who is currently using the space in the temporary tablespace ?
SELECT SE.USERNAME,
       SE.SID,
       SU.EXTENTS,
       SU.BLOCKS * TO_NUMBER(RTRIM(P.VALUE)) AS SPACE,
       TABLESPACE,
       SEGTYPE
  FROM V$SORT_USAGE SU, V$PARAMETER P, V$SESSION SE
 WHERE P.NAME = 'db_block_size' AND SU.SESSION_ADDR = SE.SADDR
 ORDER BY SE.USERNAME, SE.SID

I'm trying to make a tablespace read only, but it seems to hang forever. What's wrong ?
If you do a very large update in some tablespace – large enough to ensure that at least some of the updated blocks are flushed from the buffer cache – then commit and make that tablespace read only, you will find that some of the blocks in the read only tablespace have not been cleaned out and can no longer be cleaned out. This means that whenever you read any of those blocks from disc Oracle will run through some of its clean-out code – specifically making a call to check the transaction control table in the undo segment header identified by the ITL entry that your transaction used in the data block header (call ktuwh05: ktugct, for those who care about these things).
The overhead is not huge – but there's no point in paying it. So consider forcing a scan of all objects in a tablespace before switching it to read only mode, to make Oracle clean out any blocks that need it. Note, however, that there are at least a couple of reasons why this might be a pointless use of resources. For example, any blocks that were in memory and not cleaned out when you switched the tablespace to read only mode will be cleaned before being written. Moreover, if the contents of the tablespace were created through 'create as select' they will have been created clean. If there are any other reasons, I can't think of them right now.
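If you do decide to force the clean-out, here is a minimal sketch, assuming the tablespace is called TS_RO and that a hinted count(*) against each table is an acceptable way of visiting every table block (indexes, partitions and LOB segments would need similar treatment):
declare
  v_rows number;
begin
  for r in (select owner, table_name
            from   dba_tables
            where  tablespace_name = 'TS_RO') loop
    -- the full() hint makes sure we visit every block of the table
    execute immediate
      'select /*+ full(t) */ count(*) from "' || r.owner || '"."' ||
      r.table_name || '" t'
      into v_rows;
  end loop;
end;
/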
There is a beneficial side-effect to this restriction too. Oracle does not clean out all changed blocks at commit time, and some blocks will end up written to disc in a dirty state. These blocks are normally cleaned out the next time they are read, but this puts them into a new, dirty, state that requires them to be written again; it can also be an expensive process because it can require Oracle to generate a historic image of the rollback segment header blocks. If a dirty block is found in a read only tablespace, though, Oracle knows that it must have been committed before the tablespace was made read only, and therefore never needs to worry about cleaning the block out and re-writing it - it has a good, cheap approximation to the commit SCN in the SCN at which the tablespace went read only.
How can I perform bitwise operations in Oracle ?
Ever since Oracle 7, and probably even earlier, Oracle has contained a BITAND function that returns the logical AND of two integers. Somewhere around 8i they finally decided to put it in the documentation. You can safely rely on this function being retained in future versions of Oracle, because it is used extensively throughout the data dictionary. For example, a column in the SYS.COL$ table is deemed to be "hidden" if bitand(property, 32) = 32.
Note that BITAND returns a binary integer, so to use it in SQL you will need to convert it to a numeric.
SQL> select bitand(49,54) from dual;
ERROR:
ORA-00932: inconsistent datatypes


SQL> select bitand(49,54)+0 from dual;

BITAND(49,54)+0
---------------
             48
Using this function, it's trivial to build the other logical operations:
function bitor(p_dec1 number, p_dec2 number) return number is
begin
  return p_dec1-bitand(p_dec1,p_dec2)+p_dec2;
end;

function bitxor(p_dec1 number, p_dec2 number) return number is
begin
  return bitor(p_dec1,p_dec2)-bitand(p_dec1,p_dec2);
  -- or you could use: return p_dec1-2*bitand(p_dec1,p_dec2)+p_dec2;
end;
Note that BITAND only supports positive integers, so if you want to include support for negatives you will need to wrap some of your own code around the above to handle them.
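For example, here is a minimal sketch that treats its inputs as 32-bit two's complement values; the choice of width, and the trick of mapping negatives into the range 0 to 2^32-1 before calling BITAND, are assumptions you would adjust to suit your own data:
create or replace function bitand32(p_1 number, p_2 number) return number is
  c_two_32 constant number := 4294967296;   -- 2^32
  v_result number;
begin
  -- map signed 32-bit inputs onto 0 .. 2^32-1, AND them, then map back
  v_result := bitand(mod(p_1 + c_two_32, c_two_32),
                     mod(p_2 + c_two_32, c_two_32));
  if v_result >= c_two_32 / 2 then
    v_result := v_result - c_two_32;        -- top bit set, so the result is negative
  end if;
  return v_result;
end;
/
With this, bitand32(-1, 5) returns 5 and bitand32(-1, -1) returns -1, as you would expect for 32-bit values.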
Because many people were unaware of the BITAND builtin, they often sought solace in the UTL_RAW package which looks promising when you describe it.
SQL> desc UTL_RAW
FUNCTION BIT_AND RETURNS RAW
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 R1                             RAW                     IN
 R2                             RAW                     IN

FUNCTION BIT_COMPLEMENT RETURNS RAW
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 R                              RAW                     IN

FUNCTION BIT_OR RETURNS RAW
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 R1                             RAW                     IN
 R2                             RAW                     IN

FUNCTION BIT_XOR RETURNS RAW
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 R1                             RAW                     IN
 R2                             RAW                     IN
But you need to be careful here, as can be best shown with an example. First we'll use the standard BITAND to see what the real result should be:
SQL> exec dbms_output.put_line(bitand(147,63));
19

PL/SQL procedure successfully completed.
Now we'll try UTL_RAW to accomplish the same thing
SQL> declare
  2    x  number := 147;
  3    y  number := 63;
  4  begin
  5    dbms_output.put_line(utl_raw.bit_and(x,y));
  6  end;
  7  /

ERROR at line 1:
ORA-06550: line 5, column 24:
PLS-00306: wrong number or types of arguments in call to 'BIT_AND'
ORA-06550: line 5, column 3:
PL/SQL: Statement ignored
This is because the UTL_RAW package (as the name suggests) only works with RAW data. "No problem" we think, we'll just do some datatype conversions...
SQL> declare
  2    x  number := 147;
  3    y  number := 63;
  4    x_raw raw(4) := utl_raw.cast_to_raw(x);
  5    y_raw raw(4) := utl_raw.cast_to_raw(y);
  6  begin
  7    dbms_output.put_line(utl_raw.bit_and(x_raw,y_raw));
  8  end;
  9  /
303037

PL/SQL procedure successfully completed.
So the PL/SQL worked, but the result is not what we were expecting. Then you may realise that, since the result is raw, an additional conversion is needed:
SQL> declare
  2    x  number := 147;
  3    y  number := 63;
  4    x_raw raw(4) := utl_raw.cast_to_raw(x);
  5    y_raw raw(4) := utl_raw.cast_to_raw(y);
  6  begin
  7    dbms_output.put_line(utl_raw.cast_to_varchar2(utl_raw.bit_and(x_raw,y_raw)));
  8  end;
  9  /
007

PL/SQL procedure successfully completed.
Still no joy! The reason for this behaviour is that we are not taking into account all of the datatype conversions that are taking place. What is really happening is:
The number 147 is converted to varchar2 '147', which as a raw is "31,34,37" (that is, ascii 49, ascii 52, ascii 55). Similarly, 63 becomes raw "36,33" (ascii 54, ascii 51). It is these values that get passed to the BIT_AND function, which gives
        49      52      55
AND     54      51       -
        ==================
        48      48      55
which when converted back to varchar2 yields 007.
It is possible to use UTL_RAW, but you just need to take care with all of the conversions. A package to do this is:
create or replace
package bitops2 is

function bitand(p_dec1 number, p_dec2 number) return varchar2 ;
function bitor(p_dec1 number, p_dec2 number) return varchar2 ;
function bitxor(p_dec1 number, p_dec2 number) return varchar2 ;

end;
/
create or replace
package body bitops2 is

function raw_ascii(p_dec number) return raw is
  v_result varchar2(1999);
  v_tmp1   number := p_dec;
begin
  loop
    v_result := chr(mod(v_tmp1,256)) || v_result ;
    v_tmp1 := trunc(v_tmp1/256);
    exit when v_tmp1 = 0;
  end loop;
  return utl_raw.cast_to_raw(v_result);
end;

function ascii_raw(p_raw varchar2) return number is
  v_result number := 0;
begin
  for i in 1 .. length(p_raw) loop
    v_result := v_result * 256 + ascii(substr(p_raw,i,1));
  end loop;
  return v_result;
end;

function bitand(p_dec1 number, p_dec2 number) return varchar2 is
begin
  return
   ascii_raw(
     utl_raw.cast_to_varchar2(
       utl_raw.bit_and(
         raw_ascii(p_dec1),
         raw_ascii(p_dec2)
       )
     )
   );
end;

function bitor(p_dec1 number, p_dec2 number) return varchar2 is
begin
  return
   ascii_raw(
     utl_raw.cast_to_varchar2(
       utl_raw.bit_or(
         raw_ascii(p_dec1),
         raw_ascii(p_dec2)
       )
     )
   );
end;

function bitxor(p_dec1 number, p_dec2 number) return varchar2 is
begin
  return
   ascii_raw(
     utl_raw.cast_to_varchar2(
       utl_raw.bit_xor(
         raw_ascii(p_dec1),
         raw_ascii(p_dec2)
       )
     )
   );
end;

end;
/
If your bitwise requirements extend larger than the maximum size of the number datatype, or if you would rather deal with bit strings directly, the following package will assist:
create or replace
package bitops is

function bitand(p_bit1 varchar2, p_bit2 varchar2) return varchar2 ;
function bitor(p_bit1 varchar2, p_bit2 varchar2) return varchar2 ;
function bitxor(p_bit1 varchar2, p_bit2 varchar2) return varchar2 ;

end;
/

create or replace
package body bitops is

function bitand(p_bit1 varchar2, p_bit2 varchar2) return varchar2 is
  v_result varchar2(1999);
  v_tmp1   varchar2(1999) := p_bit1;
  v_tmp2   varchar2(1999) := p_bit2;
  v_len1   number         := length(p_bit1);
  v_len2   number         := length(p_bit2);
  v_len    number         := greatest(v_len1,v_len2);
begin
  v_tmp1 := lpad(v_tmp1,v_len,'0');
  v_tmp2 := lpad(v_tmp2,v_len,'0');
  for i in 1 .. v_len loop
    v_result := v_result || least(substr(v_tmp1,i,1),substr(v_tmp2,i,1));
  end loop;
  return v_result;
end;

function bitor(p_bit1 varchar2, p_bit2 varchar2) return varchar2 is
  v_result varchar2(1999);
  v_tmp1   varchar2(1999) := p_bit1;
  v_tmp2   varchar2(1999) := p_bit2;
  v_len1   number         := length(p_bit1);
  v_len2   number         := length(p_bit2);
  v_len    number         := greatest(v_len1,v_len2);
begin
  v_tmp1 := lpad(v_tmp1,v_len,'0');
  v_tmp2 := lpad(v_tmp2,v_len,'0');
  for i in 1 .. v_len loop
    v_result := v_result || greatest(substr(v_tmp1,i,1),substr(v_tmp2,i,1));
  end loop;
  return v_result;
end;

function bitxor(p_bit1 varchar2, p_bit2 varchar2) return varchar2 is
  v_result varchar2(1999);
  v_tmp1   varchar2(1999) := p_bit1;
  v_tmp2   varchar2(1999) := p_bit2;
  v_len1   number         := length(p_bit1);
  v_len2   number         := length(p_bit2);
  v_len    number         := greatest(v_len1,v_len2);
begin
  v_tmp1 := lpad(v_tmp1,v_len,'0');
  v_tmp2 := lpad(v_tmp2,v_len,'0');
  for i in 1 .. v_len loop
    v_result := v_result || to_char(abs(to_number(substr(v_tmp1,i,1))-to_number(substr(v_tmp2,i,1))));
  end loop;
  return v_result;
end;

end;
/

SQL> declare
  2    x varchar2(20) := '10101010101010101010';
  3    y varchar2(20) := '11110000111100001111';
  4  begin
  5    dbms_output.put_line(bitops.bitand(x,y));
  6  end;
  7  /
10100000101000001010

PL/SQL procedure successfully completed.

Addenda
Bart Pots has extended the package to handle bit strings of differing length and provided numeric input and output. The code is below:
create or replace
package bitops2 is

    function raw_ascii(p_dec number) return raw;
    function ascii_raw(p_raw varchar2) return number;
   
    function bitand(p_dec1 number, p_dec2 number) return number;
    function bitor(p_dec1 number, p_dec2 number) return number;
    function bitxor(p_dec1 number, p_dec2 number) return number;
   
end;
/
create or replace
package body bitops2 is

    function raw_ascii(p_dec number) return raw is
   
        v_result varchar2(1999);
        v_tmp1   number := p_dec;
       
    begin
   
        loop
           
            v_result := chr(mod(v_tmp1,256)) || v_result ;
            v_tmp1 := trunc(v_tmp1/256);
           
            exit when v_tmp1 = 0;
           
        end loop;
       
        return utl_raw.cast_to_raw(v_result);
    end;
   
    function ascii_raw(p_raw varchar2) return number is
   
        v_result number := 0;
       
    begin
       
        for i in 1 .. length(p_raw) loop
       
            v_result := v_result * 256 + ascii(substr(p_raw,i,1));
       
        end loop;
       
        return v_result;
       
    end;
    function bitand(p_dec1 number, p_dec2 number) return number is
   
        v_tmp_raw1 raw(1999);
        v_tmp_raw2 raw(1999);
       
        nr_diff_length number(4,0);
       
        nr_pos number(4,0);
       
        v_tmp_raw_zero raw(1999) := raw_ascii(0);
   
    begin
       
        v_tmp_raw1 := raw_ascii(p_dec1);
        v_tmp_raw2 := raw_ascii(p_dec2);
       
        nr_diff_length := greatest( utl_raw.length(v_tmp_raw1),
                                          utl_raw.length(v_tmp_raw2));

        for nr_pos in utl_raw.length(v_tmp_raw1) + 1.. nr_diff_length loop
       
            v_tmp_raw1 := utl_raw.concat( v_tmp_raw_zero,v_tmp_raw1 );
       
        end loop;
       
        for nr_pos in utl_raw.length(v_tmp_raw2) + 1 .. nr_diff_length loop
       
            v_tmp_raw2 := utl_raw.concat( v_tmp_raw_zero,v_tmp_raw2 );
       
        end loop;

        return ascii_raw( utl_raw.cast_to_varchar2( utl_raw.bit_and( v_tmp_raw1, v_tmp_raw2)));
       
    end;
   
    function bitor(p_dec1 number, p_dec2 number) return number is
   
        v_tmp_raw1 raw(1999);
        v_tmp_raw2 raw(1999);
       
        nr_diff_length number(4,0);
       
        nr_pos number(4,0);
       
        v_tmp_raw_zero raw(1999) := raw_ascii(0);
   
    begin
   
        v_tmp_raw1 := raw_ascii(p_dec1);
        v_tmp_raw2 := raw_ascii(p_dec2);
       
        nr_diff_length := greatest( utl_raw.length(v_tmp_raw1),
                                          utl_raw.length(v_tmp_raw2));

        for nr_pos in utl_raw.length(v_tmp_raw1) + 1.. nr_diff_length loop
       
            v_tmp_raw1 := utl_raw.concat( v_tmp_raw_zero,v_tmp_raw1 );
       
        end loop;
       
        for nr_pos in utl_raw.length(v_tmp_raw2) + 1 .. nr_diff_length loop
       
            v_tmp_raw2 := utl_raw.concat( v_tmp_raw_zero,v_tmp_raw2 );
       
        end loop;
   
        return ascii_raw( utl_raw.cast_to_varchar2( utl_raw.bit_or( v_tmp_raw1, v_tmp_raw2)));
   
    end;
   
    function bitxor(p_dec1 number, p_dec2 number) return number is
   
        v_tmp_raw1 raw(1999);
        v_tmp_raw2 raw(1999);
       
        nr_diff_length number(4,0);
       
        nr_pos number(4,0);
       
        v_tmp_raw_zero raw(1999) := raw_ascii(0);
   
    begin
   
        v_tmp_raw1 := raw_ascii(p_dec1);
        v_tmp_raw2 := raw_ascii(p_dec2);
       
        nr_diff_length := greatest( utl_raw.length(v_tmp_raw1),
                                          utl_raw.length(v_tmp_raw2));

        for nr_pos in utl_raw.length(v_tmp_raw1) + 1.. nr_diff_length loop
       
            v_tmp_raw1 := utl_raw.concat( v_tmp_raw_zero,v_tmp_raw1 );
       
        end loop;
       
        for nr_pos in utl_raw.length(v_tmp_raw2) + 1 .. nr_diff_length loop
       
            v_tmp_raw2 := utl_raw.concat( v_tmp_raw_zero,v_tmp_raw2 );
       
        end loop;
   
        return ascii_raw( utl_raw.cast_to_varchar2( utl_raw.bit_xor( v_tmp_raw1, v_tmp_raw2)));
       
    end;
end;
/

How can I tune a tablescan?
This particular example is typical of the generic case: we have a query like the following against a 40GB table, the only index is 'not selective', and the poster wants to know how to reduce the 'cost' of the inevitable tablescan:
select
{list of columns}
from
very_large_table t
where
colX = 'ABC'
order by
colY
;
The first problem with this question is one of ambiguity – when the poster says "cost", do they really mean the figure reported in the COST column of the execution plan table, or do they mean the resource consumption (e.g. number of physical disk reads) when the query actually runs?
In the former case, it is not necessarily safe to assume that there will be a direct correlation between the COST, the resources used, and the run-time. In the latter, there may not be a direct correlation between resource usage (e.g. number of physical disk I/Os) and the run-time when the query is transferred from development to production, because there may be more competition for physical resources on production, so response time may suffer.
So how can you tune a full table scan of a very large table? In a very real sense, you can't. Once you are committed to a full tablescan, that's it – you have to scan every block (and every row) in the table, typically using multiblock reads. The block scans are likely to generate a lot of I/O; the row examinations will consume a lot of CPU.
Of course, there are a few steps you can take to minimize the resource requirement – but their side effects need to be considered carefully.
Can you make the scan times faster?
In principle, yes, a little. Increasing the size of the parameter db_file_multiblock_read_count so that the read requests are fewer and larger may help. In practice it is not always that simple. Apart from the side-effects on the optimizer's calculations, you may find that odd interference effects from the various hardware levels actually make it hard to find an optimum setting for the parameter. Things like disk track size, disk buffer size, network buffer size, O/S stripe size and so on can result in some very odd "impedance" effects. You may have to find a best fit for your system by a tedious process of trial and error.
In practice, you may find that the hardware "sees" your big tablescan starting and adopts a read-ahead strategy that makes your tuning attempt redundant. (This is actually a problem with mixed-load SANs: they tend to get over-enthusiastic about tablescans at the cost of random I/Os – so on average their rate of throughput can look good while the end-users (and v$session_event) are complaining about slow disk response times.)
Would parallelism help ?
In principle, yes; but only for the query itself. For a parallel tablescan of degree N the table would effectively be broken up into N parts, allowing N separate processes to scan and extract the required data before forwarding the minimum possible data volume on to another set of processes to handle the sorting. So long as the number of slaves is not sufficient to overload the I/O subsystem, and provided the slaves don't end up colliding on the same disks all the time, the speed of the scan should improve by a factor of roughly N.
On the other hand, by making this query run as rapidly as possible – which means the highest degree of parallelism you can get away with – you are likely to put a lot of stress on the I/O subsystem. Do you really want to do that, given that it will probably slow everything else down?
Can you do something to cache the table?
At 40 GB, probably not (but who knows, maybe you have 128GB of memory to play with and that might be enough to let you put this table into a keep pool) and you’re still going to be using a lot of CPU looking at all those rows whatever you do about caching.
Is it really true that the index has very poor selectivity?
If so, why are you running a query that (apparently) is going to fetch a huge number of rows? Do you really need to fetch all those rows, or are you really after just the first few?
If the volume is actually relatively low, then perhaps a “bad” index is still better than doing a tablescan – it may protect your I/O subsystem for other users.
If the volume of data is high but you only want the first few rows, how about (in the example above) a "function-based index" of the form decode(colX,'ABC',colY,null), which holds entries for just the rows you are interested in, with the index entries in the right order for your order by clause? If you do this, you could rewrite your query to force a range scan through this index, stopping after a few rows, rather than acquiring all the rows and sorting them before discarding most of them. For example:
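A sketch of that idea, using a made-up index name (you would also need suitable statistics, and a first_rows-style plan, for the optimizer to cooperate):
create index vlt_abc_coly on very_large_table(decode(colX, 'ABC', colY, null));

select /*+ first_rows index(t vlt_abc_coly) */
       {list of columns}
from   very_large_table t
where  decode(colX, 'ABC', colY, null) is not null
order by
       decode(colX, 'ABC', colY, null);
Because the index holds only the 'ABC' rows, and holds them in colY order, Oracle can walk the index and stop as soon as you stop fetching, instead of scanning and sorting the whole table.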
Would partitioning (on the colX column in this case) work for you?
A list partition strategy, or possibly even a hash partition strategy, could break the table into a reasonable number of much smaller chunks. You would still have to do a 'full table scan', but it would be a scan of just one partition, which could be much smaller, so use fewer resources, run more quickly, and cause much less damage. But can you partition on this column – or would it interfere with all the other functions of the system; and if it does, could it still be worth the penalty?
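As a sketch, using the column names from the example and invented partition names:
create table very_large_table (
  colX  varchar2(10),
  colY  number
  -- ... other columns
)
partition by list (colX) (
  partition p_abc    values ('ABC'),
  partition p_def    values ('DEF'),
  partition p_others values (default)
);
With this in place the predicate colX = 'ABC' prunes the scan down to the single partition p_abc.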
So, although you can’t really “tune” a large tablescan, there are some strategies you can investigate to see if you can find ways of reducing or restructuring the amount of work that you need to do. Whether any of them is appropriate for your system only you can choose.
Why is dbms_stats so much slower than Analyze?
The person who last posed this question didn't mention a version number – but the problem appears in many versions of Oracle as you make the change from one technology to the other, mainly because the activity carried out by dbms_stats does not, by default, match the activity carried out by the analyze command.
Not only does dbms_stats differ from the analyze command in its behaviour, but virtually every version of Oracle has introduced a few extra parameters in the calls to dbms_stats, and even changed some of the default values from previous versions, so that calls to dbms_stats that used to complete in a timely fashion in one version of Oracle suddenly take much longer after an upgrade – it's not just the switch from analyze to dbms_stats that causes problems.
In the worst case, when left to its own devices, dbms_stats in 10g will work out for itself the best sample size to use on the tables and indexes, which columns to create histograms for, the number of buckets in the histograms, and the sample size to use for those histograms. The volume of work may be much larger than anything you would choose to do yourself.
The first guideline for using dbms_stats is: read the manual – or the script $ORACLE_HOME/rdbms/admin/dbmsstat.sql – to see what it does, and how it changes from release to release. This gives you some chance of making it do what you used to do with the analyze command. The second guideline is to do a test run with plenty of tracing enabled (e.g. events 10046 and 10033, and checks of v$session_event and v$sesstat) so that you can see how much work the tool does, and where it loses time.
Partitioned tables have always been a particular problem for statistics – and Oracle is constantly fiddling with the code to try to reduce the impact of collecting reasonable statistics. If you have large partitioned tables, and don't have to collect global statistics, bear in mind that you probably know more about the data than Oracle does. The best strategy for partitioned tables (probably) is to write your own code that limits Oracle to collecting statistics on the most recently added partition, and then derives reasonable table-level statistics programmatically by combining the latest statistics with the existing table stats. Look very carefully at the procedures with names like get_column_stats and set_column_stats.
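Here is a minimal sketch of that 'gather the latest partition, derive the table figures' approach, using made-up table and partition names and handling only the table-level numbers; column statistics would be treated the same way with get_column_stats and set_column_stats:
declare
  t_rows number;  t_blks number;  t_avg number;   -- existing table-level stats
  p_rows number;  p_blks number;  p_avg number;   -- stats for the new partition
begin
  -- gather statistics for the newest partition only
  dbms_stats.gather_table_stats(
    ownname     => user,
    tabname     => 'SALES_FACT',
    partname    => 'P_200910',
    granularity => 'PARTITION'
  );

  -- read the current table-level stats and the new partition's stats
  dbms_stats.get_table_stats(ownname => user, tabname => 'SALES_FACT',
                             numrows => t_rows, numblks => t_blks, avgrlen => t_avg);
  dbms_stats.get_table_stats(ownname => user, tabname => 'SALES_FACT',
                             partname => 'P_200910',
                             numrows => p_rows, numblks => p_blks, avgrlen => p_avg);

  -- derive new table-level figures by simple addition (crude, but you know your data)
  dbms_stats.set_table_stats(ownname => user, tabname => 'SALES_FACT',
                             numrows => t_rows + p_rows,
                             numblks => t_blks + p_blks);
end;
/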
Can Bind Variable Peeking cause problems without a histogram on Predicate columns ?
It is a well known fact that the cause of the majority of problems with the bind variable peeking feature is a histogram on the column referenced in the access predicate. The histogram is certainly the main, but not the only, cause of an inappropriate execution plan. Another, less well known, situation where a different bind variable value can lead to a change of execution plan is an access predicate on a (sub)partitioned table. This is a particularly important scenario in the case of range partitioned fact tables organized as rolling windows. These tables contain two types of partitions: those already filled and those pre-allocated for future loads. As the optimiser statistics of the two types are very different, the risk of getting the wrong execution plan when peeking "in the wrong partition" is relatively high.
How to generate an Execution Plan with Bind Variable Peeking?
Bind variable peeking, introduced in Oracle release 9, can be extremely helpful in situations where the additional information leads to a better execution plan. On the other hand, this feature makes it much more difficult to see the 'real' execution plan using explain plan or autotrace for statements with bind variables, as those tools don't perform the peek. The trivial answer to the question above is to substitute the bind variable with a literal value, but there are some subtle issues with the variable's datatype that could lead to a different plan.
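For example (the table and column names are invented): if the application binds a character value that the statement converts to a date, the literal you substitute for testing should go through exactly the same conversion, otherwise the plan you generate may not be the plan the application gets.
-- what the application effectively runs
variable d varchar2(30)
exec :d := '01-JAN-2009'
select count(*) from orders where order_date = to_date(:d, 'DD-MON-YYYY');

-- for explain plan, keep the same conversion rather than writing order_date = '01-JAN-2009'
explain plan for
select count(*) from orders where order_date = to_date('01-JAN-2009', 'DD-MON-YYYY');

select * from table(dbms_xplan.display);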
Why do I keep seeing tables with garbage names like BIN${something}
In 10g, Oracle introduced an option for “undropping” a table, which we can demonstrate with the following script in 10g Release 2:
create table t1(n1 number);
create index i1 on t1(n1);
select table_name from user_tables;
select index_name from user_indexes;
select object_name, object_type from user_objects;
drop table t1;
select table_name from user_tables;
select index_name from user_indexes;
select object_name, object_type from user_objects;
flashback table t1 to before drop;
select table_name from user_tables;
select index_name from user_indexes;
select object_name, object_type from user_objects;
In my test on 10.2.0.1, the queries after the create table and index showed the following (expected) results:
TABLE_NAME
--------------------------------
T1
INDEX_NAME
--------------------------------
I1
OBJECT_NAME OBJECT_TYPE
---------------------------------------- -------------------
I1 INDEX
T1 TABLE
The queries following the drop showed the following:
No rows for user_tables – earlier versions may show a strangely named table
No rows for user_indexes - earlier versions may show a strangely named table
OBJECT_NAME OBJECT_TYPE
---------------------------------------- -------------------
BIN$HULdSlmnRZmbCXAl/pkA9w==$0 TABLE
BIN$pyWpLnQwTbOUB9rQrbwgPA==$0 INDEX
The table and its indexes have not been eliminated from the database, or from the data dictionary. They have been renamed, and hidden from the table and index data dictionary views (at least, they are hidden in later versions of 10g; you could still see them in some of the earlier versions). You can get them back if you want to, or clear them out explicitly if you want to, and (most importantly) although they count against your space quota, they will vanish spontaneously – i.e. be "properly dropped" – if your account needs more space in the relevant tablespace and it isn't otherwise available.
After the “undrop” command, the queries showed the following:
TABLE_NAME
--------------------------------
T1
INDEX_NAME
--------------------------------
BIN$pyWpLnQwTbOUB9rQrbwgPA==$0
OBJECT_NAME OBJECT_TYPE
---------------------------------------- -------------------
T1 TABLE
BIN$pyWpLnQwTbOUB9rQrbwgPA==$0 INDEX
Note how the table name has re-appeared with the correct name, but Oracle has failed to restore the index name properly – although it has made it visible. If you do “undrop” objects, make sure you check the names of dependent objects (including such things as constraints). You may need to rename them. I assume that Oracle has not renamed the secondary objects because there is a risk that you may have created other objects with conflicting names: manual resolution is the only sensible approach. Bear in mind it is also possible for Oracle to get rid of a dropped index when there is pressure for space, so when a table is “undropped”, some of its indexes could actually be missing. Because there are non-standard naming characters in the index name, you will have to quote the name when renaming it, e.g.
alter index "BIN$pyWpLnQwTbOUB9rQrbwgPA==$0" rename to i1;
To get rid of the objects manually, you can use one of three commands (if you have the relevant privilege)
purge user_recyclebin;
purge dba_recyclebin;
purge recyclebin;
Alternatively, if you want to avoid the recycle bin completely, you can change your drop table call:
drop table t1 purge;
Note – when you drop just an index, it does not go into the recycle bin, it really is dropped.
How to pin a table in memory.
Sometimes you wonder if you could pin or keep a table or index in memory. Many applications would benefit if some key tables could be accessed faster than average. Fast access means that the table has to be cached in memory, to avoid wait time on disk reads and other waits associated with buffer pool management.
PL/SQL objects can be pinned in the SGA, but technically there is no alter table t_name pin. The trick can, however, be accomplished by clever use of the buffer pools introduced in version 8: default, keep and recycle. The default pool is not actually new; it has been around and in use for a while.
Suppose you want to pin a table in memory, i.e. keep the table in the buffer pool. That would ensure that the table is not aged out of the buffer cache, i.e. replaced (memory space reused) by other objects through the Least Recently Used (LRU) algorithm, so that table access is always fast.
Prior to version 8, this could not be accomplished. The only alternative was alter table t_name cache. The effect was that blocks read by full table scans would not go to the least recently used end of the LRU list, but to the most recently used end, thereby surviving longer - but seldom forever - in the buffer cache.
From version 8 onwards, you can control whether an object is loaded into the default, keep or recycle pool. And that changes things.
Using buffer_pool_keep for pinning a table
First you have to create the keep (and/or recycle) buffer pool by editing the initSID.ora file. (NOTE: there is no difference in the behaviour of the three buffer pools; the naming merely reflects their intended use.
Just moving a table to the keep pool does not guarantee that the table is always kept; it can still be aged out of the keep pool by the LRU algorithm.)
Part of an initSID.ora file, using arbitrary numbers (the buffer_pool lines are the new ones):
        db_block_buffers        =  65536
        db_block_lru_latches    =  4
        buffer_pool_keep        =  16384                            -- version 8.0.x
        buffer_pool_keep        = (buffers:16384, lru_latches:1)    -- version 8.1.x
        buffer_pool_recycle     =  .........
After restarting the instance, you can isolate the table(s) in the keep pool by changing the table's storage clause:
        alter table t_name storage ( buffer_pool keep);
Now you can pre-page the table and load all its rows into memory with a full table scan (select * from t_name), provided the number of blocks the table occupies is less than the number of blocks in the keep pool (tested on 8.1.7.1, win2k). Or you can let the application populate the buffer pool, accepting slower access the first time each data block is touched. The advantage is that data blocks that are never used won't be loaded, and won't take up valuable memory space.
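A quick sanity check before relying on this - the names are illustrative, t.blocks is only populated once the table has statistics, and in 8.1 the keep-pool parameter comes back as a string such as (buffers:16384, lru_latches:1), so compare the figures by eye:
select t.blocks  table_blocks,
       p.value   keep_pool_setting
from   dba_tables  t,
       v$parameter p
where  t.table_name = 'T_NAME'
and    p.name       = 'buffer_pool_keep';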
You might have several indexes associated with the table. You can choose to drop the indexes and always do a full table scan, if the table is small and a full scan is fast and cheap. Even memory scans have a CPU cost and can give rise to latch contention and memory latency.
The alternative is to cache the index(es) and reserve space for them in the pool. Remember that the optimizer doesn't know that the table is fully cached, and will try to use an index lookup if not told otherwise by hints like /*+ full (t_name) */. Hints can be overruled by the optimizer, so test your statements to be sure.
If the schema and table design fit, consider converting the table and index into an IOT (index organized table). That can save memory (and disk) space.
If the table (and index?) is frequently updated, reserve extra space for block copies until DBWR wakes up and cleans out the dirty blocks.

Oracle document A76992-01 (the EE documentation for 8.1.6/7) has the following guidelines.
Identifying Segments to Put into the keep and recycle Buffer Pools
A good candidate for a segment to put into the recycle buffer pool is a segment that is at least twice the size of the default buffer pool and has incurred at least a few percent of the total I/Os in the system.
A good candidate for a segment to put into the keep pool is a segment that is smaller than 10% of the size of the default buffer pool and has incurred at least 1% of the total I/Os in the system.
The trouble with these rules is that it can sometimes be difficult to determine the number of I/Os per segment if a tablespace has more than one segment. One way to solve this problem is to sample the I/Os that occur over a period of time by selecting from v$session_wait to determine a statistical distribution of I/Os per segment.
        select file#, count(block#), count (distinct file# || block#)
        from v$bh
        group by file# ;

For monitoring buffer pool usage, I have created three views in the SYS schema to help me do a quick check now and then. I created the views because a single select proved too complex and too slow to run. And this works nicely.
A view for getting an object's buffer_pool defaults:
create or replace view oci_buffer_pools
as
select table_name object, buffer_pool from dba_tables
   where buffer_pool is not null
union
select table_name object, buffer_pool from dba_tab_partitions
  where buffer_pool is not null
union
select table_name object, buffer_pool from dba_tab_subpartitions
   where buffer_pool is not null
union
select index_name object, buffer_pool from dba_indexes
   where buffer_pool is not null
union
select index_name object, buffer_pool from dba_ind_partitions
  where buffer_pool is not null
union
select index_name object, buffer_pool from dba_ind_subpartitions
   where buffer_pool is not null
/
A view to select the objects and types in the buffer cache:
create or replace view oci_block_header as
-- For performance, queries against this view should use cost based optimizer
select b.indx,             -- Entry in buffer cache
       b.hladdr,           -- ADDR of v$latch_children entry for cache buffer chain latch
       b.ts# tblspace,     -- Tablespace id
       b.file# fileid,     -- File id
       b.dbablk blockid,   -- Block id
       b.obj objid,        -- Object id
       u.name owner,       -- Object owner
       o.name object_name,       -- Object name
       o.subname subobject_name, -- Subobject name
       decode (o.type#,
                 1, 'INDEX',
                 2, 'TABLE',
                 3, 'CLUSTER',
                19, 'TABLE PARTITION',
                20, 'INDEX PARTITION',
                21, 'LOB',
                34, 'TABLE SUBPARTITION',
                35, 'INDEX SUBPARTITION',
                39, 'LOB PARTITION',
                40, 'LOB SUBPARTITION',
                    'UNDEFINED'
              ) object_type  -- Object type
from x$bh b, obj$ o, user$ u
where b.obj = o.dataobj#
and   o.owner# = u.user#
/
And a view to merge the information together and give me the information I want.
create or replace view oci_buffer_cache_use
as
select a.owner, a.object_name, a.object_type, count(a.object_name) blocks#, b.buffer_pool
from oci_block_header a, oci_buffer_pools b
where owner not in ( 'SYS', 'SYSTEM', 'PERFSTAT')
and a.object_name = b.object
group by b.buffer_pool, a.owner, a.object_type, a.object_name
-- having count(a.object_name) > 100    -- if you don't want small objects
/

As a curiosity, I will mention that I once tried to cache a very large table (25+ million rows) on a Sun 6500, running Solaris 7 and 8.1.6 EE.
A full table scan would not populate the keep pool. I tried alter table t_name cache and that didn't help; I never figured out why. I had to create a small procedure, selecting rows by primary key lookup in a loop. As all rows were very similar with no empty columns, calculations showed that I could advance the primary key number by 400 - thereby moving two-thirds of a block forward - select that row and repeat, and so have a fair chance of hitting every data block at least once. The table and index were partitioned into 5 hash partitions. By running 5 procedures in parallel, the table loading finished in approx. 16 minutes.
Converting the table to an IOT with 5 hash partitions, the same loading procedure took only 9 minutes and saved ½ GB of memory.

Update Feb 2006 – Phillipe Ebersohl
I use mainly Oracle 9iR2 versions. I had a set of tables defined in a couple of schemas, namely V2 and DALIM_UPDATE, and I use a KEEP buffer as suggested in the main article. But, using:
SELECT * FROM sys.oci_buffer_cache_use
WHERE BUFFER_POOL IN('KEEP', 'DEFAULT')
ORDER BY BUFFER_POOL DESC, blocks# DESC ;

I had the unpleasant surprise of seeing some tables and indexes of my DALIM_UPDATE schema appearing as being in the KEEP buffer pool. Investigation showed that the object_name was not associated with the object_owner in the OCI_BUFFER_POOLS view. I corrected it as follows:
CREATE OR REPLACE VIEW OCI_BUFFER_POOLS AS
select t.owner object_owner, table_name object, buffer_pool from dba_tables t
where buffer_pool is not null
union
select tabp.table_owner , table_name object, buffer_pool
from dba_tab_partitions tabp where buffer_pool is not null
union
select tabsp.table_owner, table_name object, buffer_pool
from dba_tab_subpartitions tabsp where buffer_pool is not null
union
select i.owner, index_name object, buffer_pool
from dba_indexes i where buffer_pool is not null
union
select ip.index_owner, index_name object, buffer_pool
from dba_ind_partitions ip where buffer_pool is not null
union
SELECT ips.index_owner, index_name object, buffer_pool
from dba_ind_subpartitions ips where buffer_pool is not null
;

CREATE OR REPLACE VIEW OCI_BUFFER_CACHE_USE AS
select a.owner, a.object_name, a.object_type, count(a.object_name) blocks#, b.buffer_pool
from oci_block_header a, oci_buffer_pools b
where owner not in ( 'SYS', 'SYSTEM', 'PERFSTAT')
and a.object_name = b.OBJECT AND a.owner = b.object_owner
group by b.buffer_pool, a.owner, a.object_type, a.object_name
-- having count(a.object_name) > 100 -- if you don't want small objects
;
Why and when should one tune?
One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.
One should do performance tuning for the following reasons:
The speed of computing might be wasting valuable human time (users waiting for response);
Enable your system to keep up with the speed at which business is conducted; and
Optimize hardware usage to save money (companies are spending millions on hardware).
Although this site is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.
Where should the tuning effort be directed?
Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.
Database Design (if it's not too late):
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.
Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.
Memory Tuning:
Properly size your database buffers (shared_pool, buffer cache, log buffer, etc) by looking at your wait events, buffer hit ratios, system swapping and paging, etc. You may also want to pin large objects into memory to prevent frequent reloads.
Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.
Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.
Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.
What tools/utilities does Oracle provide to assist with performance tuning?
Oracle provides the following tools/utilities to assist with performance monitoring and tuning:
ADDM (Automated Database Diagnostics Monitor) introduced in Oracle 10g
TKProf
Statspack
Oracle Enterprise Manager - Tuning Pack (cost option)
Old UTLBSTAT.SQL and UTLESTAT.SQL - Begin and end stats monitoring
When is cost based optimization triggered?
It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, and optimizer dynamic sampling isn't performed, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to just have the larger tables analyzed.
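As a sketch, one way to make sure a whole schema has statistics (parameters trimmed to the bare minimum; in practice you would think about estimate_percent, method_opt and so on, as discussed in the dbms_stats question above):
exec dbms_stats.gather_schema_stats(ownname => user, cascade => true)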
Generally, the CBO can change the execution plan when you:
Change statistics of objects by doing an ANALYZE;
Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
How can one optimize %XYZ% queries?
It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
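A sketch of the hinted approach, with invented table, index and column names:
-- force a full scan of the (smaller) index; matching rows are then fetched by rowid
select /*+ index(e emp_name_idx) */
       empno, ename, sal
from   emp e
where  ename like '%XYZ%';
If the query needs only indexed columns, an INDEX_FFS (index fast full scan) hint is usually cheaper still, because the index can then be read with multiblock reads.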
Where can one find I/O statistics per table?
The STATSPACK and UTLESTAT reports show I/O per tablespace. However, they do not show which tables in the tablespace have the most I/O operations.
The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.
For more details, look at the header comments in the catio.sql script.
My query was fine last week and now it is slow. Why?
The likely cause of this is because the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:
Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using the RBO before and the CBO now?)
Has OPTIMIZER_MODE been changed in INIT.ORA?
Has the DEGREE of parallelism been defined/changed on any table?
Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
Have the statistics changed?
Has the SPFILE/ INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
Have any other INIT.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.
Does Oracle use my index or not?
One can use the index monitoring feature to check if indexes are used by an application or not. When the MONITORING USAGE property is set for an index, one can query the v$object_usage to see if the index is being used or not. Here is an example:
SQL> CREATE TABLE t1 (c1 NUMBER);
Table created.

SQL> CREATE INDEX t1_idx ON t1(c1);
Index created.

SQL> ALTER INDEX t1_idx MONITORING USAGE;
Index altered.

SQL>
SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES NO

SQL> SELECT * FROM t1 WHERE c1 = 1;
no rows selected

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES YES
To reset the values in the v$object_usage view, disable index monitoring and re-enable it:
ALTER INDEX indexname NOMONITORING USAGE;
ALTER INDEX indexname MONITORING   USAGE;
Why is Oracle not using the damn index?
This problem normally only arises when the query plan is being generated by the Cost Based Optimizer (CBO). The usual cause is because the CBO calculates that executing a Full Table Scan would be faster than accessing the table via the index. Fundamental things that can be checked are:
USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, thereby making the index less desirable.
USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
Decrease the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - a higher value makes a FULL TABLE SCAN look cheaper to the optimizer, so lowering it makes index access relatively more attractive.
Remember that you MUST supply the leading column of an index, for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).
There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
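A quick way to pull the figures mentioned above together for one table/column/index combination (names are illustrative):
select t.num_rows,
       t.blocks,
       c.column_name,
       c.num_distinct,
       i.index_name,
       i.clustering_factor
from   user_tables      t,
       user_tab_columns c,
       user_indexes     i
where  t.table_name  = 'EMP'
and    c.table_name  = t.table_name
and    c.column_name = 'ENAME'
and    i.table_name  = t.table_name;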
When should one rebuild an index?
You can run the ANALYZE INDEX VALIDATE STRUCTURE command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of 'DEL_LF_ROWS' to 'LF_ROWS'.
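A minimal sketch, with an invented index name; remember that INDEX_STATS only ever holds the row for the last ANALYZE ... VALIDATE STRUCTURE run in your session:
analyze index emp_name_idx validate structure;

select name,
       lf_rows,
       del_lf_rows,
       round(100 * del_lf_rows / nullif(lf_rows, 0), 2) pct_deleted
from   index_stats;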
How does one tune Oracle Wait event XYZ?
Here are some of the wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:
db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i); analyze contention from SYS.V$BH
log buffer space: Increase LOG_BUFFER parameter or move log files to faster disks
log file sync: If this event is in the top 5, you are committing too often (talk to your developers)
log file parallel write: deals with flushing out the redo log buffer to disk. Your disks may be too slow or you have an I/O bottleneck.
What is the difference between DBFile Sequential and Scattered Reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and in thousandths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk; instead they should think of how data is read into the SGA buffer cache.
db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read:
Similar to db file sequential reads, except that the session is reading multiple data blocks and scattering them into different, discontinuous buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. Rarely, data from full table scans could fit into a contiguous buffer area; those waits would then show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:
prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from   sys.v_$system_event a, sys.v_$system_event b
where  a.event = 'db file sequential read'
and    b.event = 'db file scattered read'

How do you create an 'auto number' or 'auto increment' column similar to the offering from Microsoft's SQL Server ?
A common question for users converting from MS SQL Server to Oracle is how to handle 'auto increment' or 'auto number' columns: a feature of MS SQL Server for populating (typically) primary key columns with meaningless sequence numbers. There is no direct equivalent in Oracle, but the combination of sequences and triggers comes close, as the sketch below shows.
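A minimal sketch of the sequence-plus-trigger combination (all names are invented):
create table customers (
  id   number primary key,
  name varchar2(100)
);

create sequence customers_seq;

create or replace trigger customers_bir
before insert on customers
for each row
when (new.id is null)
begin
  select customers_seq.nextval into :new.id from dual;
end;
/

-- the application can now insert without supplying the key
insert into customers (name) values ('Smith');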
Marco Coletti: Another option available in Oracle version 9.2 is to use the built in SQL function sys_guid() for the default column value. (13th Oct 2005)
How does Oracle handle PGA memory?
<<< DOWNLOAD THE PPS & PDF FROM http://www.jlcomp.demon.co.uk/faq/pat_presentation.html >>>
What is the relationship between Oracle's buffer cache and UNIX's buffer cache?
In short, there is no direct relationship between the Oracle and Unix buffer caches aside from the fact that they both cache data to prevent physical IO. With regard to data stored in Oracle data files, Oracle's buffer cache mostly eliminates the need for the Unix buffer cache; however, with a little effort, you can get the best out of both of them.
There are two aspects to consider:
Reads: When data is read from disk on a cached file system it is also placed in the Unix buffer cache. In general, if a block of data cannot be found in Oracle's buffer cache then you are not going to find it in the Unix buffer cache either, so the Unix cache is just an overhead.
Writes: A write done through the Unix buffer cache involves user data being written to the cache, and then Unix writing the cache contents down to disk (or at least what it thinks is disk*). In an online situation it is unlikely you will notice this extra step, especially when using asynchronous IO, because once the data is in the cache you can get on with your next job. However, on a busy system you may notice it indirectly, as lots of CPU time is spent writing cached data down to disk rather than servicing the application. Because of Oracle's buffer cache you are unlikely to read the majority of that data back from the Unix cache, so it would be nice to skip this step; this is where writing direct to disk, bypassing the cache, comes into play.
There is a good description of asynchronous and direct IO in the Database Performance Tuning Guide linked to at the foot of this page.
In general it is probably a good idea to allow your Oracle home to sit in a file system that uses the Unix cache but put your database files into file systems that utilise direct IO (if supported) to bypass the Unix cache (save for a few oddities such as temporary tablespace data**).
e.g. (taken from AIX):
[oracle:PROD] /u01/app/oracle/product/10.1.0 >mount
  node       mounted        mounted over    vfs       date        options     
-------- ---------------  ---------------  ------ ------------ ---------------
...
         /dev/lvora01      /u01             jfs2   Jun 23 15:48 rw,log=/dev/loglv01
         /dev/lvora02      /u02             jfs2   Jun 23 15:48 rw,dio,log=/dev/loglv02
         /dev/lvora03      /u03             jfs2   Jun 23 15:48 rw,dio,log=/dev/loglv03
...
Here /u01 uses the Unix buffer cache and contains $ORACLE_BASE (/u01/app/oracle) and certain specific Oracle database files (/u01/oradata). /u02 upwards are then all set to bypass the Unix buffer cache - you can see this from the "dio" parameter under options. This could also be "cio", which means concurrent IO and uses direct IO plus an extra tweak around file inode serialisation.
In summary, if your Unix system supports direct IO to bypass the Unix buffer cache then you should at least try it out. You could potentially free up CPU time and memory to be used by other processes. It is generally understood that a system whose file systems are well configured with regard to the Unix buffer cache can rival the performance of raw data files; however, having never used a raw file system, I cannot confirm that either way.
* Bear in mind that there is a good chance your disks are actually logical units in a SAN which will also have its own hefty cache.
** Temp is a good example of why the Unix buffer cache is useful as the data to be written is not cached by Oracle and will more often than not be read back in the near future - the buffer cache can satisfy this request. Redo logs are similar in that they are written to by LGWR and read again by ARCH.
Unix file systems have a buffer cache, and Oracle also maintains its own cache. Buffers are copied from kernel space (the file system buffer cache) to user space (the Oracle buffer cache), and this operation generates a lot of overhead. In the case of raw devices, however, a write bypasses the Unix buffer cache; the data is transferred directly from the Oracle buffer to disk. So write performance is enhanced by the use of raw devices.
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10752/ch23_os.htm#227
Queries against dba_free_space are slow - is there any way to speed them up ?
Original Code :
SELECT
TABLESPACE_NAME, SUM(BYTES) SUMB,
MAX(BYTES) LARGEST,COUNT(*) CHUNKS
FROM DBA_FREE_SPACE A
GROUP BY TABLESPACE_NAME

Modification:

SELECT /*+ use_hash (A.ts A.fi) */
TABLESPACE_NAME, SUM(BYTES) SUMB,
MAX(BYTES) LARGEST,COUNT(*) CHUNKS
FROM DBA_FREE_SPACE A
GROUP BY TABLESPACE_NAME

How do I change the default password of internal ?
I followed the steps below.
1. shutdown the database.
2. delete the password file
3. Create new password file with ORAPWD [ ORAPWD is the oracle utility. Its Description is provided at the end of article ]
orapwd file=e:\oracle\database\PWDsnap2.ora password=hello entries=30
4.Startup the database.
5. Enter the SVRMGRL command and connect internal
SVRMGR> connect internal
Connected.

I am able to connect internal with
SVRMGR> connect internal/hello
Connected.

My question: why am I able to connect internal without a password? Does it use a default password for internal? In init.ora I have
"remote_login_passwordfile = exclusive"

What's the problem? You need to enter the password that was used when creating the password file when connecting as INTERNAL.

If your problem is that connect internal doesn't ask for a password after recreating the password file, then
set SQLNET.AUTHENTICATION_SERVICES = TRUE

and then try svrmgrl>connect internal
If you don't know the password and still want to connect internal, comment out
SQLNET.AUTHENTICATION_SERVICES=TRUE in sqlnet.ora
and
REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE in init.ora

and you will be through

If you want to recreate the password then
delete $ORACLE_HOME/dbs/orapwSID file and
as suggested above, use orapwd to create a new password file:

orapwd file=$ORACLE_HOME/dbs/orapwdev password=xxxxx entries=

For Oracle 9i :
1. INTERNAL is no longer supported as a USER in Oracle9i

2. To connect as SYS in Oracle9i, you need to append AS SYSDBA or AS SYSOPER
(always!!!)

If you have direct access to that oracle9i box then you can do this:
a. Authentication is left to the OS:
connect / as sysdba
and change the SYS password
b. Authentication is done via password file(orapwd deals with this)
Note: this can be accomplished when running from a client
connect sys/password as sysdba

Using ORAPWD Utility :

1. Creating the password file with orapwd
% orapwd file=${ORACLE_HOME}/dbs/orapwSID password=password entries=max_users
Where:
file is the name of the file that will hold the password information. The password file location will default to the current directory unless a path is provided.
password is the password for the SYS user (or the new password).
entries tells Oracle how many users you will be creating in the password file. You cannot increase entries at a later date, so set the value high.
Name and Location of the password file:
At database startup time, Oracle will only look for the password file in the ORACLE_HOME/dbs directory. The naming convention that Oracle will search for is: orapwSID. The SID must match the ORACLE_SID environment variable. If orapwSID cannot be found, Oracle will look for a file named orapw. If Oracle cannot find an orapw file you will get an ORA-01990 error at database startup time.
An important security concern is to ensure that the password file itself is secured.
How to get started with SQL_TRACE
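A minimal sketch of the usual approach (the file names are only illustrative): enable tracing for your own session, run the statements of interest, and then format the resulting trace file with TKPROF.

    alter session set timed_statistics = true;   -- make the trace timings meaningful
    alter session set sql_trace = true;

    -- ... run the statements you want to trace ...

    alter session set sql_trace = false;

    -- The trace file appears in USER_DUMP_DEST; format it at the OS prompt, e.g.
    --    tkprof ora_12345.trc trace_report.txt sys=no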


Is it possible to run several different versions of Oracle on the same machine
Yes! We can run different versions of Oracle on the same machine provided that the Oracle base directory is the same for both versions.
The Oracle base directory is the directory where Oracle is installed, such as C:\oracle or D:\orant.
Under the base directory, the Oracle home directories are differentiated, for example:
C:\oracle\oracle8
C:\oracle\oracle9
In short, you can have different versions installed on the same machine if each is installed in its own home under the same base directory.
I have deleted a trace file with the session still live. I now want the session to resume tracing, but a new file doesn't appear.
From Unix, if you switch sql_trace off, delete the trace file, and switch sql_trace on again for a given session, Oracle does not appear to create a new trace file. This can be a nuisance, and there is no documented workaround. NT is more friendly: a new trace file will be generated every time you delete the old one.

The reason why you have a problem under Unix is that the trace file is not closed - even if you set sql_trace to false. There is, however, a method for sorting the problem out using oradebug. This is not, alas, very elegant - but may be adequate in many cases where a DBA is exerting external control of the tracing.
Start up SQL*Plus as sysdba, locate the process id (oracle or unix) of the target user, e.g.
select spid, pid
from v$process
where addr = (
    select paddr
    from v$session
    where sid = 99
);
oradebug setorapid {the pid above}
or
oradebug setospid {the spid above}
then
oradebug flush        -- flushes any remaining data to the trace file
oradebug close_trace    -- closes the trace file.
The nice thing about the close_trace option is that you can do it even after you have deleted the trace file from the operating system level. In fact, in most cases you may have to delete the trace file before you issue a close_trace, otherwise you may find that Oracle re-opens it to write the next bit of trace information, and the problem simply repeats itself.
There is a simple solution to this problem if you are allowed to use it in your environment.
    Alter session set tracefile_identifier = ‘text string’;
If you can issue this command, then the current trace file is closed and a new one is opened with the ‘text string’ included as part of the file name.
This is also very convenient for running a series of tests and dropping the test results into separate trace files. Bear in mind that some of the odds and ends (such as STAT lines) may end up in the wrong trace file, though.
Is there a Windows version of the oerr program?
In UNIX, you type oerr ora 1234 to get a description and suggested action about the Oracle error 1234. Oracle does not provide this utility in Windows. What can you do?
The UNIX oerr program is a shell script that reads the text files $ORACLE_HOME/*/mesg/*.msg, most notably $ORACLE_HOME/rdbms/mesg/oraus.msg. But on Windows, Oracle does not bundle this utility and does not ship human-readable error message files; it only has the binary versions such as %ORACLE_HOME%\RDBMS\mesg\oraus.msb, whose format is not published by Oracle, so you can't build your own program based on them. The following are alternatives in order of relevance to the question.
1. Yong Huang wrote a freeware Perl script Windows oerr that reads the Oracle documentation Error Message pages and returns the error description and suggested action. It generally requires installing Perl and Oracle documentation locally. Minimum installation requires the Error Message part of documentation and compiled version of the Perl script (so you don't need the Perl interpreter).
2. Several people made attempts to build oerr based on the UNIX message files. The only one we still have access to is Bill Pollack's ora-text at his site. It requires you to ftp oraus.msg under rdbms from UNIX to PC.
3. Inside SQL*Plus or PL/SQL, you can use the sqlerrm function. For example, in SQL*Plus, set serveroutput on. Then exec dbms_output.put_line(sqlerrm(-1555)). This does not need any extra installation but gives you a very limited error description and only works for the ORA facility - of course the largest facility in Oracle. (A short example appears at the end of this answer.)
4. William Rogge posted a message to an Oracle mailing list publishing an awk script that parses $ORACLE_HOME/rdbms/mesg/oraus.msg and loads its content into an Oracle table. Obviously this requires client access to the database server to get the error description. I can't find the author or his script any more.
Note that not all error descriptions are exactly the same between UNIX $ORACLE_HOME/rdbms/mesg/oraus.msg and the Error Message page of documentation. Try oerr pls 123 at the UNIX prompt and then look it up in documentation.
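As a quick illustration of option 3 above:

    set serveroutput on
    exec dbms_output.put_line(sqlerrm(-1555))

This prints just the one-line ORA-01555 message text; there is no cause/action detail as there would be with oerr.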
Why does it seem that a SELECT over a db_link requires a commit after execution ?
Because it does!  When Oracle performs a distributed SQL statement Oracle reserves an entry in the rollback segment area for the two-phase commit processing.  This entry is held until the SQL statement is committed even if the SQL statement is a query.  A demonstration of this fact follows.  The REM’s were added to the output, which is otherwise a cut and paste of the screen.  The script db/obj/rbs_users is SQL to show user sessions to rollback segment assignments (transactions) and similar SQL can be found in the FAQ entry: Is there a way to detect processes that are rolling back, and can I figure out how long it will take?
REM  find the current session sid

 PFC> select * from v$mystat where rownum = 1;

        SID STATISTIC#      VALUE
 ---------- ---------- ----------
          7          0          1

REM  see who is using rollback, the current session should not be there

 PFC> @db/org/rbs_users

 no rows selected

REM  perform a remote query

 PFC> select count(*) from mpowel01.item_master@ut1.world;

   COUNT(*)
 ----------
       2603

REM  determine whether the current session is now a transaction or not

 PFC> @db/org/rbs_users

 NAME           USN    EXTENTS USERNAME            SID LOGON_TIM S STATUS
 ------------ ----- ---------- ------------ ---------- --------- --------
 START_TIME           T STATUS          USED_UBLK  USED_UREC
 -------------------- ---------------- ---------- ----------
 PROGRAM                                          TERMINAL
 ------------------------------------------------ ------------------------------
 ROLL01           2         20 MPOWEL01              7 05-SEP-01 ACTIVE
 09/05/01 12:34:36    ACTIVE                    1          1
 sqlplus@seqdev (TNS V1-V3)                       ttyiR/iARS

REM  end the transaction

 PFC> commit;

 Commit complete.

REM  verify the current session no longer shows as a transaction

 PFC> @db/org/rbs_users

 no rows selected

If the application code fails to issue a commit after the remote or distributed select statement then the rollback segment entry is not released.  If the program stays connected to Oracle but goes inactive for a significant period of time (such as a daemon, wait for alert, wait for mailbox entry, etc…) then when Oracle needs to wrap around and reuse the extent, Oracle has to extend the rollback segment because the remote transaction is still holding its extent.  This can result in the rollback segments extending to either their maximum extent limit or consuming all free space in the rbs tablespace even where there are no large transactions in the application.  When the rollback segment tablespace is created using extendable files then the files can end up growing well beyond any reasonable size necessary to support the transaction load of the database.  Developers are often unaware of the need to commit distributed queries and as a result often create distributed applications that cause, experience, or contribute to rollback segment related problems like ORA-01650 (unable to extend rollback).  The requirement to commit distributed SQL exists even with automated undo management available with version 9 and newer.   If the segment is busy with an uncommitted distributed transaction Oracle will either have to create a new undo segment to hold new transactions or extend an existing one.  Eventually undo space could be exhausted, but prior to this it is likely that data would have to be discarded before the undo_retention period has expired.
Note that per the Distributed manual that a remote SQL statement is one that references all its objects at a remote database so that the statement is sent to this site to be processed and only the result is returned to the submitting instance, while a distributed transaction is one that references objects at multiple databases.  For the purposes of this FAQ there is no difference, as both need to commit after issuing any form of distributed query.
How do I know what Oracle EVENTS are set in my database instance?

To find events that are set at the SYSTEM level you can use the oradebug utility.  Following is my events_system.sql script, which runs the necessary commands.  Sample output is included in the comment area of the script.  The event at the time of the sample was for generating a trace file when ORA-04031 errors occurred.
   
    set echo off
    --  Use ORADEBUG to dump list of events set at system level
    --
    --  Based on Oracle support metalink rdbms forum post 08/04/2003
    --  by Melissa Holman Subject: Setting Event Trace
    --
    --  20030805  Mark D Powell   New, capture useful functionality
    --
    -- *** 2003-08-05 09:58:20.827
    -- *** SESSION ID:(34.5167) 2003-08-05 09:58:20.781
    -- Dump event group for level PROCESS
    -- TC Addr  Evt#(b10)   Action   TR Addr    Arm    Life
    -- 101D9020 4031       1       101d9080 0 0
    --
    set echo on
    ORADEBUG SETMYPID
    ORADEBUG DUMP EVENTS 2
    ORADEBUG TRACEFILE_NAME
    --

The first command tells oradebug to use the current session as its target.  All system wide events would be inherited by the current session at process startup.  The thought might occur to you that events set by the current session at the session level would also appear in the output; my testing on 9.2.0.5 shows they do not.  You can verify this by setting a session level event and then running the three lines of code.
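For example, a session level event can be set and cleared like this (event 10046 is used purely as an illustration):

    alter session set events '10046 trace name context forever, level 8';
    -- rerun the three commands above: the 10046 setting does not appear in the dump
    alter session set events '10046 trace name context off';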
The second command obviously produces the trace file while the third command displays the trace file name.
A word of caution, generally system wide events are set only with the advice of Oracle support to work around internal errors or for generating debugging (trace) information.  Finding information on non-trace events is difficult, but if you have any set you can scan the Oracle support site, http://metalink.oracle.com, bug database for bug reports that refer to identified events.
How do I randomly select rows from a table?
In Oracle 8i and 9i, the easiest way to randomly select data out of a table is to use the SAMPLE clause with a select statement.
Example:
    SELECT *
    FROM emp
    SAMPLE(10);
In the example statement, Oracle is instructed to randomly visit approximately 10% of the rows in the table.
The restriction to this clause is that it works for single table queries only. If you include the sample clause with a multiple table query, you will get a parse error or ORA-30561: SAMPLE option not allowed in statement with multiple table references. One way around this is to create an inline view on the driving table of the query with the SAMPLE clause.
Example:
    SELECT t1.dept, t2.emp
    FROM
        (SELECT * FROM dept sample(5)) t1,
        Emp t2
    WHERE
        t1.dep_id = t2.dep_id

How can I maintain a history of user logins?
If you want to get some idea of which users spend most time and consume most resources on the system, you don’t necessarily have to do anything subtle and devious to find out what’s been happening. There has been a (simple) audit trail built into the database for as long as I can remember. (The 8.1 – 10.1 in the banner simply covers the fact that I’ve only recently checked the following comments against those versions)
The init.ora file contains an audit_trail parameter. This can take the values true, false, none, os, db (the true/false options are for backwards compatibility). If you set the value to db (or true), then you have enabled auditing in the database. Once you have restarted the database (the parameter is not modifiable online), you can decide what events you want to audit.
For a quick cheap audit of connections, connect as a suitably privileged account (which typically means as a DBA), and issue the command:
    audit create session;
If you need to turn this audit off, the corresponding command is:
    noaudit create session;
The older syntax for the same level of audit is:
    audit connect;
    noaudit connect;


With this level of audit turned on, every session that logs on (except the SYS sessions) will insert a row into the table sys.aud$ giving various details of who they are and what time they connected. When the session ends, its last action is to update this row with various session-related details, such as log-off time, and the amount of work done. To make the results more readable, Oracle has superimposed the view dba_audit_session on top of the aud$ table; the 9.2 version of this view is as follows:
 Name                    Null?    Type
 ----------------------- -------- ----------------
 OS_USERNAME                      VARCHAR2(255)        Who
 USERNAME                         VARCHAR2(30)
 USERHOST                         VARCHAR2(128)        Where
 TERMINAL                         VARCHAR2(255)
 TIMESTAMP               NOT NULL DATE            logon date/time
 ACTION_NAME                      VARCHAR2(27)       
 LOGOFF_TIME                      DATE            log off date/time
 LOGOFF_LREAD                     NUMBER            v$sess_io.consistent_gets
 LOGOFF_PREAD                     NUMBER            v$sess_io.physical_reads
 LOGOFF_LWRITE                    NUMBER            v$sess_io.block_changes
 LOGOFF_DLOCK                     VARCHAR2(40)        Number of deadlocks
 SESSIONID               NOT NULL NUMBER
 RETURNCODE              NOT NULL NUMBER
 CLIENT_ID                        VARCHAR2(64)
 SESSION_CPU                      NUMBER            Session statistic. CPU used by this session
As you can see, there is quite a lot of helpful information here – perhaps good enough for most monitoring purposes. It is also a very light-weight tool, as it requires just one insert on logon, and an index access to update one row on log off.
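For example, a simple report of the last week's connections might look like this (the column list is only a suggestion):

    select os_username, username, terminal,
           timestamp    logon_time,
           logoff_time, session_cpu
    from   dba_audit_session
    where  timestamp > sysdate - 7
    order  by timestamp;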
There are a couple of administrative points. The aud$ table is in the system tablespace and is the one table in the sys schema that you are told you can delete data from. You may want to run a regular purge job to delete data that is more than N days old from this table.
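A sketch of such a purge (30 days is arbitrary; driving the delete through dba_audit_session avoids depending on the exact aud$ column layout, which varies slightly between versions):

    delete from sys.aud$
    where  sessionid in (select sessionid
                         from   dba_audit_session
                         where  logoff_time < sysdate - 30);
    commit;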
You might consider moving this table to a separate tablespace – but there have been reports of problems with media recovery if you do this (possibly because the recovering processes tries to insert its own audit records and can’t because the tablespace needs recovery) and it is not supported by Oracle.
Finally, if you are running a physical standby database and open it in read only mode, you may find that you can't connect to it as anyone other than sys. Auditing requires an insert/update on aud$ - so it can't be allowed on a read only database. You will have to remember to change the audit_trail parameter to none on your standby before you start it up.
Further reading: SQL Reference Manual – AUDIT and NOAUDIT commands.
Why does AUTOTRACE not show partition pruning in the explain plan ?
Autotrace not showing partition pruning/elimination is bug 1426992, but, after investigation Oracle has decided that this is not an optimiser bug, but a bug in SQL*Plus. You can, with a bit of knowledge of your data and a little experimentation, deduce that partition pruning is taking place from the output of autotrace, but there are much easier ways !
The following demonstration shows the failings in autotrace and demonstrates a couple of other methods of determining whether or not your partitions are being pruned.
Autotrace
First of all, create a simple table range partitioned over 6 different partitions, and fill it with some test data extracted from ALL_OBJECTS.
SQL> create table tab_part (part_key number(1), some_text varchar2(500))
  2  partition by range (part_key) (
  3  partition part_1 values less than (2),
  4  partition part_2 values less than (3),
  5  partition part_3 values less than (4),
  6  partition part_4 values less than (5),
  7  partition part_5 values less than (6),
  8  partition part_6 values less than (MAXVALUE) );

Table created.

SQL> insert /*+ append */ into tab_part
  2  select mod(rownum, 10), object_name
  3  from all_objects;

24683 rows created.

SQL> commit;

Commit complete.
Once the table has been filled, analyse it and see how the data has been spread over the various partitions. The first and last partitions have more data in them than the remaining four, hence the differing totals.
SQL> analyze table tab_part compute statistics;

Table analyzed.

SQL> select partition_name, num_rows
  2  from user_tab_partitions
  3  where table_name = 'TAB_PART'
  4  order by partition_name;

PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
PART_1                               4937
PART_2                               2469
PART_3                               2469
PART_4                               2468
PART_5                               2468
PART_6                               9872

6 rows selected.
Now that we have a table to work with, we shall see what autotrace has to say about partition elimination. First, however, note how many logical reads a full scan of the entire table needs :
SQL> set autotrace on
SQL> select count(*) from tab_part;

  COUNT(*)
----------
     24683


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=42 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     PARTITION RANGE (ALL)
   3    2       TABLE ACCESS (FULL) OF 'TAB_PART' (Cost=42 Card=24683)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        135  consistent gets
          0  physical reads
          0  redo size
        381  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
To read 24,683 rows of data Oracle had to perform 135 logical reads. Keep this in mind and note that the autotrace output shows a full table scan - as we would expect on an unindexed table. The next count should only look in a single partition :
SQL> select count(*) from tab_part where part_key = 7;

  COUNT(*)
----------
      2468


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1 Bytes=2)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'TAB_PART' (Cost=17 Card=2468 Bytes=4936)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         49  consistent gets
          0  physical reads
          0  redo size
        380  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
This seems again to have carried out a full table scan, but since we already know that a real FTS takes 135 logical reads, the fact that only 49 were required here should indicate that something is different. Autotrace's output is not showing partition elimination. If you didn't know how many reads were required to full scan the table, you would be hard pressed to determine that partition elimination had taken place in this scan.
Event 10053
There are other methods by which we can obtain a true picture of the plan used by the optimiser - a 10053 trace for example would show the details. I've never had to use a 10053 trace so I'm unfortunately not in a position to explain its use, I leave this as 'an exercise for the reader' as they say :o)
SQL_TRACE and TKPROF
I have used SQL_TRACE and TKPROF though, so here's what shows up when SQL_TRACE is set true.


SQL> set autotrace off
SQL> alter session set sql_trace = true;

Session altered.

SQL> alter session set tracefile_identifier = 'PARTITION';

Session altered.

SQL> select count(*) from tab_part where part_key = 7;

  COUNT(*)
----------
      2468

SQL> alter session set sql_trace = false;

Session altered.
At this point, exit from SQL*Plus and locate the trace file in USER_DUMP_DEST which has 'PARTITION' in its name. This is the one you want to run through TKPROF. The output from this is shown below :
select count(*) from tab_part where part_key = 7

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          0          0           0
Fetch        2      0.01       0.01          0         49          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.02       0.01          0         49          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 62

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=49 r=0 w=0 time=10353 us)
   2468   TABLE ACCESS FULL TAB_PART PARTITION: 6 6 (cr=49 r=0 w=0 time=6146 us)
The explain plan clearly shows that partition 6 was the start and stop partition in the scan. In addition, there were 49 logical reads performed to get at the count. This is identical to what we saw above with autotrace, except we get to see that partition pruning did actually take place.
Explain Plan
Back in SQL*Plus, there is another method that can be used. The old faithful EXPLAIN PLAN will show how partition pruning did take place.
SQL> explain plan
  2  set statement_id = 'Norman'
  3  for
  4  select count(*) from tab_part where part_key = 7;

Explained.

SQL> set lines 132
SQL> set pages 10000
SQL> col operation format a20
SQL> col options format a15
SQL> col object_name format a15
SQL> col p_start format a15
SQL> col p_stop format a15
SQL> col level noprint
SQL> select level,lpad('  ', 2*level-1)||operation as operation,
  2   options,
  3   object_name,
  4   partition_start as p_start,
  5   partition_stop as p_stop,
  6   cardinality
  7   from plan_table
  8   where statement_id = 'Norman'
  9   start with id=0
 10   connect by prior id=parent_id
 11   order by level;

OPERATION            OPTIONS         OBJECT_NAME     P_START         P_STOP          CARDINALITY
-------------------- --------------- --------------- --------------- --------------- -----------
 SELECT STATEMENT                                                                              1
   SORT              AGGREGATE                                                                 1
     TABLE ACCESS    FULL            TAB_PART        6               6                      2468
Once again, the plan clearly shows that partition pruning takes place. The problem is that autotrace doesn't show it at all. Unless you really know how many blocks of data you have in a table and all of its partitions, you may find it difficult to determine whether or not you are seeing a 'true' plan when using partitioned tables and autotrace.
Note: Do you ever suffer from the PLAN_TABLE growing too big as developers fail to delete old rows from the table? Alternatively, do you forget to delete rows from the table?
Take a copy of $ORACLE_HOME/rdbms/admin/utlxplan.sql and edit it.
Change this :
create table PLAN_TABLE (
statement_id    varchar2(30), ...
filter_predicates varchar2(4000));
To this :
create global temporary table PLAN_TABLE (
statement_id    varchar2(30), ...
filter_predicates varchar2(4000))
on commit preserve rows;
Now login to SQL*Plus as SYS and :
sql> @?/rdbms/admin/utlxplan_edited    /* Or whatever your copy is called */
sql> grant all on plan_table to public;
sql> create public synonym PLAN_TABLE for SYS.PLAN_TABLE;
Now when developers or DBAs use PLAN_TABLE and log out, their rows will be deleted. A self-tidying PLAN_TABLE. Of course, this is no good if you want to keep rows in PLAN_TABLE between sessions.
DBMS_XPLAN
Under Oracle 9i (release 2 I think) there is a new PL/SQL package which you can use to show explain plans. The above statement could have its plan shown using this command :
SQL> Select * from table(dbms_xplan.display(statement_id=>'Norman'));
or, if this was the only statement in my PLAN_TABLE :
SQL> Select * from table(dbms_xplan.display);
There is much more information shown with this new feature than with a 'normal' explain plan and you don't have to worry about all that formatting either.
Summary
In summary, autotrace doesn't show partition elimination in Oracle up to version 9i release 2. You should therefore be aware of this fact and use SQL_TRACE or EXPLAIN PLAN to get at the true plan for the SQL you are trying to tune/debug.

Further reading:
Note: 166118.1 Partition Pruning/Elimination on Metalink. You will need a support contract to access Metalink.
Bug: 1426992 SQLPlus AUTOTRACE does not show correct explain plan for partition elimination. Again on Metalink.

When I rebuild an index, I see Oracle doing a sort. Why should this be necessary, why doesn't it simply read the existing index ?
In the absence of the online option Oracle usually does read the existing index, but rather than read the index in key order - which would use single block index IO and could be very slow for a large index - a design decision was made to perform the rebuild operation by reading the index blocks sequentially using multi-block reads.  This logic is a fast full index scan, FFS, and can be seen by running an explain plan on the alter index rebuild command.  My tests on 8.1.7.4 and 9.2.0.1 produced the same plan for unique and non-unique indexes of very small (16K) to reasonably large size (2.6G).  Oracle access plans are subject to change with each dot release as Oracle improves its code, but the basic answer - that the reason appears to be a design decision based on overall performance considerations - probably will not change.

    Explain plan for: alter index rebuild xxx.item_trans_log_idx4

          COST CARDINALITY QUERY_PLAN
    ---------- ----------- --------------------------------------------------
          1204     2919826  ALTER INDEX STATEMENT
                             2.1 INDEX BUILD NON UNIQUE ITEM_TRANS_LOG_IDX4
                2919826     3.1 SORT CREATE INDEX
          1204    2919826      4.1 INDEX FAST FULL SCAN ITEM_TRANS_LOG_IDX4
                          NON-UNIQUE


When the online option is added to the rebuild command my testing shows that Oracle implements the rebuild using a full table scan.

    Explain plan for: alter index rebuild xxx.item_trans_log_idx4 online

          COST CARDINALITY QUERY_PLAN
    ---------- ----------- --------------------------------------------------
          5879     2919826  ALTER INDEX STATEMENT
                             2.1 INDEX BUILD NON UNIQUE ITEM_TRANS_LOG_IDX4
                       2919826     3.1 SORT CREATE INDEX
          5879     2919826       4.1 TABLE ACCESS FULL ITEM_TRANS_LOG


Another consideration is that an Index Fast Full Scan can only be used on an index where at least one of the columns is defined as NOT NULL.  Otherwise it is up to the CBO to choose the access plan that it believes is best.  In versions 8i and 9i it would appear that a full table scan is the preferred method when a FFS is not available.  Also a full table scan has to be used in situations where the index is invalid such as immediately after an alter table move is performed.
I would like to thank Mark J. Bobak for reviewing this article.

Lewis, J. (2001). Practical Oracle 8i, Chapter 7: Enhanced Indexing. Upper Saddle River: NJ. Addison Wesley. pp. 128-129.
Oracle Corporation (2003, December). Oracle Database Performance Tuning Guide 10g, Ch 16, Using Indexes and Clusters, Section Label: Re-creating Indexes and Ch 14, The Query Optimizer, Section Label: Fast Full Index Scans.
For those with access to metalink at metalink.oracle.com the following notes might be of interest:
How to Rebuild and Index                    Note 232819.1
SCRIPT: Move indexes from one tablespace to another        Note 131959.1
Guidelines on When to Rebuild a B-Tree Index            Note 77574.1
I lock my table before truncating it, but I still get ORA-00054: resource busy ... Why ?
The following quote can be found in the version 9.2 Concepts manual chapter 14: SQL, PL/SQL, and JAVA under the section titled Data Definition Language Statements:  “DDL statements implicitly commit the preceding [transaction] and start a new transaction.”
Truncate is a DDL statement and is contained in the list of example DDL immediately above the quote.  The implicit commit that precedes the truncate command terminates the transaction begun by the lock table in exclusive mode command, allowing DML statements issued by other sessions (after the lock table command was issued and before the truncate command was issued) to access the table. The sessions issuing DML statements obtain a lock or locks on the table, preventing DDL alteration during the DML activity.  This behavior blocks the truncate command which, because it is coded not to wait, immediately issues the ORA-00054 error.
    Example ran on Oracle version 9.2.0.4
Note - the marktest table has only 6 rows contained in one data block.

[step 1 from session one]

    12:45:31 SQL> lock table marktest in exclusive mode;

    Table(s) Locked.

    Elapsed: 00:00:00.11
   

[Step 2 from a different session.  Note time of delete, elapsed time and time upon completion]

    12:45:56 SQL> delete from mpowel01.marktest where rownum = 1;

        [session hangs waiting on lock]
    1 row deleted.

    Elapsed: 00:00:10.74    <= notice long elapsed time
    12:46:17 SQL>        <= and termination time

[Step 3 back in session one, note time of truncate is after delete was issued]

    12:45:41 SQL>    [This enter was just to update the prompt time displayed]
    12:46:13 SQL> truncate table marktest;
    truncate table marktest
                   *
    ERROR at line 1:
    ORA-00054: resource busy and acquire with NOWAIT specified

    Elapsed: 00:00:00.12
    12:46:18 SQL>  [the truncate completes after the delete completes]

The delete goes into a lock wait when first issued.  Then when the truncate command is issued the delete executes, and because no commit follows the delete statement session two now has a lock on the table preventing the truncate from working.  Any small table will work for duplicating this test.
The behavior displayed above is a direct result of Oracle’s design decision to not require - indeed, to not allow - DDL operations to be explicitly committed.  By including an implicit commit both before and after every DDL operation Oracle made every DDL statement a separate transaction.  This greatly reduces the likelihood of lock contention while accessing the Oracle base (dictionary) metadata tables.  However, behavior as demonstrated above is the result.
Note – commit and rollback statements can be issued after DDL statements are executed, but because of the implicit commit that follows all DDL statements the commit or rollback statements have no practical effect.  There is nothing to commit or rollback, hence in Oracle successful DDL statements cannot be rolled back.
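A small sketch demonstrating the point (the table and index names are arbitrary):

    create table ddl_demo (x number);
    insert into ddl_demo values (1);
    create index ddl_demo_ix on ddl_demo (x);   -- DDL: implicitly commits the insert
    rollback;                                   -- has no practical effect
    select count(*) from ddl_demo;              -- still returns 1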

Further reading:  SQL Manual entry for the truncate table command.
Can I see all the tables that are in the database if I have the DBA privilege ?
The answer hinges on the word ALL.  Using version 9.2 as a reference, a DBA privileged user has been granted around 87 system related ANY type privileges, such as select any table.  The net effect is that a DBA privileged user has select access to any and all selectable user created objects defined to the database.  The key phrase in the preceding sentence is user created.  Excluded from these privileges are access rights to the SYS owned X$ fixed tables that the V$ views and some RDBMS dictionary views are built on.  Select privilege on the RDBMS base tables such as obj$, which house the RDBMS dictionary information, is however included in the default DBA privileges.
    select count(*)
    from   dba_sys_privs
    where  grantee = 'DBA'
    and    privilege like '%ANY%';
   
      COUNT(*)
    ----------
            87

The RDBMS dictionary views user_ and all_ continue to work as normal for a DBA privileged user; however since a DBA has access to all user objects the all_ views should return the same counts as the dba_ views. 
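As a quick illustration (a sketch only; x$ksppi is just one example of an X$ fixed table), when connected as a DBA-privileged user other than SYS:

    select count(*) from sys.obj$;      -- works: the dictionary base tables are covered
    select count(*) from sys.x$ksppi;   -- fails with ORA-00942: the X$ fixed tables are
                                        -- only visible when connected as SYS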
How do you purge old statspack snapshots automatically ?
Below is a script that may be run via cron which will purge all old snapshots that exceed the specified count.  This script requires no knowledge of the password for the PERFSTAT user.  The script has been tested with Oracle 8.1.7 and 9.2.0.
To implement this script, do the following:
1)    Save the script (below) as sp_purge.ksh to each Unix machine on which the Oracle instance may be located. 
2)    If the Unix tmp directory is anything other than /tmp then you will need to modify the script accordingly.
3)    If your oratab file is located in any directory other than /var/opt/oracle then you will need to update the script accordingly.
4)    Set execute privilege on the script:  chmod u+x sp_purge.ksh
5)    Establish a cron job to call the script.  The script requires three parameters:
·    The name of the database in which to purge snapshots.
·    The maximum number of snapshots to retain.
·    The email recipient for success messages.
Here is an example cron entry:
00 19 * * 1-5 /scripts/sp_purge.ksh  prod  60  mrogers@company.com  >>/tmp/sp_purge_portal.log 2>&1 &
This entry causes the script to run at 19:00 each weekday, to retain no more than 60 snapshots for the ‘prod’ database, and send success messages to: mrogers@company.com
6)    Note that this script may be invoked on any machine on which the instance may run.  If the instance is not on the current machine, then a simple message to that effect will be sent to a file in the tmp directory.
7)    Note also that all log files are written to the tmp directory.


Automatic StatsPack snapshot purge script:
-----------------------CUT----------CUT----------CUT-------------------------
#!/bin/ksh


#   Script Name:  sp_purge.ksh


#   This script is designed to purge StatsPack snapshots.
#
#      Parameter $1 is the name of the database.
#      Parameter $2 is the maximum number of snapshots to retain.
#      Parameter $3 is the mail recipient for success messages.
#
#   To succeed, this script must be run on the machine on which the
#   instance is running.

#   Example for calling this script:
#
#      sp_purge.ksh prod 30 username@mycompany.com


#   Script History:
#
#   Who             Date         Action
#   --------------- ------------ --------------------------------------------
#   Mark J. Rogers  22-Sep-2003  Script creation.
#
#
#

tmp_dir=/tmp


# Validate the parameters.

if [[ $# -ne 3 ]]; then
   echo ""
   echo "*** ERROR: You must specify these parameters: "
   echo ""
   echo "             1: the name of the database"
   echo "             2: the maximum # of snapshots to retain"
   echo "             3: the mail recipient for success messages"
   echo ""
   exit 1
fi

grep "^${1}:" /var/opt/oracle/oratab >> /dev/null
if [[ $? -ne 0 ]]; then
   echo ""
   echo "*** ERROR: The ORACLE_SID specified in parameter 1 is not a valid SID."
   echo "    (Note that the SID is case sensitive.)"
   echo ""
   exit 1
fi

if [[ ! (${2} -ge 0) ]]; then
   echo ""
   echo "*** ERROR: Parameter 2 must specify the # of snapshots to retain."
   echo ""
   exit 1
fi

# Ensure that the instance is running on the current machine.
ps -ef | grep pmon | grep $1 >> /dev/null
if [[ $? -ne 0 ]]; then
   echo ""
   echo "*** ERROR: Instance $1 is not running on machine `uname -n` "
   echo "    on `date`."
   echo "    The instance must be running on the current machine for this"
   echo "    script to function properly."
   echo ""
   echo "    Exiting..."
   echo ""
   exit 1
fi

# Establish error handling for this UNIX script.

function errtrap {
   the_status=$?
   echo ""
   echo "   *** ERROR: Error message $the_status occured on line number $1."
   echo ""
   echo "   *** The script is aborting."
   echo ""
   exit $the_status
}

trap  \
'  \
errtrap $LINENO  \
'  \
ERR

# Set up the Oracle environment.

export ORACLE_SID=${1}
export ORAENV_ASK=NO
. oraenv

script_name=${0##*/}
echo ""
echo "Script: $script_name"
echo "   started on: `date`"
echo "   by user: `id`"
echo "   on machine: `uname -n`"
echo ""
echo "This script is designed to purge StatsPack snapshots for the "
echo "   $ORACLE_SID database."
echo ""
echo "You have requested to retain no more than $2 StatsPack snapshots."
echo ""

tmp_script=${tmp_dir}/sp_purge_tmp_$ORACLE_SID.ksh  # script to actually purge
tmp_output=${tmp_dir}/sp_purge_tmp_$ORACLE_SID.out  # output to be mailed

rm -f $tmp_script
rm -f $tmp_output

sqlplus -s <<EOF_SP
/ as sysdba

whenever sqlerror exit failure rollback
whenever oserror exit failure rollback

SET SERVEROUTPUT ON
SET FEEDBACK OFF

VARIABLE P_SNAPS_TO_RETAIN   NUMBER
VARIABLE P_LOSNAPID          NUMBER
VARIABLE P_HISNAPID          NUMBER

BEGIN
   /* Assign values to these variables. */
   :P_SNAPS_TO_RETAIN := ${2};
   :P_LOSNAPID := -1;
   :P_HISNAPID := -1;
END;
/

-- Identify the snapshot ids to purge, if any.

DECLARE

   V_LOSNAPID          NUMBER := NULL;  -- Low snapshot ID to purge.
   V_HISNAPID          NUMBER := NULL;  -- High snapshot ID to purge.
   V_COUNT           NUMBER := NULL;  -- Number of snapshots currently saved.
   V_COUNTER           NUMBER := 0;     -- Temporary counter variable.
   V_DBID              NUMBER := NULL;  -- Current database ID.
   V_INSTANCE_NUMBER      NUMBER := NULL;  -- Current instance number.
   V_SNAPS_TO_RETAIN    NUMBER := :P_SNAPS_TO_RETAIN; -- Max snaps to retain.

BEGIN

   select
      d.dbid,
      i.instance_number
   INTO
      v_DBID,
      V_INSTANCE_NUMBER
   from
      v\$database d,
      v\$instance i;

   select
      count(snap_id)
   into
      v_count
   from
      perfstat.stats\$snapshot
   where
      dbid = V_DBID AND
      instance_number = V_INSTANCE_NUMBER;

   IF V_COUNT <= V_SNAPS_TO_RETAIN THEN

      -- We do NOT need to perform a purge.

      DBMS_OUTPUT.PUT_LINE ('NOTE: There are only ' ||
       to_char(v_count) || ' snapshots currently saved.');

   ELSE

      -- We DO need to perform a purge.

      DBMS_OUTPUT.PUT_LINE ('There are currently ' ||
       to_char(v_count) || ' snapshots saved.');

      -- Obtain the low snapshot id to be purged.

      select
         min(snap_id)
      into
         V_LOSNAPID
      from
         perfstat.stats\$snapshot
      where
         dbid = V_DBID AND
         instance_number = V_INSTANCE_NUMBER;

      -- Obtain the high snapshot id to be purged.

      FOR V_HISNAPID_REC IN
         (SELECT
             SNAP_ID
          FROM
             perfstat.stats\$snapshot
          WHERE
             dbid = V_DBID AND
             instance_number = V_INSTANCE_NUMBER
          ORDER BY
             SNAP_ID DESC)
      LOOP
         V_COUNTER := V_COUNTER + 1;
         IF V_COUNTER > V_SNAPS_TO_RETAIN THEN
            V_HISNAPID := V_HISNAPID_REC.SNAP_ID;
            EXIT;   -- Exit this LOOP and proceed to the next statement.
         END IF;
      END LOOP;

      :P_LOSNAPID := V_LOSNAPID;     
      :P_HISNAPID := V_HISNAPID;     

   END IF;

END;
/

prompt
-- Generate the specific purge script.
set linesize 60
spool $tmp_script
begin
   IF (:P_LOSNAPID <> -1) THEN
      /* Build the script to purge the StatsPack snapshots. */
      dbms_output.put_line('#!/bin/ksh');
      dbms_output.put_line('#THIS IS THE SCRIPT TO ACTUALLY PERFORM THE PURGE');
      dbms_output.put_line('trap '' exit \$? '' ERR');
      dbms_output.put_line('sqlplus -s << SP_EOF2');
      dbms_output.put_line('/ as sysdba');
      dbms_output.put_line('whenever sqlerror exit failure rollback');
      dbms_output.put_line('whenever oserror exit failure rollback');
      dbms_output.put_line('@ \$ORACLE_HOME/rdbms/admin/sppurge.sql');
      dbms_output.put_line(:P_LOSNAPID);
      dbms_output.put_line(:P_HISNAPID);
      dbms_output.put_line('-- the following are needed again');
      dbms_output.put_line('whenever sqlerror exit failure rollback');
      dbms_output.put_line('whenever oserror exit failure rollback');
      dbms_output.put_line('commit;');
      dbms_output.put_line('exit');
      dbms_output.put_line('SP_EOF2');
      dbms_output.put_line('exit \$?');
   END IF;
end;
/
spool off

exit
EOF_SP

if [[ ! (-f ${tmp_script}) ]]; then
   echo ""
   echo "*** ERROR: Temporary script: ${tmp_script} does not exist."
   echo ""
   exit 1
fi

if [[ `cat ${tmp_script} | wc -l` -ne 0 ]]; then
   #   Execute the newly generated StatsPack snapshot purge script.
   chmod u+x $tmp_script
   echo ""
   echo "Performing the purge..."
   echo ""
   $tmp_script > $tmp_output
   cat $tmp_output   # display the output
   #  Check the output file for a success message:
   trap ' ' ERR  # temporarily reset error handling for the grep command
   grep "^Purge of specified Snapshot range complete." $tmp_output >> /dev/null
   if [[ $? -ne 0 ]]; then
      echo ""
      echo "*** ERROR: The purge did not complete successfully."
      echo "           Check the log file $tmp_output."
      echo ""
      exit 1
   fi
   trap ' errtrap $LINENO ' ERR  # re-establish desired error handler
else
   #   No purge script was created.
   echo "No snapshot purge was necessary." > $tmp_output
fi

echo ""
echo "The ${script_name} script appears to have completed "
echo " successfully on `date`."
echo ""

mailx \
   -s "sp_purge.ksh in $ORACLE_SID on `uname -n` completed successfully"  \
   ${3} \
   < $tmp_output

# End of script sp_purge.ksh.
-----------------------CUT----------CUT----------CUT-------------------------
Is there any reason why the tablespace_name in user_tables and all_tables is blank for an Index Organized Table?
The IOT name itself is a logical name and occupies no space. When an IOT is created it generates two segments.
1. An index named SYS_IOT_TOP_<object_id>.
2. Optionally, an overflow table named SYS_IOT_OVER_<object_id>.
It is these two segments which occupy space and against which the tablespace is visible. To view the tablespace_name, first find the object_id of the table and then search for that object_id in all_indexes. Example:

-- Lets First Create a sample INDEX ORGANIZED TABLE named DELME_IOT.
CREATE TABLE DELME_IOT
(
A INTEGER,
B INTEGER NOT NULL,
C INTEGER NOT NULL,
D INTEGER,
PRIMARY KEY (C, B)
)
ORGANIZATION INDEX
LOGGING
TABLESPACE SYSTEM
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
NOPARALLEL;
--Now we will query the tablespace name against this table.
SQL> select table_name,tablespace_name from all_tables
2 where table_name like 'DELME_IOT';
TABLE_NAME TABLESPACE_NAME
------------------------------ ------------------------------
DELME_IOT
--Here we cannot see the tablespace name. Lets first find the object_id for this table.
SQL> select object_id, object_name from all_objects where object_name like 'DELME_IOT';
OBJECT_ID OBJECT_NAME
---------- ------------------------------
56709 DELME_IOT
-- Here we have the object_id for the table DELME_IOT. Next we will query the all_indexes table.
SQL> select table_name,index_name,tablespace_name from all_indexes where index_name like '%56709%';
TABLE_NAME INDEX_NAME TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
DELME_IOT SYS_IOT_TOP_56709 SYSTEM
--Here we can see the tablespace name against the IOT
Or simply the all_indexes table can be queried directly for the tablespace name.
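For example, using the table created above:

    select index_name, index_type, tablespace_name
    from   all_indexes
    where  table_name = 'DELME_IOT';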
Our PL/SQL loop to load 20M rows into the database slows down after the first 1M or so, why ?
The best way to describe what is happening here is with an example. First create two tables and populate them as follows:
    create table SOURCE (KEY number);
    begin
      for I in 1 .. 100000 loop
        insert into SOURCE (KEY) values (I);
      end loop;
      commit;
    end;
    /
    create table DESTINATION (KEY number);
We can now time the insert of 20 million rows into the DEST table using the following PL/SQL block:
    declare
      offset number;
      start_time date;
      time_taken number;
    begin
      offset := 0;
      for I in 1 .. 200 loop
        start_time := sysdate;
        insert into DESTINATION (KEY) select KEY+offset from SOURCE;
        time_taken := (sysdate-start_time)*24*60*60;
        dbms_output.put_line('Loop '||to_char(I)||' ending at '||to_char(sysdate,'HH24:MI:SS')||
                             ' after '||to_char(time_taken)||' seconds.');
        offset := offset + 100000;   -- move on so that each pass inserts a new range of keys
      end loop;
    end;
    /
This loop produces output that shows how long it takes to load 100000 rows into the table. Ideally we would like the last 100000 rows to be inserted in roughly the same amount of time as the first 100000 rows. In this case, this is exactly what we find - that is, there is no slow down as more rows are loaded.
Perhaps this is because we are inserting known data values in order? As a second attempt, we can modify the insert statement in line 9 of the above procedure to read:
    insert into DESTINATION (KEY) select dbms_random.random from SOURCE;
Looking at the times taken for each 100000 row insert, we notice that although the time is slightly longer on average than it was for the first test, there is still no noticeable increase in time through the iterations of the loop. The extra time taken is the overhead of the call to dbms_random.
So, maybe we need an index on the DESTINATION table in order to see the effect that inserting many rows into the table has on performance. Create an index on the table as follows:
    create index IX1 on DESTINATION (KEY);
and then rerun the PL/SQL block that we ran earlier. Perhaps surprisingly in this case, there is still no noticeable increase in the amount of time taken to load 100000 rows at the end of the procedure compared to at the beginning.
As a last resort, we can run the second PL/SQL block, inserting a random number into the table. At last, we come to a point where it takes longer to insert the last 100000 rows than it did to insert the first 100000 rows. (In my test, an increase in time of over 5000%!) This indicates that the presence of an index on a column that contains values in a random order is likely to be the culprit that is slowing the inserts.
The reason for this slow down can be explained if the test is modified to provide statistics on the number of reads and writes that are performed on the table and index during the load. As the table is initially empty when the load starts, there is no need to read from the disk - new blocks are simply created internally and mapped onto the data file. There are, however, disk writes as the database writer flushes dirty buffers to the disk, making space for new blocks to be created. Initially this is true for the index as well. However, after about 1 million rows have been inserted, the number of reads from the index starts to increase. In my tests, the insert of the last 100000 rows into the table required over 80000 disk reads from the index segment! (And 3 from the table.)
The reason for the slow down is therefore clear. Up to about a million rows, the index is completely held within the buffer cache. In order to insert a new entry into the index, Oracle can scan a bunch of blocks that are in memory and insert the row. Once the index grows beyond the size available in the block buffers, some of the blocks needed for the index maintenance may not be in memory, and will need to be read. However, in order to read them, Oracle first needs to create space for them by clearing out old blocks, which in turn may be needed for a later insert. This thrashing of blocks in and out of memory is the source of the performance issue.
This also explains why the index did not show the same characteristics when the rows were inserted in order - once a leaf block had been written it would not be required again, and so the reads from the index were not required.
In order to try to relieve the problem, the insert statement can be modified to read:
    insert into DESTINATION (KEY) select dbms_random.random from SOURCE order by 1;
This forces each 100000 rows to be inserted in numeric order, progressing through the index from start to finish. Although this does not completely get rid of the problem, on my (rather slow) test system, the time to load 20 million rows was reduced by over 28 hours (a saving of 29%)!
The ideal solution is of course to drop the indexes on a table before running a large load and recreate them after the load has completed.
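A sketch of that approach, using the table and index from the test above:

    drop index IX1;

    -- ... perform the bulk load ...

    create index IX1 on DESTINATION (KEY) nologging;
    -- (as noted elsewhere in this FAQ, take a backup if you use the nologging option)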
It is worth noting that this characteristic is only observed when inserting large numbers of rows into the table using a batch system when there is little other activity on the database. Inserting a single row into a table with 20 million rows on a busy database will not take significantly more time than inserting the same row into an empty table, as the chances of the necessary blocks being in the buffer cache are equally random depending on the use that has been made of the database in the recent past.
How can an end-user see whether a row is locked, and who is locking it, before they try to lock it themselves ?
The short answer is:  Sorry, it's not possible.
For the longer answer, please continue reading.
In principle, there is no scalable way to see what rows are locked or who holds a lock on a particular row.  That's because there is no centralized list of locked rows anywhere in Oracle.  This is by design, and is a feature, not a bug.  One of the Oracle database engine's most powerful features is its ability to scale to very large volumes of data and numbers of users.  Much of its ability to do so is integral to the locking model it employs.
As previously mentioned, there is no way to determine who is locking a row without attempting to lock it yourself.  To see if a particular row is locked, you may attempt to lock it.  If you succeed, then no one else had it locked, but you do now.  (If it's not your intent to update it, don't forget to rollback and release the lock.)  If you fail to lock it, one of two things will happen.  If you have chosen to attempt the lock in NOWAIT mode, then your session will return an ORA-00054 "resource busy" error, and you will not have a lock on the row(s).  If you have attempted the lock in WAIT mode (which is the default for DML statements), your session will begin waiting on the transaction which currently holds the lock on the row you are trying to lock.  As soon as that transaction is committed or rolled back, your session will be freed, and will lock the row or (if the data changed and your where clause no longer applies) may fail to retrieve the row.
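In practice that test looks something like this (EMP is only an example table):

    select * from emp where empno = 7369 for update nowait;
    -- ORA-00054 if another transaction already holds the row lock;
    -- otherwise you now hold the lock yourself, so release it if you
    -- did not actually intend to change the row:
    rollback;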
While a session is waiting for a specific row, it is possible to identify the row being waited on; however, this information is only available while a session is actively waiting for a lock on a row.  For the duration of the wait, you may examine V$SESSION, and in particular, the following columns may be of interest: ROW_WAIT_OBJECT#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, and ROW_WAIT_ROW#.  Note that these columns are only populated for a session that is actively waiting on a specific row.  Sessions not waiting on a resource, or waiting on a non-row resource, will have null values for these columns.  However, this method of determining who holds locks on what rows is cumbersome at best, and is not feasible for an end-user.  Also, there is the race condition to consider.  Even if you are able to implement any kind of tool or application based on the above information, it's likely that by the time you process and interpret the results, those results are out-of-date.
A quick note about a MetaLink document outlining how to do this:  A MetaLink analyst has cobbled together a bit of Pro*C and PL/SQL that can identify locked rows on a specific table.  A review of the code will reveal that he simply attempts to lock each row and returns those rows that failed to lock.  If you read the introduction closely, the author admits that the method is "very slow and is NOT bullet-proof".  I strongly recommend you not attempt implementing this method at all; at the very least, it certainly should not be part of a larger application where the application would have to rely on the functionality.  However, if you are still interested in the code used, see the hyperlink in the Further Reading section below.
In my application we ran into these problems, and the scripts used to see locked rows weren't consistent either. So finally I created the following objects, which ease the pain a bit for the developers.
CREATE OR REPLACE FORCE VIEW SYSTEM.DB$LOCKED_OBJECTS
(OBJECT_NAME, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, SQL_ACTIONS, LOCK_MODE)
AS
SELECT /*+ no_merge(lo) */
       DO.object_name, lo.SESSION_ID, lo.oracle_username, lo.OS_USER_NAME,
       DECODE(locked_mode,
              1, 'SELECT',
              2, 'SELECT FOR UPDATE / LOCK ROW SHARE',
              3, 'INSERT/UPDATE/DELETE/LOCK ROW EXCLUSIVE',
              4, 'CREATE INDEX/LOCK SHARE',
              5, 'LOCK SHARE ROW EXCLUSIVE',
              6, 'ALTER TABLE/DROP TABLE/DROP INDEX/TRUNCATE TABLE/LOCK EXCLUSIVE') sql_actions,
       DECODE(locked_mode, 1, 'NULL', 2, 'SS - SUB SHARE', 3, 'SX - SUB EXCLUSIVE',
              4, 'S - SHARE', 5, 'SSX - SHARE/SUB EXCLUSIVE', 6, 'X - EXCLUSIVE') Lock_mode
  FROM sys.V_$LOCKED_OBJECT lo, DB$OBJECTS DO
 WHERE DO.object_id = lo.object_id;
CREATE PUBLIC SYNONYM DB$LOCKED_OBJECTS FOR SYSTEM.DB$LOCKED_OBJECTS;
GRANT SELECT ON  SYSTEM.DB$LOCKED_OBJECTS TO PUBLIC;
CREATE OR REPLACE FORCE VIEW SYSTEM.DB$LOCKS
(OBJ_OWNER, OBJ_NAME, OBJ_TYPE, OBJ_ROWID, DB_USER,SID, LOCK_TYPE, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW#)
AS
SELECT owner obj_owner,
       object_name obj_name,
       object_type  obj_type,
       dbms_rowid.rowid_create(1, row_wait_obj#, ROW_WAIT_FILE#,
                               ROW_WAIT_BLOCK#,ROW_WAIT_ROW#) obj_rowid,
       a.username db_user, a.SID SID, a.TYPE lock_type,
       a.row_wait_file#, a.row_wait_block#, a.row_wait_row#
  FROM DB$OBJECTS,
       (SELECT /*+ no_merge(a) no_merge(b) */
               a.username, a.SID, a.row_wait_obj#, a.ROW_WAIT_FILE#,
               a.ROW_WAIT_BLOCK#, a.ROW_WAIT_ROW#, b.TYPE
          FROM sys.V_$SESSION a, sys.V_$LOCK b
         WHERE a.username IS NOT NULL
           AND a.row_wait_obj# <> -1
           AND a.SID = b.SID
           AND b.TYPE IN ('TX','TM')
           ) a
 WHERE object_id = a.row_wait_obj#;
CREATE PUBLIC SYNONYM DB$LOCKS FOR SYSTEM.DB$LOCKS;
GRANT SELECT ON  SYSTEM.DB$LOCKS TO PUBLIC;
Where DB$OBJECTS is a MV that I refresh everyday at 6AM.
CREATE MATERIALIZED VIEW SYSTEM.DB$OBJECTS
        PCTFREE 10 PCTUSED 0 INITRANS 2 MAXTRANS 255
        STORAGE(
                MINEXTENTS 1
                MAXEXTENTS UNLIMITED
                PCTINCREASE 0
                BUFFER_POOL DEFAULT
                )
TABLESPACE TEMPTABS
NOLOGGING
NOCACHE
NOPARALLEL
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
        STORAGE(BUFFER_POOL DEFAULT)
REFRESH COMPLETE
        WITH ROWID
 USING DEFAULT LOCAL ROLLBACK SEGMENT
DISABLE QUERY REWRITE AS
SELECT *
  FROM DBA_OBJECTS
/
With this, selects from (mainly) db$locks and db$locked_objects are really fast, and my developers love it. This is all on 9.2.0.2 by the way ... so as I read Mark's article, this makes finding the 'rows being waited on' much faster, I think.
I have deleted 6M rows from a table - how can I reclaim the space that is now free ?
The only way to make this space available for other objects to use is to use some sort of rebuild on the table.
The fastest option is probably:
    Alter table XXXX move nologging;
This will recreate the table and repack all the data.  Once you have done this, you should take a backup of at least the tablespace that the table is in as the nologging command means that you could not recover the table from an earlier backup if the system crashed.
To avoid the backup, you can use:
    alter table XXX move;
After you have used the move command (in either form) you will have to rebuild all the indexes on that table.  Again you can choose to do this with or without the nologging command.
    Alter index XXX_idx1 rebuild nologging;
    Alter index XXX_idx1 rebuild;
In the case of indexes, it is less important to take the backup as you don't lose information if the index is accidentally destroyed - you don't need to recover them, you can always rebuild them. Don't forget that you might want to increase your sort_area_size whilst rebuilding the indexes.
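For example, a minimal sketch (the index names are the same illustrative ones used above) of preparing the session and then rebuilding a couple of indexes might look like:
    alter session set sort_area_size = 104857600;   -- 100MB for this session only
    alter index XXX_idx1 rebuild nologging;
    alter index XXX_idx2 rebuild nologging;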
You should also remember that if your table rows are subject to significant growth after they have been inserted, you will have to work out a more subtle strategy for rebuilding the table. If you do a simple rebuild, you will either end up wasting a lot of space, or you will find that the table starts to acquire migrated rows as time passes. The details of (and one resolution to) this problem can be found in Practical Oracle 8i.
How do I clone a database from one machine to another - when the filesystems are different ?
(I'm assuming you're using a hot backup of the source database to do your clone - after all, no-one should be using cold backups!)
Cloning a database is normally done with the following sequence:
1.    Copy the datafiles to the target node
2.    Produce a CREATE CONTROLFILE command for the cloned database
3.    Start the target database in NOMOUNT mode
4.    Run the create controlfile command created in (2)
5.    Issue "recover database using backup controlfile until cancel"
6.    Provide the recovery process with archived redo logs until you are happy
7.    Cancel the recovery and issue "alter database open resetlogs"
8.    Re-create or re-add any TEMPFILES used for temporary tablespaces
This is all quite straightforward even if you have to rename the database and any of its files. The secret is in step (2). On the source database, you can run
SQL> alter database backup controlfile to trace
at which point you will get a full "create controlfile" command dumped out to the user_dump_dest directory. It will look something like
CREATE CONTROLFILE REUSE DATABASE "SOURCEDB" NORESETLOGS ARCHIVELOG
  ...
LOGFILE
  GROUP 1 '/disk1/log1.dbf'  SIZE 50M,
  ...
DATAFILE
  '/disk2/system01.dbf',
  '/disk3/rbs01.dbf',
  ...
  '/diskN/last_file01.dbf'
CHARACTER SET WE8ISO8859P1;
For cloning this database, you will alter
·    The REUSE to SET
·    The SOURCEDB to NEWDB (that is, the new name of the target database)
·    The NORESETLOGS to RESETLOGS
For each file listed under the DATAFILE clause, if you will be renaming the file as part of the cloning process, then you simply update the file name to the new name in the CREATE CONTROLFILE command.
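As a sketch (the new file names here are made up purely for illustration), the edited command for the clone might end up looking like:
CREATE CONTROLFILE SET DATABASE "NEWDB" RESETLOGS ARCHIVELOG
  ...
LOGFILE
  GROUP 1 '/newdisk1/log1.dbf'  SIZE 50M,
  ...
DATAFILE
  '/newdisk2/system01.dbf',
  '/newdisk3/rbs01.dbf',
  ...
CHARACTER SET WE8ISO8859P1;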

Further reading: You can also clone a database using the RMAN duplicate command (see the RMAN documentation for instructions)
I have a two-column partition key, and my data keeps going in the wrong partition. Why ?
Unfortunately, the natural assumption that many people make when specifying multiple columns in their partitioning key is that they are creating an 'n'-dimensional partitioning of the data, where 'n' is the number of columns in the partitioning key. Consider a simple example with two columns in the partition key.
SQL> create table DEMO (
  2    pcol1 date,
  3    pcol2 date,
  4    data varchar2(10) )
  5  partition by range (pcol1,pcol2)
  6  (  partition p_PRE_2000 values less than
  7       (to_date('01/01/2000','dd/mm/yyyy'), to_date('01/01/2000','dd/mm/yyyy')),
  8     partition p_2000_JAN_JUN values less than
  9       (to_date('01/07/2000','dd/mm/yyyy'), to_date('01/07/2000','dd/mm/yyyy')),
 10     partition p_2000_JUL_DEC values less than
 11       (to_date('01/01/2001','dd/mm/yyyy'), to_date('01/01/2001','dd/mm/yyyy')),
 12     partition p_2001_JAN_JUN values less than
 13       (to_date('01/07/2001','dd/mm/yyyy'), to_date('01/07/2001','dd/mm/yyyy'))
 14  );

Table created.
It's fairly obvious that the desired result here is for each partition to hold a 6 month window of data based on the columns PCOL1 and PCOL2. But the partition key columns are not dimensions; a better label would be 'tie-breakers'. When data is presented for insertion, the columns in the partition key are evaluated "left to right" in order to determine the correct partition. Thus in the DDL above, the second partition key column (PCOL2) will only be considered when the first column is not sufficient to determine the partition.
So let's see what happens when we add some data.
SQL> alter session set nls_date_format = 'dd/mm/yyyy';

Session altered.

SQL> insert into DEMO values ('01/02/1999','01/01/1999','row1');
SQL> insert into DEMO values ('01/02/2000','01/01/1999','row2');
SQL> insert into DEMO values ('01/02/2001','01/08/1999','row3');
SQL> insert into DEMO values ('01/02/2001','01/08/2000','row4');
SQL> insert into DEMO values ('01/02/1999','01/02/1999','row5');
SQL> insert into DEMO values ('01/02/2000','01/02/2000','row6');
SQL> insert into DEMO values ('01/02/2001','01/08/2001','row7');
SQL> insert into DEMO values ('01/02/2001','01/08/1999','row8');
SQL> insert into DEMO values ('01/08/2000','01/08/2000','row9');
SQL> insert into DEMO values ('01/08/1999','01/02/2001','row10');

SQL> select * from DEMO partition (p_PRE_2000);

PCOL1      PCOL2      DATA
---------- ---------- ----------
01/02/1999 01/01/1999 row1
01/02/1999 01/02/1999 row5
01/08/1999 01/02/2001 row10

SQL> select * from DEMO partition (p_2000_JAN_JUN);

PCOL1      PCOL2      DATA
---------- ---------- ----------
01/02/2000 01/01/1999 row2
01/02/2000 01/02/2000 row6

SQL> select * from DEMO partition (p_2000_JUL_DEC);

PCOL1      PCOL2      DATA
---------- ---------- ----------
01/08/2000 01/08/2000 row9

SQL> select * from DEMO partition (p_2001_JAN_JUN);

PCOL1      PCOL2      DATA
---------- ---------- ----------
01/02/2001 01/08/1999 row3
01/02/2001 01/08/2000 row4
01/02/2001 01/08/2001 row7
01/02/2001 01/08/1999 row8
Only rows 1, 5 and 6 appear to satisfy what the user desired, but in fact all of the rows have been correctly located, because in every case the leading column of the partition key (PCOL1) was enough to determine the correct partition. To emphasise this further, consider:
SQL> insert into DEMO values ('01/08/1999','01/02/2005','row11');

1 row created.
It would appear from the original DDL that the year 2005 is out of bounds for PCOL2, but of course this was not even considered, because the value for PCOL1 is quite valid and is sufficient to determine that partition P_PRE_2000 is the destination for this row.
dba_segments and dba_tables show different numbers for blocks. What has gone wrong ?
This is not a problem.
The dba_segments view tells you about blocks that have been allocated (reserved) in the database for an object. The dba_tables view tells you how many of those blocks had been used the last time you ran the analyze command or used the dbms_stats package to collect statistics on the table.
Note - the blocks figure recorded in the dba_tables view tells you where the highwater mark for the table had reached. If you delete all the rows from a table, the highwater mark will not move, so the number of blocks used will not (from this perspective) be changed, even when you re-analyze the table. However, if you truncate the table the highwater mark will be reset to zero, and an immediate analyze would show the blocks figures back at zero.
In passing - the dbms_stats package has been the advised method for collecting CBO-related statistics about objects in recent versions of Oracle (probably since 8.1, but perhaps a little earlier).
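As a quick cross-check (a sketch - the owner and table name are illustrative, and you need access to the DBA views), you can gather statistics and then report the two figures side by side:
    execute dbms_stats.gather_table_stats('SCOTT','MY_TABLE')

    select s.blocks   allocated_blocks,
           t.blocks   blocks_below_hwm,
           t.empty_blocks
    from   dba_segments s, dba_tables t
    where  s.owner        = t.owner
    and    s.segment_name = t.table_name
    and    t.owner        = 'SCOTT'
    and    t.table_name   = 'MY_TABLE';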


Further reading: FAQ: What's the difference between DELETE ROWS and TRUNCATE TABLE ?
What is the time unit used in the v$ views ?
In general the answer is hundredths of a second.
However, there are a few exceptions (and possibly a few platform-specific oddities).
In Oracle 9, the view v$enqueue_stat has been introduced to expose the number of enqueues (locks) that have been requested. The view includes the number of requests, waits, successful acquisitions, and failures. It also includes the time spent waiting - measured in milliseconds. (Humour point - according to some coder in Oracle Corp, there are 1024 milliseconds in a second; check the length of a one-second wait in the dbms_lock.sleep procedure, if you don't believe me).
In late versions of Oracle 8.1, Oracle started recording time in v$session_wait, v$session_event, and v$system_event (or at least the underlying X$ objects) in microseconds. However, the microsecond time is not always exposed - some views have a time_waited_micro column, while others only have time_waited, where the value is rounded to the hundredth of a second.
This does lead to some oddities in trace files, where you may find the elapsed time of an action recorded to the micro second, but the corresponding CPU time rounded to the hundredth, with the effect that the CPU time will sometimes be much larger than the elapsed time.
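On a version that exposes both columns, a quick sketch of the relationship (here against v$system_event) is simply to report them side by side; where time_waited_micro is present, the scaled value should come out close to time_waited:
    select event, time_waited, time_waited_micro,
           round(time_waited_micro/10000) as micro_as_centi
    from   v$system_event
    where  event like 'db file%';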

Further reading: FAQ: Is there a way to suspend a task, or make it sleep, for a period of time ?
My locally managed tablespaces seem to have lost some space according to dba_free_space. Where has it gone ?
A locally managed tablespace breaks each file into uniform chunks, but reserves the first 64K (possibly the first 128K if you use a 32K block size) for the file header and space management tasks.
In the case of an autoallocate tablespace, these chunks are 64K each, but in the case of a uniform tablespace, you can specify the chunksize when you create the tablespace. Each chunk will then be available for use as an extent (although in autoallocate tablespaces, the extent sizes will be selected from 64K, 1MB, 8MB, 64MB, 256MB, and each extent will be created from a number of adjacent 64K chunks).
If you create a file that is not 64K plus an exact multiple of the chunk size, then there will be a bit of space at the end of the file which will be too small to act as a valid chunk. This will simply disappear from the dba_free_space view - which only records the total available legal chunk space for LMTs.
For example, if you create a tablespace as 'space management local uniform extent 10M' with a file size of 100M, then you will find that you have 9 extents of 10M available, a 64K header, and 9.9MB 'vanished'. Add 64K to the file, and you will suddenly find that you have a whole extra 10MB extent available.
Personally, I tend to keep the arithmetic simple, but not waste too much space, by working out a 'reasonable' size for the file and then adding an over-generous 1MB for the header. For example: extent size = 8MB, so I start with (say) 800MB for data space, but make the file 801MB.
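As a sketch of a cross-check against the usual dictionary views: compare the space allocated to each tablespace's files with the space reported as free, and the header plus any 'vanished' tail accounts for the difference.
    select d.tablespace_name, d.mb_allocated, f.mb_free
    from  (select tablespace_name, sum(bytes)/1048576 mb_allocated
             from dba_data_files group by tablespace_name) d,
          (select tablespace_name, sum(bytes)/1048576 mb_free
             from dba_free_space group by tablespace_name) f
    where  d.tablespace_name = f.tablespace_name;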
What is the difference between cpu_costing in Oracle 9 and the old costing of Oracle 8 ?
Oracle 9 introduces a more subtle and thorough costing mechanism. It's a great improvement on the Oracle 7/8 version, but I think the change-over is going to be a bit like the change-over from rule-based to cost-based: if you don't understand how it works you may see some strange events.
You can enable cpu_costing simply by collecting system statistics for an appropriate period of time with the dbms_stats package (a minimal sketch of the gathering call follows the list below). This records in the table sys.aux_stats$ values for:
    assumed CPU speed in MHz
    single block read time in milliseconds
    multiblock read time in milliseconds
    typical achieved multiblock read.
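A minimal sketch of the gathering call: start the collection, let a representative workload run, stop it, and then look at what was recorded in sys.aux_stats$.
    execute dbms_stats.gather_system_stats(gathering_mode => 'START')
    -- ... let a representative workload run for a while, then ...
    execute dbms_stats.gather_system_stats(gathering_mode => 'STOP')

    select sname, pname, pval1 from sys.aux_stats$;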
These figures are used to produce three main effects.
First, instead of Oracle assuming that single block reads are just as expensive as multiblock reads, Oracle knows the relative speed. This is roughly equivalent to setting the parameter optimizer_index_cost_adj according to the 'db file xxxx read' average wait times - it will encourage Oracle to use indexed access paths instead of tablescans because Oracle now understands that tablescans are more expensive than it used to think.
Secondly, Oracle will use the 'achieved' average multiblock read count to calculate the number of read requests required to scan a table, instead of using an adjusted value of db_file_multiblock_read_count. Since many people use a value of db_file_multiblock_read_count that is inappropriate, the result of this is that Oracle is likely to increase the cost of, and therefore decrease the probability of, doing tablescans (and index fast full scans). Note - the recorded value is used in the calculations, but Oracle tries to use the init.ora value when actually running a tablescan.
Finally, Oracle knows that (e.g.) to_char(date_col) = 'character value' costs a lot more CPU than number_col = number_value, so it may change its choice of index to use a less selective index if the consequential cost of using that index is lower on CPU usage. (Also, Oracle will re-arrange the order of non-access predicates to minimise CPU costs, but the difference in performance from this is not likely to be visible in most cases).
Overall - it's definitely a good thing. In practice, you may see a much stronger bias towards indexed access paths which may impact performance.

Further reading: Oracle 9.2 Performance Tuning Guide and Reference
How do I find distributed queries / transactions (either issued from or connecting to this instance)?
This FAQ entry demonstrates SQL to locate sessions currently involved in distributed transactions in three forms: remote SQL issued from the instance, remote SQL issued from or to the instance, and remote transactions issued from other instances.
All examples ran on 64 bit RDBMS version 8.1.7.4.
Whenever a distributed query or DML statement is issued from an instance a Distributed Transaction lock, DX, is taken on the transaction and is visible through v$lock:
        select addr, kaddr, sid, type, id1 from v$lock where type = 'DX';

    no rows selected
    select count(*) from oper_console;    -- synonym on remote table

    COUNT(*)
    --------
        6226
    select addr, kaddr, sid, type, id1 from v$lock where type = 'DX';

    ADDR             KADDR                   SID TY        ID1
    ---------------- ---------------- ---------- -- ----------
        070000000015CC48 070000000015CC68         16 DX         29
This entry will remain until the issuer either commits or rolls back the transaction. This means that a query on a remote object is a transaction, and it takes a rollback segment entry to support the two-phase commit feature (see References). But while the above query on v$lock will show distributed queries from this instance to a remote database, it does not show queries from the remote instance to this instance.
The only way I know to locate these queries from the (target) instance is the following query, which I based on an ASKTOM article (see references), and which requires the user to be logged on as SYS or to create a view on sys.x$k2gte and grant select on this view to the desired user(s).
        select
          username
         ,osuser
         ,status
         ,sid
         ,serial#
         ,machine
         ,process
         ,terminal
         ,program
        from
         v$session
        where saddr in ( select k2gtdses from sys.x$k2gte );
This query will show sessions that have issued queries using remote objects and queries issued from remote instances to this instance.
To find only those sessions coming from a remote source, change the where clause as follows (eliminating the DX transactions identified above):
        select
          s.username
         ,s.osuser
         ,s.status
         ,s.sid
         ,s.serial#
         ,s.machine
         ,s.process
         ,s.terminal
         ,s.program
        from
          v$session s
        where s.saddr in ( select x.k2gtdses from sys.x$k2gte x )
        and   not exists ( select
                           l.sid
                            from v$lock l
                           where l.type = 'DX'
                           and   l.sid  = s.sid
                          )
    ;
WARNING - during testing, this query produced incorrect results with both NOT IN and NOT EXISTS variants until I added the table alias on X$K2GTE. When X$ tables (views) are involved it is probably wise to fully qualify all tables and columns. The behaviour of any X$ table is subject to change without notice with every patch set / release of the Oracle RDBMS, so regular verification of query results is advisable.  With a few exceptions, Oracle does not document the X$ tables.

Further reading: see Tom Kyte’s article on this subject at http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:5309401983757 
What happens if I drop a partition from a partitioned table whilst a query is running ?
The documentation on partitioned tables explains that a select may execute and complete successfully across a drop partition even if the partition dropped is a target of the select statement.
I have tested this out and found that it can work. However, if you drop a partition and then allocate the space it was using to another data segment before the query reaches it, the query will (quite reasonably) crash. Therefore the guaranteed behaviour seems to be that the query will either be correct or it will crash. (In fact, in a very early version (I think 8.0.3/4) I found that some queries simply stopped when they hit the destroyed partition and returned whatever result they had accumulated up to that point.)
Note - if you drop partitions at a high rate and have a large number of partitions, you may find you get a significant parse-time overhead, because the cursors referencing a partitioned object become invalid when a partition is dropped (or subjected to virtually any other DDL-style maintenance).
Why does a global index on my partitioned table work faster than a local index ?
As usual, the quick (consultant's) answer is that the benefits depend on your system. How big is your data, what are your clustering factors like, are you using the features that supply the benefits, and, in this case, how much work do your typical queries do anyway?
The global/local indexes problem is generic - global indexes CAN be faster than local indexes. In general, local indexes will be faster when partition elimination can take place and the expected volume of data acquired is significant. If either of these conditions is not met, then local indexes probably won't help performance.
Remember, the optimizer has to do some extra work to deal with partitioned tables. If the queries you use are very precise queries (e.g. on a nearly unique pair of columns) then the cost of optimisation and identifying the partition may easily exceed the saving of having the partitions in the first place (the saving might be just one logical I/O, hence < 1/10,000 of a CPU second). Where queries are very precise, a global index is quite likely to be a little more performant than a local index.
This is a common Oracle trade-off between how much you win/lose and how often you make that win/loss. In the local / global indexes case you (should expect to) lose a tiny amount of performance on every query in order to win a huge amount when you do partition maintenance such as eliminating entire partitions of very old data with a quick drop partition command.
Of course, if your partitions are large, the indexes hit a large number of table rows, and the queries are over-generous in their throwaway rates, then the work saved by hitting just the one partition through exactly the correct index partition may prove to be a significant gain.
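For reference, a sketch of the two flavours of index DDL (table and column names are purely illustrative):
    create index sales_dt_lix   on sales(sale_date) local;   -- one index partition per table partition
    create index sales_cust_gix on sales(cust_id);           -- a single global (non-partitioned) index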
When I update one table from another, lots of my data turns to nulls - what have I done wrong ?
The problem here is that you have probably written some SQL to update data, without remembering that sometimes the associated data may be missing. The code sample that shows this problem usually looks something like:
update tableX    tx
set colA = (
        select sum(colB)
        from    tableY    ty
        where    ty.ref_col = tx.ref_col
    )
;
But what will happen if there are no matching rows in TableY for a given row in TableX ? The subquery finds no data. You might expect this to result in Oracle returning the error "no data found". In fact Oracle simply returns a null value - so if colA originally held a value, it will disappear.
To circumvent this problem, the simplest change is to modify the query so that it reads - update table X if there is any data. You do this by adding a where clause (that looks very similar to the subquery) to restrict the update to those rows for which relevant data exist. For example:
update tableX    tx
set colA = (
    select sum(colB)
    from    tableY    ty
    where    ty.ref_col = tx.ref_col
    )
where exists (
    select    null
    from    tableY    ty
    where    ty.ref_col = tx.ref_col
)
;
Note - this is not the only SQL solution for this type of update - take a look, for example, at updatable join views, which are typically twice as efficient as this subquery method.
What are all these tables with names like EVT% and SMP% that have recently appeared in my system ?
These tables are installed as part of Oracle Enterprise Manager (OEM). They most commonly appear if you have opted for a default database installation and the Enterprise Manager Management Server component was included. Unfortunately they have historically been dumped into the SYSTEM tablespace which is not the best location (for anything). Also in days gone by with the 2-tier client/server version of the console, if you happened to connect to a database that did not contain the objects (for example, you've logged on to the wrong schema), then the objects would be created automatically, which could leave you with multiple copies of the objects across several databases.
If you do not use OEM then they can be dropped, otherwise it is best to ensure that they are within their own tablespace (an option that the more recent versions of OEM will give you during installation).
How can I track structural changes to my database ?
If you are worried about people doing things to your database without you knowing, and you are running 8i or later, then you can use a DDL trigger and a logging table to keep a log of all DDL changes, who made them and when. The DDL trigger can be amended so that it logs only certain DDL changes - you needn't worry about logging everything.
You will need a user, created so that no-one else has access to it (unless the DBA wishes them to have it), and a trigger owned by SYS which does the logging into a table owned by that user. The following shows how it all hangs together; it was taken from a session on an Oracle 8.1.7.4 database.
SQL> connect system@cmdb
Enter password: ******
Connected.

SQL> create tablespace logging
  2  datafile '/data2/CMDB/logging.dbf' size 201m
  3  extent management local
  4  uniform size 64k;
Tablespace created.

SQL> create user logging identified by something_daft
  2  default tablespace logging
  3  temporary tablespace temp
  4  quota 0 on system
  5  quota 200m on logging;
User created.
You will notice that I have not given the logging user the ability to connect to the database. This is not a problem as I'm simply using this user to keep my objects out of the SYS user and the SYSTEM tablespace. There is nothing stopping you from creating the logging table so that SYS owns it, but you will not be able to export it if you do. Having a separate user at least gives you that option.
SQL> create table logging.ddl_log
  2  ( user_name        varchar2(30),
  3     ddl_date         date,
  4     ddl_type         varchar2(30),
  5     object_type      varchar2(18),
  6     owner            varchar2(30),
  7     object_name      varchar2(128)
  8  ) tablespace logging;
Table created.
Next we need to switch over to the SYS user to create our trigger. The trigger will write all its data into the logging user's ddl_log table.
SQL> connect sys@cmdb
Enter password: ********
Connected.

SQL> create or replace trigger DDLTrigger
  2  after DDL on database
  3  begin
  4    insert into logging.ddl_log
  5    values (ora_login_user,
  6            sysdate,
  7            ora_sysevent,
  8            ora_dict_obj_type,
  9            ora_dict_obj_owner,
 10            ora_dict_obj_name);
 11  exception
 12    when others then
 13      NULL;
 14  end;
 15  /
Trigger created.
And that is all there is to it. The following is an example where some DDL changes have been made in this test database, and the results of checking the contents of the logging.ddl_log table :
SQL> connect cmdb@cmdb
Enter password: ****
Connected.

SQL> alter table employee nologging;
Table altered.

SQL> grant select on location to public;
Grant succeeded.

SQL> alter user cmdb identified by new_password;
User altered.

SQL> create table test (a number);
Table created.

SQL> alter table test add constraint pk_test primary key (a);
Table altered.

SQL> insert into  test (a) values (1);
1 row created.

SQL> commit;
Commit complete.

SQL> truncate table test drop storage;
Table truncated.

SQL> drop table test cascade constraints;
Table dropped.


SQL> connect system@cmdb
Enter password: ******
Connected.

SQL> alter user cmdb identified by cmdb;
User altered.
 
SQL> column user_name format a10;
SQL> column ddl_date format a20;
SQL> column owner format a10
SQL> column object_name format a20
SQL> column object_type format a20
SQL> column ddl_type format a20
SQL> set lines 150
SQL> set pages 50
SQL>  
SQL> select * from logging.ddl_log;

USER_NAME  DDL_DATE             DDL_TYPE             OBJECT_TYPE          OWNER      OBJECT_NAME
---------- -------------------- -------------------- -------------------- ---------- ---------------
CMDB       10/09/2002 01:13:57  ALTER                TABLE                CMDB       EMPLOYEE
CMDB       10/09/2002 01:14:30  GRANT                OBJECT PRIVILEGE     CMDB       LOCATION
CMDB       10/09/2002 01:15:37  ALTER                USER                            CMDB
CMDB       10/09/2002 01:16:08  CREATE               TABLE                CMDB       TEST
CMDB       10/09/2002 01:16:23  CREATE               INDEX                CMDB       PK_TEST
CMDB       10/09/2002 01:16:23  ALTER                TABLE                CMDB       TEST
CMDB       10/09/2002 01:17:16  TRUNCATE             TABLE                CMDB       TEST
CMDB       10/09/2002 01:17:33  DROP                 TABLE                CMDB       TEST
SYSTEM     10/09/2002 01:20:54  ALTER                USER                            CMDB

9 rows selected.
From the logging.ddl_log table we can see that user CMDB made a change to the EMPLOYEE table with ALTER TABLE. CMDB then granted some privileges on the LOCATION object - which could be a view, a table, a trigger etc; we cannot tell from the logging details, unfortunately, nor can we tell who was granted the privileges - whatever ones they may have been. And so on.
Notice that when CMDB created a primary key constraint using ALTER TABLE, this was logged first as a CREATE INDEX PK_TEST followed by an ALTER TABLE TEST. This is because the index was created in the background to support the constraint. Had there been an existing index which was suitable, the CREATE INDEX PK_TEST would not be seen.
Things to remember
You cannot, unfortunately, get the OSUSER of the person who made the changes to an object in the database. This is a problem if you have everybody logging in as user 'application', for example: you will only see that user 'application' made the change, so it will still be difficult to trace it back to a specific user. There is an option to obtain the IP address of the client PC, but that field is not available in the DDL trigger, unfortunately.
The logging user has only 200 Mb of space to play with - if you are logging all changes on a fast changing database, you'll need to clear out the dross regularly to stop the tablespace filling up. You can use something like the following command to remove unwanted data :
SQL> delete from logging.ddl_log where ddl_date < sysdate;
SQL> commit;
Commit complete.
In my example, I simply used today's date as I don't have old data in there; you would use something like 'sysdate - 30' to delete the details of changes made more than 30 days ago.
What will happen if the tablespace is full up, and someone tries to make some DDL changes ?
SQL> alter user  logging quota 0 on logging;
User altered.
At this point, the logging table has not used up all of its first allocated extent, so I'll run a few 'insert into .. select * from ...' statements to fill the extent up. First though, I'll keep a note of the date and time - I'll need it later.
SQL> select sysdate from dual;

SYSDATE
-------------------
10/09/2002 14:05:04


SQL> insert into logging.ddl_log select * from logging.ddl_log;
insert into logging.ddl_log

( repeat until the following error occurs )

ERROR at line 1:
ORA-01536: space quota exceeded for tablespace 'LOGGING'
Then a few more DDL changes ...
SQL> alter user logging identified by logging;
User altered.

SQL> r
  1* alter user logging identified by logging
User altered.

SQL> r
  1* alter user logging identified by logging
User altered.
Then find out what rows have been added to the logging table after the date & time when I filled up the table :
SQL> select * from logging.ddl_log
  2  where ddl_date > to_date('10/09/2002 14:05:04','DD/MM/YYYY HH24:MI:SS');
no rows selected
So, when the tablespace fills up, no DDL changes are logged to the logging table, but they do get carried out. However, if I change the trigger to remove the exception handling, the following will happen :
SQL> connect sys@cmdb
Enter password: ********
Connected.

SQL> create or replace trigger DDLTrigger
  2  after DDL on database
  3  begin
  4    insert into logging.ddl_log
  5    values (ora_login_user,
  6            sysdate,
  7            ora_sysevent,
  8            ora_dict_obj_type,
  9            ora_dict_obj_owner,
 10            ora_dict_obj_name);
 11  end;
 12  /
Trigger created.

SQL> alter user logging identified by another_password;
alter user logging identified by another_password
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01536: space quota exceeded for tablespace 'LOGGING'
ORA-06512: at line 2

SQL> grant create session to logging;
grant create session to logging
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01536: space quota exceeded for tablespace 'LOGGING'
ORA-06512: at line 2

SQL> connect logging/another_password@cmdb
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
So, without the exception handling, the users get to see an error message and the DDL is not carried out. Always make sure you have exception handling on your triggers to avoid causing your users problems.

Further reading:
The article that first pointed me in the direction of DDL triggers can be found here at DBAZine.com.
More information, including a table of what triggers and events are possible, what parameters can be used for each type of trigger, and other useful information can be found in the Oracle 8i Application Developers Guide, chapter 13 - Working with system events. More importantly, it also has information on which DDL commands do not get logged.

What's my best strategy for laying out an Oracle database across my available I/O devices ?

Historically there have been many directives, hints and tips about Oracle and disc layout, like 'separate indexes from data' and 'put redo logs on their own discs'. Some suggestions have varied from pointless to dangerous; some are merely optimistic. (How many smaller systems nowadays actually have the nine discs that were once touted as the minimum requirement?)
This note is not a 'quick tip' suggestion - it is a generic guideline. You have to think for yourself about spreading I/O in the most effective way to reduce hotspots on your discs - if that happens to contradict one of the old rules of thumb - so be it, there are always special cases.
So - when trying to work out the best (or perhaps least worst) distribution pattern for your database:
a) Be aware of what your database has to do
b) Understand how indexes work and what they cost
c) Understand how UNDO and redo work
d) Understand the possible data access mechanisms that Oracle uses
e) Identify the parts of your database that are necessarily read-intensive
f) Identify the parts of your database that are necessarily write-intensive
g.1) For administrative reasons separate parts of your system with different I/O characteristics into separate modules. (One consequence of this is that, in general, no tablespace will hold both tables and indexes, because in most cases table access behaviour will differ from index access behaviour)
g.2) For administrative reasons (sub)section your system into units that can be categorised by size.
g.3) For administrative reasons (sub)section your system into units that can be categorised by backup/restore requirements.
g.4) For administrative reasons consider reducing the number of units generated by g1, g2, g3 above.
h) Allocate estimates of physical I/O costs to each unit specified in g4
i) Spread the units from g4 across the available devices with a view to balancing the I/O evenly, whilst avoiding significant contention between units with significantly contradictory characteristic behaviour.
Is it possible to re-set a sequence without dropping and recreating it ?
Many people believe that the only way to reset a sequence is to drop it and recreate it. This causes a number of problems for any objects - functions, procedures etc - which rely on the sequence, as it renders them invalid, as the following small example shows:
SQL> -- Create a test sequence
SQL> create sequence test_seq;
Sequence created.

SQL> -- Create a test table
SQL> create table test(a number);
Table created.

SQL> -- Create a trigger on the table to use the test sequence
SQL> CREATE OR REPLACE TRIGGER test_bi
  2   BEFORE INSERT ON TEST
  3  FOR EACH ROW
  4  BEGIN
  5     IF (:NEW.a IS NULL) THEN
  6        SELECT test_seq.NEXTVAL INTO :NEW.a FROM dual;
  7     END IF;
  8  END;
  9  /
Trigger created.

SQL> -- Insert a few rows - firing the trigger and using the sequence
SQL> insert into test (a) values (NULL);
1 row created.

SQL> r
  1* insert into test (a) values (NULL)
1 row created.

SQL> r
  1* insert into test (a) values (NULL)
1 row created.

SQL> r
  1* insert into test (a) values (NULL)
1 row created.

SQL> -- Check results
SQL> select * from test;

         A
----------
         1
         2
         3
         4

SQL> commit;
Commit complete.

SQL> -- Check status of trigger. Note that USER_TRIGGERS shows it as ENABLED but
SQL> -- USER_OBJECTS shows VALID.
SQL> select status from user_objects
  2  where object_name = 'TEST_BI';

STATUS
-------
VALID

SQL> select status from user_triggers
  2  where trigger_name = 'TEST_BI';

STATUS
--------
ENABLED

SQL> -- Drop the sequence which will invalidate the trigger
SQL> drop sequence test_seq;
Sequence dropped.

SQL> -- Note that USER_TRIGGERS still says ENABLED but USER_OBJECTS shows INVALID now !
SQL> select status from user_triggers
  2  where trigger_name = 'TEST_BI';

STATUS
--------
ENABLED

SQL> select status from user_objects
  2  where object_name = 'TEST_BI';

STATUS
-------
INVALID

So now the trigger is invalid and will remain so until the sequence is recreated and either the trigger is manually compiled or an insert into the table is carried out. On the insert, the trigger will be recompiled and will once more become valid. This is useful, but will tend to cause a slowdown in user response if the trigger has to be compiled on first use.
SQL> create sequence test_seq;
Sequence created.

SQL> insert into test (a) values (NULL);
1 row created.

SQL> commit;
Commit complete.

SQL> select status from user_objects where object_name = 'TEST_BI';

STATUS
-------
VALID
As this recompiling could affect many other objects, it is advisable to avoid it. There is a way to reset a sequence back to its start value without having to drop it, and most importantly, without having to invalidate any objects.
We start by finding out some information about the sequence by querying USER_SEQUENCES as follows :
SQL> select sequence_name, min_value, max_value, increment_by, cache_size, last_number
  2  from user_sequences
  3* where sequence_name = 'TEST_SEQ'

SEQUENCE_NAME                   MIN_VALUE  MAX_VALUE INCREMENT_BY CACHE_SIZE LAST_NUMBER
------------------------------ ---------- ---------- ------------ ---------- -----------
TEST_SEQ                                1 1.0000E+27            1         20          21
Make a note of the increment size and min_value for later. Note that this sequence is cached, so the value in LAST_NUMBER may not be the correct value as it is not the last value 'handed out' by the sequence. The only way to get at the correct value is to select the NEXTVAL from the sequence.
SQL> select test_seq.nextval from dual

   NEXTVAL
----------
         2
Now that we have the last number, we can negate it to give a new increment for the sequence. However, because the MINVALUE is 1, we need to allow the sequence to reach zero - or the change to the sequence will fail. Simply alter the sequence and set a new INCREMENT_BY and MINVALUE as follows:
SQL> alter sequence test_seq
  2  increment by -2
  3  minvalue 0;
Sequence altered.
The sequence is still not reset. In order to do this, we need to select its NEXTVAL again which will increment the current value (2) by -2 giving us zero.
SQL> select test_seq.nextval from dual;

   NEXTVAL
----------
         0
Note that we had to allow the MINVALUE to be zero to allow the dummy select we did to reset the current value to zero. Why not simply set the increment to -1 instead ? Well, we could have, but then we have left ourselves with a problem because, we want to reset the sequence back to its original starting value - which is 1 - so that the first time it is used after our adjustments, the value returned is 1. If we simply set the increment to -1 we would indeed get 1 as the next value. Then we would have to monitor the sequence to find out when it was first used, and before it was used again, reset the increment to 1 again so that the sequence is once more counting upwards rather than down. In addition, if we missed the first usage, the user would get an error on subsequent usage because we are trying to reduce the value below that which we set as the minimum.
Now we have successfully reset the sequence back to its original state as the next selection from it will give the value 1 and that was our original intent.
NOTE we have actually changed the sequence as its original MINVALUE was 1 but now it is zero. This might cause a problem on some installations if zero is not allowed. If so, the MINVALUE will need to be adjusted after the first use of the sequence. The chance of a problem is quite minimal unless the sequence is allowed to CYCLE around when it reaches the MAXVALUE.
Once again, checking the status of the trigger shows that it is still valid - it has not been affected by the changes we made to the sequence.
SQL> select status from user_objects where object_name = 'TEST_BI';

STATUS
-------
VALID
Footnote The original answer to this FAQ was the link below to a document written by Howard J Rogers. Unfortunately, Oracle Australia requested that Howard take down his web site and so the document is no longer available officially. This rewrite of the FAQ answer is not a blatant copy of Howard's document, but is my own work - which admittedly is influenced by Howard's. The link has been maintained in case Howard gets permission to re-open the site.

The following procedure is also offered by Peter Clark (Peter.Clark@vcint.com)
CREATE or REPLACE PROCEDURE reset_sequence (sequencename IN VARCHAR2) as
    curr_val INTEGER;
BEGIN
    -- Allow the sequence to reach zero
    EXECUTE IMMEDIATE 'alter sequence ' ||sequencename||' MINVALUE 0';
    -- Find the current value by taking the next one
    EXECUTE IMMEDIATE 'SELECT ' ||sequencename ||'.nextval FROM dual' INTO curr_val;
    -- Step backwards by that amount ...
    EXECUTE IMMEDIATE 'alter sequence ' ||sequencename||' increment by -'||curr_val;
    -- ... which takes the sequence back to zero
    EXECUTE IMMEDIATE 'SELECT ' ||sequencename ||'.nextval FROM dual' INTO curr_val;
    -- Restore a normal ascending increment; the next value handed out will be 1
    EXECUTE IMMEDIATE 'alter sequence ' ||sequencename||' increment by 1';
END reset_sequence;
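Assuming the procedure is created by a user with the privilege to alter the sequence (e.g. its owner), a call is simply:
    execute reset_sequence('TEST_SEQ')
Note that, like the manual method above, this leaves the sequence with a MINVALUE of zero.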
How do I retrieve information about partitioned tables from the data dictionary ?
Information about partitioned tables is typically stored in the data dictionary in a number of views with the word 'PART' in their names; a query like the following will list the relevant views (assuming you have access to the DBA views):
    select    object_name
    from    dba_objects
    where    object_name like '%PART%'
    order by
        object_name
    ;
You will get the usual three families of views with the DBA_, ALL_ and USER_ prefixes, which should include the following (a sample query against one of them follows the list):
    DBA_(SUB_)PART_TABLES          Identifies tables that have been (sub)partitioned
    DBA_TAB_(SUB_)PARTITIONS       Lists (sub)partitions of tables
    DBA_(SUB_)PART_INDEXES         Identifies indexes that have been (sub)partitioned
    DBA_IND_(SUB_)PARTITIONS       Lists (sub)partitions of indexes
    DBA_(SUB_)PART_KEY_COLUMNS     Lists (sub)partitioning columns of objects
    DBA_(SUB_)PART_COL_STATISTICS  Holds (sub)partition level statistics on columns
    DBA_(SUB_)PART_HISTOGRAMS      Holds (sub)partition level histograms on columns
    DBA_LOB_(SUB_)PARTITIONS       Lists (sub)partitions of LOBs
    DBA_(SUB_)PART_LOBS            Identifies LOBs in (sub)partitioned tables
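For example, a sketch of a query against one of these views (the owner and table name are illustrative):
    select partition_name, high_value, num_rows
    from   dba_tab_partitions
    where  table_owner = 'SCOTT'
    and    table_name  = 'SALES'
    order by partition_position;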
Why is my import so much slower than my export ?
If you switch on sql_trace whilst the import is running, then you will find that an import is typically executing a large number of array inserts to get your data into the database, and then creating indexes and adding constraints. In other words, it is doing a lot of very ordinary updates to the database.
When imp creates indexes, it may do so in nologging mode, but otherwise all the work you do will generate undo and redo. This is much heavier work than simply reading the data (by tablescan) and dumping it to a flattish file, which is almost all that exp does, so imp is almost invariably slower than exp. Things may be made worse by the fact that imports tend to be large, which means that when the indexes are being created they are scanning tables with lots of dirty blocks that have to be read back from disk, so they generate even more redo doing delayed block cleanout.
Note - there are always special cases - if you have lots of procedures and packages, and very little data in an exported schema, then the work done by Oracle to build the export (i.e. reconstruct your code) may be similar in complexity and volume to the work done by imp in reloading them, so some imports will not be dramatically slower than the export.
Damage limitation methods:
You might consider setting the commit parameter on the import. This causes imp to issue a commit every N rows, and for large imports could reduce the amount of delayed block cleanout that has to take place. This is generally NOT a good idea because it could leave you in an awkward state if something goes wrong - especially if you were using export/import to add data to existing tables.
Set a large buffer size for the import, this increases the array size that Oracle can use for its array insert. It can help.
Make sure that the sort_area_size is set to a large value to reduce disk activity when creating the indexes. In later versions of Oracle you could create a database logon trigger for the importing account that does an execute immediate to set the sort_area_size.
An import is a big job, check the size of your redo log files. It may be that you are doing lots of redo log switches, which results in lots of checkpoints as Oracle tries to flush all the current dirty data to disk. Increasing the redo log file size may help. (Check also the log_checkpoint_interval and log_checkpoint_timeout parameters, as these could have a similar effect).
If you are loading extra data on top of existing data, and therefore updating existing indexes, then the cost of those index updates may be large. You might consider dropping (some of) the indexes before the import. This may be particularly relevant for bitmap indexes, which can degrade and grow quite dramatically as new data is added to the tables.
On import, Oracle will always build the indexes needed to enforce enabled primary key and unique key constraints, but you could stop it from creating all the other indexes and create them manually, in parallel, by hand later on. To stop excess indexes being created, you use the indexes parameter to the imp command, and if you want to generate a script that lists the indexes that would have been created (so that you can hack it and run it under your own control) you can use the indexfile parameter.
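A sketch of the two-pass approach (file, log and schema names are illustrative; imp prompts for the password): the first run only extracts the index DDL into a script you can edit and run manually later, the second loads the data with a large buffer and skips index creation.
    imp system file=expdat.dmp fromuser=scott touser=scott indexfile=scott_ind.sql log=imp_ddl.log
    imp system file=expdat.dmp fromuser=scott touser=scott indexes=n buffer=10485760 log=imp_data.log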
What is an object of type 'Non-existent' ?
There are two issues which result in non-existent objects.
If you drop an object, then its row in obj$ (the table holding object definitions) is reclassified as type# = 10 (non-existent). If you create a new object before the database is shut down, this object number (and row) can be re-used. This is an efficiency measure, and stops the obj$ table and its index degenerating over time in systems where objects are frequently dropped and created.
The other reason is that non-existent objects are created to handle 'negative dependency' tracking.
Say you have a view defined by
    create or replace view v1 as
    select * from t1;
but T1 is actually a public synonym to a table in someone else's schema.
The validity of this view is dependent on the fact that there is no object named T1 in your schema; so Oracle creates a non-existent object in your schema that the view is dependent on (take a look at the view user_dependencies). Then, when you create an object called T1 in your schema, this causes the elimination of the 'non-existent' object, which automatically causes the invalidation and recompilation of the view so that it now references your object T1 rather than the public synonym T1.
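You can see these entries for yourself with a query along the following lines (a sketch; the 'NON-EXISTENT' literal is how the referenced type appears in my dictionary - check yours):
    select name, type, referenced_owner, referenced_name, referenced_type
    from   user_dependencies
    where  referenced_type = 'NON-EXISTENT';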
I am getting a database startup error about being unable to start "with new and old" values for parameters. (9.0.1)
Errors come with error numbers - if you are on a UNIX machine you can always try the oerr utility to get more details about the error message, including (sometimes) a cause and action. For example, if the program returns error ORA-00381, you can try
$ oerr ora 381
00381, 00000, "cannot use both new and old parameters for buffer cache size specification"
// *Cause:  User specified one or more of { db_cache_size ,
//          db_recycle_cache_size, db_keep_cache_size,
//          db_nk_cache_size (where n is one of 2,4,8,16,32) }
//          AND one or more of { db_block_buffers , buffer_pool_keep ,
//          buffer_pool_recycle }. This is illegal.
// *Action: Use EITHER the old (pre-Oracle_8.2) parameters OR the new
//          ones. Don't specify both. If old size parameters are specified in
//          the parameter file, you may want to replace them with new parameters
//          since the new parameters can be modified dynamically and allow
//          you to configure additional caches for additional block sizes.
Note, the necessary file ($ORACLE_HOME/rdbms/mesg/oraus - or local language equivalent) does not exist on NT boxes, but you could try raiding your nearest friendly unix box for the file, and just search it for the error number.
I assume that you have used SQL*Plus to issue a command like:
    alter system set db_block_buffers=4000 scope=spfile
In this case, your spfile now has two lines like:
    *.db_block_buffers=4000
    *.db_cache_size=16777216
This is what the complaint is about. You need to fix the spfile, except you can't until you've started the database, and you can't start the database until you've fixed the spfile. Rename the spfile to something else so that it doesn't get accessed by accident from now on.
To fix this problem: if you have a pfile (init{SID}.ora file) that is reasonably up to date, use
    startup pfile={fully qualified name of pfile}
    create spfile from pfile;
You now have a file spfile{SID}.ora sitting in the default location ($ORACLE_HOME/dbs for Unix), so copy it, and start issuing commands to patch it up for all the parameters that are currently not what you want.
Second option - spfiles are 'binary' files, which in this case means they have some extra junk in them, but you can read most of the text. Copy the spfile, and edit it to extract the text (strings -a is a useful command under Unix). Use the resulting file as the pfile in option 1.
I am getting several hundred "buffer busy waits" on data blocks per day, even though I have increased FREELISTS on all objects - what should I do ?
The questions you want to ask yourself are, presumably: which object has the problem, do I need to fix the problem, and how do I fix it. This note addresses the first two parts of that question.
Do I need to fix the problem
Whilst a few hundred waits per day does not look like a good number, you do need to compare it with the actual amount of work done, and check the time lost. In other words, is it likely to make any noticeable difference if you solve the problem? There are two cross-checks to make. In v$system_event, is the total time waited on buffer busy waits significant compared to waits like file reads, enqueues and latches? Then in v$sysstat - is the number of waits you have recorded tiny compared to the total number of consistent gets (say one in one million), and is the time lost significant compared to the CPU used when call started?
Of course, it is always a little misleading just to look at system-level figures. This can easily give you a false sense of security by hiding critical peaks in a swamp of average behaviour. It is quite possible that 99% of all your buffer busy waits take place in a critical 30 minutes of the day in one very important process; so you really need to look at regular snapshots (statspack can help) and probe down to the session level (v$sesstat and v$session_event).
Once you have decided that you do need to investigate further, the next step is to identify the problem object. Note - when one session suffers buffer busy waits, the problem isn't necessarily caused by that session; it may be another session doing something completely different. However, it is often the case that these waits show up because multiple sessions are doing the same sort of thing at the same time, so you will often find that you can identify the cause by researching the effect.
In 8.1 there is generally no better way than watching a session that you think will suffer a number of these waits. If you check v$session_event joined to v$session, you may notice that a particular program suffers most of the waits. If this is the case, then pick one of those sessions and fire event 10046 at level 8 at it, for example using:
sqlplus "/ as sysdba"
select spid, pid
from v$process
where addr = (
    select paddr
    from v$session
    where sid = 99
);
oradebug setorapid {the pid above}
or
oradebug setospid {the spid above}
then
oradebug event 10046 trace name context forever, level 8
Wait a while, then look at the trace file. You may find some buffer busy waits recorded in the trace file, and be able to use the p1 and p2 values (file number and block number) in a query against dba_extents to identify the object, and the p3 value to get the reason for the wait.
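The dba_extents lookup is the standard one - substitute the p1 and p2 values from the wait:
    select owner, segment_name, segment_type
    from   dba_extents
    where  file_id = &p1
    and    &p2 between block_id and block_id + blocks - 1;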
If you do not want to have sql_trace running, then you could simply query v$session_wait repeatedly,
select sid, p1,p2,p3
from v$session_wait
where event = 'buffer busy waits';
Steve Adams (http://www.ixora.com.au) has a more subtle approach to using v$session_wait that increases your chances of spotting a wait when using this view.

Note - in Oracle 9.2 there is a view called v$segstat that can be queried to report all sorts of activity per segment - so a quick report on this dynamic view will tell you which segments are suffering the most buffer busy waits.
I need to delete a lot of small records from a LOB tablespace and then load some larger ones. How can I avoid fragmentation?
LOB data can only be loaded into a contiguous free area of the tablespace that is the same size or bigger, so small chunks of free space scattered across the tablespace will not be used, and there is no way to defragment the current LOB tablespace online. You can avoid the fragmentation by moving all the data to another LOB tablespace with an 'ALTER TABLE .. MOVE' that relocates the LOB segment (see the sketch below) and then dropping the original tablespace, or by the routine procedure of export / truncate / re-import.
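A sketch of the move (table, column, index and tablespace names are illustrative - and remember that a move invalidates the table's indexes, which will need rebuilding afterwards):
    alter table my_lob_table move
      lob (my_lob_col) store as (tablespace new_lob_ts);

    alter index my_lob_table_pk rebuild;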
Where has svrmgr gone in 9i - how do I start and stop the database now ?
The desupport and removal of svrmgr (or svrmgrl) was documented in the manuals from at least version 8.1 (if not earlier), when it became possible to start and stop an Oracle instance from SQL*Plus. In fact, SQL*Plus acquired many of the features of svrmgr at this time, including oradebug.
You will also discover that the internal account has also disappeared, and you are now required to connect 'as sysdba' or 'as sysoper' to start and stop Oracle.
There are two common strategies for use with SQL*Plus
    sqlplus /nolog
    connect / as sysdba
    startup
or simply
    sqlplus "/ as sysdba"
    startup
Note that you need to surround the "/ as sysdba" with quote marks so that the entire string will be recognised as a single item.
If you are planning to use exp for transporting tablespaces and don't plan to use parameter files, this quoting mechanism is the method you will have to use to invoke exp (and imp) from the command line. Note that from about 8.1.7 tablespaces can only be transported by users with the sysdba privilege, although the first releases of 8.1 allowed the privilege to ordinary DBAs.
How have the log_checkpoint_interval and log_checkpoint_timeout changed from version 7 ?
There was a very important change made to the log_checkpoint_timeout and log_checkpoint_interval parameters in version 8. (Possibly between 8.0 and 8.1, but I think it was probably between 7.3.4 and 8.0.)
In earlier versions, after log_checkpoint_interval redo blocks had been written by lgwr (or after log_checkpoint_timeout seconds had elapsed) since the last checkpoint, Oracle issued a checkpoint call and dbwr tried to write all currently dirty buffered blocks to disc. This resulted in an extreme I/O hit from time to time.
To avoid the extremes, Oracle decided to keep trickling dirty blocks to disc at a higher rate than had been effected by the old 3-second idle write rate (every 3 seconds, dbwr wakes up and writes a few blocks to disc if it has had no other work in the interval).  To achieve this, they changed the meaning of the two log checkpoint parameters. This change was made possible by a change in the architecture of the buffer management, which now allows Oracle to queue dirty buffers in the order that they were first made dirty.
Amongst other things, Oracle already kept a low redo block address (lrba) on the buffer header of each dirty buffer. This identifies the first redo block that started the process of changing that buffered block from the state that is currently on disc to the dirty state that is in the buffer. The function of the log_checkpoint_interval is simply to limit the distance between a buffer's lrba and the address of the redo block that lgwr is currently writing. If the lrba is too low, then the block was first dirtied too long ago and it has to be written to disc (and its lrba set to zero). Since Oracle now queues dirty blocks in the order they were first dirtied (i.e. lrba order), it is a quick and cheap process to find such blocks.
For example: if lgwr is currently writing redo block 12,100 and the log_checkpoint_interval is set to 4,000, then dbwr will be cued to write any dirty blocks with an lrba less than 8,100. This check is carried out every 3 seconds, and I believe the control files and possibly any affected data files are updated with the SCN at which this incremental checkpoint took place.
Note, however, that you are no longer supposed to set these parameters. Over the last few versions, Oracle has introduced different ways of controlling the rate at which dirty blocks are written - including fast_start_io_target (8.1 Enterprise Edition), db_block_max_dirty_target, and in Oracle 9 fast_start_mttr_target.
Why isn't my output from tkprof with the explain option showing row counts ?
The row counts come from trace file lines which are identified as STAT lines. When a cursor is closed, the stat lines for that cursor are dumped to the trace file. Consequently, if the trace file ends before the cursor is closed, the stat lines will be missing.
Typically, this occurs because you have issued
    alter session set sql_trace = true;
        ..... execution time
    alter session set sql_trace = false;
(or one of the variants from dbms_system, dbms_support or oradebug). If this is the case, and the SQL in question is inside a pl/sql block, then the pl/sql engine will have cached the cursor, keeping it open even if the program appears to have closed it.
In order to ensure that pl/sql child cursors are closed, you need to exit cleanly from the session that you are tracing.
How does Oracle handle updating a row that causes it to move from one partition to another ?
If you dump the contents of the undo segment or the log file after a simple update that moves a row from one partition to another, you will find that Oracle takes three steps to handle the task:
    Update the row in situ - which leaves it in the wrong partition
    Insert the corrected row into the correct partition
    Delete the row from the old partition
Clearly this is quite heavy on overheads (deletes are always expensive anyway as the whole row has to be written to the undo segment), so the feature should be used with care, and should probably not be used to move rows at a high rate.
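A minimal sketch of the feature, with made-up table and column names - note that row movement has to be enabled explicitly, otherwise the update fails with ORA-14402:
    create table part_demo (id number, val number)
    partition by range (val)
      (partition p_low  values less than (100),
       partition p_high values less than (maxvalue));

    alter table part_demo enable row movement;

    insert into part_demo values (1, 50);
    commit;

    -- moves the row from p_low to p_high: internally an update in situ,
    -- an insert into p_high, and a delete from p_low
    update part_demo set val = 150 where id = 1;
    commit;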
Where is the UGA located, and how does sorting affect its use ?
The UGA is the user global area, which holds session-based information.
If you think about Oracle's architecture of sessions and processes, particularly in multi-threaded server (or shared server as it has been renamed for Oracle 9), you will realise that session-based information has to be handled a little carefully.
When you are running dedicated servers (one session = one process) then the session information can be stored inside the process global area - i.e. in the memory space of the dedicated server.
However, when you are running with MTS/shared servers, a single session can migrate from one server (Snnn) process to another. Consequently the session-based information cannot be stored in the process (Snnn) memory space, or it would be lost on migration; instead, session-based information is stored in the SGA (shared global area).
In particular, when running MTS/shared servers and your session does some sorting, some of the memory allocated for sorting - specifically the amount defined by the parameter sort_area_retained_size - comes from the SGA and the rest (up to sort_area_size) comes from the PGA (Snnn). This is because the sort_area_retained_size may have to be held open as the pipeline to return results to the front-end, so it has to be located where the session can find it again as the session migrates from server to server. On the other hand, the sort_area_size is a complete throwaway, and by locating it in the PGA, Oracle can make best use of available memory without soaking the SGA.
To avoid sessions grabbing too much memory in the SGA when running MTS/shared server, you can set the private_sga value in the resource_limit for the user. This ensures that any particularly greedy SQL that (for example) demands multiple allocations of sort_area_retained_size will crash rather than flushing and exhausting the SGA.
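A minimal sketch of how that might look - the profile and user names are made up, and remember that profile limits are only enforced when resource_limit is set to true:
    create profile mts_users limit private_sga 512K;

    alter user app_user profile mts_users;

    -- resource_limit can also be set in the init.ora
    alter system set resource_limit = true;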
I used 'set transaction use rollback segment xxx', but my query still encounters ORA-1555 on rollback segment 'yyy'.
The statement 'set transaction use rollback segment xxx' does two things:
1.) It begins a transaction.
2.) It tells Oracle that any DML for the duration of that transaction should write its rollback to a rollback segment named 'xxx'.
However, the statement encountering the error is NOT DML.  It's a select statement.  A select statement will not write any data, so will not consume rollback.  A select statement will (attempt to) provide a read consistent view of the data.  In order to do so, it may have to refer to rollback.
Depending on what other transactions modified the data you are reading, those changes may have been recorded in ANY rollback segment.  If, in the course of constructing a read consistent view of the data, Oracle discovers data whose rollback has been overwritten, it will raise an ORA-1555. This error will report the name of the rollback segment from which the (now overwritten) consistent image cannot be reconstructed.
If you look at it this way, it's easy to see why the name of the rollback segment reported in the ORA-1555 error doesn't match the name of the rollback segment specified in the 'set transaction ...' statement.  It is a misconception to think that use of the 'set transaction use rollback segment xxx' statement can help you avoid an ORA-1555 error.  It's simply not the case.  Please see the items listed in the "Further reading" section below for more information about ORA-1555 and what actually can be done to avoid it.

Further reading:  ORA-01555 Snapshot too old - Detailed Explanation   (MetaLink Registration required)
                         Avoiding ORA-1555 Errors   (Link to IxOra, Steve Adams' website)
Is there a way to get the Date/Time when a table was last updated ?
One option is as follows:
(1) Turn the auditing on: AUDIT_TRAIL = true in init.ora
(2) Restart the instance if it's running.
(3) Audit the table:
    AUDIT INSERT,SELECT,DELETE,UPDATE on TableName
    by ACCESS WHENEVER SUCCESSFUL
(4) Get the desired information using :
    SELECT OBJ_NAME, ACTION_NAME, to_char(timestamp,'dd/mm/yyyy HH24:MI:SS')
    from sys.dba_audit_object;
What's the difference between DELETE ROWS and TRUNCATE TABLE ?
Delete
At the simplest level, delete scans the table and removes any rows that match the given criteria in the (optional) where clause. It generates rollback information so that the deletions can be undone should it be necessary. Index entries for the deleted rows are removed from the indexes. You must commit to make the deletions permanent.
When deleting rows from a table, extents are not deallocated, so if there were 50 extents in the table before the deletion, there will still be 50 after the deletion. In addition the High Water Mark is not moved down, so it remains where it was before the deletion began. This means that any subsequent full table scans may (still) take a long time to complete - because a full table scan always scans up to the HWM. So, for example, if a select count(*) from very_large_table; took 15 minutes to complete before all the rows were deleted, you will find that it still takes about 15 minutes after the deletion - because Oracle is still scanning every single block up to the HWM, even though some (or all) of the blocks may have no data in them.
Truncate
Truncate, on the other hand, simply moves the high water mark on the table right down to the beginning. It does this very quickly, and does not need to be committed. Once you truncate a table, there is no going back. Indexes are also truncated. There is no facility to be able to specify which rows to 'delete' as you can with the where clause on the delete command.
When a table is truncated, all its extents are deallocated, leaving only the extents specified when the table was originally created. So if the table was originally created with minextents 3, there will be 3 extents remaining when the table is truncated.
If you specify the reuse storage clause, then the extents are not deallocated. This saves time in the recursive SQL department if you intend to reload the table with data from an export for example, and can reduce the time it takes to do the import as there is no need to dynamically allocate any new extents.
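For example (the table name is made up):
    truncate table big_table;                 -- extents deallocated back to minextents
    truncate table big_table reuse storage;   -- rows gone, but the extents are kept for a reload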
Every time when I restart the database I have to bring all my rollback segments back online manually. Why ?
In order to have a private rollback segment come on-line at database startup, you must ensure that the rollback segment names are mentioned in the initSID.ora file as follows, assuming you have already created 4 rollback segments named r01, r02, r03 and r04 :
rollback_segments = (r01, r02, r03, r04)
Once the database is open :
SQL> select segment_name,status from dba_rollback_segs;

SEGMENT_NAME                   STATUS
------------------------------ ----------------
SYSTEM                         ONLINE
R01                            ONLINE
R02                            ONLINE
R03                            ONLINE
R04                            ONLINE

SQL>
You can see that the rollback segments mentioned in the initSID.ora have indeed been brought on-line.
You must never put the SYSTEM rollback segment name in the list - it always comes on-line, as can be seen from the above output. You must only put the names of already created rollback segments in the list. If you have a name that does not represent a rollback segment, the database will not start correctly - as shown below :
SVRMGR> startup
ORACLE instance started.
Total System Global Area                         65322004 bytes
Fixed Size                                          76820 bytes
Variable Size                                    47828992 bytes
Database Buffers                                 16384000 bytes
Redo Buffers                                      1032192 bytes
Database mounted.
ORA-01534: rollback segment 'OOPS' doesn't exist
SVRMGR>
If the rollback_segments parameter is missing from initSID.ora, then the database will, by default, use any public rollback segments that it knows about by bringing them on-line at startup. The following shows this in action :
SQL> create public rollback segment public_rbs
  2  tablespace rbs;

Rollback segment created.

SQL> select segment_name,status from dba_rollback_segs;

SEGMENT_NAME                   STATUS
------------------------------ ----------------
SYSTEM                         ONLINE
PUBLIC_RBS                     OFFLINE
R01                            ONLINE
R02                            ONLINE
R03                            ONLINE
R04                            ONLINE

6 rows selected.

SQL>
After the instance has been 'bounced' we find the following :
SQL> select segment_name,status from dba_rollback_segs;

SEGMENT_NAME                   STATUS
------------------------------ ----------------
SYSTEM                         ONLINE
PUBLIC_RBS                     ONLINE
R01                            ONLINE
R02                            ONLINE
R03                            ONLINE
R04                            ONLINE

6 rows selected.

SQL>
Private and Public Rollback Segments
If you are not running Parallel Server, then private and public rollback segments are effectively the same - except that a public one comes on-line at startup without any further action from the DBA.
If you are running Parallel Server, then many instances can access a single database. Each instance can have its own private rollback segments - used only by that instance - and can also acquire a public rollback segment from the pool that is shared between all the instances.

Further reading:
Oracle Reference manual, chapter 1, Initialization Parameters
Oracle Administrators Guide, chapter 2 - Parameters, and chapter 18 Managing Rollback Segments
Oracle Concepts Manual.
Is there a way to get the Date/Time when a row was last updated?
As Oracle is configured out of the box there is a very quick short answer to this question: NO.  For Oracle to keep track of this information on a per row basis would require a lot of overhead, which for the great majority of installations would be unnecessary and therefore wasteful.
However, if you really need this information here are some possible ways of trapping this information.
1 – Use triggers to capture this information on a per table basis
2 - Use the Oracle audit command to capture changes
3 - Search the Oracle redo logs for the information using the log miner utility
Pros and Cons of each method
1 – The Pro is that this adds overhead only for the objects that are critical to monitor
    The Con is that it requires coding, though the coding is fairly straightforward
2 – The Pro is that this is relatively easy to set up, as it is a built-in feature,
    and the performance overhead is usually not noticeable
    The Con is that by default the audit table is stored in the system tablespace,
    leading to the possibility of filling the system tablespace, as auditing can be
    very costly from a storage standpoint
3 – The Pro is that, in my opinion, there is no pro to this approach; however, if audit triggers
    and/or the Oracle auditing feature are not in use, and you really need to find
    something that happened recently, then this option is available
    The Con is that this is a resource- and time-intensive approach
How do you use triggers to capture this information?  Here is an example:
The basic procedure is to modify the table, adding a last_modified_by and a last_modified_on column, then place before insert and before update triggers on the table.  This will allow trapping the Oracle user id and the date/time of the row creation and of any updates.  Here is a working example trigger.  MARKTEST can be any table that has the two auditing columns defined previously; just change the table name.  Alternatively, a separate history table could be used to hold the audit data.
set echo on
create or replace trigger marktest_biu
before insert or update
on marktest
for each row
--
declare
--
v_last_modified_by     varchar2(30) ;
v_last_modified_on     date         ;
--
begin
--
select
  user
 ,sysdate
into
  v_last_modified_by
 ,v_last_modified_on
from dual ;
--
:new.last_modified_by := v_last_modified_by ;
:new.last_modified_on := v_last_modified_on ;
--
end ;
/
The sys_context function is a valuable potential source of information for auditing purposes, especially if you have applications with embedded Oracle user ids and passwords.
Using the audit command: 
Table MARKTEST is created, then object-level auditing is set using the following command:
> audit insert, update, delete on marktest by access;

Audit succeeded.
I attempted to create the table (again), access was attempted from an ID without proper privilege on the table, the table was then updated by a user with insert privilege, the DBA_AUDIT_TRAIL was queried, and finally auditing was turned off.  There is a great deal more information available than is shown below.
> l
  1  select username, timestamp, action_name, returncode
  2  from dba_audit_trail
  3* where obj_name = 'MARKTEST'
> /

USERNAME                       TIMESTAMP ACTION_NAME                 RETURNCODE
------------------------------ --------- --------------------------- ----------
MPOWEL01                       01-FEB-02 CREATE TABLE                       955  -- table already existed
TESTID                         01-FEB-02 INSERT                            2004  -- 00942 issued to user
MPOWEL01                       01-FEB-02 INSERT                               0  -- insert successful

> noaudit insert, update, delete on marktest;

Noaudit succeeded.

Warning: the auditing information is kept, by default, in the system tablespace, and by-access (row level) auditing can generate huge amounts of data very quickly, possibly impacting the ability of Oracle to function.
Log Miner is a topic by itself and I will not attempt to cover it here.

Further Reading:
See the DBA Administrator’s Guide and SQL manual for information related to auditing and SQL syntax.
If you have Oracle metalink support then you can see How To Set Up Auditing Doc Id: 1020945.6 which will reference several other documents that may be of assistance such as Auditing DML (Insert, Update, Delete) Doc Id: 130146.1
What is table fragmentation, and should I worry about it ?
What is table fragmentation?
Table fragmentation is the situation that develops when a table has been created with INITIAL and NEXT extent sizes that are too small for the amount of data that is subsequently loaded into the table. In essence, the table ends up with a large number of extents rather than just a few.
Should I worry about it?
Short answer, no.
Quote from Practical Oracle 8i by Jonathan Lewis, Page 150 : Let's start by reviewing one of the oldest myths, and biggest wastes of DBA time, in the Oracle book of legends. Someone once said, "You get better performance if each object is stored in a single extent." It must be well over 10 years ago that Oracle published a white paper exploding this myth, and yet the claim lingers on.
The white paper referred to is (probably) How to stop defragmenting and start living: The definitive word on fragmentation by Himatsingka and Loaiza which describes the SAFE methodology for storage allocation. SAFE is Simple Algorithm For Fragmentation Elimination. In this white paper they explain how testing has shown that under normal circumstances, Oracle can quite happily handle segments which have many thousands of extents.
Of course, when dropping or truncating a segment which has many extents, each extent will have to be de-allocated and the dictionary updated and this will have a degrading effect on performance.
Steve Adams of www.ixora.com.au fame, goes on to clarify and extend the SAFE system a little in his article on managing extents. (See below in further reading). From Steve's article, it appears that allowing the number of extents to exceed 500 (for an 8K block) will cause cluster chaining in the dictionary. This will then have an effect on allocating and deallocating extents.
Summary
·    Don't worry about having more than one extent in a table.
·    Try to keep the number of extents to a manageable limit - I personally follow Steve Adams' advice and keep extents around the 500 mark (see the query after this list).
·    Try to keep segments of a similar size and 'hit rate' together, in their own tablespaces.
·    Don't spend your life exporting and importing tables to get them back into a single extent - it really isn't worth it !
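As a rough check on where you stand, something like the following (run as a DBA) lists the segments that have drifted past the 500 extent mark - treat the threshold as a guideline rather than a rule, and be aware that querying dba_extents can itself be slow on a big dictionary-managed database:
    select owner, segment_name, segment_type, count(*) extents
    from   dba_extents
    group  by owner, segment_name, segment_type
    having count(*) > 500
    order  by count(*) desc;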

Further reading:
SAFE - How to stop defragmenting and start living
www.ixora.co.au - Planning Extents
Practical Oracle 8i - Building efficient Databases by Jonathan Lewis. Published by Addison Wesley. ISBN 0-201-71584-8
How do I dump a data block ?
How to determine which block to dump
Assuming you didn't have to dump the block because of an error message which said something like '... corrupt file number #f block number #b' - in which case you already have your file and block numbers - you can extract the information from dba_segments.
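Alternatively, if it is the block holding a particular row that you are interested in, the dbms_rowid package will give you the file and block numbers directly - a sketch with a made-up owner, table and predicate:
    select dbms_rowid.rowid_relative_fno(rowid) file_no,
           dbms_rowid.rowid_block_number(rowid) block_no
    from   scott.emp
    where  empno = 7839;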
How to dump the block
To dump a single block : alter system dump datafile <file#> block <block#> ;
To dump a sequence of blocks : alter system dump datafile <file#> block min <first block#> block max <last block#> ;
The file number in the above commands can be replaced with a filename : alter system dump datafile 'full_path_to_filename' block <block#> ;
Where does the blockdump go ?
The dump file is created in the user_dump_dest location as specified in initSID.ora.
And finally ...
Quote from MetaLink : The capability to generate a formatted block dump is primarily intended for use by Oracle Support and Development. There is no externally published information available on interpreting the contents of a blockdump and the information is considered 'internal' information.

Further reading:
Dumping an Oracle 8 block - from this very web site
Dumping an Oracle 7 block - and so is this !
Ixora - search for Blockdump to get a list of references, hints and tips.
What is an object of type 'Undefined' ?
A quick look at the definition of xxx_OBJECTS shows the cause of the anomaly. As new object types are introduced (for example: partitions, nested tables, materialised views etc), the view needs to be updated to reflect the new object type. Occasionally the Oracle developers appear to miss the new object types, which thus fall through the DECODE into the "UNDEFINED" tag.
The most commonly reported occurrence of this appears to be materialised views in 8i.
SQL> set long 5000
SQL> select text
  2  from dba_views
  3  where view_name = 'DBA_OBJECTS'
  4  /

TEXT
--------------------------------------------------------------------------------
select u.name, o.name, o.subname, o.obj#, o.dataobj#,
       decode(o.type#, 0, 'NEXT OBJECT', 1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER',
                      4, 'VIEW', 5, 'SYNONYM', 6, 'SEQUENCE',
                      7, 'PROCEDURE', 8, 'FUNCTION', 9, 'PACKAGE',
                      11, 'PACKAGE BODY', 12, 'TRIGGER',
                      13, 'TYPE', 14, 'TYPE BODY',
                      19, 'TABLE PARTITION', 20, 'INDEX PARTITION', 21, 'LOB',
                      22, 'LIBRARY', 23, 'DIRECTORY', 24, 'QUEUE',
                      28, 'JAVA SOURCE', 29, 'JAVA CLASS', 30, 'JAVA RESOURCE',
                      32, 'INDEXTYPE', 33, 'OPERATOR',
                      34, 'TABLE SUBPARTITION', 35, 'INDEX SUBPARTITION',
                      39, 'LOB PARTITION', 40, 'LOB SUBPARTITION',
                      42, 'MATERIALIZED VIEW',
                      43, 'DIMENSION',
                      44, 'CONTEXT', 47, 'RESOURCE PLAN',
                      48, 'CONSUMER GROUP',
                      51, 'SUBSCRIPTION', 52, 'LOCATION', 56, 'JAVA DATA',
                      57, 'SECURITY PROFILE',
                     'UNDEFINED'),                --  THIS IS THE LINE TO LOOK AT
       o.ctime, o.mtime,
       to_char(o.stime, 'YYYY-MM-DD:HH24:MI:SS'),
       decode(o.status, 0, 'N/A', 1, 'VALID', 'INVALID'),
       decode(bitand(o.flags, 2), 0, 'N', 2, 'Y', 'N'),
       decode(bitand(o.flags, 4), 0, 'N', 4, 'Y', 'N'),
       decode(bitand(o.flags, 16), 0, 'N', 16, 'Y', 'N')
from sys.obj$ o, sys.user$ u
where o.owner# = u.user#
  and o.linkname is null
  and (o.type# not in (1  /* INDEX - handled below */,
                      10 /* NON-EXISTENT */)
       or
       (o.type# = 1 and 1 = (select 1
                              from sys.ind$ i
                             where i.obj# = o.obj#
                               and i.type# in (1, 2, 3, 4, 6, 7, 9))))
  and o.name != '_NEXT_OBJECT'
  and o.name != '_default_auditing_options_'
union all
select u.name, l.name, NULL, to_number(null), to_number(null),
       'DATABASE LINK',
       l.ctime, to_date(null), NULL, 'VALID','N','N', 'N'
from sys.link$ l, sys.user$ u
where l.owner# = u.user#
Which of my indexes are redundant and can be dropped ?

Many a database drags along with unused or lightly used indexes, serving long forgotten purposes. They add an unwanted extra workload to a critical resource.
Two methods came to my mind targeted at nailing down passive, resource-consuming indexes. This is not a 100% solution, and it is not a point-and-click, problem-fixed solution. The results have to be evaluated and (hard) decisions have to be made; the benefits and possible trade-offs have to be weighed.
This is the first solution, working through the buffer cache i.e. making use of the v$bh view and other sys objects.
Some background: databases are alive and evolving as long as the data they hold is of value to the company that owns the resource. Databases change over time, as they reflect the business they support. Tables, indexes, procedures, packages, reports, user interfaces and so forth are added and modified over time to reflect the business needs. As time goes by, the complexity and the dependencies increase. As people also have a tendency to move on in their lives, the knowledge about why a certain 'device' was created is no longer available. And few of us have the guts or time to do a real cleanup in the database and associated file systems, and drop or delete what we think is no longer in use. Hence we drag around indexes (and tables, ...) that serve long forgotten purposes.

The idea is to exclude indexes in active use from all the indexes in the system. And then check the leftovers to see if they have any use or value (this is the more manual part).
For Oracle versions prior to 9i, we can only see whether an index has been used within a given time frame. From a given number of indexes in a system, we can measure whether a given index has been used during the time we spend analyzing and measuring. This excludes the 'once a year' use of an index, for example for reporting numbers for the general ledger annual report. And maybe we would be better off creating such an index when it is called for, and dropping it when the task has completed.
The first goal is to find indexes that are actually in use, for purposes other than being maintained by DML. I have chosen two ways to solve this: this first part works through the buffer cache (db_block_buffers), and a following part (still in work) works through the library cache in the shared pool (i.e. v_$sql et al.).
Buffer cache solution:
If an index is in use for active lookups, at least some of its blocks must show up in the buffer cache, and remain there until aged out. If you don't use all three buffer caches, and have a newer database version, this is not very complicated to solve - it just takes some time. Create a keep or recycle pool of decent size, and target the suspect indexes at this pool, i.e. alter index X storage (buffer_pool [keep|recycle]). After a while (hours or more), check whether the targeted indexes are present in the pool at all, or are present with a large percentage of dirty blocks. If that is the case, evaluate whether the index can be dropped (it seems likely).
If you use all three buffer caches, or run an older version, here is a more generic solution. It goes: select all indexes, the total number of blocks in the buffer caches, the number of dirty blocks and a time stamp per index, and store this as a table. Create a view designed for updating the table, and a stored procedure to do it. Run the stored procedure regularly to update the table; a small routine for this is supplied. The procedure updates the result table with a time stamp and the increased number of blocks if the total number of blocks, or the number of dirty blocks, for a given index has increased. Run the procedure every 5-15 minutes for a day or more, depending on how fragile your buffer caches are and on the database usage pattern. Then check the table. Don't do any online index rebuilds, or anything else that is likely to disturb your metering tool. If this can't be avoided, clear the metering values in the table and/or restart the procedure.
If an index hasn't had any blocks (recorded) in the buffer caches, you can most likely drop it. It might be one of these 'once a period' indexes - take appropriate action if that's the case. Or it might be that the underlying table is never accessed; export the table and all definitions, then rename or drop it. Some evaluation is needed here.
If the major part of the blocks of a given index are dirty, it is likely that it is only present in the buffer cache due to DML. Evaluate whether the index can be dropped, record the index definition, and drop or invalidate the index (if it is large).
Run the script (you might have to change the tablespaces). Run the procedure oci_index_cache_block_prc, or the supplied loop, as fitting. When done, evaluate.
If you have lots of schemas and/or a large, complex setup, use the commented code to evaluate the schemas one by one. The code here can have a negative impact on busy systems.
/****************************************************************************************
  This program is free ware under terms of GNU's General Public License®
  and Open Source Foundation® as long as the copyright notice is not removed.
      Sysdate:         January 2002
      Username:        © 2002 Svend Jensen, Svend@OracleCare.Com
Rem
Rem     ==========================================================================
Rem     DISCLAIMER:
Rem     This script is provided for Oracle DBAs "AS IS". It is NOT supported
Rem     by author, Oracle World Wide Technical Support nor third parties.
Rem     The script has been tested and appears to work as intended.
Rem     NO responsibility taken for working nor use, no matter the circumstances.
Rem     You should always run new scripts on a test instance initially.
Rem     ===========================================================================
Rem     # Improvements and extensions are welcome.
******************************************************************************************/
    -- connect sys/ [as sysdba]
    -- creating base table for index usage investigation
    -- can be global temporary if you like. But can give problems with commit.
    -- Created: Svend Jensen 2002
    -- remove old stuff
    drop index oci_index_cache_block_udx
    ;
    drop view  oci_index_cache_block_vw
    ;
    drop table oci_index_cache_block_use
    ;
    -- now (re)create table, view, index and procedure
    create table oci_index_cache_block_use
    tablespace tools
    as
    select /*+ all_rows */
    object.owner, object.obj# index_obj#, object.index_name, object.table_obj#, object.table_name,
    sum(decode(bhead.dirty,'Y',1,0)) dirty#datablock,
    count(bhead.obj#) total#datablock, sysdate timestamp
    from
        (select  /*+ all_rows */
         u.name owner, o.obj#, o.name index_name, i.bo# table_obj#, o2.name table_name
         --  i.bo# is table obj# for index row in ind$
         from   obj$ o, ind$ i, obj$ o2, user$ u
         where  o.obj#     = i.obj#
         and    o2.obj#    = i.bo#
         and    o.dataobj# = i.dataobj#
         and    o.owner#   = u.user#
         --  and    o.owner#   = (select user# from user$ where name = upper('$user_name')) --
         --  uncomment if only user/schema owner to be checked, and fill in $user_name --  
        )     
        object,
        (select  objd as obj#, dirty
         from v_$bh 
         where  status != 'free'
        )      
        bhead
        where object.obj#  = bhead.obj#(+)
        group by object.owner, object.obj#, object.index_name, object.table_obj#, object.table_name
        order by total#datablock desc
        ;

       -- for update of the base table with new or change entries found in buffer_cache
        create or replace view oci_index_cache_block_vw
       (owner, index_obj#, index_name, table_obj#,
        table_name, dirty#datablock,
        total#datablock, timestamp)
       as (
           select /*+ all_rows */
           object.owner, object.obj# index_obj#, object.index_name, object.table_obj#, object.table_name,
           sum(decode(bhead.dirty,'Y',1,0)) dirty#datablock,
           count(bhead.obj#) total#datablock, sysdate timestamp
           from
                (select  /*+ all_rows */
                 u.name owner, o.obj#, o.name index_name, i.bo# table_obj#, o2.name table_name
                 --  i.bo# is table obj# for index row in ind$
                 from   obj$ o, ind$ i, obj$ o2, user$ u
                 where  o.obj#     = i.obj#
                 and    o2.obj#    = i.bo#
                 and    o.dataobj# = i.dataobj#
                 and    o.owner#   = u.user#
                 --  and    o.owner#   = (select user# from user$ where name = upper('$user_name')) --
                 --  uncomment if only user/schema owner to be checked, and fill in $user_name --  
                )     
                object,
                (select  objd as obj#, dirty
                 from v_$bh 
                 where  status != 'free'
                )      
                bhead
                where object.obj#  = bhead.obj# -- (+): select only hits in buffer_cache
                group by object.owner, object.obj#, object.index_name,
                object.table_obj#, object.table_name
                ) with check option
       ;

       -- create index on table oci_index_cache_block_use for updating
       create unique index oci_index_cache_block_udx
       on oci_index_cache_block_use
       (owner, index_obj#, index_name, table_obj#, table_name)
       compute statistics
       tablespace INDX 
       ;


       -- create update procedure
      create or replace procedure oci_index_cache_block_prc
      as
      /* variables */
      v_owner              varchar2(30) ;
      v_index_obj#         number       ;
      v_index_name         varchar2(30) ;
      v_table_obj#         number       ;
      v_table_name         varchar2(30) ;
      v_dirty#datablock    number       ;
      v_total#datablock    number       ;
      v_timestamp          date         ;
      /* cursors */
      cursor cur_oci_index is select * from OCI_INDEX_CACHE_BLOCK_VW ;
      BEGIN
      open   cur_oci_index ;
      fetch  cur_oci_index into  v_owner, v_index_obj#, v_index_name, v_table_obj#,
                                 v_table_name, v_dirty#datablock,
                                 v_total#datablock, v_timestamp ;
     while cur_oci_index%FOUND
     LOOP
         update OCI_INDEX_CACHE_BLOCK_USE
         set  total#datablock = v_total#datablock,
              timestamp        = v_timestamp
         where owner           = v_owner
         and   index_obj#      = v_index_obj#
         and   index_name      = v_index_name
         and   table_obj#      = v_table_obj#
         and   table_name      = v_table_name
         and   total#datablock < v_total#datablock
         ;

        update OCI_INDEX_CACHE_BLOCK_USE
        set  dirty#datablock  = v_dirty#datablock,
        timestamp        = v_timestamp
        where owner           = v_owner
        and   index_obj#      = v_index_obj#
        and   index_name      = v_index_name
        and   table_obj#      = v_table_obj#
        and   table_name      = v_table_name
        and   dirty#datablock < v_dirty#datablock
        ;                 
        
        fetch  cur_oci_index into  v_owner, v_index_obj#, v_index_name, v_table_obj#,
                                   v_table_name, v_dirty#datablock,
                                   v_total#datablock, v_timestamp ;
        END LOOP ;
        commit ;
        close cur_oci_index ;
  
     EXCEPTION
          WHEN OTHERS THEN
           IF cur_oci_index%ISOPEN THEN
                   close cur_oci_index ;
              END IF ;
              dbms_output.put_line ('Others exception in oci_index_cache_block_prc '||SQLERRM) ;
              RAISE ;

     END ;
     /
/**************** a little update loop **************************
    begin
    for j in 1..30 loop
   oci_index_cache_block_prc ;
   dbms_lock.sleep(300) ; -- sleep 5 minutes
   end loop ;
  end ;
  /
*********************************************************/

End of story - have fun.

Further Reading: Oracle concepts manual, http://www.oracle.com http://technet.oracle.com , http://metalink.oracle.com
What is the difference between a unique index and a unique constraint?
A constraint is defined by Oracle in the 8.1 Concepts manual, chapter 1, section on Integrity Constraints as being “a declarative way to define a business rule for a column of a table. An integrity constraint is a statement about a table's data that is always true.” Personally, I have always considered constraints to be referential integrity rules that govern the allowable contents of a column and in the case of a primary key (PK) and unique key (UK) constraint, in conjunction with foreign keys (FK), define the formal relationship between columns and rows in one table to another.
The difference between a unique index and a UK or PK constraint starts with the fact that the constraint is a rule while the index is a database object that is used to provide improved performance in the retrieval of rows from a table. It is a physical object that takes space and is created with the DDL command: create index or as part of a create table with PK or UK constraints or an alter table command that adds these constraints (see SQL Manual).
Briefly the constraints are:
Not Null    Column value must be present
Unique Key    Column(s) value(s) must be unique in table or null (see note below)
Primary Key    UK + Not Null which equates to every column in the key must have a value
        and this value is unique so the PK uniquely identifies each and every row
        in the table
Foreign Key    Restricts values in a table column to being a value found in the PK or UK
        Constraint on the referenced table (parent/child relationship)
Check        Tests the column value against an expression (rule)
Technically it would be possible for a relational database management system, RDBMS, vendor to support PK and UK constraints without using an index at all.  In the case of a UK or PK constraint the RDBMS could perform a full table scan to check for the presence of a key value before performing the insert but the performance cost of doing this for anything other than a very small table would be excessive probably rendering the RDBMS useless.  So to the best of my knowledge every commercial RDBMS vendor that supports PK and UK constraints does so using indexes.
Prior to Oracle 8 if you defined a PK or a UK constraint the Oracle RDBMS would create a unique index to support enforcement of the constraint.  If an index already existed on the constrained columns Oracle would use it rather than define another index on the same columns.  Starting with Oracle version 8 Oracle has the ability to enforce PK and UK constraints using non-unique indexes.  The use of non-unique indexes supports deferring enforcement of the constraint until transaction commit time if the constraint is defined at creation time as deferrable.  Also starting with version 8 Oracle has the ability to place constraints on tables where the existing data does not meet the requirements imposed by the constraint through use of a novalidate option (see SQL Manual).
The practical difference between using a unique index to support data integrity and using a UK or PK constraint on the same columns (since Oracle will build an index to support the constraint if you do not) is that you can only define FK constraints when the PK or UK constraint exists.  Also, in the case of a PK constraint, Oracle will add a not null constraint to the columns in the constraint when it is added, to meet the PK requirement of uniquely identifying each and every row in the table; there is no such restriction on a unique index.  The PK and UK constraints, along with the FK constraints that reference them, also provide a form of documentation of the relationships between objects.  Some query tools make use of these relationships to define joins between the tables - Oracle Discoverer, for example.  In the absence of an entity relationship diagram (ERD), having PK, UK, and FK constraints defined in the database can be beneficial when trying to determine how to find and how to query data.
The Oracle RDBMS Data Dictionary views All/ DBA/ USER_CONSTRAINTS and ALL/ DBA/ USER_CONS_COLUMNS may be used to locate constraints on a table and the columns being constrained.
If you drop or disable a PK or UK constraint that is supported by a unique index the index is dropped with the constraint.  If a non-unique index is used to support the constraint the index is not dropped with the constraint.  This second condition is effective only with version 8 and higher.
Note – UK constraints allow the constrained column to be NULL.  Null values are considered to be valid and do not violate the constraint.
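A minimal sketch of a PK being enforced through a non-unique index (the names are made up). Because the constraint is declared deferrable Oracle cannot use a unique index, so it picks up the pre-existing non-unique index on the same columns - and, as noted above, that index survives if the constraint is later dropped:
    create table emp_demo (empno number, ename varchar2(30));

    create index emp_demo_i1 on emp_demo (empno);   -- deliberately non-unique

    alter table emp_demo add constraint emp_demo_pk
      primary key (empno)
      deferrable initially immediate;               -- reuses emp_demo_i1 rather than building a unique index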

Further Reading: Oracle provides information on constraints in the Concepts manual, the Database Administrators Manual, the Application Developers Guide - Fundamentals, and the SQL manual.
There are also other FAQ articles related to Constraints.
I am unable to add a PK constraint to a table - what is the best way of finding the problem rows ?
How can I check if I have the right indexes for the foreign key constraints on a child table ?
How can I identify which index represents which primary or unique key constraint ?
Is there a performance impact on the database of doing an analyze ?
If you do an analyze ..  estimate statistics without a sample size, the impact is usually pretty negligible as Oracle checks only about 1043 rows, so the tests for count distinct, count(*) and so on are quite cheap.  However, if you do
    analyze table XXX compute statistics
    for table
    for all indexes
    for all indexed columns;
on a very large table, the impact will be severe. To a large degree, the impact comes from the physical I/O that Oracle has to do to acquire its sample set, and the CPU and memory (and as a side-effect I/O) usage involved in sorting. Expect to see db file sequential read, db file direct path read, db file direct path write, buffer free waits, and write complete waits as you do a large analyze.
Of course, there is then a relatively small undo/redo overhead as the data dictionary tables are updated with the new statistics. So even if you are doing a very small estimate, but you have a very large number of objects analyzed in a stream, then this part of the activity could have an impact on the rest of the system. You also have to remember that when the statistics on an object change, any cursors in the library cache that are dependent on that object become invalid so that the optimizer can generate a new execution plan - this could also have a temporary impact on performance as huge amounts of hard-parsing takes place.
Having said that, there are usually not very many objects that need frequent analysis; those that do usually do not need a very high estimate, and it is likely that only a handful of columns in the database need to have histograms generated.
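If you are on 8i or later, a lighter-weight alternative is dbms_stats with a modest sample size - a sketch, with made-up schema and table names and an arbitrary sample percentage:
    begin
      dbms_stats.gather_table_stats(
        ownname          => 'APP_OWNER',
        tabname          => 'BIG_TABLE',
        estimate_percent => 5,
        cascade          => true,                        -- also gather index statistics
        method_opt       => 'for all indexed columns');  -- column statistics only where indexed
    end;
    /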
How do I associate an active session with a rollback segment ?
If you are looking at this FAQ I can think of two different things you are looking for. The first is that you want to assign a session to using a specific rollback segment, which this FAQ will answer. The second is what rollback segment is a session using? This FAQ will also provide the SQL to answer that question.
To assign a session to use a specific rollback segment for a transaction issue the set transaction command:
  set transaction use rollback segment roll02;
The set transaction command must be the first statement since the prior commit, rollback, or session creation for it to work; otherwise, you will get an Oracle error: ORA-01453: SET TRANSACTION must be first statement of transaction. To make sure that the command is the first command issued it is common to see the command scripted immediately following a rollback statement.
  rollback;
  set transaction use rollback segment roll02;
  update big_table set fld1 = 'some value';
  commit;
The first commit or rollback ends the transaction, and the rollback segment assignment along with it. If you need to assign multiple transactions then the set command has to be re-issued after every commit. Inside pl/sql code you can use the Oracle-supplied procedure dbms_transaction.use_rollback_segment('segname') to set the rollback segment for a transaction.
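A minimal sketch of the pl/sql variant (the segment, table and column names are made up):
    begin
      rollback;                                        -- make sure a new transaction starts here
      dbms_transaction.use_rollback_segment('ROLL02');
      update big_table set fld1 = 'some value';
      commit;                                          -- the assignment ends with the commit
    end;
    /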
To find the list of rollback segments available to the database query the dictionary view dba_rollback_segs. Additional information about rollback segments is contained in the dynamic performance views v$rollstat and v$rollname. Because v$rollstat does not contain the segment name join v$rollstat to v$rollname on the usn column for this information.
  select segment_name, status
  from   dba_rollback_segs;
 
  SEGMENT_NAME                   STATUS
  ------------------------------ ----------------
  SYSTEM                         ONLINE
  ROLL01                         ONLINE
  ROLL02                         ONLINE
  ROLL03                         ONLINE
  ROLL04                         ONLINE
The following SQL will show sessions assigned to rollback segments. Note that only transactions are assigned to rollback segments and non-distributed transactions involve a DML operation: insert, update, or delete. So sessions that have issued only normal queries do not show up as being assigned to rollback segments since these sessions will access rollback segments only to read data from them. And if a session needs to read data changed by another session that data can be in any segment.
    select  s.username,  s.sid,       rn.name,     rs.extents
               ,rs.status,  t.used_ublk,  t.used_urec
               ,do.object_name
        from    v$transaction   t
               ,v$session       s
               ,v$rollname      rn
               ,v$rollstat      rs
               ,v$locked_object lo
               ,dba_objects     do
        where  t.addr        = s.taddr
        and    t.xidusn      = rn.usn
        and    rn.usn        = rs.usn
        and    t.xidusn      = lo.xidusn(+)
        and    do.object_id  = lo.object_id;
This should answer the question.
Which Version of the database am I running ?
When you connect through SQL*Plus, the banner shows the version of the Oracle database you are connecting to.
If you are not connecting through SQL*Plus, then look at v$version:
    select * from v$version;
I've got a corrupted data file, and Oracle won't start. I don't need the data so how do I open the database and drop data file?
There is no easy way to drop a datafile from a tablespace. The only way to remove a datafile from a database is to drop the owning tablespace. There are a few steps to follow:
If you are running in Noarchivelog mode
1. mount the database - startup mount
2. drop the datafile - alter database datafile xxx offline drop
3. open the database - alter database open
4. check all objects belong to that tablespace:
       select owner, segment_name, segment_type
       from dba_segments
       where tablespace_name='tbs_name'
5. export out all the objects in that tablespace
6. drop the tablespace - drop tablespace tbs_name including contents
7. Delete the physical datafiles belonging to the tablespace
8. Recreate the tablespace, import back the objects

If you are running in Archivelog mode
1. mount the database - startup mount
2. drop the datafile - alter database datafile xxx offline
(Note: the datafile is still part of the database and is marked only as offline in the controlfile. Just make sure you don't use the same data file name again)
3. Remove the physical data file at OS level
4. open the database - alter database open
5. At the right time, you can
    export the objects belonging to this tablespace,
    drop the tablespace,
    create back the tablespace with appropriate datafiles and
    import the objects back.

Addendum (28th Jan 2002)
Michael.Trothe@originenergy.com.au offers the following: There is another way. This will only work if there is no data in the file that you really require.
1. mount the database
2. alter the datafile offline drop
3. delete the file from the sys.file$ table.
This prevents the missing file from being recognised when you do hot backups, and from stopping you placing the tablespace into backup mode.
4. open the database in restricted mode then delete any objects that have references to the missing datafile.
5. shut the database down
6. startup nomount the database
7. recreate the control file and alter database open resetlogs.
All reference to the missing datafile should be gone.
Editor's note: Since this strategy involves direct modification of the data dictionary, your database would no longer be supported by Oracle Corp.

Further reading: Metalink Note:111316.1 How to 'DROP' a Datafile from a Tablespace
Why is Explain Plan showing a different execution path from tkprof ?
There are several possibilities, based on the fact that tkprof shows you the path that actually took place at run time and explain plan tells you the path that would be followed now. (In fact if you do tkprof explain=, you can get both, possibly contradictory, paths in the same output file).
a) The user at runtime is not the user explaining, so the query is actually addressing different objects, despite the names being the same. (Or perhaps it is the same user, but a synonym or view has been redefined to point to a different object).
b) The optimizer mode for the run-time session is not the same as the optimizer mode for the explain session (or any one of 50 other parameters that affect the optimizer could have changed)
c) Indexes have been created or dropped between the generation of the trace file and the explain plan
d) Analyze (or dbms_stats) has been run against some of the objects in the SQL between the generation of the trace file and the explain plan
e) The SQL includes bind variables - explain plan will assume these are character variables; at run-time Oracle may have known that they were not, and therefore different coercion effects may have affected the availability of indexes.
I can't think of any more reasons at the moment, but I'm sure that there are probably a couple of other reasons.
I have taken over an existing tablespace and need some ideas for setting extent sizes when re-organizing it ?
How do I rename a data file ?
Datafiles can be moved or renamed using one of two methods: alter database or alter tablespace.
The main difference between them is that alter tablespace only applies to datafiles that do not contain the SYSTEM tablespace, active rollback segments, or temporary segments, but the procedure can be performed while the instance is still running. The alter database method works for any datafile, but the instance must be shut down.
The alter database method:
1. Shut down the instance.
2. Rename and/or move the datafile using operating system commands.
3. Mount the database and use alter database to rename the file within the database. A fully qualified filename is required, in the syntax of your operating system. For example, to rename a file called 'data01.dbf' to 'data04.dbf' and move it to a new location at the same time (at this point in the example the instance has been shut down); and
4. Start the instance.
    > svrmgrl
      SVRMGR>  connect sys/oracle as sysdba;
      SVRMGR>  startup mount U1;
      SVRMGR>  alter database rename file '/u01/oracle/U1/data01.dbf ' TO '/u02/oracle/U1/data04.dbf ' ;
      SVRMGR> alter database open;
Notice the single quotes around the fully qualified filenames and remember, the files must exist at the source and destination paths. The instance is now open using the new location and name for the datafile.
The alter tablespace method:
This method has the advantage that it doesn't require shutting down the instance, but it only works with non-SYSTEM tablespaces. Further, it can't be used for tablespaces that contain active rollback segments or temporary segments.
1. Take the tablespace offline.
2. Rename and/or move the datafile using operating system commands.
3. Use the alter tablespace command to rename the file in the database.
4. Bring the tablespace back online.
    SVRMGR> connect sys/oracle as sysdba
    SVRMGR> alter tablespace app_data offline;
    SVRMGR> alter tablespace app_data rename datafile '/u01/oracle/U1/data01.dbf ' TO '/u02/oracle/U1/data04.dbf ' ;
    SVRMGR> alter tablespace app_data online;
The tablespace will be back online using the new name and/or location of the datafile.
Both of these methodologies can be used within Oracle Enterprise Manager also.

Further reading N/A
How big, in bytes, is a ROWID?
There are three formats of ROWIDs.

   1. Short or Restricted
   2. Long or Extended
   3. Universal ROWID
Short or Restricted ROWID is stored as a six-byte hexadecimal string: DBA (4 bytes) + Row (2 bytes). This format is used when the segment that contains the addressed row can be unambiguously determined, for example in row piece pointers, in normal indexes on nonpartitioned tables and in local indexes for partitioned tables. This is the only format used in Oracle 7. The external representation uses 18 characters: BBBBBBBB.RRRR.FFFF (BBBBBBBB: block, RRRR: row, FFFF: file). Internal code = 69.
Long or Extended ROWID is stored as a ten-byte hexadecimal string. Object ID (4 bytes) + DBA (4 bytes) + Row (2 bytes). Long or Extended ROWID contains the object number, in addition to tablespace-relative DBA, and therefore completely identifies the row. It is used in columns of type ROWID and in global indexes for partitioned tables. This form is used in the user columns of ROWID type. This format was introduced in Oracle 8. The external representation uses 18 characters (too long to elaborate). Internal code = 69.
UROWID is stored as a hexadecimal string of up to 3950 bytes. It represents the logical address of a row in an index-organized table or a foreign table. It has three subtypes: physical, logical (primary-key based) and remote (foreign). The internal code is 208.
Note: DBA is Data Block Address
What would be a simple list of events a DBA should be alerted about?
Further reading: http://www.google.com/search?sourceid=navclient&q=oracle+monitoring
What is the largest number of columns allowed in a table?
The maximum number of columns allowed in Oracle 7 is 255. The limit was increased from 255 to 1000 columns in Oracle release 8.0.
Is there any way to estimate how long a create table/index has left to run ?
For an understanding of this progress metering, there are a few cases to distinguish:
- a simple create table as select, without a distinct, order by or group by
- a create table as select with a sort phase (distinct, order by or group by)
- a create index.
For the latter two, there are three phases during the creation:
- full table scan
- sorting the data (needed also for grouping)
- writing the output.
These phases correspond to 'rows' in v$session_longops, as the index creation progresses.
The creation of an index is discussed below. For table creation it is much the same (or even more simple).
Suppose an index needs to be created on the column 'name' of the table 'customer'. A sqlplus session is started, and the first statement is used to find out the session identifier (sid) of that session, e.g.:
    select s.sid
    ,      p.pid
    ,      p.spid
    from   v$process p
    ,      v$session s
    where  p.addr = s.paddr
    and    s.audsid = userenv('sessionid')
A second sqlplus session is used for progress monitoring. First, define the sid:
    define sid = <sid found in the first session>
    set verify off
After that, start the create index in the first session. In the second session, type:
    select sid
    ,      message
    from   v$session_longops
    where  sid = &sid
    order by start_time;
In the first phase, this query will return output like:
SID MESSAGE
--- -----------------------------------------------------------------------
 11 Table Scan:  CONVERSIE.RB_RELATIE: 7001 out of 21460 Blocks done
While the table is being read, output will be written to the temporary tablespace (if the sort_area_size is not sufficient).
When the table scanning phase has been finished, the sort/merge begins. A repeat of the statement shows:
SID MESSAGE
--- -----------------------------------------------------------------
 11 Table Scan:  CONVERSIE.RB_RELATIE: 21460 out of 21460 Blocks done
 11 Sort/Merge:  : 2107 out of 3116 Blocks done
During this phase, there is reading and writing to the temporary tablespace.
In the last phase, the index entries have been sorted, and are being written to the index segment. V$session_longops shows:
SID MESSAGE
--- -----------------------------------------------------------------
 11 Table Scan:  CONVERSIE.RB_RELATIE: 21460 out of 21460 Blocks done
 11 Sort/Merge:  : 3116 out of 3116 Blocks done
 11 Sort Output:  : 800 out of 3302 Blocks done
In this last phase, the temporary tablespace is being read from, and writes occur to the tablespace in which the index is created. Note that the index appears as a temporary segment, once the last phase has been started. The segment is 'promoted' to a real index segment, after the physical writing has been done.
Note that entries for a certain phase appear in v$session_longops only after about 10 seconds. If a phase takes less time than that, it will not appear in the view.
Note also that v$session_longops might contain data from previous sessions and/or earlier statements. In that case, one can filter on start_time.
With this knowledge in mind, one can estimate how much work has to be done!
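In 8i and later, v$session_longops also carries sofar, totalwork and time_remaining columns, so a rough percentage and time estimate can be had directly - a sketch:
    select sid, message,
           round(100 * sofar / totalwork, 1) pct_done,
           time_remaining                    secs_left
    from   v$session_longops
    where  sid = &sid
    and    totalwork > 0
    order  by start_time;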

Further reading: See the Oracle database reference guide (version 9.0.1), on OTN:
http://download-eu.oracle.com/otndoc/oracle9i/901_doc/server.901/a90190/ch3161.htm#992382
How much redo is filled in my redo logs?
Finding what percentage of the current redo log is filled is a bit tricky, since the information is not exposed in any V$ views. We have to query the X$ tables directly to get that information. The X$ views we use here are x$kccle (Kernel Cache Controlfile component Log file Entry) and x$kcccp (Kernel Cache Checkpoint Progress).

select
    le.leseq                 current_log_sequence#,
    100 * cp.cpodr_bno / le.lesiz        percentage_full
from
    x$kcccp cp,
    x$kccle le
where
    le.leseq =cp.cpodr_seq
and    le.ledup != 0
;



CURRENT_LOG_SEQUENCE# PERCENTAGE_FULL
--------------------- ---------------
                    6      .428710938
Here x$kcccp.cpodr_bno tells the current log block in the redo log file and x$kccle.lesiz gives the number of (log) blocks in that log file.
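If you also want to see the size of the current log file, a minimal sketch (assuming the log sequence number can be joined to v$log, and that the log does not switch between the two reads) is:

select
    le.leseq                                current_log_sequence#,
    lg.bytes / 1048576                      log_size_mb,
    round(100 * cp.cpodr_bno / le.lesiz, 2) percentage_full
from
    x$kcccp cp,
    x$kccle le,
    v$log   lg
where
    le.leseq = cp.cpodr_seq
and    le.ledup != 0
and    lg.sequence# = le.leseq
;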
How do I find out which users have the rights, or privileges, to access a given object ?
Information on user object and system access privileges is contained in the rdbms data dictionary tables. For this specific question the most likely dictionary table of interest is DBA_TAB_PRIVS:
    Name                         Null?    Type
    ---------------------------- -------- ----------------------------
    GRANTEE                      NOT NULL VARCHAR2(30)    <== Receiver of privilege
    OWNER                        NOT NULL VARCHAR2(30)
    TABLE_NAME                   NOT NULL VARCHAR2(30)
    GRANTOR                      NOT NULL VARCHAR2(30)    <-- Giver of privilege
    PRIVILEGE                    NOT NULL VARCHAR2(40)
    GRANTABLE                             VARCHAR2(3)    <-- Grantee has ability to grant privilege to others
A description of the column values is available in the Oracle 8i Reference manual, but they should all be pretty obvious. Since the DBA_TAB_PRIVS dictionary table (view) contains all grants on all objects in the database, this table is suitable for being queried for any Oracle object: tables, views, stored code, etc.... This also means this view is a good source for SQL to generate grant statements for tables, views, stored code, etc.... Example code will follow later.
Before continuing any further I want to note that privileges are divided into two classes: user access or DML access privileges on Oracle objects (tables, indexes, views...) and system privileges (create session, create table, create user...). In general you should restrict users to possessing only those privileges necessary for them to use their authorized applications, and those privileges should be inherited through roles set up to support the application.
Privileges are issued with the GRANT command and revoked with the REVOKE command. Examples:
    GRANT select, insert, update, delete, references ON my_table TO user_joe ;
    REVOKE insert, delete ON my_table FROM user_joe ;
    GRANT create public synonym TO user_joe ;
Some other useful security related dictionary views are:
ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE    All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD    All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES        List of all roles in the database
DBA_ROLE_PRIVS        Roles granted to users and to other roles
ROLE_ROLE_PRIVS        Roles granted to other roles
ROLE_SYS_PRIVS        System privileges granted to roles
ROLE_TAB_PRIVS        Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS        System privileges granted to current user
USER_TAB_PRIVS        Grants on objects where current user is grantee, grantor, or owner
WARNING: the three dictionary views whose names start with ROLE_ only report on roles to which the current user has access.
UT1> l
  1  select   grantee,
  2           privilege,
  3           grantable  "Adm",
  4           owner,
  5           table_name
  6  from     sys.dba_tab_privs
  7  where    grantee  =  upper('&usernm')
  8* order by grantee, owner, table_name, privilege


  GRANTEE      PRIVILEGE  Adm OWNER        TABLE_NAME
  ------------ ---------- --- ------------ -------------------------
  SEFIN        DELETE     NO  SYSTEM       SRW_FIELD
               INSERT     NO  SYSTEM       SRW_FIELD
               SELECT     NO  SYSTEM       SRW_FIELD
               UPDATE     NO  SYSTEM       SRW_FIELD
Note that break on grantee is in effect to suppress repeating the user name.
 set echo off
 rem
 rem  19980729  M D Powell   New script.
 rem
 set verify off
 set pagesize 0
 set feedback off
 spool grt_&&owner._&&table_name..sql

 select 'REM  grants on &&owner..&&table_name'
 from sys.dual ;

 select 'grant '||privilege||' on '||lower(owner)||'.'||
         lower(table_name)||' to '||grantee||
         decode(grantable,'YES',' with grant option',NULL)||
         ' ;'
 from   sys.dba_tab_privs
 where  owner      = upper('&&owner')
 and    table_name = upper('&&table_name')
 order by grantee, privilege ;

 spool off
 undefine owner
 undefine table_name
Sample output:
 grant INDEX on jit.wo_master to EDSJIT ;
 grant INSERT on jit.wo_master to EDSJIT with grant option ;
 grant REFERENCES on jit.wo_master to EDSJIT ;
 grant SELECT on jit.wo_master to EDSJIT with grant option ;
The script above can be particularly useful in a development environment where export/import is used to make copies of a test bed across machines: it comes in pretty handy for recreating the privileges if you have lost them for whatever reason. It is also a nice piece of code for reverse engineering grant scripts from a production database.
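Note that the query and script above only report direct grants. If access has been given through a role, you also have to look at DBA_ROLE_PRIVS. A rough sketch, reusing the &&owner and &&table_name substitution variables from the script above (it only follows one level of role nesting; a CONNECT BY on DBA_ROLE_PRIVS would be needed for roles granted to roles):
 select tp.privilege,
        tp.grantee     as role_name,
        rp.grantee     as username
 from   sys.dba_tab_privs  tp,
        sys.dba_role_privs rp
 where  tp.owner      = upper('&&owner')
 and    tp.table_name = upper('&&table_name')
 and    tp.grantee    = rp.granted_role
 order by rp.grantee, tp.privilege ;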

Further reading: For a list of all system privileges see the Oracle version# SQL manual. For information on managing user privileges see the DBA Administration manual. Starting with version 7.3 see the Oracle ver# Reference Manual for information on the dictionary tables (views) and for more information on using the dictionary see the FAQ for How do I find information about a database object: table, index, constraint, view, etc... in Oracle ?

How can I tune a tablescan?
This particular example is typical of the generic case. We have a query like the following on a 40GB table, but the index is ‘not selective’:
select
    {list of columns}
from
    very_large_table t
where
    colX = 'ABC'
order by
    colY
;
The first problem with this question is the one of ambiguity – when the poster says “cost” do they really mean the figure reported in the COST column of the execution plan table, or do they mean the resource consumption (e.g. number of physical disk reads) when the query actually runs.
In the former case, it is not necessarily safe to assume that there will be a direct correlation between the COST, the resources used, and the run-time. In the latter, there may not be a direct correlation between resource usage (e.g. number of physical disk I/Os) and the run-time when the query is transferred from development to production, because there may be more competition for physical resources on production, so response times may get worse.
So how can you tune a full table scan for very large tables? In a very real sense, you can't. Once you are committed to a full tablescan, that's it – you have to scan every block (and every row) in the table, typically using multiblock reads. The block scans are likely to generate a lot of I/O; the row examinations will consume a lot of CPU.
Of course, there are a few steps you can take to minimize the resource requirement – but their side effects need to be considered carefully.
Can you make the scan times faster?
In principle, yes, a little. Increasing the size of the parameter db_file_multiblock_read_count so that the read requests are fewer and larger may help. In practice it is not always that simple. Apart from the side-effects on the optimizer’s calculation, you may find that odd interference effects from the various hardware levels actually make it hard to find an optimum setting for the parameter. Things like disk track size, disk buffer size, network buffer size, O/S stripe size and so on can result in some very odd “impedance” effects. You may have to find a best fit for your system by a tedious process of trial and error.
In practice, you may find that the hardware “sees” your big tablescan starting, and adopts a read-ahead strategy that makes your tuning attempt redundant. (This is actually a problem with mixed-load SANs: they tend to get over-enthusiastic about tablescans at the cost of random I/Os – so on average their rate of throughput can look good while the end-users (and v$session_event) are complaining about slow disk response times.)
Would parallelism help ?
In principle, yes; but only for the query itself. For a tablescan of degree N the table would effectively be broken up into N parts, allowing N separate processes to scan and extract the required data before forwarding the minimum possible data volume on to another set of processes to handle the sorting. So long as the number of slaves was not sufficient to overload the I/O subsystem, and provided the slaves didn’t end up colliding on the same disks all the time, then the speed of the scan would improve by a factor of roughly N.
On the other hand, by making this query run as rapidly as possible – which means the highest degree of parallelism you can get away with – you are likely to put a lot of stress on the I/O subsystem – do you really want to do that, given that it will probably slow everything else down?
Can you do something to cache the table?
At 40 GB, probably not (but who knows, maybe you have 128GB of memory to play with and that might be enough to let you put this table into a keep pool) and you’re still going to be using a lot of CPU looking at all those rows whatever you do about caching.
Is it really true that the index has very poor selectivity?
If so, why are you running a query that (apparently) is going to fetch a huge number of rows? Do you really need to fetch all those rows, or are you really after just the first few?
If the volume is actually relatively low, then perhaps a “bad” index is still better than doing a tablescan – it may protect your I/O subsystem for other users.
If the volume of data is high but you only want the first few rows, how about (in the example above) a “function-based index” of the form decode(colX,’ABC’,colY,null) – which holds entries for just the rows you are interested in, with the index entries in the right order for your order by clause? If you do this, you could re-write your query to force a range scan through this index, stopping after a few rows, rather than acquiring all the rows and sorting them before discarding most of them.
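As a sketch only (the index name here is made up, and on 8i you would also need query_rewrite_enabled = true and statistics on the table before the optimizer will consider a function-based index):
    create index vlt_abc_coly on very_large_table(
        decode(colX, 'ABC', colY, null)
    );

    select  {list of columns}
    from    very_large_table t
    where   decode(colX, 'ABC', colY, null) is not null
    order by
            decode(colX, 'ABC', colY, null);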
Would partitioning (on the colX column in this case) work for you?
A list partition strategy, or possibly even a hash partition strategy, could break the table into a reasonable number of much smaller chunks. You would still have to do a ‘full table scan’ but it would be a scan of just one partition, which could be much smaller, so use less resources, run more quickly, and cause much less damage. But can you partition on this column – or would it interfere with all the other functions of the system; and if it does could it still be worth the penalty?
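A minimal sketch of the sort of list partitioning that might apply here (the column and partition names are invented, and a DEFAULT list partition needs 9iR2 or later):
    create table very_large_table (
        colX    varchar2(10),
        colY    number
        -- ... and the rest of the columns
    )
    partition by list (colX) (
        partition p_abc   values ('ABC'),
        partition p_def   values ('DEF'),
        partition p_other values (default)
    );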
So, although you can’t really “tune” a large tablescan, there are some strategies you can investigate to see if you can find ways of reducing or restructuring the amount of work that you need to do. Whether any of them is appropriate for your system only you can choose.
Performance Tuning
Why is dbms_stats so much slower than Analyze?
The person who last posed this question didn’t mention a version number – but the problem appears in many versions of Oracle as you make the change from one technology to the other, mainly because the activity carried out by dbms_stats does not, by default, match the activity carried out by the analyze command.
Not only does dbms_stats differ from the analyze command in its behaviour, but virtually every version of Oracle has introduced a few extra parameters in the calls to dbms_stats and even changed some of the default values from previous versions, so that calls to dbms_stats that used to complete in a timely fashion in one version of Oracle suddenly take much longer after an upgrade – it’s not just the switch from analyze to dbms_stats that causes problems.
In the worst case, when left to its own devices, dbms_stats in 10g will work out for itself the best sample size to use on the tables and indexes, which columns to create histograms for, the number of buckets in the histograms, and the sample size to use for those histograms. The volume of work may be much larger than anything you would choose to do yourself.
The first guideline for using dbms_stats is: read the manual – or the script $ORACLE_HOME/rdbms/admin/dbmsstat.sql to see what it does, and how it changes from release to release. This gives you some chance of making it do what you used to do with the analyze command. The second guideline is to  try a test run with lots of tracing (e.g. events 10046, 10033, and calls to v$session_event and  v$sesstat) set so that you make sure that you can see how much work the tools do, and where they are losing time.
Partitioned tables have always been a particular problem for statistics – and Oracle is constantly fiddling with the code to try and reduce the impact of collecting reasonable statistics. If you have large partitioned tables, and don’t have to collect global statistics, bear in mind that you probably know more about the data than Oracle does. The best strategy for partitioned tables (probably) is to write your own code that limits Oracle to collecting statistics on the most recently added partition and then derives reasonable table level statistics programmatically by combining the latest statistics with the existing table stats. Look very carefully at the procedures with names like get_column_stats and set_column_stats.
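A minimal sketch of that sort of targeted collection (the table and partition names are invented; check the parameter defaults for your version):
    begin
        dbms_stats.gather_table_stats(
            ownname          => user,
            tabname          => 'SALES_FACT',      -- hypothetical table
            partname         => 'P_200510',        -- most recently loaded partition
            granularity      => 'PARTITION',       -- don't re-derive global stats
            estimate_percent => 5,
            method_opt       => 'for all columns size 1',
            cascade          => true
        );
    end;
    /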

Further reading: $ORACLE_HOME/rdbms/admin/dbmsstat.sql
Can Bind Variable Peeking cause problems without a histogram on Predicate columns ?
It is well known that the cause of the majority of problems with the bind variable peeking feature is a histogram on the column referenced in the access predicate. The histogram is certainly the main, but not the only, cause of an inappropriate execution plan. Another less well known situation where a different bind variable value can lead to a change of execution plan is an access predicate on a (sub)partitioned table. This is a particularly important scenario in the case of range-partitioned fact tables organised as rolling windows. These tables contain two types of partitions: those already filled, and those pre-allocated for future loads. As the optimiser statistics of the two types are very different, the risk of getting the wrong execution plan when peeking “in the wrong partition” is relatively high.
This is a true story with all kinds of exciting attributes:
large partitioned tables with proper collected statistics on them,
nothing was changed but a nightly job performance explodes,
no histogram on the access predicate column,
Oracle 9i / 10g in a data warehouse environment
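As a hedged illustration of the scenario (all the names below are invented): if the first execution after a restart peeks a value that falls in an empty, pre-allocated partition, the plan fixed in the library cache may be quite wrong for the filled partitions that later executions actually hit.
    variable run_date varchar2(10)

    -- first execution peeks into an empty future partition
    exec :run_date := '2005-12-01'

    select sum(amount)
    from   sales_fact
    where  sale_date = to_date(:run_date, 'YYYY-MM-DD');

    -- later executions re-use that plan against fully loaded partitions
    exec :run_date := '2005-10-01'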
Why would a reverse index be useful when the leading column of the index is generated from a sequence ?
Background to Indexes.
When you store data in an indexed table, certain columns of data are copied into the index alongside the rowid of the data row in the table. The data in the table is stored 'randomly', or at least, not necessarily in the order you put them there.
The index entries, on the other hand, must be stored in order, otherwise the usability of the index is removed. If you could walk through the entries in an index, you would see that they are in order, usually ascending, but since 8i, descending also works.
Entries are stored in order of their internal representation, not necessarily the same as what you see on screen when you SELECT the columns from a table.
If the indexed column(s) contain character data (CHAR, NCHAR, VARCHAR2 or NVARCHAR2) then the data will appear on screen exactly as it does in the index. For example, if the column contains 'ORACLE' the index entry will also be 'ORACLE'.
We can use the DUMP command to show us the internal representation of any data type. This command takes four parameters. The first is the data you wish to dump, the second is the base you wish to dump it in. The default is 10 which means that DUMP will display the characters in decimal, or base 10. The other allowed values are 8 (Octal), 16 (Hexadecimal) or 17 (Characters).
The third parameter is the start position in the data you wish to dump from and the final parameter is the amount of data you wish to dump. All but the first parameter have sensible defaults.
Using DUMP, we can see the individual character codes for our 'ORACLE' data :
SQL> select dump('ORACLE',10) from dual;

DUMP('ORACLE',10)
-------------------------------
Typ=96 Len=6: 79,82,65,67,76,69

We can prove that this is correct by converting back from decimal character codes to actual characters :
SQL> select chr(79),chr(82),chr(65),chr(67),chr(76),chr(69) from dual;

C C C C C C
- - - - - -
O R A C L E

We could have used base 17 to do the same thing :
SQL> select dump('ORACLE',17) from dual;

DUMP('ORACLE',17)
-------------------------
Typ=96 Len=6: O,R,A,C,L,E

Numeric columns are very much different. The internal format of a number is different from that which appears on screen after a SELECT because the internal format is converted to ASCII format so that it can be displayed. We can see this in the following, first in character format :
SQL> select '1234' as "1234",
  2  dump('1234', 17)
  3  from dual;

1234 DUMP('1234',17)
---- ---------------------
1234 Typ=96 Len=4: 1,2,3,4

Then in internal format :
SQL> select 1234 as "a number",
  2  dump(1234, 17)


  3  from   dual;

a number   DUMP(1234,17)
---------- --------------------
      1234 Typ=2 Len=3: c2,^M,#

The first columns in both examples look identical, but this is only because SQLPlus has converted the internal format of the number 1,234 into the character format so that the display device (the monitor screen) is able to show it. Binary characters have a nasty tendency to disrupt character devices like computer monitors when running in text mode.

Take a look at the second column in the above examples and notice the difference. In the first example we see the individual characters '1', '2', '3' and '4' while the second example shows only three bytes in the internal format of the number 1,234. Lets change the DUMP calls slightly, and do the whole lot in one command :
SQL> select '1234' as "1234",
  2  dump('1234', 10),
  3  1234 as "a number",
  4  dump(1234, 10)
  5  from dual;

1234 DUMP('1234',10)           a number   DUMP(1234,10)
---- ------------------------- ---------- ----------------------
1234 Typ=96 Len=4: 49,50,51,52         1234 Typ=2 Len=3: 194,13,35

This time, we see the actual character codes used internally. Once again columns 2 and 4 differ. Column 4 is showing three bytes and these three bytes are the internal binary representation of the number 1,234.
It is this binary representation that is used in the index entry when a number column is indexed.
Take a few minutes and experiment with dumping a few other numbers - stick to integers for now as those are what sequences generate.
SQL> create table test (a number, b varchar2(30));

Table created.

SQL> begin
  2    for x in 1 .. 1e6
  3    loop
  4        insert into test values (x, substr(dump(x,10), 14));
  5    end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

If we have a look at the 'b' column of the table, we can see that each entry is ascending in a similar manner to the 'a' column. Here are the first 20 rows :
SQL> col b format a20 wrap
SQL> select a,b from test where a < 21;

         A B
---------- ----------
         1 193,2
         2 193,3
         3 193,4
         4 193,5
         5 193,6
         6 193,7
         7 193,8
         8 193,9
         9 193,10
        10 193,11
        11 193,12
        12 193,13
        13 193,14
        14 193,15
        15 193,16
        16 193,17
        17 193,18
        18 193,19
        19 193,20
        20 193,21

The entries are very similar and all have the same leading byte.

How sequences affect indexes.
As mentioned above, index entries have to be stored in order, however, the table data need not be. If your indexed column is fed by a sequence, the data will be similar to the 20 rows shown above.
Similar entries will group together in the index, so the index blocks will split as necessary and new entries will end up all hitting the same block until it too fills up and splits.
If you have one person running the application, this isn't too much of a problem. If the application is multi-user then it means that every user will tend to write into the same index block and buffer busy waits will be the outcome as transactions 'queue' to write data to the hottest index block around.
Back in our small test, if you select more data from the test table, you will find that in the 1 million rows, there are only 4 different values for the leading byte on the internal numeric format and even worse, most of the entries in the index have the same leading byte value :
SQL> select substr(b,1,3),count(*)
  2  from test
  3  group by substr(b,1,3);

SUB COUNT(*)
--- ----------
193         99
194       9900
195     990000
196          1

I cheated: I had discovered that there was a comma in position 4 of every row in the table, which is how I knew to use a three character length in my SUBSTR.
What the above shows is that in an index of 1 million sequential entries, the vast majority have the same leading byte and so will all be trying to get into the same block in the index.

How reverse indexes cure the problem.
A reverse key index stores the bytes of the indexed column(s) in reverse order, so the data 'ORACLE' is actually stored in the index as 'ELCARO'. Using a reverse index on a column fed by a sequence spreads the location of sequential numbers across a wider range of leaf blocks and the problem of a single hot block is removed because the index entries are stored in reverse order.
SQL> alter table test add (c varchar2(30));

Table altered.

SQL> update test set c = substr(dump(reverse(a),10),14);

1000000 rows updated.

SQL> select substr(c,1,instr(c,',')-1),count(*)
  2 from test
  3 group by substr(c,1,instr(c,',')-1)
  4 order by to_number(substr(c,1,instr(c,',')-1))

SUB   COUNT(*)
--- ----------
2         10102
3         10101
4         10101
5         10101

All other numbers between 6 and 95 inclusive, have 10,101 entries each.

96         10101
97         10101
98         10101
99         10101
100      10101

99 rows selected.

This time, our 1 million row index entry has its leading byte value spread across 99 (100 if you include a value for zero) different values, rather than just 4. In addition, the actual reversed bytes are fairly randomly scattered across each of the different values too.
As more entries are added to the index, blocks will be split to accommodate the new entries in their proper location. As the data is arriving almost 'randomly', by means of the reversing of the actual data bytes for the index, the index itself will be extended to accommodate these new values. However, rather than always being stored in the same single 'hot' index block, new entries will be spread across a number of existing blocks (assuming the index has been around for a while), thus reducing contention. Of course, block splits will still occur on these blocks as new values fill up the existing block, but it's happening all over the index, not just in one place.
This is the reason why reversing the index when its leading column is fed by a sequence reduces buffer contention, removes the hot block problem and by doing so, reduces the potential for buffer busy waits on a multi-user system.
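Creating (or converting to) a reverse key index is just a matter of syntax - a minimal sketch using the test table from above (the index names are invented):
    create index test_a_rev on test(a) reverse;

    -- or convert an existing index in place:
    alter index test_a_idx rebuild reverse;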
Drawbacks to Reverse Key Indexes
Of course, there are drawbacks as well. By setting up a reverse key index you are increasing the clustering factor of the index. The clustering factor (from USER_INDEXES) is used by the optimiser (CBO) to determine how best to access data in an INDEX RANGE SCAN. If the clustering factor is roughly equal to BLOCKS minus FREE_BLOCKS from USER_TABLES then the chances are that a range scan will read one index block, and locate all (or nearly all) of the data rows it needs in one or more adjacent blocks in the table.
On the other hand, if the clustering factor is close to NUM_ROWS in USER_TABLES then the chances are that the entries stored together in one index block are likely to be scattered throughout a wide range of table blocks - so the index range scan may not be chosen as a good method of access.
Obviously the above applies to an analysed table and index.

In a quick test on a table with 100,000 rows loaded using a sequence, a normal index was used for most queries returning up to 30 rows, as was the reverse key index; however, when the number of rows went up to 1,000,000 the reverse key index was never used and a full table scan was used every time.

Addendum (David Aldridge March 2005)
Although the clustering_factor is usually increased by rebuilding the index as a reverse-key index, there are two balancing points to consider:
i)    Reverse-key indexes do not support index range scans, because "... lexically adjacent keys are not stored next to each other in a
reverse-key index..." ( http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref988 )
ii)    When key values are generated from a sequence it is extremely rare that the rows identified with lexically adjacent keys have any
real-world connection, so the index range scans ought never to be required by the application.
Of course, neither of these statements applies when the index is a multi-column index, and the first column contains repeating values.

Further reading:     Oracle reference manual for your version of Oracle.

Why does AUTOTRACE not show partition pruning in the explain plan ?
Autotrace not showing partition pruning/elimination is bug 1426992, but, after investigation Oracle has decided that this is not an optimiser bug, but a bug in SQL*Plus. You can, with a bit of knowledge of your data and a little experimentation, deduce that partition pruning is taking place from the output of autotrace, but there are much easier ways !
The following demonstration shows the failings in autotrace and demonstrates a couple of other methods of determining whether or not your partitions are being pruned.

Autotrace
First of all, create a simple table range partitioned over 6 different partitions, and fill it with some test data extracted from ALL_OBJECTS.
SQL> create table tab_part (part_key number(1), some_text varchar2(500))
  2  partition by range (part_key) (
  3  partition part_1 values less than (2),
  4  partition part_2 values less than (3),
  5  partition part_3 values less than (4),
  6  partition part_4 values less than (5),
  7  partition part_5 values less than (6),
  8  partition part_6 values less than (MAXVALUE) );

Table created.


SQL> insert /*+ append */ into tab_part
  2  select mod(rownum, 10), object_name
  3  from all_objects;

24683 rows created.

SQL> commit;

Commit complete.
Once the table has been filled, analyse it and see how the data has been spread over the various partitions. The first and last partitions have more data in them than the remaining four, hence the differing totals.
SQL> analyze table tab_part compute statistics;

Table analyzed.

SQL> select partition_name, num_rows
  2  from user_tab_partitions
  3  where table_name = 'TAB_PART'
  4  order by partition_name;

PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
PART_1                               4937
PART_2                               2469
PART_3                               2469
PART_4                               2468
PART_5                               2468
PART_6                               9872

6 rows selected.
Now that we have a table to work with, we shall see what autotrace has to say about partition elimination. First, however, note how many logical reads a full scan of the entire table needs :
SQL> set autotrace on
SQL> select count(*) from tab_part;

  COUNT(*)
----------
     24683


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=42 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     PARTITION RANGE (ALL)
   3    2       TABLE ACCESS (FULL) OF 'TAB_PART' (Cost=42 Card=24683)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        135  consistent gets
          0  physical reads
          0  redo size
        381  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
To read 24,683 rows of data Oracle had to perform 135 logical reads. Keep this in mind and note that the autotrace output shows a full table scan - as we would expect on an unindexed table. The next count should only look in a single partition :
SQL> select count(*) from tab_part where part_key = 7;

  COUNT(*)
----------
      2468


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1 Bytes=2)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'TAB_PART' (Cost=17 Card=2468 Bytes=4936)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         49  consistent gets
          0  physical reads
          0  redo size
        380  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
This seems to have again carried out a full table scan, but as we already know that a real FTS takes 135 logical reads, the fact that only 49 were required here should indicate that something is different. Autotrace's output is not showing partition elimination. If you didn't know how many reads were required to full scan the table, you would be hard pressed to determine that partition elimination had taken place in this scan.
Event 10053
There are other methods by which we can obtain a true picture of the plan used by the optimiser - a 10053 trace, for example, would show the details. I've never had to use a 10053 trace, so I'm unfortunately not in a position to explain its use; I leave this as 'an exercise for the reader', as they say :o)
SQL_TRACE and TKPROF
I have used SQL_TRACE and TKPROF though, so here's what shows up when SQL_TRACE is set true.


SQL> set autotrace off
SQL> alter session set sql_trace = true;

Session altered.

SQL> alter session set tracefile_identifier = 'PARTITION';

Session altered.

SQL> select count(*) from tab_part where part_key = 7;

  COUNT(*)
----------
      2468

SQL> alter session set sql_trace = false;

Session altered.
At this point, exit from SQL*Plus and locate the trace file in USER_DUMP_DEST which has 'PARTITION' in its name. This is the one you want to run through TKPROF. The output from this is shown below:
select count(*) from tab_part where part_key = 7

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          0          0           0
Fetch        2      0.01       0.01          0         49          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.02       0.01          0         49          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 62

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=49 r=0 w=0 time=10353 us)
   2468   TABLE ACCESS FULL TAB_PART PARTITION: 6 6 (cr=49 r=0 w=0 time=6146 us)
The explain plan clearly shows that partition 6 was the start and stop partition in the scan. In addition, there were 49 logical reads performed to get at the count. This is identical to what we saw above with autotrace, except we get to see that partition pruning did actually take place.
Explain Plan
Back in SQL*Plus, there is another method that can be used. The old faithful EXPLAIN PLAN will show that partition pruning did take place.
SQL> explain plan
  2  set statement_id = 'Norman'
  3  for
  4  select count(*) from tab_part where part_key = 7;

Explained.

SQL> set lines 132
SQL> set pages 10000
SQL> col operation format a20
SQL> col options format a15
SQL> col object_name format a15
SQL> col p_start format a15
SQL> col p_stop format a15
SQL> col level noprint
SQL> select level, lpad('  ', 2*level-1)||operation as operation,
  2   options,
  3   object_name,
  4   partition_start as p_start,
  5   partition_stop as p_stop,
  6   cardinality
  7   from plan_table
  8   where statement_id = 'Norman'
  9   start with id=0
 10   connect by prior id=parent_id
 11   order by level;

OPERATION            OPTIONS         OBJECT_NAME     P_START         P_STOP          CARDINALITY
-------------------- --------------- --------------- --------------- --------------- -----------
 SELECT STATEMENT                                                                               1
   SORT              AGGREGATE                                                                  1
     TABLE ACCESS    FULL            TAB_PART        6               6                      2468
Once again, the plan clearly shows that partition pruning takes place. The problem is that autotrace doesn't show it at all. Unless you really know how many blocks of data you have in a table and all of its partitions, you may find it difficult to determine whether or not you are seeing a 'true' plan when using partitioned tables and autotrace.
Note: Do you ever suffer from the PLAN_TABLE growing too big as developers fail to delete old rows from the table? Alternatively, do you forget to delete rows from the table?
Take a copy of $ORACLE_HOME/rdbms/admin/utlxplan.sql and edit it.
Change this :
create table PLAN_TABLE (
    statement_id    varchar2(30), ...
    filter_predicates varchar2(4000));
To this :
create global temporary table PLAN_TABLE (
    statement_id    varchar2(30), ...
    filter_predicates varchar2(4000))
on commit preserve rows;
Now login to SQL*Plus as SYS and :
sql> @?/rdbms/admin/utlxplan_edited    /* Or whatever your copy is called */
sql> grant all on plan_table to public;
sql> create public synonym PLAN_TABLE for SYS.PLAN_TABLE;
Now, when developers or DBAs use PLAN_TABLE and log out, their rows will be deleted. A self-tidying PLAN_TABLE. Of course, this is no good if you want to keep rows in PLAN_TABLE between sessions.
DBMS_XPLAN
Under Oracle 9i (release 2 I think) there is a new PL/SQL package which you can use to show explain plans. The above statement could have its plan shown using this command :
SQL> Select * from table(dbms_xplan.display(statement_id=>'Norman'));
or, if this was the only statement in my PLAN_TABLE :
SQL> Select * from table(dbms_xplan.display);
There is much more information shown with this new feature than with a 'normal' explain plan and you don't have to worry about all that formatting either.
Summary
In summary, autotrace doesn't show partition elimination in Oracle up to versions 9i release 2. You should therefore be aware of this fact and use SQL_TRACE or EXPLAIN_PLAN to get at the true plan for the SQL you are trying to tune/debug.

Further reading:
Note: 166118.1 Partition Pruning/Elimination on Metalink. You will need a support contract to access Metalink.
Bug: 1426992 SQLPlus AUTOTRACE does not show correct explain plan for partition elimination. Again on Metalink.
Will compressing my indexes improve performance ?
Oracle introduced a compression option for indexes in Oracle 8.1. You can create an index as compressed, or rebuild it to compress it (although there are some restrictions about online rebuilds, rebuilds of partitioned indexes etc.) Typical syntax might be:
    create index t1_ci_1 on t1(col1, col2, col3, col4) compress 2;
    alter index t1_ci_1 rebuild compress 2;
The benefits of compression come from the fact that a properly compressed index uses a smaller number of leaf blocks - which tends to mean that less I/O is involved when the index is used, there is a reduced amount of buffer cache flushing, and the optimizer is likely to calculate a lower cost for using that index for range scans. (There is a tiny chance that the number of branch blocks, and the index height might be reduced, too, but that is a little unlikely).
But compressing indexes, especially compressing the wrong number of columns, can have a negative impact on your performance. If you compress more columns than you should, the 'compressed' index may be larger than the uncompressed index. Use the validate option on the index, and check view index_stats to find out the optimum compression count. How did I know that I should compress just the first two columns of the t1_ci_1 index ? (Apart from knowing the data, that is):
    validate index t1_ci_1;

    select
    opt_cmpr_count, opt_cmpr_pctsave
    from
    index_stats;

    OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
    -------------- ----------------
                 2               50
Unfortunately these two columns don't exist in 8.1, only in version 9 (possibly only 9.2). Fortunately Steve Adams has a script on his website to recommend a compression level (see www.ixora.com.au )
Even if you get the 'right' number of columns compressed, there is a price to pay. The main penalties are: (a) reads and modifications of a compressed index cost more CPU than they would (typically) for an equivalent uncompressed index; (b) execution paths change - and you may not have predicted the changes, and some nominally cheaper paths may actually be slower. For example: Oracle may choose an index fast full scan instead of an index range scan because the compressed index is now much smaller, and your setting for parameter db_file_multiblock_read_count is large; or Oracle may choose to use an index and do a nested loop because the index is now 30% smaller, where previously it was doing a table scan and hash join.
So - don't go and compress all the indexes in your schema.
Think carefully about which indexes could give you significant gains, and whether you can afford some CPU loss to reduce buffer thrashing and I/O.
Remember too, if the way you use an index is such that the column order doesn't matter, then perhaps you could rearrange the column order to maximise the compression. The most critical point, perhaps, is that you should avoid moving a column that is typically used with a range scan towards the front of the index.
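For example, a sketch only - assuming the application never relies on the column order of this index, and that col2 is the most repetitive column - the earlier t1_ci_1 index might be rebuilt as:
    create index t1_ci_1 on t1(col2, col1, col3, col4) compress 2;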
How are the Cache Buffers Chains and Cache Buffers LRU Chains used in the Buffer Cache management ?
The Cache Buffers Chains are very short chains (linked lists) that allow Oracle to locate a block very quickly if it is in the buffer. Each block hashes (by block address, tablespace number, and block type - I believe) to one of the chains. In Oracle 8 there are roughly twice as many chains available as there are buffers, so many chains are empty, and the remainder tend to have only one or two blocks in them - so scanning a chain can be very quick.
Chains are covered by Cache Buffers Chains latches. Each Cache Buffers Chains latch covers around 64 - 128 chains - the commonest sizes of db_block_buffers (or db_cache_size as it should be in Oracle 9) mean that there are typically 512 or 1024 of these latches, but this varies in powers of 2 as the size of the buffer cache grows.
The Cache Buffers LRU chains tell Oracle about how much use a buffer has had, and therefore allow it to decide quickly and cheaply which buffer to clear when someone wants to read a new block from disc. In fact the term LRU chain is somewhat obsolete, as Oracle 8.1 uses a touch count algorithm to decide on the popularity of a buffered block, nevertheless many details of the LRU algorithm still apply, and the chain of buffers still has blocks being 'pushed down' to be dropped off at the end if they have not been touched in the recent past.
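If you suspect contention on these latches, a quick check (a minimal sketch) against v$latch_children shows which child latches - and hence which groups of chains - are suffering:
    select child#, gets, misses, sleeps
    from   v$latch_children
    where  name = 'cache buffers chains'
    and    sleeps > 0
    order by sleeps desc;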
Why does a global index on my partitioned table work faster than a local index ?
Partitioning is often touted as the cure-all for performance problems. "Partition your tables into 1,000 sections, and your queries run hundreds of times faster" is a comment I heard from one well-known "authority" in the early days of Oracle 8 - and some people still think it is true. So why, when you have partitioned your table and started to index it, do you find that global indexes are quicker than local indexes ?
As usual, the quick (consultant's) answer is that the benefits depend on your system. How big is your data, what are your clustering factors like, are you using the features that supply the benefits, and in this case, how much work do your typical queries do anyway ?
The global/local indexes problem is generic - global indexes CAN be faster than local indexes. In general, local indexes will be faster when partition elimination can take place and the expected volume of data acquired is significant. If either of these conditions is not met, then local indexes probably won't help performance.
Remember, the optimizer has to do some extra work to deal with partitioned tables. If the queries you use are very precise queries (e.g. on a nearly unique pair of columns) then the cost of optimisation and identifying the partition may easily exceed the saving of having the partitions in the first place (the saving might be just one logical I/O, hence < 1/10,000 of a CPU second). Where queries are very precise, a global index is quite likely to be a little more performant than a local index.
This is a common Oracle trade-off between how much you win/lose and how often you make that win/loss. In the local / global indexes case you (should expect to) lose a tiny amount of performance on every query in order to win a huge amount when you do partition maintenance such as eliminating entire partitions of very old data with a quick drop partition command.
Of course, if your partitions are large, and the indexes hit a large number of table rows, and the queries are over-generous in their throwaway rates; then the work saved by hitting just the one partition through exactly the correct index partition may prove to be a significant gain.
Which Oracle features require and/or force the use of Cost Based Optimisation ?
There is a list of Oracle features in the Performance Tuning Guide that are quoted as 'forcing Cost Based Optimisation' to happen. Unfortunately, this list is a little misleading. Some of the features (such as parallel tables) force CBO to kick in, others simply will not work unless CBO has been invoked. This note splits the list into the two relevant groups.
The presence of the following features appears to force CBO to kick in.
Index Organized Tables (IOTs)
Partitioned Tables
Parallel Tables
The SAMPLE clause
ANSI Outer joins
The Rule Based Optimizer will not notice the presence of the following features - you must invoke CBO explicitly
Reverse key indexes
Function based indexes
Bitmap indexes
Bitmap Join indexes
The Rule based optimiser will use the following, without invoking CBO
Inline (from clause) views
Partition views
ANSI natural and cross joins
Other comments
Other functionality such as star joins, bitmap star transformations, the progress meter (v$session_longops) and hash joins are also dependent on the cost based optimiser being invoked. However I have not included them in the lists above, as they are 'action-oriented' options rather than having what might be called a sort of 'physical presence'.
What is the difference between a soft parse and a hard parse ?
When you submit an SQL statement for processing, it passes through a number of stages before your server process finally gets to execute it and give you back the results.
First your command is checked for syntax errors - basically, is the command correctly formed ?
If the syntax check is passed, it is checked for semantic errors - things like do all the objects required exist, does the user have appropriate privileges etc.
The next stage is to pass the command through a hashing function to obtain a hash value for the statement. This hash value is used as a lookup in the library cache to see if this command has been used before. If the hash value is found in the cache, then the SQL for that command is compared with yours to see if they are identical (down to letter case etc). If so, the next step is to ensure that all objects referenced in the cached command are the same objects referenced in your new one. If so, then the parse tree and execution plan for the existing command can be used for yours as well, and the optimiser stage is missed out. This is a soft parse.
Assuming that there is not an identical SQL command in the library cache, your command will be passed on to the Optimiser to work out the best plan of attack to get back the data you want. The optimiser builds a parse tree, which will involve processing some recursive SQL. Once a parse tree has been built, an execution plan is created from it. This constitutes a hard parse.
Once an execution plan has been created, or an existing one re-used, the command can be executed.
In summary,
1.    Perform syntax check
2.    Perform semantic check
3.    Perform hash function
4.    Perform library cache lookup
5.    If hash value found then
6.    .....If command is identical to existing one in cache then
7.    ..........If the objects referenced in the cached command are the same as the ones in the new command then
8.    ...............This is a soft parse, go to step 11
9.    This is a hard parse, build parse tree
10.    Build execution plan
11.    Execute plan.
The building of the parse tree and execution plan are the two most expensive parts of the parsing, and if these have to be done, then we have a hard parse.
Note : In the Oracle Performance Tuning 101 book mentioned below, the order given is that the hashing is done first and if the hash value is not found in the library cache, then the syntax and semantic checks are carried out. Tom Kyte says in his book, and on his web site, that this is not the case. The syntax and semantic checks are always carried out and then the hashing. As Tom works for Oracle, and because he gives good reasons, I'm following Tom's advice on the matter. The Oracle Concepts manual has the same description as Tom.
I'm not sure how I could actually test it to find out.
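Whichever order the checks happen in, you can at least see how much hard parsing your system is doing from the parse statistics - a minimal sketch at the instance level (v$sesstat gives the same figures per session):
    select name, value
    from   v$sysstat
    where  name in ('parse count (total)', 'parse count (hard)');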

Further reading:
Oracle training manuals from SQL and PL/SQL course.
Oracle server concepts manual volume 2
Beginning Oracle Programming by Sean Dillon, Tom Kyte etc al.
Oracle Performance Tuning 101 by Vaidyanatha et al
How do I find out which tables are accessed with a full table scan, and the number of times this occurred ?
We have found a doc on metalink which shows a query on the x$bh table/view to determine which tables are being accessed by full table scans. We are interested in a count of the number of times each table is accessed by a full table scan, so we can find the most heavily accessed table and possibly tune to reduce the amount of full table scanning. The reason we know this can somehow be done is because we have received a report from a vendor like the following:
        There have been 15,402 full table scans since the start of the database.
    Listed below are the tables that have had over 10 full table scans.
    Table Owner Table Name Number of Full Table Scans
    ABC TABLEONE 335
    DEF TABLETWO 5551
Because it mentions how many occurred since startup of the database, it implies that there are v$ tables or x$ tables that were queried to get this type of information. I have checked metalink documents without success.
There is no such view or X$ object as far as I know.
I assume the script that you mention regarding using X$BH uses the flag value to check if the buffer header was used for scan read, with a clause like:
    select distinct obj
    from x$bh
    where    bitand(flag,power(2,19)) = power(2,19);
Perhaps joining to obj$ on the dataobj# column.
This identifies objects that have been scanned (which may include index fast full scans), but doesn't tell you how often.
To get an idea of how often an object has been scanned, you could dump v$sql or v$sqlarea from time to time and use explain plan on the queries involving the guilty objects to identify the SQL that includes a full scan on the object. If you find plans with full scans, you can use the executions column to tell you how many times the query was executed. This still doesn't tell you how often the scan occurred, of course, as the scan may have taken place many times in one execution of the query.
In Oracle 9.0 onwards, you have the extra view v$sql_plan so you don't need to dump the sql and explain it, and in 9.2 you have a view called v$sql_plan_statistics, which includes information about the number of times each line of each plan was executed, and this can allow you to get a much better estimate of the actual number of scans.
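For example, on 9i a rough sketch of counting executions of statements whose plans contain a full table scan might be (remembering the caveats in the next paragraph about ageing and reloads):
    select   p.object_owner, p.object_name, sum(s.executions) total_execs
    from     v$sql_plan p, v$sql s
    where    p.address      = s.address
    and      p.hash_value   = s.hash_value
    and      p.child_number = s.child_number
    and      p.operation    = 'TABLE ACCESS'
    and      p.options      = 'FULL'
    group by p.object_owner, p.object_name
    order by total_execs desc;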
Of course, under all Oracle versions, using v$sql as the source of such information is only an approximation - some executions may have been aged out of the SGA before you get to see them, and sometimes invalidations and reloads can confuse the statistics.

Is it possible to flush the db_block_buffer for testing purposes ?
There is no matching command, however there is a little trick which may be adequate.
    alter tablespace XXX offline;
    alter tablespace XXX online;
When you alter the tablespace offline, any blocks which are in the buffer are invalidated, and therefore subject to rapid elimination as the buffer is re-used. In fact, even when you bring the tablespace back online, even if some blocks are still apparently buffered, they cannot be reused.
Bear in mind, though, that if you are running Oracle on a file-system, and not using direct I/O at the O/S level, then the blocks may still be buffered in the file-system buffer; so your tests may still suffer from some spurious buffering benefit, especially if your code is accessing some smallish tables through tablescans. Remember that a small table is one that is no more than 2% of the size of the db_block_buffers; it is NOT, as is commonly believed, one that is only four blocks or less.
If I store text in a column, can I index individual words ?
This is possible if you use the Oracle interMedia Text option.  It allows you to create "text indexes" on your text columns which can then be queried using the contains() function in the where clause.
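A minimal sketch (the table, column and index names are invented, and it assumes the interMedia/Oracle Text option is installed with the appropriate CTXAPP privileges granted):
    create index doc_text_idx on documents(doc_text)
        indextype is ctxsys.context;

    select doc_id
    from   documents
    where  contains(doc_text, 'oracle') > 0;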
How to speed up reporting of large execution plans based on PLAN_TABLE?
Oracle supplies a script to define the PLAN_TABLE used by the EXPLAIN PLAN command. The script usually resides in
$ORACLE_HOME/rdbms/admin/utlxplan.sql file in Unix or
#:\ORANT\RDBMSnn\ADMIN\utlxplan.sql file under MS Windows (# is a letter specifying an assigned drive, nn specifies the Oracle version, e.g. 81).

The data is inserted into PLAN_TABLE by issuing
EXPLAIN PLAN SET STATEMENT_ID = 'User_defined_id_string' FOR
User written SQL statement;

In order to retrieve the data another standard statement (or some derivation of it) is used:
   SELECT LPAD (' ', 2 * (level - 1)) || operation operation, options,
          object_owner || '.' || object_name object, DECODE (id, 0,'Cost = ' || position) pos
   FROM PLAN_TABLE
   START WITH id = 0 AND statement_id = 'User_defined_id_string'
   CONNECT BY PRIOR id = parent_id AND statement_id = 'User_defined_id_string';
However, quite often the execution of such a statement requires a large amount of time and computer resources, especially when the explained statement is a complicated one (generates many rows in PLAN_TABLE) or when all application developers use a single PLAN_TABLE (and nobody cares enough to delete old data from it). The reason for such behaviour is simple enough when we look at the EXPLAIN PLAN of our SELECT statement:
SELECT STATEMENT OPTIMIZER=CHOOSE
   CONNECT BY
     TABLE ACCESS (FULL) OF PLAN_TABLE
     TABLE ACCESS (BY USER ROWID) OF PLAN_TABLE
     TABLE ACCESS (FULL) OF PLAN_TABLE
As we see the explain shows that:
1. Full table scan is performed (to find a row with id = 0 and given statement_id)
2. For each child row another full table scan is executed (to get a row with the same statement_id and with a parent_id of current row).
In order to eliminate FTS and to enable the best possible access for each retrieved row I suggest defining 2 indexes:
CREATE INDEX PLAN_TABLE$STMTID_ID ON PLAN_TABLE (STATEMENT_ID, ID);

CREATE INDEX PLAN_TABLE$STMTID_PID ON PLAN_TABLE (STATEMENT_ID, PARENT_ID);
(Of course you have to use TABLESPACE and STORAGE parameters as well)

Now the explain plan looks much better:
SELECT STATEMENT OPTIMIZER=CHOOSE
  CONNECT BY
    INDEX (RANGE SCAN) OF PLAN_TABLE$STMTID_ID
    TABLE ACCESS (BY USER ROWID) OF PLAN_TABLE
    TABLE ACCESS (BY INDEX ROWID) OF PLAN_TABLE
      INDEX (RANGE SCAN) OF PLAN_TABLE$STMTID_PID
Selecting an execution plan of 12 rows out of a PLAN_TABLE with 7000 rows took (Oracle 8.1.6.0/Windows NT/P-III-350):
Without indexes: 0.501 sec
With both indexes defined: 0.010 sec - a 50-fold improvement.
It's possible to use only 1 index instead of 2 (I advise defining it on the STATEMENT_ID and PARENT_ID columns) and still get a significant performance improvement compared with the situation where no index exists.

The same solution of defining indexes on the columns used in the CONNECT BY PRIOR clause may be used to improve the performance of all hierarchical queries.
Tuning UPDATE/DELETE statements with subqueries
The tuning of an UPDATE or DELETE statement referencing a single table is no different from SELECT; however, some performance problems arise when a number of tables are involved. In the majority of cases there are 2 tables, where one of them must be updated based on some criteria from the other.
We are going to analyze 2 cases (both with RULE and COST based optimizers) of UPDATE statement execution but the same reasoning and methods apply to DELETE statement as well:
Case 1. EMP and DEPT tables are involved. We would like to update the salaries of all employees (let's give them a 5% raise) working for the 'SALES' department. There is always the possibility of using 2 statements:
  a. SELECT from DEPT to get a value of DEPTNO column.
  b. UPDATE EMP SET SAL = SAL * 1.05 WHERE DEPTNO = :DNO
However, let's see what happens when we try to do it using a single UPDATE statement (and the Rule Based Optimizer - RBO):
   UPDATE EMP SET SAL = SAL * 1.05
   WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT WHERE DNAME = 'SALES')

Call       Count     CPU   Elapsed    Disk    Query   Current    Rows
Parse          1    0.04      0.04       0        0         0       0
Execute        1    1.58      1.74       1    40135      2058    2006
Fetch          0    0.00      0.00       0        0         0       0
Total          2    1.62      1.78       1    40135      2058    2006
 
Rows     Execution Plan
0        UPDATE STATEMENT   GOAL: CHOOSE
1          UPDATE OF 'EMP'
2007         NESTED LOOPS
20015          TABLE ACCESS (FULL) OF 'EMP'                           <-- a
22020          TABLE ACCESS (BY INDEX ROWID) OF 'DEPT'                <-- c
40028            INDEX (UNIQUE SCAN) OF 'PK_DEPT' (UNIQUE)            <-- b
As we can see the optimizer performs:
  a. FULL TABLE SCAN of EMP table (about 20000 rows).
  For EACH row it :
  b. Accesses an INDEX of DEPT table by DEPTNO value to get a ROWID
  c. Accesses a DATA part of DEPT table by using ROWID retrieved from an
      index to get value of DNAME column.
      Only now it performs a final elimination of unnecessary rows by filter on
      DNAME column (DNAME = ‘SALES’)
It's quite obvious that it may be more efficient to access the DEPT table first and the EMP table at a later stage.
In order to force the optimizer to access the tables in the specified order we have to supply the USE_NL hint. It tells the optimizer to use the EMP table as the inner table and, as a by-product, the DEPT table will be used as the outer (driving) table.

  UPDATE /*+ USE_NL(EMP) */ EMP SET SAL = SAL * 1.05
  WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT WHERE DNAME = ‘SALES’)

Call       Count     CPU   Elapsed    Disk    Query   Current    Rows
Parse          1    0.01      0.01       0        0         0       0
Execute        1    0.91      1.68       5        7      2056    2006
Fetch          0    0.00      0.00       0        0         0       0
Total          2    0.92      1.69       5        7      2056    2006
 
Rows    Execution Plan      
0    UPDATE STATEMENT   GOAL: CHOOSE      
1     UPDATE OF 'EMP'       
2007        NESTED LOOPS                                                                                               
2           TABLE ACCESS (FULL) OF 'DEPT'      
2007           INDEX (RANGE SCAN) OF 'EMP$DEPT' (NON-UNIQUE)   
Now we can compare the results:

                 Without hint    With hint    Improvement (%)
CPU                      1.62         0.92                 45
Elapsed                  1.78         1.69                  5
Access count            42193         2063                 95
As expected there is a major performance gain.
Now let’s see what happens with the cost based optimizer (CBO).
   UPDATE EMP SET SAL = SAL * 1.05
   WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT WHERE DNAME = ‘SALES’)

Call     Count    CPU   Elapsed   Disk   Query   Current   Rows
Parse        1   0.01      0.01      0       0         0      0
Execute      1   0.97      1.71      0     108      2062   2006
Fetch        0   0.00      0.00      0       0         0      0
Total        2   0.98      1.72      0     108      2062   2006
 
Rows    Execution Plan      
0    UPDATE STATEMENT   GOAL: CHOOSE      
1     UPDATE OF 'EMP'       
2007        HASH JOIN      
1           TABLE ACCESS GOAL: ANALYZED (FULL) OF 'DEPT'      
20014           TABLE ACCESS GOAL: ANALYZED (FULL) OF 'EMP'   
As we can see, the CBO performs much better than the RBO (it uses a HASH JOIN instead of the RBO's NESTED LOOPS join). However, it's interesting to see what will happen when we access the EMP table by index instead of performing an FTS on it. To do that we add the USE_NL hint to our statement once again, plus another hint to enforce index access:
   UPDATE /*+ USE_NL(EMP) INDEX(EMP) */ EMP
   SET SAL = SAL * 1.05
   WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT WHERE DNAME = ‘SALES’)

Call     Count    CPU   Elapsed   Disk   Query   Current   Rows
Parse        1   0.04      0.04      0       0         0      0
Execute      1   0.88      1.02      0       7      2056   2006
Fetch        0   0.00      0.00      0       0         0      0
Total        2   0.92      1.06      0       7      2056   2006
 
Rows    Execution Plan      
0    UPDATE STATEMENT   GOAL: CHOOSE      
1     UPDATE OF 'EMP'       
2007        NESTED LOOP      
2           TABLE ACCESS GOAL: ANALYZED (FULL) OF 'DEPT'      
2007           INDEX ACCESS GOAL: ANALYZED (RANGE SCAN) OF 'EMP$DEPT'
  (NON-UNIQUE)      
Now (using the CBO) there is almost no performance difference between the hinted and non-hinted statements; however, we have to remember a couple of facts:
  a. We update about 10 percent of the rows in the EMP table
  b. Our rows are very short (so the number of blocks in the EMP table is relatively small)
My conclusion is that there is still a place for hints when:
  a. an FTS of the updated table is performed, and
  b. only a small percentage of rows is updated
We are going to verify that conclusion in our next example.
Case 2. Let's check a slightly more complicated statement.
  The EMP table must be updated based on values in the EMP_LOAD table, which contains a relatively small number of rows (about 20000 rows in EMP and 100 rows in EMP_LOAD):
RBO:
  UPDATE emp e
  SET (ename, job, mgr, hiredate, sal, comm, deptno) =
    (SELECT ename, job, mgr, hiredate, sal, comm, deptno
     FROM emp_load el
     WHERE e.empno = el.empno)
  WHERE e.empno IN (SELECT empno FROM emp_load)

Call     Count    CPU   Elapsed   Disk   Query   Current   Rows
Parse        1   0.03      0.03      0       0         0      0
Execute      1   0.49      0.54      0   20321       208    100
Fetch        0   0.00      0.00      0       0         0      0
Total        2   0.52      0.57      0   20321       208    100
 
Rows    Execution Plan      
0    UPDATE STATEMENT   GOAL: CHOOSE      
1     UPDATE OF 'EMP'       
101        NESTED LOOPS                                                                                               
20015           TABLE ACCESS (FULL) OF 'EMP'      
20114           INDEX   (RANGE SCAN) OF 'EMP_LOAD_PK' (UNIQUE)      
0     TABLE ACCESS (BY INDEX ROWID) OF 'EMP_LOAD'      
0       INDEX (UNIQUE SCAN) OF 'EMP_LOAD_PK' (UNIQUE)   
Once again a FULL TABLE SCAN raises its ugly head. We are going to use the previous approach and try to access EMP as the INNER table.
     UPDATE /*+ USE_NL(e) INDEX(e) */ emp e
     SET (ename, job, mgr, hiredate, sal, comm, deptno) =
          (SELECT ename, job, mgr, hiredate, sal, comm, deptno
           FROM emp_load el
           WHERE e.empno = el.empno )
     WHERE e.empno IN ( SELECT empno FROM emp_load)

Call     Count    CPU   Elapsed   Disk   Query   Current   Rows
Parse        1   0.02      0.02      0       0         0      0
Execute      1   0.05      0.05      0     401       204    100
Fetch        0   0.00      0.00      0       0         0      0
Total        2   0.07      0.07      0     401       204    100
 
Rows    Execution Plan      
0    UPDATE STATEMENT   GOAL: CHOOSE      
1     UPDATE OF 'EMP'       
101        NESTED LOOPS                                                                                               
101          INDEX GOAL: ANALYZED (FULL SCAN) OF 'EMP_LOAD_PK' (UNIQUE)      
200          INDEX (RANGE SCAN) OF 'PK_EMP' (UNIQUE)      
0     TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'EMP_LOAD'      
0       INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'EMP_LOAD_PK'
              (UNIQUE)   
CBO:
    UPDATE emp e
    SET (ename, job, mgr, hiredate, sal, comm, deptno) =
      (SELECT ename, job, mgr, hiredate, sal, comm, deptno
       FROM emp_load el
       WHERE e.empno = el.empno)
    WHERE e.empno IN (SELECT empno FROM emp_load)

Call     Count    CPU   Elapsed   Disk   Query   Current   Rows
Parse        1   0.02      0.02      0       0         0      0
Execute      1   0.52      0.61      0   20321       208    100
Fetch        0   0.00      0.00      0       0         0      0
Total        2   0.54      0.63      0   20321       208    100
 
Rows    Execution Plan      
0    UPDATE STATEMENT   GOAL: CHOOSE      
1     UPDATE OF 'EMP'       
101        NESTED LOOPS                                                                                               
20015            TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'EMP'      
20114            INDEX   GOAL: ANALYZED (RANGE SCAN) OF 'EMP_LOAD_PK'
              (UNIQUE)      
0     TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'EMP_LOAD'      
0       INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'EMP_LOAD_PK'
              (UNIQUE)   
The result is identical to the default RBO decision, so let’s try and improve it:
    UPDATE /*+ USE_NL(e) INDEX(e) */ emp e
    SET (ename, job, mgr, hiredate, sal, comm, deptno) =
        (SELECT ename, job, mgr, hiredate, sal, comm, deptno
         FROM emp_load el
         WHERE e.empno = el.empno )
    WHERE e.empno IN ( SELECT empno FROM emp_load)

Call     Count    CPU   Elapsed   Disk   Query   Current   Rows
Parse        1   0.02      0.02      0       0         0      0
Execute      1   0.06      0.06      0     401       204    100
Fetch        0   0.00      0.00      0       0         0      0
Total        2   0.08      0.08      0     401       204    100
 
Rows    Execution Plan      
0    UPDATE STATEMENT   GOAL: CHOOSE      
1     UPDATE OF 'EMP'       
101        NESTED LOOPS                                                                                               
101            INDEX   GOAL: ANALYZED (RANGE SCAN) OF 'EMP_LOAD_PK'
              (UNIQUE)      
200            INDEX   GOAL: ANALYZED (RANGE SCAN) OF 'PK_EMP' (UNIQUE)      
0     TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
              'EMP_LOAD'      
0       INDEX GOAL: ANALYZED (UNIQUE SCAN) OF 'EMP_LOAD_PK'
              (UNIQUE)   
Now we can compare the results:

                 Without hint    With hint    Improvement (%)
CPU                      0.54         0.08                 85
Elapsed                  0.63         0.08                 87
Access count            20529          605                 97
As we can see, there is a huge performance improvement just waiting to be realized in cases like this.
I am unable to add a PK constraint to a table - what is the best way of finding the problem rows ?
We've all been there - you issue an "alter table ... add primary key" command, and sit and watch for 12 hours whilst it chugs through your "squillion" row table only to come back with "ORA-02437: cannot validate ..." or similar because you have some problem rows. And since it failed - you are no closer to resolving the problem.
The solution is to break the task down into stages (a sketch in SQL follows the list):
·    Create a NON-UNIQUE index on the appropriate table column(s)
·    Add the primary key with the NOVALIDATE clause. This stops any more problem rows getting into the table
·    Alter the primary key with the VALIDATE EXCEPTIONS INTO clause. You will still see the standard 'cannot validate' error, but now the EXCEPTIONS table (cf: $ORACLE_HOME/rdbms/admin/utlexcpt.sql) contains the rowids for the problem rows
·    Remove the problem rows
·    Alter the primary key with the VALIDATE clause
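A minimal SQL sketch of the whole sequence; the table, column, index, and constraint names here are invented, and the EXCEPTIONS table is the one created by utlexcpt.sql:

-- 1: non-unique index to support the key
create index t_pk_ix on t (id);

-- 2: add the key enabled but not validated - new problem rows are now rejected
alter table t add constraint t_pk primary key (id) enable novalidate;

-- 3: build the exceptions table, then attempt validation
@?/rdbms/admin/utlexcpt.sql
alter table t enable validate constraint t_pk exceptions into exceptions;

-- 4: inspect and remove the problem rows (you decide which copies to keep)
select * from t where rowid in (select row_id from exceptions);

-- 5: validate for real
alter table t enable validate constraint t_pk;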
Dropping a tablespace seems to take a very long time - how can I speed it up ?
There are two reasons why dropping a tablespace takes a lot of time.
·    If the tablespace contains a lot of objects, then this is a massive recursive dictionary operation that is being undertaken. (Even if the tablespace is empty, there may still be a lot of free space entries which may need to be cleaned up)
·    All of these dictionary operations need to be recorded in rollback, just in case the operation fails or is terminated in some way.
A way to avoid this is to explicitly drop the segments in the tablespace before dropping the tablespace itself. The overall operation is faster, and if the session/instance crashes, then there will not be a massive undo operation to be performed. Some sample timings are shown below:
Standard 'drop' on a dictionary managed tablespace
(prelim - we stick 1000 segments in "random" order in the tablespace)

SQL> begin
  2  for i in 1 .. 500 loop
  3  execute immediate 'create table tab'||(i*2)||' ( x number ) tablespace sample_data '||
  4  'storage ( initial 16k next 16k )';
  5  end loop;
  6  for i in 1 .. 500 loop
  7  execute immediate 'create table tab'||(i*2-1)||' ( x number ) tablespace sample_data '||
  8  'storage ( initial 16k next 16k )';
  9  end loop;
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> set timing on
SQL> drop tablespace sample_data including contents;

Tablespace dropped.

Elapsed: 63 seconds
Drop objects first then tablespace (dictionary managed)
(prelim as before)

SQL> drop table tab1;

Table dropped.

SQL> drop table tab2;

Table dropped.

(every 10 tables we issue)

SQL> alter tablespace SAMPLE_DATA coalesce;

Tablespace altered.

etc.

(Total) Elapsed: 55 seconds

SQL> drop tablespace sample_data;

Tablespace dropped.

Elapsed: 00:00:00.80

NB: As pointed out by the FAQ owner, if you wish to use PL/SQL to automate this, you will not get any joy using 'alter tablespace ... coalesce' as a recursive SQL (ie within the PL/SQL routine). You would need to code

execute immediate 'alter session set events ''immediate trace name drop_segments level n''';

where 'n' is the tablespace ID plus 1.
If you are using locally managed tablespaces, then you are insulated in some respects, namely, the tablespace (including its contents) can be dropped more quickly. The same test above gave 50 seconds for a plain 'drop tablespace' and 66 seconds for a preliminary drop of the objects. There is probably still a good case for the preliminary drop, to avoid the resource pain of rolling back the tablespace drop in the event of a session crash.
What is the difference between an 'index full scan' and an 'index fast full scan' ?
The fast full scan is an operation that appeared in Oracle 7.3, although in that version of Oracle there was a time when it had to be invoked explicitly by the hint /*+ index_ffs(alias index) */.
Under this operation the index is not treated like an index; it is treated as if it were a narrow table holding just the columns defined as part of the index. Because of this, Oracle is able to read the index segment using multiblock reads (discarding branch blocks as it goes), even using parallel query methods to read the segment in the shortest possible time. Of course, the data coming out of an index fast full scan will not be sorted.
The full scan, however, simply means that Oracle walks the index from end to end, following the leaf blocks in the right order. This can be quite slow, as Oracle can only use single block reads to do this (although in fact 8.1.5ish introduced the _non_contiguous_multiblock_read, and various readahead strategies). However the results do come out in index order without the need for a sort. You might notice that Oracle tends to choose this option when there is an index that happens to suit the order by clause of a query, even if it doesn't suit the where clause of the query - this may also result in a SORT ORDER BY (NO SORT) line appearing in your execution plan.
I have a column with a default value, but I keep finding it set to NULL - what's going wrong ?
A default value is applied only when a row is inserted without an explicit value for the column; an INSERT that explicitly supplies NULL, or a later UPDATE that sets the column to NULL, will leave a NULL in place.
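A quick illustration with a made-up table:

create table t_demo ( id number, status varchar2(10) default 'NEW' );

insert into t_demo (id) values (1);               -- column omitted: default 'NEW' applied
insert into t_demo (id, status) values (2, null); -- explicit NULL: the default is NOT applied
update t_demo set status = null where id = 1;     -- the default is never re-applied on update

select * from t_demo;                             -- both rows now have STATUS = NULL

If you never want the column to be NULL, add a NOT NULL constraint as well as the default.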
I have created a table with the nologging option, but any insert, update, or delete still generates redo log , why?
First, it must be understood that redo generation is at the very heart of Oracle's ability to recover from virtually any media failure.  For that reason, the ability to disable it only exists for a small subset of commands.  They are all related to serial and parallel direct-path loads.
What operations allow NOLOGGING?
DML statements such as insert, update, and delete will always generate redo.  However, the nologging option may be utilized for the following SQL statements (from the Oracle 8i concepts manual, chapter 22):
·    direct load (SQL*Loader)
·    direct load INSERT (using APPEND hint)
·    CREATE TABLE ... AS SELECT
·    CREATE INDEX
·    ALTER TABLE ... MOVE PARTITION
·    ALTER TABLE ... SPLIT PARTITION
·    ALTER INDEX ... SPLIT PARTITION
·    ALTER INDEX ... REBUILD
·    ALTER INDEX ... REBUILD PARTITION
·    INSERT, UPDATE, and DELETE on LOBs in NOCACHE NOLOGGING mode stored out of line
For the statements listed above, undo and redo logging can be almost entirely disabled.  New extents are marked invalid, and data dictionary changes are still logged.  Note that a table or index with the nologging attribute (which can be seen in the LOGGING column of the DBA_TABLES or DBA_INDEXES view) will default to nologging when one of the above statements is executed.
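As a sketch of the direct-path case (the table names are made up), remember that index maintenance and data dictionary changes are still logged, and that a fresh backup is needed before the data is recoverable again:

alter table big_table nologging;

insert /*+ APPEND */ into big_table
select * from staging_table;

commit;

alter table big_table logging;   -- optional: switch normal logging back on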
What is the recoverability of objects created as NOLOGGING?
Since nologging disables writing redo log entries, there is no way for Oracle to recover data related to nologging operations.  In the case of a media failure subsequent to a nologging operation, Oracle will apply the redo log transactions, but, when it reaches the transactions related to the nologging operation, it will only be able to apply the extent invalidation records, since that is all that was recorded.  Any subsequent attempt to access data in those extents will result in an ORA-26040 "Data block was loaded using the NOLOGGING option".  The only resolution to the error is to drop and recreate the object.  Note that this risk only exists until the next successful backup.  Backups taken after the completion of the nologging operation will provide complete recovery.
How much benefit is there in building a table/index that fits into a single extent ?
Does number of extents matter ?
Some DBAs are reluctant to allow more than a few extents in any segment, in the mistaken belief that such "fragmentation" degrades performance. Within reason, the performance impact of multiple extents is almost insignificant if they are sized correctly.
Does extent size matter ?
Yes, extent size does matter, but not greatly. Nevertheless, all extents should be a multiple of the multiblock read count. Otherwise, when a full table or index scan is performed, an extra multiblock read will be required to read the last few blocks of each extent, except probably the last one. This is because multiblock reads never span extent boundaries, even if the extents happen to be contiguous.
Consider for example the table T1. It is comprised of 8 extents of 50 blocks each. The first block is the segment header, there are 389 data blocks in use, and there are 10 free blocks above the high water mark. With a multiblock read count of 16 blocks, and assuming none of the blocks are already in cache, a full table scan of this table will require 4 data block reads per extent, except the last - a total of 31 multiblock reads.
If the table is rebuilt as T2 with an extent size that is an exact multiple of the multiblock read count, then the number of multiblock reads required to scan the table is minimized. Assume the table is now made up of 5 extents of 80 blocks each. A full table scan now requires 5 multiblock reads per extent, or a total of 25 multiblock reads.
Please note that it was not the reduction in the number of extents as such that made the difference. There would be no further saving in rebuilding the table with a single extent of 400 blocks. The number of multiblock reads required to scan the table would still be 25.
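A quick way to check how an existing segment's extents line up with your multiblock read count (the owner and segment name below are placeholders); a non-zero value in the last column means the extent is not an exact multiple:

select
    e.extent_id,
    e.blocks,
    mod(e.blocks, to_number(p.value))  odd_blocks
from
    dba_extents  e,
    v$parameter  p
where
    p.name         = 'db_file_multiblock_read_count'
and e.owner        = 'SCOTT'
and e.segment_name = 'T1'
order by
    e.extent_id;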
The number of redo copy latch misses reported in V$LATCH is a large fraction of the gets. What should I do ?
The quick answer to the question is that you are probably looking at the half of the latch report that lists 'Willing to Wait' latches, which means that you don't have a problem.
It is important to remember that there are two sets of 'gets' and 'misses' recorded by the V$LATCH dynamic performance view. Two columns are simply named gets and misses, the other two are named immediate_gets and immediate_misses. If you look at the complete set of latch statistics for your system, you will notice that there are two high usage latches on 7.3 and 8.0 - Redo Allocation and Redo Copy. Oracle 8.1 introduces a third latch, the Redo Writing latch.
It is worth looking at just the redo latches, and the gets/misses on just those latches, for example with the following SQL:
column name format a15

select
    name,
    gets, misses,
    immediate_gets, immediate_misses
from    v$latch
where    name like '%redo%'
;


NAME                 GETS    MISSES IMMEDIATE_GETS IMMEDIATE_MISSES
--------------- --------- --------- -------------- ----------------
redo allocation   1937729         1              0                0
redo copy              13         3        1814906                6
redo writing       645563         1              0                0
It takes only a moment, when presented with the results in this form, to realise that the Redo Copy latch is almost invariably acquired on an immediate get, and rarely accessed on a 'willing to wait' get. The fact that the ratio of misses to gets (3 to 13) is rather high is totally irrelevant. In fact it is actually the expected behaviour, as Oracle only acquires the redo copy latch in this fashion when it is trying to suspend redo activity temporarily whilst doing some special log file activity.
If you do see a relatively large number of gets (as opposed to immediate gets) on the redo copy latch, it may suggest that your redo log file size is too small, and you have a completely different performance problem to address anyway. In this case, check the alert log for indications of 'checkpoint not complete' and check the system statistics for excessive writes to disc due to checkpointing.
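For example, something like the following (the statistic names are the 8i ones and may differ slightly between versions):

select name, value
from   v$sysstat
where  name in (
           'background checkpoints started',
           'background checkpoints completed',
           'DBWR checkpoint buffers written'
       );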

Further reading: For a detailed description of the activity associated with redo, the best reference site is Steve Adams' site http://www.ixora.com.au
What's the quickest way of deleting all (or a large fraction of) the data in a table ?
The answer depends on the circumstances, but if you need to delete all the rows in a table then the truncate command is the way to go, as it minimizes the writing of redo and undo (rollback) entries. The table and any indexes have their high water mark (stored in the header block) reset, and apart from the dictionary or bitmap updates for extents freed beyond the initial extent, the job is done. If the table is to be reloaded and will need to reuse all or nearly all of the space it held, the reuse option can be used to eliminate even the need to update the dictionary space management tables or the bitmap header of locally managed objects:
    truncate table owner.tablename [drop storage| reuse storage]
The drop storage clause is the default and does not need to be provided.
When not all of the data is to be deleted, your options are limited by whether you can get exclusive access to the table to perform the task or the table has to be kept available at all times, and by how your applications use the table. The next set of approaches all involve the same idea: instead of deleting 90% of the data, extract the 10% that is good into a new table and discard the original.
In the case where the applications in use only need insert access to the target table, and access can be stopped for a very short window, the following approach can be used (a sketch in SQL follows the list):
When no FK references the target and no insert or delete triggers exist:
  1- rename target to target_old
  2- Re-Create the target table
  3- recreate indexes, re-establish PK and UK constraints, and re-apply grants
  4- allow access
  5- extract good data from target_old and insert it into the recreated target
  6- truncate the target_old table
  7- drop the target_old table
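A minimal SQL sketch of steps 1 to 7; the table, column, index, constraint, and grantee names are all invented, and in practice step 2 would normally re-run your original DDL so that defaults and storage clauses are preserved:

-- 1 and 2: swap the populated table out and recreate an empty copy
alter table target rename to target_old;
create table target as select * from target_old where 1 = 0;

-- 3: indexes, constraints and grants
create index target_ix1 on target (some_col);
alter table target add constraint target_pk primary key (id);
grant select, insert on target to app_user;

-- 4: the application can be let back in at this point

-- 5: copy back only the rows worth keeping
insert into target
select * from target_old where keep_flag = 'Y';
commit;

-- 6 and 7: discard the old data cheaply
truncate table target_old;
drop table target_old;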
If there is an insert trigger but no delete trigger or FK referencing the table, then the re-insertion of the data needs to happen before the users are allowed access to the table, since it is not desirable to have the trigger fire for data it has already processed. As long as the time to select and insert this data is satisfactory, a create table as select can be used for the re-create table step. This eliminates a step from the list above, but the ordering of the list above is designed to minimize the time the table needs to be unavailable for use.
If the target table has FK references to other tables, then since all existing rows either meet the requirement or already violate it (because the novalidate option was used in creating the FK constraints), those constraints can be disabled in the cases where the good data is copied prior to allowing access to the table. This can save a fair amount of work, and the constraints should then be re-enabled using the novalidate option to avoid performing unnecessary work. The existence of FKs referencing the target table complicates matters: either the necessary deletes in the referencing tables must be done in a separate process, allowing the FK to be disabled, or the deletes must take place against the target, and the renaming/recreating method cannot be used.
If the table is partitioned then native Oracle parallel DML can be used so that each partition has a delete job dedicated to it. This can greatly reduce elapsed time at the expense of system resources. For a normal table the delete can be logically partitioned, manually duplicating the parallel delete process, by running multiple delete statements, each of which has its where clause limited to a specified range of rowid values. If you take this approach then I recommend that you create the indexes for this table with a number of interested transaction list (ITL) entries equal to the number of delete jobs, for example INITRANS 4. This will help ensure each delete job can obtain an existing ITL entry, so that they do not have to dynamically allocate any, or worse, have to wait because the space to allocate an ITL entry is unavailable. When multiple delete jobs are run, the first job should have an unbounded lower range and the last job an unbounded upper range, to ensure no section of the target table is missed. It is probably advisable to generate the rowids rather than hard-code them. Sample code to find the boundary rowids follows at the end of the FAQ, with a sketch of how the delete jobs would use them. It would be fairly easy to change the dbms_output statements into an insert of a single row into a delete job control table that the delete jobs reference in their where clauses to control the delete ranges.
Still another concept is to partition the table such that all rows to be dropped exist in the same partition. This method requires that the data lend itself to partitioning on the columns that are used to determine that the data can be deleted, and this is often not practical. If the data requires global indexes, or a few rows within the partition key range end up not being valid for deletion, then the benefit of partitioning the table to support deletes is lost. Still, the method should not be overlooked.
set serveroutput on
declare
--             counters
v_ctr          pls_integer := 0 ;
v_ct_rows      pls_integer := 0 ;
v_rowid        rowid            ;
v_rows_per_job pls_integer := 0 ;
--
cursor  c_get_rowids is
 select rowid
 from   po_blanket_hist ;
--
--             pl/sql table to hold boundary rowids
--
type t_rowids is table of varchar2(18)
  index by binary_integer     ;
t_rows         t_rowids       ;
I              binary_integer ;
--
--             get total row count
--
begin
select count(*)
into   v_ct_rows
from   po_blanket_hist ;
--
--             In this example we are going to use 4 jobs
--
v_rows_per_job := floor(v_ct_rows / 4) ;
--
--             Read the data; every v_rows_per_job rows, save the boundary rowid
--
open c_get_rowids ;
I := 1            ;
loop
  fetch c_get_rowids into v_rowid ;
  exit when c_get_rowids%notfound ; -- quit when done
  v_ctr := v_ctr + 1              ; -- count rows returned
  if v_ctr = v_rows_per_job
    then t_rows(I) := v_rowid     ; -- mod takes longer than using a counter
         v_ctr := 0               ;
         I := I + 1               ;
  end if                          ;
end loop                          ;
--
close c_get_rowids ;
--
--          This is where we could save the rowids to a job control row
--
dbms_output.put_line('Row Count is    '||v_ct_rows)      ;
dbms_output.put_line('Rows per job    '||v_rows_per_job) ;
dbms_output.put_line('First  Rowid is '||t_rows(1))      ;
dbms_output.put_line('Second Rowid is '||t_rows(2))      ;
dbms_output.put_line('Third  Rowid is '||t_rows(3))      ;
dbms_output.put_line('Fourth Rowid is '||t_rows(4))      ;
end ;
/

Row Count is    32695051
Rows per job    8173762
First  Rowid is AAAAb+AFQAAALHWAAO
Second Rowid is AAAAb+AFgAAAMiNAA5
Third  Rowid is AAAAb+AGQAAADZnAAL
Fourth Rowid is AAAAb+AGQAAAKCMABB

PL/SQL procedure successfully completed.
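For completeness, this is roughly how the delete jobs might use those boundary rowids; :boundary_1 to :boundary_3 stand for the saved values (or the rows in a job control table), and each statement would also carry whatever predicate identifies the rows that really are to be deleted:

-- job 1: unbounded lower range, up to the first boundary
delete from po_blanket_hist
where  rowid <= chartorowid(:boundary_1);

-- job 2: between the first and second boundaries
delete from po_blanket_hist
where  rowid >  chartorowid(:boundary_1)
and    rowid <= chartorowid(:boundary_2);

-- ... and the last job has an unbounded upper range
delete from po_blanket_hist
where  rowid >  chartorowid(:boundary_3);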


Further reading: If you have complex delete logic then the section on views for performance in Jonathan Lewis's book Practical Oracle 8i might be of interest. You can also see Metalink document ids 19890.996 and 89799.996, which are forum threads. Oracle support did not offer anything not mentioned in this FAQ.
I have done a truncate on a table, and it takes hours to run - what is going on ?
Whilst the truncate command is normally instantaneous, a quick consideration of what it is doing reveals why it can sometimes take a long time. The 'truncate' command must do two things in order for a table to be reduced to "zero" size (i.e. a single extent with a reset high water mark).
·    Free up any used extents
·    Reset the HWM on the remaining single extent
We can presume that moving or resetting the high water mark is a relatively painless operation, given that the database is always doing this as tables grow. However, if your table is in hundreds/thousands of extents, then freeing them up can take some time. In particular, when using dictionary managed tablespaces, the two system tables FET$ and UET$ need to be updated. For example, if we perform
SQL> select extents from user_segments where segment_name = 'BLAH'
  2  /

   EXTENTS
----------
        15

SQL> alter session set sql_trace = true;

Session altered.

SQL> truncate table blah;

Table truncated.
and then look at the trace file, we'll see
select length
from
 fet$ where file#=:1 and block#=:2 and ts#=:3

insert into fet$ (file#,block#,ts#,length)
values
 (:1,:2,:3,:4)

delete from uet$
where
 ts#=:1 and segfile#=:2 and segblock#=:3 and ext#=:4

delete from fet$
where
 file#=:1 and block#=:2 and ts#=:3
and these operations are SERIAL, namely, only one session can be performing this at a time. So if you have anything on your database that could be "attacking" FET$ and UET$ (for example, sorting in a permanent tablespace, dropping/adding objects frequently), then you will get these kinds of problems occurring.

Further reading: Some versions of 8.0 would crash when a truncate is terminated with Ctrl-C, so take care
How do you write a query that ignores the effects of upper and lower case ?
Some databases have an "ignore case" flag that can be set for the entire database. Oracle does not, and thus case-insensitive queries have long caused problems, not with coding them, but with their performance (since indexes are typically not used to determine the result).
It's relatively straightforward to create a case-insensitive query:
SQL> select *
  2  from EMP
  3  where upper(ENAME) = upper(:b1)
but of course (by default) the "UPPER(ENAME)" cannot take advantage of an index that may have been defined on the ENAME column.
Enter 8i, where the concept of a function-based index is now possible. Before you rush off and try to create them, take note of the following:
·    You must have the system privilege query rewrite to create function based indexes on tables in your own schema.
·    You must have the system privilege global query rewrite to create function based indexes on tables in other schemas
·    For the optimizer to use function based indexes, the following session or system variables must be set:
QUERY_REWRITE_ENABLED=TRUE
QUERY_REWRITE_INTEGRITY=TRUSTED
·    You must be using the Cost Based Optimiser (which means analyzing your tables/indexes)
and then it's just a case of creating the index in the conventional way:
create index UPPER_ENAME_IX on EMP ( UPPER(ENAME) );
Note that this index will NOT be used for case-SENSITIVE queries. You could always have two indexes, one on ENAME and one on UPPER(ENAME), but it would probably be more efficient to just have the function-based one and code:
SQL> select *
  2  from EMP
  3  where upper(ENAME) = upper(:b1)
  4  and ENAME = :b1
for the times where you do not want case-insensitivity.

Further reading: Querying the dictionary for function-based indexes
How much space does a number take up ?
Oracle uses an internal 'base 100 encoding' format for storing numbers: two digits of precision are stored per byte, an extra leading byte holds both the sign and the 'exponent' indicating where to put the decimal point in relation to the digits of precision, and negative numbers have a terminating byte holding the value 0x66.
Consequently the actual space used by a number depends on the number of significant digits that appear in the number and the sign of the number, so the numbers 1, 100, 10,000 and 1,000,000 all take two bytes, and -1, -100, -10,000, -1,000,000 will take 3 bytes, whereas 1,234,567 will require 5 bytes, and -1,234,567 will require 6 bytes.

Value         Representation               Bytes Stored
1             1 x power(100,0)             c1, 2
100           1 x power(100,1)             c2, 2
10,000        1 x power(100,2)             c3, 2
1,000,000     1 x power(100,3)             c4, 2
-1            -1 x power(100,0)            3e, 64, 66
-100          -1 x power(100,1)            3d, 64, 66
-10,000       -1 x power(100,2)            3c, 64, 66
-1,000,000    -1 x power(100,3)            3b, 64, 66
1,234,567     1.234567 x power(100,3)      c4, 2, 18, 2e, 44
-1,234,567    -1.234567 x power(100,3)     3b, 64, 4e, 38, 22, 66
Although there are special cases, as indicated by the values for 1, 100, 10,000, and 1,000,000 above, the rules tell us that the typical N-digit positive number will require 1 + ceil(N/2) bytes, and a negative number will require 2 + ceil(N/2) bytes, where ceil() rounds up to the next integer where necessary.
For example, most of the (six digit) numbers from 100,000 to 199,999 will encode to 4 bytes i.e. 1 + (6/2), whereas most of the (nine digit) numbers from 100,000,000 to 100,099,999 will encode to 6 bytes i.e. 1 + ceil(9/2) = 1 + 5. You can run the following SQL (as SYS because of the reference to the X$ object) to verify the second result:
select len, count(*)
from
    (
    select
        vsize(rownum + 99999999) len
    from x$ksmmem
    where rownum <= 100000
    )
group by len
;
Giving results:
      LEN  COUNT(*)
--------- ---------
        2         1
        4         9
        5       990
        6     99000
Don't forget that when stored in a row, each numeric column will also require the standard one-byte column-length overhead.
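You can also check individual values for yourself without connecting as SYS, using vsize() and dump() against DUAL; the byte counts and the hex dump should agree with the table above:

select
    vsize(1)           v_1,
    vsize(-1)          v_minus_1,
    vsize(1234567)     v_big,
    vsize(-1234567)    v_minus_big,
    dump(1234567, 16)  encoding
from dual;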
Is there a good way of counting the number of rows in a table ?
You can use the count(*) function to get row counts from Oracle, and with Oracle rdbms version 8.1.7+ it would appear that the cost based optimizer (CBO) optimizes unqualified select count statements for you where a primary key constraint exists. The example table is allocated at 200M and the PK index at 20M. For reference, the ANSI aggregate functions min, max, count, sum, and avg ignore null values in their input sets and have the form:
    function( ALL | DISTINCT column_name | * )
Examples - Starting with a full table count on a table with a primary key, PK, constraint defined.
DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select count(*)
  3  from   item_master
  4  /

Explained.

DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- -----------------------------------------------------
       123           1  SELECT STATEMENT
                     1   2.1 SORT AGGREGATE
       123      379426     3.1 INDEX FAST FULL SCAN ITEM_MASTER_PRIME UNIQUE


3 rows selected.
The CBO was able to utilize a fast full scan of the PK index to perform the count rather than read the full table sequentially, which is what the RULE based optimizer did for the same query. Also notice the cardinality column, where Oracle tells you how many rows it thinks it will process in this step; the actual number of rows returned will probably be different, so running explain plans is not a good substitute for running select counts:
DDC2> select count(*) from item_master;

  COUNT(*)
----------
    387835

1 row selected.
If you are running rule based, a full table scan is employed to solve the unqualified full count, but from version 7.3 on a hint provided in the SQL overrides the database and session optimizer settings, even if no statistics exist on the table, so you can use an index hint on the PK to force use of the index. Under version 7 the CBO would also full scan the table to solve this query, but again you can hint the SQL - as long as statistics existed on the table for versions 7.0 - 7.2, and with or without statistics in version 7.3.
Select /*+ INDEX(t t_pk) */ count(*)
From   table_name t

   where t is the alias for the table in the from clause and t_pk is the name of the primary key or unique index
The index fast full scan option was not available until version 8, and the hint is INDEX_FFS. (Addendum from Zach Friese zfriese@earthlink.net: in fact the fast full scan was introduced in Oracle 7.3 - see page 5-30, Oracle7 Server Tuning.)
In the case where a where clause needs to be provided to obtain the row count for a specific condition, Oracle's ability to optimize the query depends on the where clause columns being indexed, just as with any normal query. The PK for this table consists of item_no and plant; plant is the second column in the key. Remember that in version 7 Oracle can only use a concatenated index when the leading column of the index is referenced in the where clause, but observe:
DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select count(*)
  3  from   item_master
  4  where  plant = '12'
  5  /

Explained.

DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- -----------------------------------------------------
       123           1  SELECT STATEMENT
                     1   2.1 SORT AGGREGATE
       123       63238     3.1 INDEX FAST FULL SCAN ITEM_MASTER_PRIME UNIQUE


3 rows selected.
The CBO is still able to scan the PK to count the number of plant 12 entries. Under RULE a full table scan is once again used:
DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select /*+ RULE */ count(*)
  3  from   item_master
  4  where  plant = '12'
  5  /

Explained.

Elapsed: 00:00:00.04
DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- --------------------------------------------------
                        SELECT STATEMENT
                         2.1 SORT AGGREGATE
                           3.1 TABLE ACCESS FULL ITEM_MASTER
The CBO and RULE based optimizers can both use indexes to solve count requests on leading indexed columns, even those that are not part of a PK or UK constraint. In the example, product_code is the leading column of item_master_idx6 and may be null.
DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select count(*)
  3  from   item_master
  4  where  product_code = '12'
  5  /

Explained.

DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- ----------------------------------------------------
         3           1  SELECT STATEMENT
                     1   2.1 SORT AGGREGATE
         3         234     3.1 INDEX RANGE SCAN ITEM_MASTER_IDX6 NON-UNIQUE


3 rows selected.

DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select /*+ RULE */ count(*)
  3  from   item_master
  4  where  product_code = :1
  5  /

Explained.
[Note the RULE hint, so the cost and cardinality columns of the explain plan table are now null, but the plan is the same because under RULE, if the index exists, we use it. Also note that in this case using bind variables instead of constants does not prevent the optimizer from using the index under either optimizer approach.]
DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- ----------------------------------------------------
                        SELECT STATEMENT
                         2.1 SORT AGGREGATE
                           3.1 INDEX RANGE SCAN ITEM_MASTER_IDX6 NON-UNIQUE


3 rows selected.
If you need a count based on a where condition that references a non-indexed column, even one constrained to be not null, then the optimizers will have no option but to full scan the table:
DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select count(*)
  3  from   item_master
  4  where  family_cd    is null
  5  /

Explained.

DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- --------------------------------------------------
      1296           1  SELECT STATEMENT
                     1   2.1 SORT AGGREGATE
      1296           1     3.1 TABLE ACCESS FULL ITEM_MASTER

3 rows selected.
The column family_cd is constrained to be not null but does not appear in any index. Logically if the column is constrained to be not null then it has a value for every row in the table so the result of a count of nulls is zero, but the optimizer is still going to count.
In the case where you want the not-null count for a null-allowed column, you can make the following change:
From:
DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select count(*)
  3  from   item_master
  4  where  family_cd is not null
  5  /

Explained.

DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- --------------------------------------------------
      1296           1  SELECT STATEMENT
                     1   2.1 SORT AGGREGATE
      1296      379426     3.1 TABLE ACCESS FULL ITEM_MASTER


To:
DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select count(family_cd)
  3  from   item_master
  4  /

Explained.

DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- -----------------------------------------------------
       123           1  SELECT STATEMENT
                     1   2.1 SORT AGGREGATE
       123      379426     3.1 INDEX FAST FULL SCAN ITEM_MASTER_PRIME UNIQUE
By removing the where clause condition test and counting the specific column, the CBO can use the PK index; the RULE based optimizer solved the above using a full table scan.
The above covers the case where the column is not null, but what about when it is, or can be, null? I am pretty sure that at one time, if you ran a query of the form select count(*) from table where [nullable] column is null (or is not null), the optimizer would perform a full table scan even if the column was indexed. But with version 8.1.7+, if the where clause column is the leading column of an index in which at least one column is constrained to be not null, then the CBO can use that index to satisfy the query:
DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select count(*)
  3  from   item_master
  4  where  product_code is not null
  5  /

Explained.

DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- --------------------------------------------------------
       132           1  SELECT STATEMENT
                     1   2.1 SORT AGGREGATE
       132        2799     3.1 INDEX FAST FULL SCAN ITEM_MASTER_IDX6 NON-UNIQUE


3 rows selected.

DDC2> EXPLAIN PLAN  SET statement_id = 'mpowel01' FOR
  2  select count(*)
  3  from   item_master
  4  where  product_code is null
  5  /

Explained.

DDC2> set echo off

      COST CARDINALITY QUERY_PLAN
---------- ----------- --------------------------------------------------------
       132           1  SELECT STATEMENT
                     1   2.1 SORT AGGREGATE
       132      376627     3.1 INDEX FAST FULL SCAN ITEM_MASTER_IDX6 NON-UNIQUE


3 rows selected.

DDC2> select count(*) from item_master where product_code is null;

  COUNT(*)
----------
    385100

1 row selected.

DDC2> select count(*) from item_master where product_code is not null;

  COUNT(*)
----------
      2744

1 row selected.

DDC2> select count(*) from item_master;

  COUNT(*)
----------
    387844

1 row selected. 
[This count differs from the one at the top of the page because this is production and activity is taking place, but the null and non-null product_code counts add up to the current total rows, so Oracle was able to use the index.]
If all the columns of a normal index are capable of being null, then expect the optimizer to full scan the table. However, bitmap indexes store nulls, so the CBO can use bitmap indexes to solve null and non-null query requests, including count(*). The RULE based optimizer does not recognize bitmap indexes.
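As a sketch (the table, column, and index names are invented, and bitmap indexes are only sensible on low-cardinality columns):

create bitmap index t_status_bmx on t (status);

-- the CBO can answer both of these from the bitmap index,
-- because bitmap entries are created for NULL keys as well
select count(*) from t where status is null;
select count(*) from t where status is not null;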

Further reading: Is there a simple way to produce a report of all tables in the database with current number of rows ?

Why do some queries go very fast and others very slowly when the optimizer_mode is choose ?           
Are composite indexes more useful than single column indexes ?           
Is it possible to allow Microsoft Query to receive the output from the dbms_output package ? If not, what are the alternatives ?
Is there any way to identify which init.ora parameters are relevant to tuning the database ?           
How do I decide on a good value for pga_aggregate_target in 9.2 ?           
I have a query that I run on two identical databases - on one it is very quick on the other it is very slow - what's up ?           
Is there any way of identifying how parameters in the Oracle INIT.ORA control file can be used for tuning the database including: (1) Memory usage (2) CPU usage (3) Disk usage           
What is the fastest way of dumping a database table to a flat file - is utl_file a good idea ?           
How do I get global histograms onto partitioned tables in Oracle 8.1 - apparently they can only be generated properly in v9.           
In documentation about tuning SQL, I see references to parse trees. What is a parse tree ?           
Are there any benefits or drawbacks to regular reboot of a machine running an Oracle database ?           
Is there any way to keep track of when datafiles are resized ?
Should I worry about having some relationship between extent sizes and db_file_multiblock_read_count ?           
How can I apply an Oracle database (application) change to many equal instances located across the country on a single date ?
We're looking for the most efficient way to apply our database releases to dozens of Oracle instances on the same day.
Are there any good strategies for backing up a data warehouse ?           
Why is the Comment_Text column on my Sys.Dba_Audit_Trail table always empty ?
I am getting error while restarting the database (9.0.1) after changing the parameter db_block_buffers in SPFILE.ora file. The error was "could not start the database with old and new parameter". What is wrong ?           
I have audit_trail=db - but want to capture the program name from v$session as well. How can I do this ?          
Why do I keep getting Oracle error - ORA-1002: FETCH OUT OF SEQUENCE in my batch code ?           
I lock my table before truncating it, but I still get ORA-00054: resource busy ... Why ?           
How do I determine the degree of fragmentation of a table ?           
How do I determine how much space I could reclaim by deleting a chunk of data from a table (and cleaning up afterwards)           
How do I decide on a sensible size for my redo log buffer           
I created a user with the wrong default tablespace - How can I move his entire schema from one tablespace to another.           
I have created a trigger on a table, and this seems to have wrecked the performance of some bulk inserts - why ?           
v$sysstat shows the number of sorts that have occurred. How do I find the number of hash joins that have occurred ?           
I have a table partitioned on column timestamp, with a primary key on column id. How do I create a locally partitioned index for this PK ?           
How do I calculate suitable values for PCTFREE and PCTUSED for a table.           
What is the best way to move a 500GB from 32 bit 8.1.7.4 to 64 bit 9.2 ? exp/imp, mig, SQLLDR, or CTAS ?           
Is it only committed transactions that are written into the redo log ?            
I wish to update certain portions inside a long column in 8.1.7. If this were a varchar2 field I would use the replace function. How do I achieve the same thing with a long ?


PL/SQL

Is there a way to read the names of a set of files with a given extension from a directory as if from a SQL cursor?
In 10g there is a new procedure hidden away in the undocumented  DBMS_BACKUP_RESTORE package. It's called searchfiles, which is a bit
of a giveaway and appears to have been introduced for the new backup features in 10g, as RMAN  now needs to know about files in the recovery destination.

Calling this procedure populates an in-memory table called x$krbmsft, which is one of those magic x$ tables; the only column of relevance to us is fname_krbmsft, which is the fully qualified path and file name. This x$ table acts in a similar fashion to a global temporary table in that its contents can only be seen from the calling session. So two sessions can call searchfiles and each can only see the results of its own call (which is extremely useful).

The code sample below will only really run as sys, due to the select from x$krbmsft; it's just intended as a demo. The first two parameters in the call to
searchfiles are IN OUT, so they must be defined as variables, even though the second parameter is of no consequence to us and should be left as NULL.
Even though they are IN OUT, testing shows they don't appear to change. The first parameter is the string to search for, in much the same format as
you would pass in a call to dir (Windows) or ls (Unix).

        DECLARE
               pattern   VARCHAR2(1024) := 'C:\temp\*.csv'; -- /tmp/*.csv
               ns        VARCHAR2(1024);

        BEGIN

               SYS.DBMS_OUTPUT.ENABLE(1000000);

               SYS.DBMS_BACKUP_RESTORE.searchFiles(pattern, ns);

               FOR each_file IN (SELECT FNAME_KRBMSFT AS name FROM X$KRBMSFT) LOOP
                   DBMS_OUTPUT.PUT_LINE(each_file.name);
               END LOOP;

        END;
        /

This procedure appears to raise no exceptions; passing an invalid search string, such as a non-existent path or one with no permissions, simply results in
an empty x$krbmsft. However, if the database parameter db_recovery_file_dest is not set, you will get ORA-19801.

Interestingly, this procedure recursively searches sub directories found in the search string.  So passing a string of 'C:\windows' (for example) populates
x$krbmsft with not only the files found in that directory but all the files found in all directories beneath, such as C:\windows\system32. 
As x$krbmsft is an in memory table, you have been warned! Calling this procedure on a directory with thousands of sub directories and files has the potential
to consume large amounts of memory (or more likely just generate an exception).


Further reading:       http://www.ixora.com.au/scripts/prereq.htm  (shows how to create views on x$ fixed tables)
Further work:          http://www.chrispoole.co.uk/
The way forward is to wrap this functionality in a package, perhaps using directory objects instead and checking access privileges, creating a view based on
x$krbmsft, maybe even allowing/disallowing subdirectory traversal and limiting the memory used by the search. I see another project coming, stay tuned for XUTL_FINDFILES!

How do I send an email from PL/SQL, possibly with attachments?
It is often desirable for a long running PL/SQL program to be able to send an email to a designated recipient, possibly including a file of results. Although this can be done by wrapping the procedure in a shell-script or BAT file, it is more elegant to send the email direct from the PL/SQL procedure. Oracle versions 8.1.6 and later include the UTL_SMTP package for doing this, but you need to know a bit about SMTP to use it, and it doesn't lend itself to easily sending attachments. Is there a simple way to send an email from PL/SQL, with attachments, without having to understand how SMTP works?
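For reference, a minimal plain-text sketch using UTL_SMTP is shown below; the server name, port, and addresses are placeholders you would change, and sending attachments requires building a MIME-formatted message body, which is beyond this sketch:

declare
    c    utl_smtp.connection;
    crlf varchar2(2) := chr(13) || chr(10);
begin
    c := utl_smtp.open_connection('smtp.example.com', 25);   -- your mail relay
    utl_smtp.helo(c, 'example.com');
    utl_smtp.mail(c, 'oracle@example.com');
    utl_smtp.rcpt(c, 'dba@example.com');
    utl_smtp.open_data(c);
    utl_smtp.write_data(c, 'Subject: Batch job finished' || crlf || crlf);
    utl_smtp.write_data(c, 'The overnight load completed successfully.' || crlf);
    utl_smtp.close_data(c);
    utl_smtp.quit(c);
exception
    when others then
        begin
            utl_smtp.quit(c);   -- tidy up the connection if anything failed
        exception
            when others then null;
        end;
        raise;
end;
/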
How do I delete an O/S file from within PL/SQL ?
One 'near-solution' is to use the utl_file package to re-open the file for writing (without the append option), and then close the file without writing to it. This recovers most of the disk space, but still leaves the file on the system as an empty O/S file.
Another approach is to write a short piece of Java, which can then be called from PL/SQL. Java currently offers far more flexibility than PL/SQL when dealing with O/S files, for example you could use Java to invoke and load a directory listing from PL/SQL so that you know what files exist for deletion. (See further reading).
A simple, pure PL/SQL solution, however, appears to exist in the dbms_backup_restore package. This is virtually undocumented (other than in the script dbmsbkrs.sql) in 8.1.7, but contains a simple deletefile() procedure.
begin
    dbms_backup_restore.deletefile('/tmp/temp.txt');
end;
/
This appears to work as required with no side-effects.
Update for 9.2
In version 9.2, the utl_file package has been enhanced with the fremove() procedure for deleting a file.
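A minimal sketch, assuming a directory object (or utl_file_dir entry) called TMP_DIR; unlike deletefile(), fremove() raises an error if the file cannot be removed:

begin
    utl_file.fremove('TMP_DIR', 'temp.txt');
end;
/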

Updated 24th Jan 2005
I have received an email from S Kumar pointing out that the call to dbms_backup_restore.deletefile() always gives the message “PL/SQL procedure successfully completed”, even if the path or file is not present, the file or path name is invalid, or the file is open. So we cannot depend on this package's output.

Further reading: Expert One-on-One: Oracle (Tom Kyte) for examples of using Java from PL/SQL packages, in particular a routine to call Java to perform a directory listing.
Can I check the old password before changing to a new password ?
The Quick Answer
The quick answer is to use SQL*Plus as the following examples show :
SQL> connect norman/********
Connected.

SQL> password
Changing password for NORMAN
Old password: ****
New password: ******
Retype new password: ******
ERROR:
ORA-28008: invalid old password
Password unchanged
The example above shows that using an incorrect old password has caused the change to norman's password to fail.
The next example shows the password being changed correctly when the correct old password is supplied :
SQL> password
Changing password for NORMAN
Old password: ********
New password: ******
Retype new password: ******
Password changed
So far, so good - except that if you run the password command as a DBA user, you don't get prompted for the old password at all, as follows :
SQL> connect system/************
Connected.

SQL> password norman
Changing password for norman
New password: ******
Retype new password: ******
Password changed

The Long-Winded Answer
Getting back to the original question, which was "In a change password procedure can I check that the old password has been entered correctly before changing to a new password ", one solution is the following procedure.
This needs to be created in the SYSTEM user (or similar), and the user needs to be granted select on sys.dba_users and the alter user privilege directly (not through a role), otherwise it doesn't work.
Here's the code :
create or replace procedure change_password(iUsername    in dba_users.username%type,
                                            iOldPassword in dba_users.password%type,
                                            iNewPassword in dba_users.password%type)
as
  -- Variables.
  vOldEncryptedPassword dba_users.password%type := null;
  vOldConfirmPassword   dba_users.password%type := null;
 
  -- Constants.
  cUsernameMissing      constant varchar2(100) := 'Username must be supplied.';
  cInvalidUsername      constant varchar2(100) := 'Username does not exist in this database.';
 
  cOldPasswordMissing   constant varchar2(100) := 'Old password must be supplied.';
  cOldPasswordMismatch  constant varchar2(100) := 'Old password mismatch - password not changed.';
  cNewPasswordMissing   constant varchar2(100) := 'New password must be supplied.';
 
  cPasswordChangeFailed constant varchar2(100) := 'ERROR: failed to change password.';
 
  -- Internal helper procedures and functions. 
  function GetPassword(iUsername in dba_users.username%type)
  return varchar2
  as
    vResult dba_users.password%type := null;
  begin
    select password
    into   vResult
    from   dba_users
    where  username = upper(iUsername);
   
   return vResult;
  exception
    when others then
      return null;
  end;
 
  procedure AlterPassword(iUsername in dba_users.username%type,
                          iPassword in dba_users.password%type,
                          iValues   in boolean default false)
  as
    vSQL varchar2(1000) := 'alter user ' || iUsername ||
                          ' identified by ';
  begin
    if (iValues) then
      vSQL := vSQL || 'values ''' || iPassword || '''';
    else
      vSQL := vSQL || iPassword;
    end if;  

    execute immediate vSQL;
               
  exception
    -- All that can go wrong is a malformed SQL statement
    -- Famous last words .....
    when others then
      raise;
  end;
  
  
 
begin
  ---------------------------------------------------------------------------------------
  -- Parameter validation first is always a good idea :o)                              --
  ---------------------------------------------------------------------------------------
 
  -- Username must be supplied and be present in the database.
  -- We save the password for later on as we will need it.
  if (iUsername is null) then
    raise_application_error(-20001, cUsernameMissing);
    return;
  else
    vOldEncryptedPassword := GetPassword(iUsername);
 
    if (vOldEncryptedPassword is null) then
      raise_application_error(-20001, cInvalidUsername);
      return;
   end if; 
  end if;
 
  -- Old password must be supplied. We will be using this later to
  -- validate the changing of the password.
  if (iOldPassword is null) then
    raise_application_error(-20001, cOldPasswordMissing);
    return;
  end if;
 
  -- New password must also be supplied - for obvious reasons.
  if (iNewPassword is null) then
    raise_application_error(-20001, cNewPasswordMissing);
    return;
  end if;
 
  ---------------------------------------------------------------------------------------
  -- We should now be in possession of a set of valid parameters, let's have some fun ! --
  ---------------------------------------------------------------------------------------
  
  -- We obtained the current encrypted password above, so now we can change the existing 
  -- password to the old one, and check to see if it is the same. If so, we are able to
  -- continue by changing to the new password. If not, we reset the old password to the
  -- encrypted one, and bale out.
 
 
  -- First of all, change the password to its current value - strange, but needed.  
  begin
    AlterPassword(iUsername, iOldPassword);
  exception
    when others then
      -- Cannot change the user's password to its existing value - bale out.
      -- So far, we have changed nothing so no corrective actions required.
      raise_application_error(-20001, cPasswordChangeFailed);
  end;
 
  -- Next, fetch the 'new' old password and compare it with the 'old' old password.
  vOldConfirmPassword := GetPassword(iUsername);
 
  if (vOldEncryptedPassword <> vOldConfirmPassword) then
    begin
      -- Old password doesn't match, reset to the old one and bale out.
      AlterPassword(iUsername, vOldEncryptedPassword, true);
      raise_application_error(-20001, cOldPasswordMismatch);
      return;
    exception
      when others then
        raise;
    end;     
  end if;
   
  -- Old password matches, change password to the new one.
  begin
    AlterPassword(iUsername, iNewPassword);
  exception
    when others then
      raise;
  end;
end;
/                                    
Examples of use
The following examples are running in user SYSTEM and are attempting to change the password for user FRED. Fred's current password is also fred. Just to prove it all :
SQL> connect fred/fred
connected

SQL>connect system/************
connected
First an attempt to use the wrong old password :
SQL> exec change_password('fred','trouble','wilma')
BEGIN change_password('fred','trouble','wilma'); END;
*
ERROR at line 1:
ORA-20001: Old password mismatch - password not changed.
ORA-06512: at "SYSTEM.CHANGE_PASSWORD", line 127
ORA-06512: at line 1
Next, not supplying a valid username :
SQL> exec change_password('nobody', 'something', 'new');
BEGIN change_password('nobody', 'something', 'new'); END;
*
ERROR at line 1:
ORA-20001: Username does not exist in this database.
ORA-06512: at "SYSTEM.CHANGE_PASSWORD", line 78
ORA-06512: at line 1
Finally, a working example
SQL> exec change_password('fred','fred','barney');
PL/SQL procedure successfully completed.

SQL>connect fred/barney
connected
I'm sure you can amend the above code to suit your system. Note that it uses execute immediate internally. If you are using an older version of Oracle, you will need to change this to use the dbms_sql package instead.

