Tuesday, October 6, 2009

10 Things I like about Oracle 10G

Let's consider this blog to be a review of sorts. Oracle 10G has been Oracle's flagship release for some time now. I thought it might be worthwhile to focus on some of the benefits that 10G has to offer. They aren't in order of importance. It's just a general list that may help readers remember that there are some pretty cool features in Oracle 10G. Hopefully you will find it to be both informative and an interesting read.
Ease of Installation
I remember the good old days. When I began my career, the Oracle installers were just screen painters that forced you to use the TAB key to navigate back and forth between selections and screens.
Installations used to take hours and experiencing problem after problem was commonplace. Successfully installing Oracle became a rite of passage as a DBA. Four hours after you started the installation (and fighting through errors, warnings and placing a half dozen CDs in the reader), you would finish up and think to yourself "I am an installation GENIUS!". Your next thought would be "I hope I don't have to go through this again for a while."
With each new release, the installer became easier to use, installation speeds increased and the number of installation problems decreased. In a few years, I'll be telling new DBAs "You don't know how good you have it. I remember when installations used to take three days... We were real DBAs back then. We even had to log in at NIGHT TIME and take the data OFFLINE to do work." Darn kids don't know how good they have it nowadays….
We do DOZENS of Oracle installations a month here at Remote DBA Experts. One of the benefits of working here is the sheer thrill of working on virtually every operating system that Oracle makes a product on. Name the O/S and hardware combination and we can truthfully say "been there and installed an Oracle product on it."
When our first set of 10G CDs arrived a couple of years ago, the DBA opened up the package and said "Hey, where's the rest of them? I think they forgot to send me all of the CDs I need to do the install." We checked MetaLink and found that Oracle10G only needs a couple of CDs. One main CD and another called "The Companion CD" which really should be called "The Companion CD that is required for all installs" since it pretty much has to be installed when you install Oracle.
The team here really likes the fact that the installer now performs pre-installation requirement checks before laying down the software. The installer itself now determines if the O/S environment is configured to correctly run the Oracle software. Having the installer verify that you have performed "your pre-installation" duties is very beneficial.
10G Data Pump
I'm a big fan of 10G Data Pump. Once you become accustomed to the new environment, it's a pretty good utility. There is a learning curve - like the first time you try to restart a Data Pump job by simply rerunning the script. You'll quickly find out that Data Pump has a command for that. At least it provides you with the "Master table exists" error message to point you to your mistake.
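For anyone who hasn't hit that error yet, here's a rough sketch of what the restart looks like. The job name below (SYS_EXPORT_SCHEMA_01) is just the default name Data Pump assigns to a schema export; check DBA_DATAPUMP_JOBS for the name your stopped job was actually given:

SELECT owner_name, job_name, state FROM dba_datapump_jobs;

expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01
Export> CONTINUE_CLIENT

Once you attach, the interactive prompt also supports STOP_JOB, START_JOB and KILL_JOB if you need something other than a simple resume.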
Here's a quick list of some of the features I like:
     The utility IS faster than Export and Import. Maybe not as much faster as Oracle says it is - but it is faster. I have personally seen Exports and Imports run for hours - and days.
     I like being able to use parallelism to improve performance.
     You can stop and restart jobs. How great is that? The master table knows where you left off so you don't have to spend all of that time trying to code a new Import statement that won't duplicate rows in tables that were successfully loaded. This is really helpful.
     How many times did you have to "pick a number, any number" when you had to determine how much space you would need for your Export Dump file? Data Pump provides an option that estimates the output dump file size that would be created by a Data Pump Export.
     One of my favorite features is the network transfer option that allows administrators to attach to any source database that can be accessed from the network, transfer the data and write it to the target database. No dumpfile required! Administrators use the NETWORK_LINK parameter to specify a pre-created database link that points to the source database.
     You can use the Data Pump Flashback feature to go back in time and export data as it looked in the past. This helps when you get a visit from one of your developers stating "I think I might have updated 47 production tables by mistake, but I won't know until I see the data." Use the flashback feature during the Data Pump Export and use the Data Pump Import utility to pump the data into a new schema for the developer to review. (A quick sketch combining a few of these options appears right after this list.)
     You can use the EXCLUDE parameter to exclude specific objects using custom syntax. For example, EXCLUDE=INDEX:"LIKE 'DEPT%'" would exclude indexes that start with the name "DEPT".
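Here's what a few of those options look like strung together. Everything here is illustrative - the schema, the dp_dir directory object and the timestamp - and the FLASHBACK_TIME and EXCLUDE parameters are easiest to handle in a parameter file so you don't have to fight the shell's quoting rules:

Estimate the dump file size without actually writing one:
expdp system/password SCHEMAS=hr DIRECTORY=dp_dir ESTIMATE_ONLY=Y

Pull a schema straight from a remote source over a pre-created database link (no dump file at all):
impdp system/password NETWORK_LINK=prod_link SCHEMAS=hr REMAP_SCHEMA=hr:hr_review

Export the data as it looked at a point in the past, skipping the DEPT% indexes (contents of hr_flashback.par):
SCHEMAS=hr
DIRECTORY=dp_dir
DUMPFILE=hr_flashback.dmp
FLASHBACK_TIME="TO_TIMESTAMP('2009-10-05 09:00:00','YYYY-MM-DD HH24:MI:SS')"
EXCLUDE=INDEX:"LIKE 'DEPT%'"

expdp system/password PARFILE=hr_flashback.par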
The more I thought about the Data Pump features I liked, the longer I thought this blog would be. So here are links to three blogs I wrote on Data Pump.
     Data Pump Part 1 Features and Benefits - A general overview of the 10G Data Pump Utility.
     Data Pump Design and Architecture - We dig deeper into inner-workings of the Data Pump utility.
     Data Pump Export - We continue the Data Pump discussion by learning how to use 10G Data Pump Export to "pump" data out of an Oracle database.
     Data Pump Import - The series finale on Data Pump Import.
10G Automatic Alerting
Oracle10G provides early warning mechanisms to alert administrators of possible error conditions. The database itself collects numerous metrics that were previously collected by 9i's Oracle Enterprise Manager. Administrators are able to select from dozens of server-generated alerts. Server generated alerts are activated by a new background process called MMON. MMON is able to access the SGA directly and perform the metrics calculations and threshold monitoring.
10G Grid Control enhances these capabilities by providing administrators with a GUI interface to administer alerts and configure notification mechanisms. Administrators are able to use Grid Control or the standalone version, Database Control to set up e-mail and pager notifications. In addition, all server-generated alerts are displayed on the Database Control/Grid Control home pages.
As readers of this blog know, I'm a huge fan of 10G Grid Control. Although the database alerts can be set up in the database using the archaic command line interface, if I can do it using Grid Control with a few clicks of the mouse, that's the tool I will use. And to think I used to call 9i OEM "a Sissy GUI Tool".
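For the curious, the command line route runs through the DBMS_SERVER_ALERT package. Here's a minimal sketch of setting warning and critical thresholds for tablespace space usage - the 85/95 percentages and the USERS tablespace are just illustrative values, not recommendations:

BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '85',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '95',
    observation_period      => 1,    -- minutes
    consecutive_occurrences => 1,
    instance_name           => NULL, -- NULL for database-wide tablespace metrics
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/

After a few clicks in Grid Control you'll never type that again, which is rather the point.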
Setting up 10G's "early warning system" places you in the proactive DBA category as opposed to reactive. The term we use to describe a pure reactive DBA is "firefighter". That's the DBA or DBA team that runs from one fire to the next. I will acknowledge that we are ALL firefighters from time to time. At one of my previous jobs, I spent six months working with another DBA trying to beat an absolutely atrocious third-party vendor package into submission.
You all know the type of third-party vendor package I'm talking about. Someone from your management food chain shows up in your cube and states "Hey, even though you rated this vendor an absolute 0 in all technical categories, and flatly stated that we should not consider them as a viable alternative, we chose them anyway. The Senior VP from the business unit liked the color of their screens. I'm sure you'll be able to get it working."
You then get on 72 conference calls with the vendor after the package crushes a 6 CPU box with 10 concurrent users. Most of the conversations are with the vendor's lead technical experts who are as arrogant as they are inexperienced. Their continued mantra is "you don't understand how Oracle works, it's your fault, you don't understand how Oracle works, it's your fault….". This mantra never stops even though you show them poor SQL coding, transaction looping problems, etc…
But I digress... During my career as an Oracle instructor, one of the most common recommendations I used to provide to my Oracle students was the benefits of database monitoring. My catch phrase was "We monitor by day so we don't get called at night." By the end of the DBA I class, all I would have to say is "We monitor by day…" and the class would respond accordingly. I would then award them with a set of scripts that allowed them to do database connect checks and track datafile free space.
10G Grid Control does all that (and more) for you automatically. The checks run the spectrum, from an agent being unable to connect to its target (which means either the database, server or agent broke), to tablespace freespace and alert log errors. As I stated previously, 10G provides dozens of alert choices and is your one-stop-shop for proactive database monitoring. That's a good thing. I would rather wade through dozens of alerts I don't want to activate than spend ANY time coding an alert that wasn't provided by the toolset.
One of my previous blogs titled "Configuring 10G OEM Grid Control's Automatic Alert Notification System" will help you set up proactive performance monitoring in 10G. The title says it all. It provides everything you need to activate 10G Grid Control's early warning system.

I thought I would combine two different topics of conversation. We'll review some of the 10G features that we take for granted or that are easily overlooked. The discussion also continues its lighthearted review of day-to-day DBA life and provides a few helpful hints and tips to make that life easier. Weird combination, but it may make for an interesting and informative read.
Here are a couple of my favorite DBA discussions….
Favorite DBA Conversation #1
Any discussion that ends in "I hope you're not too busy because we need it now." I really dislike the conversation that ends in "we needed it yesterday." If you needed it yesterday you should have stopped in to discuss this request a week before yesterday. We all know these types of requests because we have all gotten them. One of your customers pops in and states:
"We need a complete refresh of XYZ database from production. Oh, and its an ERP application database and we don't have enough disk space in test to hold everything so you'll need to figure out how to reduce its disk size requirements AND make sure the application still works.
I know the box only has 1 CPU and 250 MEG of memory, but can you make sure we still get good performance? We're testing some application coding changes and we need to get performance benchmarks. We'll compare the stats from the test box to the ones we get from our 22 CPU production box. It may not be an "apples to apples" but I think the boxes are close enough to compare the performance stats.
I know it's Friday afternoon at 3:00 and even though we knew about this refresh 14 weeks ago, we didn't think we needed to tell you until 1 day before we wanted it. So, we would like to have the system ready by Monday."
Over the last 20 years I have received dozens of these last minute requests. Requests that have made me cancel plans, postpone trips and generate numerous apologies to friends and family. The work never bothered me, but the timing of the requests sometimes did. It also reduced the quality of service I provided to my other customers. When you have DBAs jumping from one non-planned request to another, it affects their ability to work on planned requests as well as perform proactive administrative duties.
Fix #1: You minimize these requests by creating, advertising and enforcing DBA work request SLAs - agreements that clearly state lead times per type of request. The more widespread the problem is at your organization, the more advertising you must do. It really becomes a sales pitch on the importance of a DBA unit being able to forecast workloads. The unit can't provide quality service to its customers if it is being affected by work requests that could have been planned for - but weren't.
The mantra should be "all of our customers suffer when we receive un-planned work requests." I'm not afraid of telling application manager A that she can't get her planned requests done because application manager B just gave us 32 emergency requests. Kind of like putting them both in a jar and shaking it up a bit.
Fix #2: 10G Grid Control allows you to clone both the Oracle Home software installation as well as the Oracle database. Grid will automatically adjust the specific home properties (IP address, host name, listener settings) during the cloning process. 10G DBCA templates allow you to clone a new database from existing databases. In addition, DBCA allows you to clone just the source database file structures or the structures and the data.
You can also use some of the 10G Data Pump features to tailor the new database to your user's specifications. 10G Data Pump provides numerous options that:
     Specify what objects you want to transfer from one database to another
     Transfer a subset of data from those objects using various filtering options
     Remap objects during the migration process. You can remap schema objects from one user to another or from one tablespace to another. (A brief sketch follows this list.)
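Here's a rough sketch of what those options look like in practice. The schema, table, tablespace and directory names are all illustrative:

Contents of an expdp parameter file that picks the objects and filters the rows:
SCHEMAS=app_owner
INCLUDE=TABLE:"IN ('CUSTOMERS','ORDERS')"
QUERY=app_owner.orders:"WHERE order_date > SYSDATE - 90"
DIRECTORY=dp_dir
DUMPFILE=app_subset.dmp

Contents of the matching impdp parameter file that remaps the owner and the tablespace on the way in:
DIRECTORY=dp_dir
DUMPFILE=app_subset.dmp
REMAP_SCHEMA=app_owner:app_test
REMAP_TABLESPACE=app_data:test_data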
You already know I'm a huge fan of 10G Data Pump. You need to leverage the features this tool provides to customize schema and data transfers from one database to another. You shouldn't be writing scripts when you don't need to. The only way you will realize the full potential of 10G Data Pump is to READ THE UTILITIES manual and see how powerful 10G Data Pump really is. It will make your life easier.
Favorite DBA Conversation #2
You come in to a meeting and you're confronted by an application manager who tells you that you need to tune your database because his ace developer "Sir Lawrence of LongSQL" can't get his 32 page SQL statement to achieve sub-second response times.
You know the SQL statement that I'm talking about. It's the one that contains 15 inner and outer joins, 12 subselects, and…….(feel free to add any other type of complex SQL Syntax here).
Note from database administrators to application developers:
Just because you can code a 32 page SQL statement, it doesn't mean you should. We all understand how powerful the SQL language is, but do you really need to try and use every feature available in a single statement? There's a difference between leveraging the power of the SQL language and attempting to perform too much work in a single SQL statement. Generating long, complex SQL statements that no one can debug does NOT classify you as a supercoder. It does classify you as something else, but I can't say that publicly.
Favorite DBA Conversation #3
Favorite DBA Conversation #3 combines components of conversations #1 and #2. It occurs when you are approached by a customer who says they have developed this "small" application that goes live in two weeks and it isn't performing "as well as we think it should." This means that you have to review the SQL from 23 new application programs in those two weeks.
Fix #1: Reinforce the need for the DBA team to be involved in all application development projects. This can be done by creating a series of application design review meetings. I dedicated an entire blog to this subject.
Fix #2: You will need to start a top down performance analysis project that focuses on tuning the most poorly performing queries first. If you don't have access to the 10G Grid Control sissy GUI tools, you can use the V$ views to retrieve the top resource consuming statements. You can use the V$SQLAREA view for Oracle9i and below. 10G offers the V$SQLSTATS view that provides some distinct advantages over its previous counterparts.
10G V$SQLSTATS Performance View
Before we discuss the new V$SQLSTATS view, let me deviate for a minute to provide you with some quick SQL tuning advice. V$SQLAREA is one of my favorite SQL tuning views. Until V$SQLSTATS came along…
If I want to perform a traditional "top down" tuning approach and tune the highest resource consuming SQL, I'll use the statements below to identify the top resource consuming queries.
The following query identifies the SQL responsible for the most disk reads:
SELECT disk_reads, executions, disk_reads/DECODE(executions, 0, 1, executions), address, sql_text FROM v$sqlarea WHERE disk_reads > 5000 ORDER BY disk_reads;
The following query identifies the SQL responsible for the most buffer gets:
SELECT buffer_gets, executions, buffer_gets/DECODE(executions, 0, 1, executions), address, sql_text FROM v$sqlarea WHERE buffer_gets > 10000 ORDER BY buffer_gets;

You can create a more readable report in SQLPLUS by inserting report breaks between the output lines. To generate the report breaks in SQLPLUS, issue the following statement before running the query:
BREAK ON disk_reads SKIP 2    (for the disk read report)
BREAK ON buffer_gets SKIP 2   (for the buffer get report)

The first query returns SQL statements responsible for generating more than 5,000 disk reads while the second query returns SQL statements responsible for generating more than 10,000 buffer gets. I used these numbers just as an example, but I sometimes use them as a starting point. I'll then adjust them up or down accordingly, based on their output. The numbers also depend on the system I'm reviewing. I'll use different numbers for OLTP environments than I would for data warehouses.
You'll notice that I divide the number of disk and buffer reads by the number of statement executions (the DECODE simply guards against statements that show zero executions, which would otherwise throw a divide-by-zero error). If a statement is generating 1,000,000 disk reads but is executed 500,000 times, it probably doesn't need tuning.

Heavy disk reads per statement execution usually mean a lack of proper indexing, poor selection criteria, etc. Heavy buffer gets sometimes mean the exact opposite - indexes are being used when they shouldn't be.
I'm personally most interested in workload, which is why I most often use the buffer gets in my initial queries.
The SQL_TEXT column in V$SQLAREA does not provide the entire text of the SQL statement. That's why I include the address column in the report. I can use that value to dump the entire SQL statement from V$SQLTEXT using the statement below (where xxxxxxxx is the value in the address column from the V$SQLAREA reports above):
select sql_text from v$sqltext where address = 'xxxxxxxxx' order by piece;
Oracle 10G R2 provides a new view called V$SQLSTATS that contains a combination of columns that appear in V$SQL and V$SQLAREA. The benefits that V$SQLSTATS provides are as follows:
     Since V$SQLSTATS contains the entire text of the SQL statement AND its associated performance statistics, we are no longer required to access both the V$SQLTEXT and V$SQLAREA to obtain the information we need.
     Oracle states that V$SQLSTATS is faster and more scalable.
The data in V$SQLAREA has a tendency to get its contents flushed out just when you need to get additional information from it. The V$SQLSTATS view provides users with a longer access window. That's one of the key benefits to this view.
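If you want a starting point, here's a minimal sketch of the kind of query I run against it. The 10,000 buffer get threshold is just an illustrative number, the same as in the V$SQLAREA examples above:

SELECT sql_id, executions, buffer_gets,
       buffer_gets/DECODE(executions, 0, 1, executions) gets_per_exec,
       sql_fulltext
  FROM v$sqlstats
 WHERE buffer_gets > 10000
 ORDER BY buffer_gets;

Note that SQL_FULLTEXT is a CLOB containing the complete statement, so there's no second trip to V$SQLTEXT.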
The key to being a successful Oracle DBA is that you must be willing to accept the fact that with each new database release, you will have to adjust the way you administer your environments. In my profession, I work with many DBAs from companies across the globe. The most successful administrators I have met are the ones that intelligently leverage the new features that are available in the releases of the database they are administering.
Running Commands and Scripts Against Multiple Targets
Being involved in an IT audit, like death and taxes, is inevitable. If you work as a technician for any length of time, you will be working with an auditor at some point. Since we provide remote database services, our organization is audited numerous times per year. SOX audits, security audits, HIPAA audits - you name the audit and we have been involved in it. We have an office that is left open full-time for onsite auditors.
Auditors often ask you to run scripts on the operating systems and databases your unit is responsible for supporting. In the past, you would review the scripts and then distribute them to team members to run on the environments they were responsible for supporting. 10G Grid Control makes this task a whole lot simpler. Grid provides you with the capability of running host commands and scripts against multiple host targets at the same time. Virtually any command or script that can be executed on a single host can also be executed on multiple hosts using 10G Grid Control R2. Here's a blog that shows you how to use 10G Grid Control R2 to run operating system scripts against multiple targets.
10G Grid Control R2 also provides an easy-to-use panel that allows administrators to run SQL scripts against multiple database targets simultaneously. Just like its host command counterpart, all you need to do is code up a SQL statement, choose a set of database targets, run the SQL against the multiple targets and review the output. The benefits that this feature provides are constrained only by the DBA's imagination. Here's a link to a blog that will show you how to run scripts against multiple database targets.
Restore Points
How many times have database recoveries been performed because of incorrect changes made to database data? A user deletes or updates "one too many rows" by mistake and the result is a time-consuming and error prone process to restore and recover the database to a point-in-time before the error occurred.
Let's go back to my days as an Oracle instructor…. A common question in the Oracle DBA backup and recovery class was "why can't I just roll the database back to remove unwanted changes instead of restoring the database from a backup and applying the redo logs to roll forward to a point-in-time before the error occurred?" The question was invariably followed by "don't you think that would be easier?" The answer was always the same "yes, it would be easier, but the Oracle database doesn't have that capability."
That was before Flashback came along. Oracle's Flashback technology provided a new set of tools in the DBA's recovery toolbox. Flashback allows the DBA to "roll back" a table, a set of tables (Flashback Table) or the entire database (Flashback Database) to a previous point-in-time. Administrators often used an SCN or timestamp during the execution of flashback operations.
10G R2 enhances Flashback Database by allowing administrators to associate a user-defined name with a point-in-time. The user-defined name, called a restore point, can be used in place of a SCN or timestamp when executing a FLASHBACK TABLE or FLASHBACK DATABASE statement.
A SQL statement is used to create the restore point, which means it can be embedded in application programs as well as executed on an as-needed basis in SQL*PLUS. The example below shows the creation of a restore point:
CREATE RESTORE POINT batch_daily_cycle_complete;
Now if I execute the following statement:
UPDATE cti.employee_salary_table SET
salary = 200000 WHERE empl_lname = 'FOOT';
My boss can execute this statement to correct the 'mistake' (although I prefer to call it a fully justifiable pay adjustment):
FLASHBACK TABLE cti.employee_salary_table TO RESTORE POINT batch_daily_cycle_complete;
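One prerequisite worth mentioning: FLASHBACK TABLE (other than TO BEFORE DROP) needs row movement enabled on the table, so in practice the sequence would include something like the following first. The table name is the same illustrative one used above:

ALTER TABLE cti.employee_salary_table ENABLE ROW MOVEMENT;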
10GR2 also provides guaranteed restore points which ensure that the database can be flashed back to the point-in-time they were created. Guaranteed restore points save disk space because only the flashback logs required to meet the guaranteed restore point need to be retained by the database. The statement below creates a guaranteed restore point:
CREATE RESTORE POINT batch_daily_cycle_complete GUARANTEE FLASHBACK DATABASE;
Automatic Undo Retention
In Oracle9i, administrators had their choice of continuing to manage rollback segments on their own (manual undo management) or configuring the database to manage its own before image data (automatic undo management). Oracle refers to system managed before image segments as undo segments.
You didn't have to be an Oracle expert to know that manual rollback segments were "somewhat troublesome." Out of space conditions, contention, poor performance and the perennial favorite "snapshot too old" errors had been plaguing Oracle database administrators for over a decade. Oracle finally decided that the database could probably do a better job of managing before images of data than we could.
But implementing automatic undo retention didn't necessarily guarantee users a trouble free undo environment. There really is only one parameter that administrators can tweak in a system that uses automatic undo. The UNDO_RETENTION parameter specifies the amount of time in seconds that Oracle attempts to keep undo data available. Setting this parameter to the appropriate value could be described as more of an art than a science.
Set it too low and you are wasting disk space. In addition, you aren't taking advantage of being able to flashback your data to as far back as the disk space allocated to the undo tablespace allows. Set it too high and you are in danger of running out of freespace in the undo tablespace.
10G R2 comes to the rescue! The database now collects undo usage statistics, identifies the amount of disk space allocated to the undo tablespace and uses that information to tune the undo retention period to provide maximum undo data retention. Administrators can determine the current retention time period by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT view.
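If you want to see what the database has tuned the retention to, a quick sketch of the query looks like this (each V$UNDOSTAT row covers a ten-minute statistics interval):

SELECT TO_CHAR(begin_time, 'DD-MON-YYYY HH24:MI') begin_time,
       tuned_undoretention
  FROM v$undostat
 ORDER BY begin_time;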
RMAN Automatically Creates Temporary Datafiles During Restore
Ever perform a restore and forget to recreate the temporary tablespace's data files? I have. The database comes on-line but the first time a user's sort overflows from memory to disk, an error code is returned. Recreating a database's temporary files is one of those activities that are often overlooked. 10G R2 RMAN will automatically recreate temporary datafiles belonging to locally managed temporary tablespaces during the recovery operation.
Enterprise Manager Statistics Management Support
A new link titled 'Manage Optimizer Statistics' has been added to the database administration home page in Enterprise Manager. Clicking on the link displays the new Manage Optimizer Statistics home page. The new home page provides wizards and templates that facilitate and automate optimizer statistics management. In 10GR2 administrators are able to use Enterprise Manager to gather, restore, delete, lock and unlock statistics.
Why would you want to lock and unlock statistics? Let me describe one real-life example from a recent project. We have a warehouse system that contains an extremely volatile table. Describing it as volatile is like stating the Titanic sprung a small leak. Hundreds of thousands of rows are added and removed around the clock.
The level of activity is based on the particular business process taking place. At one particular time, the table can contain hundreds of thousands of rows and at other times it can contain dozens. OK, so when do you generate statistics? If you run DBMS_STATS at the same time each day, in most cases you would think you would get a predictable set of statistics generated. Not so in this table's case; sometimes you get hundreds of rows and sometimes you get hundreds of thousands.
If you are unlucky and generate statistics on the table when it has hundreds of rows, access paths suffer when the table grows to hundreds of thousands. So, we ran DBMS_STATS when we knew the table contained numerous rows and never generated statistics on it again. The problem is that we can no longer easily use the SCHEMA option for DBMS_STATS. Locking (freezing) the statistics on this table will allow us to use the SCHEMA option while skipping statistics generation on the selected tables.
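You can, of course, do the same thing from the command line. Here's a minimal sketch using hypothetical owner and table names - DBMS_STATS provides LOCK_TABLE_STATS and UNLOCK_TABLE_STATS for exactly this situation:

-- gather statistics while the table is heavily populated, then lock them in place
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'WHSE', tabname => 'VOLATILE_QUEUE');
EXEC DBMS_STATS.LOCK_TABLE_STATS('WHSE', 'VOLATILE_QUEUE');

-- subsequent schema-level gathers should leave the locked table's statistics alone
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('WHSE');

-- unlock them if the data profile ever changes for good
EXEC DBMS_STATS.UNLOCK_TABLE_STATS('WHSE', 'VOLATILE_QUEUE');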
DML Error Logging
What I like about SQL*LOADER is its ability to continue processing through load errors. If the record being loaded is rejected because of a unique constraint violation or discarded because it does not meet some user-defined criteria, SQL*LOADER places the record into a discard or reject file and keeps on running until it has reached a user-specified maximum number of rejects or discards.
The loader's log file will show how many records were loaded, rejected or discarded. I can look at the messages and review the discard or reject files, fix the problem and attempt to reload them again.
In 10G R2, this same type of processing has been applied to bulk DML operations. Users are able to specify whether they want to log errors or abort the entire DML statement, set the amount of detail information logged and the maximum error threshold. So instead of the entire DML statement blowing up and rolling back, Oracle will log the errors to the error table and continue processing until it exceeds the maximum number of errors. Just like SQL*Loader!
Oracle 10G R2's DBMS_ERRLOG package is used to create the error log output table and link it to the table being updated. The package's specification is provided below:
DBMS_ERRLOG.CREATE_ERROR_LOG (
dml_table_name IN VARCHAR2,
err_log_table_name IN VARCHAR2 := NULL,
err_log_table_owner IN VARCHAR2 := NULL,
err_log_table_space IN VARCHAR2 := NULL,
skip_unsupported IN BOOLEAN := FALSE);
Most of the parameters are pretty self explanatory: table name being updated, error log table name, owner of error log table and the error log table's tablespace. If SKIP_UNSUPPORTED is set to TRUE, column types that are not supported by error logging will be skipped over and not added to the error logging table. If it is set to FALSE, an unsupported column type will cause the procedure to fail.
Here's a quick example:
Let's create the table that will be updated:
SQL> r
1 CREATE TABLE foot.emp_table
2 (empno number(4), ename varchar2(10), job varchar2(8))
3* TABLESPACE users;
Table created.
Add a primary key:
SQL> ALTER TABLE foot.emp_table ADD PRIMARY KEY(empno)
2 USING INDEX TABLESPACE users;
Table altered.
Load some rows:
SQL> INSERT INTO foot.emp_table VALUES
2 (7499, 'ALLEN', 'SALESMAN');
1 row created.
SQL> INSERT INTO foot.emp_table VALUES
2 (7521, 'WARD', 'SALESMAN');
1 row created.
SQL> INSERT INTO foot.emp_table VALUES
2 (7566, 'JONES', 'MANAGER');
1 row created.
Let's cause a unique constraint violation:
SQL> insert into foot.emp_table select * from foot.emp_table;
insert into foot.emp_table select * from foot.emp_table
*
ERROR at line 1:
ORA-00001: unique constraint (FOOT.SYS_C009069) violated
Create the 10G R2 error logging table:
exec dbms_errlog.CREATE_ERROR_LOG ('emp_table','dml_errors_emp_table')
Let's create a copy of the table to use as load input. We'll then change one row so that one row loads successfully and two fail and are placed into the DML_ERRORS_EMP_TABLE:
SQL> create table foot.emp_load_table tablespace users as select * from foot.emp_table;
Table created.
Update one row to change the EMPNO column's value to avoid the primary key violation:
SQL> update foot.emp_load_table set empno=123 where empno=7499;
1 row updated.
Rerun the statement specifying the new 10G R2 error logging syntax. Use our new load input table so that our one changed row will be loaded and two will be rejected and placed into the DML_ERRORS_EMP_TABLE:
SQL> insert into foot.emp_table select * from foot.emp_load_table
LOG ERRORS INTO dml_errors_emp_table('test_load_20050718')
REJECT LIMIT UNLIMITED;
1 row created.
OK, we loaded one. What happened to our other two? Let's see what our DML_ERRORS_EMP_TABLE contains.
First, let's describe the DML_ERRORS_EMP_TABLE:
SQL> DESC foot.dml_errors_emp_table
 Name                  Null?    Type
 --------------------- -------- ---------------
 ORA_ERR_NUMBER$                NUMBER
 ORA_ERR_MESG$                  VARCHAR2(2000)
 ORA_ERR_ROWID$                 ROWID
 ORA_ERR_OPTYP$                 VARCHAR2(2)
 ORA_ERR_TAG$                   VARCHAR2(2000)
 EMPNO                          VARCHAR2(4000)
 ENAME                          VARCHAR2(4000)
 JOB                            VARCHAR2(4000)
Our error logging table contains an incrementing error counter, the error message, ROWID, error type, tag (which contains the user-specified name from above - 'test_load_20050718') and the three columns of our table.
Let's select from the table. I have truncated the ORA_ERR_MESG$ column for readability's sake:

SQL> SELECT ora_err_number$, ora_err_mesg$, empno FROM foot.dml_errors_emp_table;
ORA_ERR_NUMBER$ ORA_ERR_MESG$                                             EMPNO
--------------- --------------------------------------------------------- -----
              1 ORA-00001: unique constraint (FOOT.SYS_C009069) violated  7521
              2 ORA-00001: unique constraint (FOOT.SYS_C009069) violated  7566
We loaded one row and rejected two rows due to primary key constraint violations.
Top 10G Tuning Tools
This blog contains a listing of my top 10G Tuning Tools. These are the tools that I most often turn to when I am faced with a “database performance challenge.” In upcoming blogs, we’ll discuss tools that are available in Oracle 9i as well as review some generic tuning utilities.
We've spent a lot of time discussing SQL statement tuning. We started off this 15 blog series with a discussion on system triage and finished with Oracle access path identification and tuning. I also dedicated a dozen blogs or so to 10G Grid Control performance monitoring and analysis tools. With the abundance of long-winded information I have made available on this blog, I thought it might be beneficial to condense some of it into a review of some of my favorite tuning toolsets.
My Number One Tuning Tool of All Time - 10G Grid Control
If we are administering a 10G database, our administration tool of choice is either 10G Database Control (the non-grid, standalone version) or 10G Grid Control. The advanced alerting, monitoring and administration features make this tool our preferred method of administration. The advanced performance monitoring and analysis features make it my number one tool for performance problem determination.

The future of Oracle database tuning will be administrators interpreting and implementing the recommendations generated by the intelligent advisors and ADDM. It is a foregone conclusion that Oracle will continue to improve upon its performance monitoring and analysis toolsets. Self tuning features are no longer options that are "nice to have", they are requirements for Oracle's competitive survival. SQL Server is continuing to scale, moving into areas that were once dominated by UNIX big-iron machines running Oracle databases. Oracle must compete with SQL Server's ease of use or it will undoubtedly lose market share.
As the intelligence of the advisors and ADDM increases, the need to possess an in-depth knowledge of Oracle and the usage of detailed diagnostics to improve database performance will decrease. And you heard it here first folks, I also think that reading SQL traces and statistics dumps will be a thing of the past. My crystal ball tells me that it's just a matter of time until Oracle's SQL advisors make SQL traces and statistics dumps less and less important until they become totally unnecessary.
The new breed of top tuners will be the administrators who focus on how to use the toolsets and interpret their output. Not the tuners who spend the majority of time digging down into the dark, inner workings of the Oracle software. I'm not saying that knowing how the database works is immaterial. I am stating that this intimate knowledge will become less and less important as the tools mature. It's only a matter of time until tuning is relegated to a minor sub-task that can be scheduled between other activities.
Here's a listing of my favorite 10G Grid Control tools.
Host Performance Home Page
If someone tells me "the whole system is slow", the first thing I'm going to do is review the host platform's key performance indicators. 10G Grid Control's Host Performance panel provides me with exactly the information I'm looking for - current CPU, memory and disk resource utilization.
Each resource's graphical display contains links that allow me to view more detailed statistical information. Each of the drill down panels for CPU, Memory and Disk contains a drop down menu list that I can use to view current as well as historical performance statistics for the last 24 hours, 7 days and 31 days. The main performance panel also displays information on the top resource consuming processes that are currently running on the host. Once again, a drop down menu lets me switch between top CPU and top memory consumption reports.
For a more in-depth discussion on host performance tuning, please refer to my blog titled "Host Performance Monitoring Using 10G Enterprise Manager Grid Control".
10G Grid Control Database Performance Home Page
If I want to review database performance, I will access the 10G Grid Control Database Performance Home Page, which is available in both 10G Database Control and Grid Control.
The database performance home page allows me to review performance historically, identify what sessions are dominating "finite system resources", activate advisors and drill down into the top resource consumers. I can then quickly review the SQL they are executing. Does it get any better than that?
I challenge anyone to tell me that they would be able to monitor database performance more efficiently using archaic command line toolsets. By the time you have identified the problem, I'll have already contacted the user running the SQL and created a profile to improve its performance. If I have Grid Control's alerting system configured (which I always do), Grid Control's database performance threshold alert would have already notified me that a problem was occurring. By the time you got that call from an irate user, I would have had it solved. If you want to learn how to configure 10G Grid Control's performance alerting feature, please turn to my blog titled "Configuring 10G OEM Grid Control's Automatic Alert Notification System".
To learn more about 10G Grid Control's database performance monitoring and analysis features, please turn to my blog titled "Database Performance Monitoring Using 10G OEM Grid Control".
10G Grid Control Advisors
OK, I'll admit it. I'm hooked on the advisors. At first I was very wary of their recommendations, but the longer I worked with them, the better I liked them. I have lots of experience tuning Oracle, but if an advisor is going to make my job easier (and let me spend less time debugging a performance problem), I'm all for it.
For an overview of 10G Grid Control's advisory features, please refer to my blog that is aptly titled "An Overview of 10G Advisors".
Here's a listing of blogs that provide more specific information on the individual advisors. Please note that each title below also links to a blog on that advisor.
     10G SQL Access Advisor - The SQL Access Advisor recommends a set of materialized views and indexes based on a supplied workload input. The tool can also help administrators optimize materialized views to take advantage of fast refresh and query rewrite capabilities. In addition, the SQL Access Advisor may recommend dropping indexes and materialized views that aren't being used. The SQL Access Advisor ranks and groups the recommendations based on the positive impact they will have if implemented.
     SQL Tuning Advisor - The SQL Tuning Advisor, as its name implies, provides SQL tuning recommendations and includes information justifying why it generated those recommendations. The recommendations may include collecting statistics on objects, new index creation, restructuring the SQL statement or the creation of a SQL profile to create a more optimal access path. I have a more detailed blog on the SQL Tuning Advisor titled "SQL Access Advisor Revisited".
     Segment Advisor - Although I don't use this advisor as much as the SQL Tuning and SQL Access advisors, I do use it on a regular basis. The 10G segment advisor identifies segments that have become fragmented as a result of update and delete operations. The 10G R2 Segment Advisor has been enhanced to identify tables that suffer from excessive row chaining and row migrations. Oracle describes these objects as being sparsely populated. Not only do sparsely populated objects waste space but they can also contribute to SQL performance problems.
Automatic Database Diagnostic Monitor (ADDM)
ADDM is a "recommendation engine" that uses performance information stored in the Automatic Workload Repository (AWR) as input. Here's a blog that provides a detailed description of the Automatic Database Diagnostic Monitor. By default, AWR snapshots occur every 60 minutes. For more information on the AWR snapshots, please turn to my blog titled "Working with Automatic Workload Repository Performance Snapshots".
To learn how to tailor the 10G AWR snapshot environment to meet your specific needs, please refer to my blog titled "Configuring and Administering Automatic Workload Repository Performance Snapshots".
After the AWR information snapshot is executed, the Automatic Database Diagnostic Monitor is triggered to analyze the information contained in the AWR for the period pertaining to the last two Snapshots. ADDM's output includes a plethora of reports, charts, graphs, heartbeats and related visual aids. In general, ADDM provides users with a top-down analysis of performance bottlenecks and their associated resolutions. But you can also execute ADDM manually to generate recommendations for current, as well as historical, time periods.
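If you don't want to wait for Grid Control (or just like the command line once in a while), the ADDM report can also be pulled from SQL*Plus. A minimal sketch, run as a DBA user; the script prompts you for the begin and end snapshot IDs and a report file name:

SQL> @?/rdbms/admin/addmrpt.sql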
Statistical information, by itself, is often useless. If you have nothing to compare a measurement to, how will you know if it's good or not so good? 10G Grid Control allows two sets of snapshots to be compared to one another. The results of the comparison are displayed graphically using horizontal bars to indicate differences between the two measurements. To learn how to perform snapshot comparison, please refer to my blog titled "You Can't Improve What You Can't Compare".
Access Path Scientific Analysis
We combine all of the knowledge we learned in previous blogs of this series to begin our scientific analysis of Oracle optimization. We'll review some of the tools we can use to display access path information and look at some graphical displays that will assist us during the analysis process. I'll also provide you with some examples to jump-start your testing.

Introduction
This information is intended to help you begin your quest to become a database performance scientist. There's a difference between tuning and learning how to tune. Tuning is what happens when you get that call at 3:00 on a Friday afternoon. You have a concerned customer stating that something is running longer (usually way longer) than they would like. Your first response is to consider your choice of career paths, dreaming how nice it would be to sell wood carvings at a roadside stand in Montana. You then perform your diagnostic analysis, come to some sort of conclusion and implement your solution.
Learning how to tune requires that you spend time with the optimizer experimenting with different environmental settings and database objects that affect access paths. It also requires that you have a general understanding of the basics of optimization.
I dedicated an entire blog on educational resources that will help you build a firm foundation of Oracle optimization knowledge. It's critical that we understand as much as we can about optimization before we begin our experimentation. But the key to success is to spend dedicated time experimenting. I'm not professing that you spend the rest of your career becoming "at one" with the optimizer. But if you don't want to be at a total loss each time something runs long, you need to experiment to learn how to tune.
Environmental Documentation and SQL Statement Selection
Before we begin, we'll need to document some of the key parameters that affect optimization and gather some information on the data objects we will be using in our experimentation. Please refer to my blog titled "Access Path Scientific Analysis Part I" for more information on the importance of documentation as well as the documentation process itself. In a previous blog, I provided a few hints and tips on identifying what types of statements and access paths would be most beneficial to analyze.
Access Path Identification Tools
Here are the tools that I most often use to review Oracle access paths:
     SQL Scratchpad - I like to see graphical representations of access paths. Even though I have lots of experience reading SQL Trace output and the multitude of variations of access path displays, I like seeing a graphical representation. The problem is that Scratchpad must be installed on your client (comes embedded with 9I OEM) and it doesn't like accessing 10G databases. If I am tuning a 9I database and I have access to SQL Scratchpad, that is the tool I am going to use to display access path information. For more information on SQL Scratchpad and other 9I OEM tools, please turn to my blog titled "Access Path Identification Part V".
     SQL*PLUS Autotrace - If I want to quickly display the statement's access path and associated run-time performance statistics, I will use SQL*PLUS Autotrace. Here's a blog containing a few hints and tips on Autotrace.
     Explain Plan - If I don't want to run the query, and I am not concerned about the access path changing during SQL statement execution (read next bullet), I will use the tried and true "explain plan into" clause to ask Oracle to dump the access path information into a table. I'll run a statement to retrieve the data from the plan table and format the output. Please read my blogs titled "Access Path Identification Part II" and "Access Path Identification Part III" for more information.
     SQL Trace - If I want to be absolutely sure that the access path the other tools are predicting the statement will take matches what access path is chosen during runtime, I'll run SQL Trace. It is one of the only tools you can use to determine the path chosen during execution. How can the access path change during execution? Read this blog to find out! The blog contains an example of a production problem that was caused by an access path changing during runtime. For lots of information on activating SQL Trace, please turn to my blog titled "Access Path Identification Part IV". SQL Trace is the tool I will turn to if I want to analyze a set of statements that belong to a specific online transaction or batch job. The output quickly tells me what the top resource consuming statements are.
     V$SQL_PLAN - V$SQL_PLAN contains the access path information the SQL statement takes during execution. I'll run the statement, review the access path taken during execution, make my tuning change, alter the text of the statement so it looks different to the optimizer, retrieve the access path information again and compare the before and after results. My blog titled "Access Path Identification Part II" provides all of the information you need when you are using the V$SQL_PLAN view to retrieve information on access paths taken during execution.
     Regardless of which tool I select, I will use the output to graphically display the statement's access path. Although it is time consuming, graphically displaying a statement's access path allows me to more clearly understand the access path and simplifies the tuning process. This blog will show you how to graphically display a statement's access path.
Another Graphical Display I find Useful
During my career as an Oracle instructor, I was often asked in the SQL tuning classes how I personally documented a query I was attempting to tune. This is the display that I showed them. Let's take a look at the contents, starting from the top of the page:
     SQL statement text - the full text of the SQL statement.
     Columns in the SELECT clause - a listing of the columns that are in the SELECT clause is displayed under the table they belong to.
     Columns in the WHERE clause - a listing of the columns used in the WHERE clause is displayed under the table they belong to. If the columns are used in a join operation, an arrow is used to designate which of the other table's columns they are joined to. If I am interested in cardinality and data skew, I would document that information directly below the column name. For more information on determining column cardinality and data skew, please turn to this blog.
     Indexed columns - The bottom of the page contains all of the indexes that are on columns in the WHERE and SELECT clauses. This allows me to easily determine if I need to create additional indexes or begin analyzing why an index isn't being used.
Performing the Scientific Analysis
Before we begin, we need to understand the effect that preloading the buffers has on SQL statement performance. If you run the query and return the results, you will be loading the buffer cache with data. As a result, subsequent runs of the same statement could be faster. This is because your statement is retrieving data from memory while the initial runs were forced to retrieve the data from disk. You'll need to run the statement twice, make your change and run the statement twice again to ensure that buffers don't impact your timings. You can then compare the output of the second execution of each test. 10G provides a SQL statement to flush the buffer cache.
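The flush statement itself is a one-liner. It's obviously something you only run on a test system, never on a busy production box:

ALTER SYSTEM FLUSH BUFFER_CACHE;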
If you are building indexes during your testing, don't forget to generate statistics on the new objects. You don't want the lack of object information to affect the optimization process.
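A minimal sketch of the statistics step, using the HR sample schema that the query below is built on (CASCADE => TRUE picks up the index statistics as well):

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES', cascade => TRUE);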
In the discussion that follows, I'll be providing you with some examples of changes to make during your scientific analysis. You will run the SQL statement, record the execution time and statistics and document the access path. You will then make the suggested alteration, review the access path to see if it has changed and run the statement again to record the new performance statistics and execution time. Please note that in the examples below, I'm not going to remind you each time to run the query to evaluate the performance statistics and execution times.
Index Testing
Let's use the same sample query I used in my graphical display in this discussion:
select a.employee_id, a.last_name, b.department_id, b.department_name,
c.street_address, c.postal_code, c.city, c.state_province
from hr.employees a, hr.departments b, hr.locations c
where a.department_id=b.department_id
and b.location_id=c.location_id
and a.employee_id = 174
order by a.last_name;
There are myriad different changes we can make. Here are a couple of examples to get you started:
     Drop all of the indexes on the tables and review the access path. We know it will use table scans, but what join method did it choose, how about the join order?
     Build an index on one of the join columns (e.g. departments.department_id) and review the access path. Determine if it chose the index, what join method was used and the order in which the tables were joined. (A sketch of this step appears after the list.)
     Drop that index and build an index on the other column in the join. Identify if it used the index, join method and join order.
     Build an index on employee_id, department_id and last_name. Identify if it used the index, join method and join order. Did you get an index only access path?
     Drop the index you just created and change the order putting employee_id between department_id and last_name. What was the access path?
     Change the WHERE clause to look for rows based on the last_name and first_name columns. Remove the a.employee_id = 174 clause. Build an index on the first_name and last_name columns. What happened?
     Drop the index built on the first_name and last_name columns and build two indexes, one on first_name and the other on last_name. Did the optimizer use both indexes? Which one did it choose?
     Add local predicates to the other tables and identify the access path changes.
     Identify a column in one of the tables that has low cardinality. Change the WHERE clause to access that low cardinality column. If you don't have a low cardinality column available, it's pretty simple to add a column and update it with a few repeating values to create a low cardinality column. Build a bitmap index on the low cardinality column. Did the optimizer choose the bitmap index? What happens if you use a hint?
     Build a bitmap index on a column that has high cardinality (i.e. employee_id). If it doesn't use the bitmap index, use a hint to influence the optimizer to choose it. Drop the bitmap index and build a B-tree index. Does the optimizer use that index without the hint? Compare before and after execution performance statistics and timing.
     Identify columns that have skewed data. Build an index on the skewed column. Change the query to search for a value that occurs many times. Compare that execution to a search on a value that appears few times. Analyze the table and build a histogram. Re-execute the tests and compare the results.
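Here's a rough sketch of what the second experiment in the list looks like end to end. It assumes you've already dropped the existing indexes per the first bullet (otherwise the CREATE INDEX will collide with the index backing the primary key), and the index name is just an illustrative one:

CREATE INDEX hr.dept_deptid_test_ix ON hr.departments(department_id);
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'DEPARTMENTS', cascade => TRUE);

SET AUTOTRACE TRACEONLY
-- rerun the sample query here, then record the access path, join method,
-- join order and the run-time statistics before moving on to the next test
SET AUTOTRACE OFF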
Parameters that Affect Optimization
We also need to learn how different parameter settings affect optimization. We can start by adjusting a few of the basic parameters that have the best chance of changing the access path for a given statement. Most of the parameters can be changed in your session by using the ALTER SESSION statement. Using the ALTER SESSION statement will allow you to change the parameter for your session only. You won't have to worry about affecting other folks executing statements in the same database that you are using for your testing.
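A minimal sketch of a session-level change and its reset; the values are purely illustrative, and the defaults shown are worth double-checking on your own system before you rely on them:

-- only your session is affected; other users keep the instance-wide settings
ALTER SESSION SET optimizer_index_cost_adj = 10;
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- put things back when the test is finished (100 and ALL_ROWS are the usual defaults)
ALTER SESSION SET optimizer_index_cost_adj = 100;
ALTER SESSION SET optimizer_mode = ALL_ROWS;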
Here's a blog that provides a listing of parameters we can use during our initial testing. Please note that this is not an all inclusive list. It is a listing of parameters that I feel have the greatest chance to influence the optimizer to choose a different access path.
You need to execute the statement, change a parameter and note its effects on the statement's access path, performance statistics and runtimes. If you are going to test changing values for different parameters, don't forget to change the one you just tested back to its original setting. That way you won't have multiple changes impacting your test results. You can also test combinations of changes. Just be aware of the changes you are making.
Hints
Administrators embed hints in a SQL statement to influence the optimizer to choose a particular access path. Using hints will allow us to evaluate the effect that different access paths have on SQL statement performance. We will run the statement without any modification, review the access path and performance statistics, use a hint to (hopefully) change the access path, run the statement again and compare the before and after results. Since hints can be embedded in virtually any SQL statement, they will provide us with an easy mechanism to learn more about access paths. I have devoted a previous blog to hints. The blog also contains a demo document that will show you how to embed hints and the effects the hints have on access paths. One of the key tests is to determine the impact that operation cardinality has on SQL statement performance. You can test this by using the ORDERED hint to ask the optimizer to change the join order for statements that join more than two tables together.
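Here's a minimal sketch of that ORDERED test against the sample HR query used earlier. ORDERED asks the optimizer to join the tables in FROM-clause order, so rearranging the FROM clause between runs is how you change the join order under test:

select /*+ ORDERED */ a.employee_id, a.last_name, b.department_name, c.city
  from hr.locations c, hr.departments b, hr.employees a
 where a.department_id = b.department_id
   and b.location_id   = c.location_id
   and a.employee_id   = 174;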
Access Path Scientific Analysis Part II
We continue to analyze the effects that initialization parameters, statistics and hints have on SQL statement access paths. In this blog, we'll take a look at the hints we will be using to influence the optimizer to select an access path that is different from the one it would normally choose. We'll also review a few of the tools that we will be using to monitor and compare SQL statement access paths and performance for our upcoming tests.
In my last blog, we reviewed the parameters that we will be modifying to influence the access paths. In this blog, we'll review the hints that we will use to ask the optimizer to select an access path that is different from the one it would normally choose.
SQL Hints
Administrators embed hints in a SQL statement to influence the optimizer to choose a particular access path.
By using hints, you are telling Oracle that your access path is better than the one the optimizer is choosing. It’s a safe assumption that most of us aren’t as smart as the optimizer. Let it make the choice, unless you are certain the optimizer is choosing the incorrect access path.

But what happens if the optimizer is making incorrect decisions? Before you begin adding hints to SQL or freezing access paths using Optimizer Plan Stability or 10G Profiles, consider taking the following steps first:
     Determine if it is actually an incorrect access path that is causing the performance problem. It may be some external influence affecting the SQL (hardware, workload, and so on).
     Identify and review the SQL taking the bad access path for proper SQL coding techniques.
     Verify that statistics have been generated on the tables and indexed columns. The Oracle-supplied utility DBMS_STATS is currently the preferred method of collecting statistics.
     Review parameters that affect SQL optimization (optimizer_mode, optimizer_index_cost_adj, optimizer_index_caching, optimizer_dynamic_sampling, optimizer_features_enable, optimizer_max_permutations).
     Investigate system statistics. Are they activated? If so, are they configured correctly? Should they be activated at all?
     Does the application use bind variables? If so, investigate bind peeking quirks.
     Check for skewed data. Consider using histograms to compensate.
     Go to Metalink and review optimization bugs for your release. Oracle could have already identified your issue and fixed it.
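As a companion to the statistics bullet above, here is a minimal sketch of gathering statistics on a single table with DBMS_STATS; the schema and table names are only examples:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',
    tabname => 'EMP',
    cascade => TRUE);  -- also gathers statistics on the table's indexes
END;
/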

OK, so you have performed all of the actions cited previously and you find that the optimizer is actually making an incorrect decision. Regardless of what some industry pundits may tell you, the optimizer is NOT infallible; it can make mistakes. Oracle created hints for a reason, and wouldn’t have made them public if it didn’t think we really needed them from time to time. If you are forced to add hints to the query to improve its performance, do so intelligently and judiciously.
Using Hints to Compare Oracle Access Paths
OK, now that I have provided you with my standard warning on hints, the intent of this blog is to learn how to use hints to influence access paths for testing purposes. Using hints will allow us to evaluate the effect that different access paths have on SQL statement performance. We will run the statement without any modification, review the access path and performance statistics, use a hint to (hopefully) change the access path, run the statement again and compare the before and after results. Since hints can be embedded in virtually any SQL statement, they will provide us with an easy mechanism to learn more about access paths. We are on our way to becoming database performance scientists!
We'll begin our access path scientific analysis by using a very basic set of hints to influence the optimizer to choose a different access path. The hints I will be using in my introductory demo are:
     Hints for optimization mode - We will be asking Oracle to optimize the statement using different optimization goals. Since we are using Oracle9i for the demo, we'll be asking it to use choose, first_rows, all_rows and rule.
     Hints for access paths - Access path hints ask the optimizer to choose the access path it recommends. We'll be asking Oracle to use an index that it didn't choose in the original access path it generated. We'll also be asking the optimizer to choose a full table scan instead of using an index.
     Hints for join operations - Oracle provides several different join methods for statements that join two or more tables together. We'll ask the optimizer to choose nested loop, merge scan and hash joins.
     Hints for join order - Oracle only joins two tables at a time. If multiple tables are joined, join order also describes the overall order of the tables being accessed. Oracle will join two tables and create an intermediate result set which is then used as input to the next join.

Join order plays a significant role in query performance, both in the choice of outer and inner tables and in the overall order the tables are joined. In general, you want to reduce the number of rows processed as soon as you can during the processing of a given SQL statement. The sooner you can reduce the number of rows being sent to future operations, the faster the query will usually run. We'll ask the optimizer to choose different join orders to determine the impact it has on SQL performance.
The bullets above are just a subset of all of the hints that are available. For a complete listing (and their definitions), please refer to the Oracle Database Performance Tuning Guide for your release. You'll find the documentation on Oracle's Technet website.
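To make the hint categories above a little more concrete, here are hedged examples of each one; the tables, indexes and column names are placeholders, not objects from the demo:

-- Optimization mode hint
SELECT /*+ FIRST_ROWS */ * FROM orders WHERE order_date > SYSDATE - 7;

-- Access path hints: ask for a specific index, or force a full table scan
SELECT /*+ INDEX(o orders_cust_idx) */ * FROM orders o WHERE o.cust_id = 100;
SELECT /*+ FULL(o) */ * FROM orders o WHERE o.cust_id = 100;

-- Join operation hint: ask for a hash join between the two tables
SELECT /*+ USE_HASH(o) */ c.cust_name, o.order_total
  FROM customers c, orders o
 WHERE c.cust_id = o.cust_id;

-- Join order hint: join the tables in the order listed in the FROM clause
SELECT /*+ ORDERED */ c.cust_name, o.order_total
  FROM customers c, orders o
 WHERE c.cust_id = o.cust_id;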
Recommended Toolsets
The two tools that I most often use to review SQL performance during my own scientific analysis are SQL*PLUS Autotrace and SQL Trace. Please refer to my blog titled "Access Path Identification - Part IV" to learn more about using the SQL*PLUS Autotrace and SQL Trace utilities to evaluate SQL access paths and SQL statement performance.
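As a quick refresher, these are the SQL*PLUS settings I typically use to turn Autotrace on and off for a test run (they assume the PLUSTRACE role and plan table have already been set up):

-- Show the access path and execution statistics without displaying the rows
SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS

-- Show the query output, access path and statistics together
SET AUTOTRACE ON

-- Turn the feature back off when the test run is finished
SET AUTOTRACE OFF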
Demo Document
The intent of this demo is not to train you to identify which access path is most optimal for a given situation. Its intent is to help you gain experience interacting with the Oracle optimizer. Take it from your friendly ex-Oracle instructor, spending time experimenting with the optimizer and analyzing the performance statistics that different access paths generate is critical to your tuning education. There really is no substitute for time spent "in the seat" performing your own scientific analysis on query optimization.
The demo will show you how to use hints and the ALTER SESSION SQL statement to influence the optimizer to take a different access path than it normally would. You can then compare the access paths and their associated performance statistics to obtain a better understanding of how each access path affects statement performance.
Access Path Scientific Analysis Part III
Now that we have an understanding of how we can influence access paths using hints and session parameter changes, let’s continue our discussion by reviewing the various types of indexes as well as indexing strategies that affect Oracle access path selection. We’ll complete this series next week when we use all of the information we have learned to perform our own scientific analysis on Oracle optimization.
Introduction to Oracle Indexes
Generally, the fastest way to access Oracle data is with an index. The Oracle database contains several different indexing types that are designed to provide complementary performance functionality.
While standard B-tree indexes are most effective for columns containing a high number of different values (high selectivity), bitmapped indexes are most appropriate for columns with a relatively limited number (low selectivity) of different values. The low selectivity statement above comes with several caveats. Instead of spending this entire blog on the bitmap index/low selectivity issue, please turn to Jonathan Lewis's DBAZine article titled "Bitmap Indexes Part 1". Oracle also provides function-based indexes to allow index access using SQL that contains column manipulations in the WHERE clause.
Administrators supporting large data stores use partitioned indexes to decompose large index structures into smaller, more manageable pieces called index partitions. Starting with the 8i release, Oracle places index data in the separate index partitions based on the index's partitioning key.
Before we begin reviewing some of the more popular Oracle index types, let me provide you with a few thoughts on indexes in general.
How Many Indexes Can I Build?
This subject has always been a matter for great debate. The DBA must balance the performance of SELECT statements with their DML (INSERT, UPDATE and DELETE) counterparts. SELECT statements that return a limited number of rows from a table yet access non-indexed columns will suffer from poor performance. Conversely, if you have too many indexes on a particular table, DML statements may be adversely affected.
The DBA must take the business requirements, application processing workload and workload scheduling into consideration when determining how many indexes to build. If you compare the performance improvements an index makes on a SELECT statement to the negative effect it has on DML statements, you will find that the benefits of building the index usually far outweigh the performance drawbacks.
Indexes on columns in the WHERE clause of SELECT statements can reduce query times by minutes and even hours. The creation of additional indexes may add additional time to on-line transactions that execute DML statements. Additional indexes will have the greatest negative impact on DML statements that access a large number of rows. The more rows that are inserted, deleted or changed, the greater the negative impact will be. Traditionally, programs that process large volumes of rows are scheduled to execute during off-hours.
The DBA must also consider the needs of the business. What process is more important to the business unit - getting the data in or getting the data out? Who complains the most? Is it the business user that must wait minutes (or hours) for their transaction or report to retrieve data or the business user that is waiting an extra few seconds for their update transaction to complete? Is the nightly batch window tight on time?
The DBA will need to find out how much time the additional indexes add to programs that process large volumes of rows. In addition, the DBA must determine when these programs run. If they run at night or do not require high-performance, consider building the index.
If the transaction update performance requirements are excessive (dot com applications are one example), keep the number of indexes to a minimum. A good recommendation is to build a set of tables that have no indexes for lightning-fast update performance and move the data to historical tables (with proper indexing) during off-hours to improve retrieval performance.
Index Monitoring
Determining if an index will increase performance is a pretty straightforward process. The administrator is focusing their tuning efforts on a particular query and is able to gather the specific information necessary to assist in the decision making process.
Dropping unused indexes is also an important part of application tuning. We learned previously that indexes force Oracle to incur additional I/O every time a row is inserted into or deleted from the table they are built upon. Every update of the table's columns incurs additional I/O to all indexes defined on those columns. Unused indexes also waste space and add unnecessary administrative complexity. Since unused indexes (excluding those used to enforce integrity constraints) do not add any benefits to the environment, why keep them?
Determining if indexes were being used in releases prior to Oracle9i was a time consuming and error-prone process. EXPLAIN plan and trace output could be used but there was no single mechanism that monitored index usage at the database level.
Starting with release 9i, the Oracle database simplifies the index usage monitoring process by providing the ALTER INDEX ... MONITORING USAGE command. The first statement below turns monitoring on for the index SCOTT.EMPIDX while the second statement ends the monitoring session:
ALTER INDEX scott.empidx MONITORING USAGE;
ALTER INDEX scott.empidx NOMONITORING USAGE;
The V$OBJECT_USAGE view can then be accessed to determine if the index was used during the monitoring session. When the session is started, Oracle clears the information in V$OBJECT_USAGE for the index being monitored and enters a new start time identifying when the index monitoring session started. After the index monitoring session is concluded, the USED column in the V$OBJECT_USAGE view will contain the value 'YES' if the index was used during the monitoring session and the value 'NO' if it was not.
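A simple query along these lines can be used to check the outcome of a monitoring session (V$OBJECT_USAGE only shows indexes owned by the connected user, so run it as the index owner):

SELECT index_name, table_name, monitoring, used, start_monitoring, end_monitoring
  FROM v$object_usage
 WHERE index_name = 'EMPIDX';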
Parameters that Impact Index Usage
The parameters listed below influence the Oracle cost-based optimizer to favor or not favor index access. Please note that this is not an all-inclusive list. It is a listing of parameters that I feel have the greatest chance to influence the optimizer to choose, or not choose, index access paths. We'll use these parameters during our scientific analysis.
     OPTIMIZER_MODE = first_rows or first_rows_n - The optimizer chooses the best plan for fast delivery of the first few rows or the first n rows. The first_rows_n settings replace the first_rows parameter in later Oracle releases; first_rows remains available for backward compatibility. More often than not, that access path will include an index. This optimizer mode tends to favor nested loop joins over hash and sort merge joins.

It is important to note that using this mode is not a switch. It won't change each and every table scan and hash join to index access and the nested loop join method. The optimizer, at times, will still favor table scans and hash joins. The reverse goes for the all_rows optimization mode below.
     OPTIMIZER_MODE = all_rows - The optimizer chooses the best plan for fast delivery of all of the rows that queries return. The optimizer may decide to choose a full table scan over index access and hash joins instead of nested loop joins.
     OPTIMIZER_INDEX_COST_ADJ = xxxx - This parameter lets you tune the optimizer to be more or less index "friendly." It allows administrators to influence the optimizer to make it more or less prone to selecting an index access path over a full table scan.

The default for this parameter is 100. This setting tells the optimizer to evaluate index access paths at the regular cost. Any other value makes the optimizer evaluate the access path at that percentage of the regular cost. For example, a setting of 50 makes the index access path look half as expensive as normal.
     OPTIMIZER_INDEX_CACHING = xxxx - You set this parameter to a value between 0 and 100 to indicate the percentage of the index blocks the optimizer should assume are in the cache. Setting this parameter to a higher value makes the indexes on the inner table of a nested loop join look less expensive to the optimizer. The end result is that the optimizer may favor a nested loop join using an index.
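Here is a hedged sketch of how these two index-friendliness parameters might be adjusted for a test session; the values are experiment settings for comparison, not recommendations:

-- Make index access paths look half as expensive as normal
ALTER SESSION SET optimizer_index_cost_adj = 50;

-- Tell the optimizer to assume 90 percent of index blocks are already cached
ALTER SESSION SET optimizer_index_caching = 90;

-- Re-explain the test statement, then return both parameters to their defaults
ALTER SESSION SET optimizer_index_cost_adj = 100;
ALTER SESSION SET optimizer_index_caching = 0;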

Index Effects on Access Paths
If a table only contains a few hundred rows, queries may run faster if the optimizer chooses to read all of the blocks in the table as opposed to using an index. The I/O generated traversing the index blocks to get to the table row entries would be higher than if Oracle read just the blocks allocated to the table being accessed.
What if we access a larger table using a column with poor selectivity? Selectivity describes the number of different values stored in a column. Poor selectivity means that the column has many values that are the same. If our statement contains a WHERE clause that searches for a column value that is contained in 90% of the table's rows, it is best that Oracle, once again, read each and every row in that table. Conversely, if a WHERE clause searches for a column value that appears in 1% of the table rows, it would be beneficial for the optimizer to choose an index.
Here are some examples of index access paths that we will see during our testing:
     Index only - Oracle is able to read all of the data required to satisfy the query's data needs from the index structure alone.
     Index to table - Oracle searches through the index structure looking for the key value(s), then uses row identifiers to probe the table to satisfy the data request.
     Index unique scans - The SQL statement accesses an index using a column (or columns) that are defined in a unique or primary key index with an equality condition.
     Index range scans - Oracle scans a set of entries in the index to satisfy a query.
     Index skip scans - Oracle is able to break down a multi-column index and treat it as a set of smaller subindexes. This is achieved by Oracle "skipping" the leading columns in the index and using columns that appear later in the index's definition.
     Full scans - Oracle scans all of the index entries. Kind of like a tablescan on an index. Oracle drops down to the leaf blocks and traverses the leaf blocks using the leaf pointers.
     Fast full scan - Oracle uses multi-block reads to read both leaf and non-leaf blocks. Non-leaf (branch) blocks are discarded.
But indexes influence more than just index-related access paths. Indexes can also impact the type of join operations used. Here's an "over the top" example to clarify. If you are joining two tables together in a SQL statement on join columns that have good selectivity (returns relatively few rows compared to the table size), Oracle will favor index access paths and a nested loop join. If you don't have indexes on the join columns, Oracle may choose table scans using a hash join instead of the nested loop join method.
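To tie the selectivity discussion to something concrete, here are two hypothetical probes against the same column; the table and the data distribution are made up purely for illustration:

-- Assume 90% of the rows have this status; a full table scan is likely the better choice
SELECT * FROM order_items WHERE ship_status = 'SHIPPED';

-- Assume only a handful of rows have this status; an index on ship_status is likely to win
SELECT * FROM order_items WHERE ship_status = 'LOST_IN_TRANSIT';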
Index Types
Let's continue our discussion by reviewing some of the more popular types of indexes: B-Tree, Bitmap and Function.
B-Tree Indexes
A traditional B-Tree index stores the key values and pointers in an inverted tree structure. The key to good B-Tree index performance is to build the index on columns having a lot of different values. Oracle describes this as "good selectivity." Oracle is able to quickly bypass rows that do not meet the search criteria when searching through indexes built on columns having a high degree of selectivity.
Bitmap Indexes
As I stated previously in this blog, there is a sense of confusion about bitmap indexes and the benefits they provide to columns with low selectivity. Instead of spending this entire blog on the bitmap index/low selectivity issue, please turn to Jonathan Lewis's DBAZine article titled "Bitmap Indexes Part 1".
The optimizer can be stubborn at times. It can be particularly stubborn when you want it to choose a single bitmapped index for an access path. A single bitmap index may not be chosen at all. The optimizer will be more inclined to choose bitmapped indexes as an access path if it can use multiple bitmapped indexes simultaneously. That's where the benefits of bitmaps are realized.
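For reference, bitmap indexes use the standard CREATE INDEX syntax with the BITMAP keyword; the table and columns below are hypothetical examples of low-selectivity columns:

CREATE BITMAP INDEX orders_status_bix ON orders (order_status);
CREATE BITMAP INDEX orders_region_bix ON orders (region);

-- The optimizer is more inclined to use the bitmaps when it can combine
-- them to satisfy predicates on both low-selectivity columns
SELECT COUNT(*) FROM orders WHERE order_status = 'BACKORDERED' AND region = 'EAST';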
Bitmap Indexes and Concurrency
Anyone accustomed to database programming understands the potential for concurrency problems. When one application program tries to update data that is in the process of being changed by another, the DBMS must sometimes forbid access until the modification is complete in order to ensure data integrity.
Each B-Tree index entry contains a single ROWID. When the index entry is locked during an update, a single row is affected. A bitmap lock affects a range of entries, which could have a negative impact on other transactions attempting to update rows already locked.
Locking issues affect data manipulation operations in Oracle. As a result, bitmapped indexes are not appropriate for OLTP applications that have a high level of concurrent insert, update and delete operations. Concurrency is usually not an issue in a data warehousing environment where the data is maintained by bulk loads, inserts and updates.
Function Based Indexes
Oracle 8i solved an indexing problem that had been affecting database performance for close to a decade. Before Oracle8i, any SQL statement that contained a function or expression on the columns being searched in the WHERE clause could not use an index on those columns.
For example, the statement:
SELECT * FROM employee_table
WHERE Upper(first_name) = 'CHRIS';
would not use an index. A full table scan would be required to retrieve the desired result set. We now know that we are able to use B-tree and bitmap indexes to speed query performance.
In Oracle8i and later releases, we are able to build both bitmap and B-tree indexes on columns containing the aforementioned functions or expressions. The following index could be used to increase performance of the query:
CREATE INDEX upper_first_name on employee_table (upper(first_name));
Function based indexes will affect the performance of DML statements that manipulate the columns contained in the function based index. The more complex the expression used, the more time the database will require to update the index.
Wrapup
The intent of this blog was not to provide you with an all-inclusive education on Oracle indexes. There are folks that spend dozens of hours learning about access paths and the effects that indexes, hints, statistics and parameters have on Oracle optimization. We are laying the groundwork to begin our own scientific analysis of Oracle access paths. As I stated previously, we need to begin experimenting on our own to fully understand the Oracle optimization process. In my next blog, we'll use the information we learned in this series to start our experimentation.
Access Path Scientific Analysis Part IV
We continue our series on Oracle access path scientific analysis. In this latest installment, we’ll do a quick review of some of the blogs that led us to this point. We’ll also learn how to select a set of SQL statements that we will use in our test cases.
Introduction
This is the latest installment in a series of blogs on Oracle access paths. Before we begin, it is important for us to do a quick review of my previous blogs on SQL performance. We'll be using this information during our scientific analysis phase.
     Let's Get Technical! - Using Deductive Reasoning and Communication Skills to Identify and Solve Performance Problems
The first blog of this series provides some helpful hints on what questions to ask during the initial stages of application performance problem analysis.
     System Triage Part II - Host Performance Analysis Using Grid Control and Host Commands
Using operating system commands and 10G Grid Control to monitor database server performance.
     System Triage Part III - Finding the Top Resource Consumers
We learned how to use 10G Grid Standalone to identify the top resource consumers for a given instance.
     System Triage IV - Access Path Identification Part I
A high-level overview of Oracle query optimization. Also discussed the difference between estimated and runtime access paths.


     System Triage V - Access Path Identification Part II
We reviewed the two data objects that contain access path raw data - plan_table and v$sql_plan. In addition, we discussed a few of the V$ dynamic performance views that provide information pertaining to SQL statements executing in our Oracle database environment.
     Access Path Identification - Part III
A discussion on the tools we can use to display graphical and text versions of explain plan data including the DBMS_XPLAN package, UTLXPLS.SQL, UTLXPLP.SQL and Oracle 9I Scratchpad (graphical). We also learned how to create our own graphical displays of Oracle access paths.
     Access Path Identification - Part IV
An investigation of everyone's favorite performance analysis tools - SQL*PLUS Autotrace and SQL Trace.
     Access Path Identification - Part V
A discussion on a couple of 9I Oracle Enterprise Manager tools that we can use to identify Oracle access paths. We also learned how we can use 9I OEM's SQL Analyze and the Virtual Index Wizard to help us better understand access paths, monitor database performance and tune statements running in an Oracle 9I database environment.
     Access Paths VI - 10G Grid Control SQL Details Panels
We turned our attention to the 10G Grid Control's access path display panel - the SQL Details Panel. It seems like no matter where you start your investigation in 10G Grid Control, sooner or later, you'll end up with a panel that contains a link to the SQL Details Panels. We took a look at a few of the more popular ways of finding our way to them.
     Access Paths VII - Access Path Education
Focuses on the importance of Oracle access path education. The blog contains links to resources and numerous Metalink Notes that provide access path educational information.
     Access Path Scientific Analysis Part I
An introduction to Oracle Access Path Scientific Analysis. We learn why it is important for us to become "database performance scientists" if we want to fully understand Oracle access paths and the effect they have on SQL statement performance. We also learned how to select a test system to use as well as document the parameters and database objects that play key roles in the optimization process.
     Access Path Scientific Analysis Part II
We learn how to use hints to influence access paths for testing purposes. Using hints allows us to evaluate the effect that different access paths have on SQL statement performance. When we finally begin our access path scientific analysis, we will run the statement without any modification, review the access path and performance statistics, use a hint to change the access path, run the statement again and compare the before and after results. Contains a document that provides a demo on using hints to influence access paths. The demo document also contains a description of some of the basic access paths that Oracle can choose from.
     Access Path Scientific Analysis Part III
An overview of the various types of indexes as well as indexing strategies that affect Oracle access path selection. Also includes a list of parameters that can influence the optimizer to favor index access.
Selecting SQL Statements to Analyze
We need to select a few SQL statements that we can use as input to our scientific analysis. Let's take a look at the various tools we can use to select the SQL statements we can use for testing.
     Using 9I Oracle Enterprise Manager - We can use the Session Details Panel to retrieve the SQL text of statements that are currently executing in the database. The Session Details Panel also provides the access path the statement is taking. The blog on 9I OEM will show you how to drill down to find the text of the SQL statement and the access path it is taking.
     10G Grid Control - Grid Control's Execution Plan Panel allows administrators to display the entire text of a SQL statement and the access path it is taking. This blog on 10G Grid Control will show you how to navigate through the various panels to access the Execution Plan Panel.
     SQL Trace - If we don't have access to the "SGTs" (Sissy GUI Tools), all is not lost. We can also use the SQL Trace utility to capture the text of the SQL statement as well as its access path. To use this method, we would contact one of our friendly application developers and ask them to run a series of batch jobs or online transactions in the environment we have selected as our test bed.

We can then activate SQL Trace, contact the developer to run the selected workload, deactivate the trace and format the trace's output into an easily readable format. SQL Trace provides the benefit of allowing us to capture multiple SQL statement access paths in a single execution. The blog on Autotrace and SQL Trace provides instructions on how to activate the SQL Trace utility.
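A rough sketch of that trace workflow, assuming we already know the SID and serial number of the developer's session (the values below are placeholders):

-- 10G: turn tracing on for another session using the DBMS_MONITOR package
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 147, serial_num => 1022);

-- ... the developer runs the selected workload ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 147, serial_num => 1022);

-- Format the raw trace file into a readable report with TKPROF from the O/S prompt:
-- tkprof orcl_ora_12345.trc trace_report.txt sys=no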
What Statements Should I Choose?
We are looking for statements that utilize the different Oracle access paths. Try and find statements that use:
     Table scan and index access paths:
     Index only - Oracle is able to read all of the data required to satisfy the query's data needs from the index structure alone.
     Index to table - Oracle uses a row identifier to probe the table to satisfy the data request.
     Full table scan - Oracle reads all rows from the table. Oracle will scan the table to the last block used (as opposed to the last block that actually contains data).
     Join access paths:
Used when the statement retrieves data based on matches between two tables (i.e. retrieve all of the employees that have the department name of "Welding"). The employee information is contained in the employee table and the department information (including the department name) is in the department table.
     Nested loop join - Good path when the join is accessing a small subset of rows.
     Hash join - Efficient access path for joins that access larger sets of data.
     Sort merge join - Sorts rows to allow quicker access during the join.
     Outer joins - An outer join returns all of the rows that satisfy the particular join condition and returns additional rows from one table that do not satisfy the join condition.
We don't have to find each and every access path I outlined above. In addition, each statement that does a join will also contain table scan and/or index access paths. Try to find statements that access a single table and statements that join two, three and four tables together. It should be relatively easy to find the nested loop and hash join access paths. Sort merge and outer joins may be a little harder to find. Try and stay away from SQL statements that use parallel processing, are 13 pages long or join 5 or more tables together. We want to start with the basics. We also don't want to use statements that have exorbitantly long execution times. We'll be executing them on a fairly regular basis during testing and waiting hours for the results will lengthen the testing process.
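For example, a starter set of test statements might look something like the sketch below; the tables and predicates are hypothetical, so substitute objects from your own test system:

-- Single table, selective indexed column: index-to-table access
SELECT * FROM employees WHERE last_name = 'FOOT';

-- Single table, unindexed low-selectivity column: full table scan
SELECT * FROM employees WHERE gender = 'M';

-- Two-table join returning a small subset of rows: likely a nested loop join
SELECT e.last_name, d.department_name
  FROM employees e, departments d
 WHERE e.department_id = d.department_id
   AND d.department_name = 'Welding';

-- Two-table join touching most of both tables: likely a hash join
SELECT d.department_name, COUNT(*)
  FROM employees e, departments d
 WHERE e.department_id = d.department_id
 GROUP BY d.department_name;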
Access Path Identification - Part V
The series on access path identification continues. We’ll review a couple of 9I Oracle Enterprise Manager tools that we can use to identify Oracle access paths. I’ll also show you a couple of beneficial utilities that will help you better understand access paths, monitor database performance and tune statements running in an Oracle 9I database environment.
Introduction
Readers of this blog know that 10G Grid Control provides a wealth of tools that facilitate database and SQL tuning. I have written a couple of dozen blogs that focus on the various advisors and utilities we can use to assist us in the tuning process.
But 9I's Oracle Enterprise Manager (OEM) is no slouch either when it comes to tuning and diagnostic tools. Since the focus of this blog has been to cover Oracle 10G, I won't be reviewing all of the 9I tools we can use to assist us in the tuning process. But this recent series of blogs focuses on the various tools that we can use to view access path information, and there are tools that 9I OEM provides that are definitely worth discussing. And let's face it, there are LOTS of Oracle9I databases out there.
9I OEM allows us to log in with a repository or in stand-alone mode. Most of the tuning and diagnostic tools require the use of a repository, which is a set of tables stored in a database. If you install the Oracle9I software in a Windows environment, you'll find a 9I Oracle Enterprise Manager Configuration Assistant contained in your Oracle menus. When you run the configuration assistant, Oracle will prompt you for database connection information. Once the repository is configured, you will have the ability to run the advanced diagnostic tools that Oracle9I OEM provides. After the initial configuration is complete, you'll need to run the Navigator to add databases into the Oracle9I OEM infrastructure.
Sessions Panel
After the databases are added using the Navigator drop down, each time you activate 9I OEM, the first panel shown will be a database listing. You select the database you want to administer or monitor and then activate the various tools and utilities you want to use.
Let's take a look at this first screenshot. On the far left of the screen, you'll see a listing of icons we can use to activate the various toolsets provided by 9I OEM. The display screen itself is split into two subpanels. The subpanel on the left allows me to navigate to a database and then drill down to view and administer configuration settings, schema definitions, security, locking, etc.
Please note that this screen shows that I have already activated the Sessions display panel. I use this panel a lot when I am reviewing SQL performance for a 9I database. Look at the column headings at the top of the screen; users can sort the sessions by any of them. The columns provide information on program name, machine name, OS user and process ID, which allows users to easily identify specific sessions.
The columns also provide key performance indicators. We can sort sessions by CPU, memory and physical and logical I/O. These columns make it easy for us to identify the top resource consumers. We are then able to drill down into that session to determine what SQL statement the session is executing and how well it is performing.
Drilling Down to View Specific Session Information
If I double-click on a specific session, 9I OEM displays the session drill down panels. I am able to select from general information, long operations (if available), the SQL statement currently being executed (blank if the session is not currently running a statement), performance statistics and locking information. Here's a screenshot of long operations information for a session in my database. The long operations display tells me how many blocks have been accessed and how many more blocks Oracle predicts must be read before that individual operation is complete. This display shows that I have almost completed a table scan on the HR.EMPLOYEE_TEST table.
The SQL tab at the top of this panel allows me to display the current statement being executed by the session. The panel also displays the access path that Oracle is predicting the statement is taking. Remember that we learned in previous blogs that, because of certain conditions (like bind peeking, for one), the access path can change during execution.
The panel allows me to choose from either a text or graphical description of the access path. In this case, I have chosen to view the graphical display of the access path. The cool feature of this panel is that the icons on the right hand side of the screen allow me to step through the access path. As we step through the access path, Oracle will display a description of the step being performed at the bottom of the panel. Not only does it allow us to more quickly identify how the query is accessing the data, but it also provides us with a quick education on the access paths themselves. If you aren't an expert in Oracle access paths, this is a quick way for you to start your education.
SQL Scratchpad
Let's close this subpanel and navigate back to the main 9I OEM screen. When I click on the top tool icon, 9I OEM will respond by displaying a horizontal menu listing of additional icons. The icon on the far right will be for the SQL Scratchpad utility. If I click on the far right icon, 9I OEM responds by displaying SQL Scratchpad. Scratchpad allows me to run SQL statements, edit them and then save them to a folder. For this discussion, we are most interested in Scratchpad's graphical access path display capabilities.
If I click on the icon that looks like an open book, 9I OEM responds by displaying the predicted access path the query will use when it executes. Like its session drilldown counterpart, SQL Scratchpad allows me to view the access path in text or graphical mode and allows me to step through the individual steps. Once again, the tool provides us with an easy way to educate ourselves in Oracle access paths.
SQL Analyze
Another tool that 9I OEM provides to view access path information is SQL Analyze. SQL Analyze is only available if you have purchased the tuning pack option for 9I OEM. This screenshot shows me navigating through 9I OEM's menus to activate SQL Analyze.
When you activate SQL Analyze, the first step the utility performs is to display a listing of all of the top resource consuming SQL statements that are contained in the library cache. If the statement has been flushed from cache, you won't find it in this display.
The tool lists all of the databases on the left side of the panel that you worked with in the past. As you can see, I am a pretty active user of SQL Analyze.
I can double-click on any statement in the top SQL listing to view its access path information. I can also use the menu system at the top of the screen to create a blank worksheet for new queries that I can enter manually. I found the statement I wanted from the listing and double-clicked on it to activate the SQL tuning panel.

The panel displays the SQL statement, and like our other utilities, allows us to step through the access path the query is taking. I can click on the SQL drop down menu at the top of the screen to ask Oracle to explain the statement using the various optimizer modes that are available.
SQL Analyze Tuning Tools
SQL Analyze provides us with tools that facilitate the SQL tuning process. The tools menu at the top of the screen allows me to choose from three different wizards. We'll review the Virtual Index Wizard in just a moment. The Hint Wizard allows us to choose hints from a drop-down menu system, while the SQL Tuning Wizard activates an intelligent advisor that provides us with SQL tuning recommendations. Once you have run the SQL Tuning Wizard a few times, it's pretty easy to see that it was the precursor to some of the more advanced utilities that are now available in Oracle 10G.
Virtual Index Wizard
The Virtual Index Wizard allows me to create virtual indexes on tables that our query is accessing. The key word in that last sentence is "virtual". Oracle does not permanently create the index. The intent of the wizard is to allow users to determine the potential performance impact a new index would have on the statement being analyzed. It is then up to the user to permanently create the index.

If I click on the Virtual Index Wizard menu selection, 9I OEM responds by displaying the Virtual Index Wizard introduction panel. When I click on Next, the wizard displays a panel that allows me to select the columns for my virtual index. Clicking next again displays a panel that allows me to attempt to force Oracle to use the new index or allow it to choose the new index on its own. My personal preference is to allow Oracle to choose the index on its own without providing any additional prodding.
Clicking Next again displays the projected cost improvement the new index will have on the statement being analyzed. I used a full table scan on a rather large table as an example. Based on the display, we can be pretty sure that the new index will have a positive effect on our statement's performance. The last panel also displays a button that allows us to view the before and after access paths.
The Virtual Index Wizard does a fairly good job of estimating the impact that new indexes will have on a SQL statement. It is an excellent tool to use when you are starting your tuning education as well as evaluating access paths in Oracle 9I databases.
I hope you enjoyed this blog on Oracle 9I access path identification. I think that after reading this blog, you'll agree that Oracle9I OEM provides users with a host of utilities to facilitate the access path identification and SQL statement tuning processes.
Next Up
We'll take a look at the various tools in 10G Grid Control that we can use to display access path information.
Access Paths VI – 10G Grid Control SQL Details Panels
We continue our discussion on Oracle access path identification. This blog takes an in-depth look at 10G Grid Control’s SQL Details Panels.
Introduction
The intent of this first series of blogs is to educate ourselves in the various tools we can use to identify Oracle access paths. In my next series, I'll provide you with links to information provided by various Oracle experts on access paths. Finally, we'll discuss how we can influence access paths when we think our choice of access path is better than Oracle's.
All Roads in 10G Grid Control Lead to the SQL Details Panels
10G Grid Control's primary display tool for providing information on specific SQL statements is the SQL Details Panels. It seems like no matter where you start your investigation in 10G Grid Control, sooner or later, you'll end up with a panel that contains a link to the SQL Details Panels. Let's take a look at a few of the more popular ways of finding our way to them.
Database Performance Home Page
The Database Performance Home page provides a couple of links that lead to the SQL Details Panels. If we start from the top, we'll see the Sessions: Waiting and Working Chart in the middle of the screen. This display allows us to determine exactly what resource our disgruntled users are waiting for.
Clicking on one of the colored areas of the Sessions: Waiting and Working Chart allows us to drill down to the specific resource being waited on. Clicking on any of the colored areas will display a drill down panel that provides details on that specific resource. For example, if we click on the purple I/O color on our chart, we'll navigate to the I/O drill down page.
No matter what resource you drill down into, they all have the same look and feel. All of the resource panels display the particular resource's historical utilization and two pie charts on the bottom of the screen that allow administrators to drill down into the top sessions and SQL statements utilizing that resource. If we click on the links listed under the Top Waiting SQL: User I/O heading, 10G Grid Control will respond by displaying the SQL Details Panels. If we click on the links under the Top Waiting Sessions: User I/O heading, 10G Grid Control will display the Session Information panel. The Session Information panel contains a link to the current SQL being executed which is displayed using the SQL Details Panels.
If you are interested in learning more about the Database Performance Home Page, please turn to my blog titled "Database Tuning Using Oracle 10G Grid Control".
Top Sessions Panel
If we return to our Database Performance Home Page, we need to scroll down to see the Top Sessions and Top SQL links. When we click on the Top Sessions link, 10G Grid Control will respond by displaying the Top Session display panel. If you look at the red box, you'll see that the Top Sessions panel is actually part of the Top Consumers set of display panels.
We navigate to the Session Details Panels for each session in our report by clicking on the link under the SID column heading. The Session Details Panel provides a link to the current SQL being executed, which 10G Grid Control displays using the SQL Details Panels.
Top SQL Panel
Let's return to our Database Performance Home Page. The link below Top Sessions is the Top SQL. Clicking on the link displays the Top SQL panel. The Top SQL panel has to be one of my favorite display panels in 10G Grid Control because it allows us to view SQL performance historically. The Top SQL panel provides a listing of SQL Details links that provide us with access to the SQL Details Panel.
Viewing Performance Historically
10G Grid Control allows us to take the database "back in time" and review performance historically. This screenshot shows the Database Performance Home Page in historical mode. Historical mode is activated by the drop down menu (red box) that is displayed in the upper right hand corner of the page.
The historical mode display provides a slider that allows us to go back in time and view performance statistics captured during a specific time period. When I drag the slider to a time in the past, 10G Grid Control displays the Database Performance Home page statistics for the chosen time period. If we click on the Top SQL link at the bottom of the page, 10G Grid Control responds by displaying the Top SQL Period display panel. This panel provides links to the SQL Details panels for the top SQL statements that were active during the chosen time period.
SQL Details Panels
Now that we have learned how to navigate to the SQL Details Panels, let's take a look at the information they provide. The first screen displayed is the Execution Plan panel.
Execution Plan
The Execution Plan panel provides us with the SQL statement's access path taken during execution. The top section will display the entire text of the SQL statement. The next section displays where the panel retrieved this information from (cursor cache), the optimizer mode, the data capture time and the plan hash value.
The bottom section provides us with all of the basic information we need to identify the access path this query is taking. It also includes key performance indicators including the number of rows that were used in each step, the time to perform the step and CPU and I/O cost indicators.
Current Statistics
Let's take a look at the Current Statistics Panel and the wealth of tuning information it provides. The panel will tell us if this SQL statement is being influenced by a SQL Profile. We'll have an entire discussion on SQL Profiles in an upcoming blog. But for the sake of this discussion, a SQL Profile is a stored access path that we can generate using the SQL Tuning Advisor.
Time Model information provides key performance indicators including elapsed time per execution, CPU time per execution and the statement's wait ratio. The Execution Statistics includes the number of executions, parse calls, rows per fetch, rows per execution and executions per parse call.
Execution History
The Execution History panel provides us with another key set of performance indicators. It shows us the statement's performance historically. The top chart displays the number of seconds the statement took to execute. This allows us to quickly determine if a statement's spike in elapsed and CPU times can be attributed to an event that occurred during a specific time in the past.
The Executions chart shows us how many times a statement was executed historically. This allows us to determine if an overall degradation in database performance can be attributed to an increase in the number of executions of this statement. I have used this panel to find a problem in a program that was causing it to loop and execute a particular statement millions of times.
Tuning History
The Tuning History Panel displays information on past tuning exercises for this statement. If you have generated any SQL Profiles, this panel will display information on each SQL Profile generated. It will also provide information on which SQL Profile is currently being used. The panel allows users to activate and deactivate each of the SQL Profiles that were generated for this statement. As I stated, we'll focus on SQL Profiles in an upcoming blog and I'll show you how to use this panel to switch between the different SQL Profiles created for a given statement.
Wrapup
I hope you enjoyed this blog on the SQL Details panels. I think you'll agree that it provides us with a good set of information to begin our SQL performance tuning activities.
Access Paths VII – Access Path Education
A few recommendations from your friendly ex-Oracle instructor on resources that will help you learn more about Oracle access paths. These resources will benefit beginners and tuning gurus alike.
Introduction
This blog is not about the SQL tuning process; the last six blogs contain a plethora of information on how to identify performance issues and drill down to find the poorly performing SQL. Its intent is also not to teach you to tune; its focus is to help you begin, or brush up on, access paths and SQL tuning. Remember the Foot Rule of Thumb: "The mark of being a good DBA is not knowing everything, it's knowing where to look when you don't".
Gaining a well-rounded and deep understanding of Oracle access paths and SQL performance is a wonderfully challenging task. One of the hurdles is the time required to learn how to tune SQL. DBAs are being challenged with ever-increasing workloads, shrinking DBA staffs and increasingly complex technologies to support.
In addition, the majority of shops view database administrators as much more than just "table jockeys." The DBA is often seen as the go-to technician because of their traditionally strong problem solving skills. The DBA is also viewed as the IT staff's technical generalist because of the working knowledge they have in many different facets of information technology. Those of us that have been working in this profession for any time at all understand that the term "database administration" is really a misnomer. We have to know everything from application and data design to network communications and operating systems (and everything in-between).
As a result, many of us don't have enough time to dig deep into the many different facets of the Oracle database environment. Oracle has recognized this issue and has created the various advisors in 10G to assist us in the monitoring and tuning process. I have written dozens of blogs on the toolsets provided in 10G that are designed to reduce the amount of time we spend administering, troubleshooting and tuning the Oracle Ecosystem (database, operating system, hardware server).
The Importance of Understanding Oracle Access Paths
That being said, all DBAs must have a firm understanding of Oracle access paths and SQL tuning. During my career, I've learned that there is nothing that can drag down an Oracle Ecosystem faster than poorly performing SQL statements.
That's when management suddenly doesn't care about how much you know about RAC, standby databases and data design. They want the database "fixed" and running smoothly again. They suddenly have a single-minded purpose. They quickly begin the chant, the chant that can only be described as the management mantra - "is it fixed yet, is it fixed yet, is it fixed yet, is it fixed yet, is it fixed yet….."
I've actually had my entire chain of command stand behind me in order of where they fit into the management food chain (team lead, project manager, section manager, division manager, VP…) while I was working on a database performance issue. I turned around and chuckled when I saw the lineup of managers. Funny, they didn't share my humor.
You need to get the performance problem fixed, you're nervous and you have a 15 page SQL statement on your screen. This is where a strong education in Oracle access paths becomes "somewhat handy."
SQL Tuning Topics You Must Understand
Here's a quick laundry list of topics you'll need to know. All of them are important. I have tried to include resources later in this blog that will provide you with information on all of them. You can use this listing to check off the topics during the education process.
     The Oracle release's impact on optimization and SQL tuning. Each new release contains features that affect access paths and SQL performance. Sometimes good and sometimes not so good.
     Oracle parameters that affect the optimizer. There are a handful or two of startup parameters that can influence the SQL optimization process. I have provided several articles below on this topic.
     Optimizer modes - Rule, choose, first_rows, all_rows. Each of the modes influences the optimizer to create access paths for the type of workload the database is responsible for supporting. For example, the first_rows optimizer mode may be OK for online transaction processing (read a record/write a record) but probably won't generate efficient access paths for a data warehouse database where millions of records are summarized.
     Oracle data statistics - How the optimizer uses them, how they affect access paths.
     Oracle system statistics - Later releases of Oracle can also incorporate the system load information during optimization.
     Basic access paths
     Index only - Oracle is able to read all of the data required to satisfy the query's data needs from the index structure alone.
     Index to table - Oracle uses a row identifier to probe the table to satisfy the data request. Why read all of the rows in a table if you can use an index structure to retrieve just the rows you need?
     Full table scan - Oracle reads all rows from the table. If the statement is going to read the majority of a table's rows, why would you want it to needlessly traverse an index to get the data? You are reading extra index blocks for no reason. You also need to learn the impact that the high-water mark has on full table scans. Oracle will scan the table to the last block used (as opposed to the last block that actually contains data).
     Join access paths - Used when the statement retrieves data based on matches between two tables (i.e. retrieve all of the employees that have the department name of "Welding"). The employee information is contained in the employee table and the department information (including the department name) is in the department table.
     Nested loop join - Good path when the join is accessing a small subset of rows.
     Hash join - Efficient access path for joins that access larger sets of data.
     Sort merge join - Sorts rows to allow quicker access during the join.
     Cartesian join - The tables being joined do not have join clauses that relate the two tables together.
     Outer joins - An outer join returns all of the rows that satisfy the particular join condition and returns additional rows from one table that do not satisfy the join condition.
     Join order - Oracle only joins two tables at a time. If multiple tables are joined, join order also describes the overall order of the tables being accessed. Oracle will join two tables and create an intermediate result set which is then used as input to the next join.

Join order plays a significant role in query performance. In general, you want to reduce the number of rows processed as soon as you can during the processing of a given SQL statement. The sooner you can reduce the number of rows being sent to future operations, the faster the query will usually run.
     Subqueries - A select within a select statement. Can be one of the trickier statements to tune, especially when you have multiple subqueries embedded within each other.
     Indexes and selectivity
     B-tree indexes are good for column(s) that have many unique values (high selectivity)
     Bitmap indexes are used for column(s) that do not have many unique values (low selectivity)
     How SQL statement predicates can determine if an index can be used. There are times when the way a statement is coded prevents Oracle from choosing an index as the access path. A common problem that often leads to poor performance. You do get a chance to flog the application developer responsible, though.
     Types of index access paths
     Index unique scans - The SQL statement accesses an index using a column (or columns) that are defined in a unique or primary key index with an equality condition.
     Index range scans - Oracle scans a set of entries in the index to satisfy a query.
     Index skip scans - Oracle is able to break down a multi-column index and view them as smaller subindexes. This is achieved by Oracle "skipping" the leading columns in the index and using columns that appear later in the index's definition.
     Full scans - Oracle scans all of the index entries. Kind of like a tablescan on an index.
     Fast full scan - Oracle uses multi-block reads to retrieve the index blocks.
     Sorting - Many operations require the database to sort a result set. It could be that the query wants to return the data in a particular order (ORDER BY, GROUP BY). In addition, some joins require that the data be sorted during the operation's execution. You'll need to understand why sorts are performed and the impact they have on performance.
     Views - Views can really complicate the tuning process. You think you are accessing a few tables in a query and then find that you are actually joining views together that also contain join operations.
     Hints - My next blog will show you how to use hints to influence access paths and how hints can be used to educate yourself on the performance of a particular access path operation.
     Bind variables and bind peeking - In the first and second blogs of this series, I described the impact that bind peeking can have on SQL statement optimization. Bind peeking may lead to the predicted access path not matching the access path taken during execution.
     Query transformation - Oracle can rewrite a query during the optimization process. Learn how Oracle uses view merging, predicate pushing, OR expansion and subquery unnesting to attempt to improve execution performance. This will occur on a regular basis and you need to understand how query transformation works and how it affects SQL performance.
     Local predicates vs join predicates - A local predicate accesses a bind or hardcoded variable (i.e. emp_id = :empid, emp_id = 13344) while a join predicate is used to join two tables together (i.e. emp.dept_id = dept.dept_id)
     Predicate usage - You need to understand how predicate usage affects indexes and access path generation. I have provided information below that discusses predicate conditions (=, >, <, etc) as well as how predicates affect index utilization and access paths.
     Operation cardinality - The number of rows returned by a particular access path operation. Importance of operation cardinality is magnified as the number of tables accessed in the query increases. As stated previously, the sooner you can reduce the rows sent to future operations, the better your query will perform.
     Skewed data and histograms - What happens when you have an index built upon a column in a million row table that has twenty occurrences of the value "OUT OF STOCK" while the rest of the column values contain the value "IN STOCK"? This is an extreme case, but the impact is that even though you may access the table looking for an "OUT OF STOCK" value, Oracle will most likely perform a table scan. You'll be searching close to a million values, while you need to retrieve only twenty of them. An index would be a much better access path, but Oracle sees that the column has such poor selectivity that it won't choose it. A histogram identifies skewed data and is able to provide the optimizer with the information it needs to make a more educated decision when choosing between a table scan and index access (see the sketch after this list).
     Parallel processing and partitioning - A best practice for large data stores is to partition data into smaller subsets of data. Partitioning allows data to be broken down into these smaller subsets yet still be viewed as a single-entity by the application. Parallel processing breaks a single request for data into one or more processes that access the data in parallel and return the data to the calling application.
     Parsing - Learn the differences between hard parses and soft parses. When the application sends a statement to the database for processing, one of the first steps in the execution process is called a parse. Oracle will check the statement's syntax, check security, generate the access path, etc. Like most operations, the fewer steps it needs to perform, the better. You will need to understand the impact that bind variables have on the parsing process and how the hard parse/soft parse ratio affects query and database performance.
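Here's the histogram example I promised above. This is only a minimal sketch - the APP_OWNER schema, the ORDERS table and its STATUS column are hypothetical - but it shows how DBMS_STATS can be asked to build a histogram on a skewed column so the optimizer can tell "OUT OF STOCK" from "IN STOCK":
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname    => 'APP_OWNER',                     -- hypothetical schema
      tabname    => 'ORDERS',                        -- hypothetical table with a skewed STATUS column
      method_opt => 'FOR COLUMNS status SIZE 254',   -- request a histogram on the skewed column
      cascade    => TRUE);                           -- gather the associated index statistics too
END;
/
With the histogram in place, the optimizer can choose the index for the rare "OUT OF STOCK" value and a table scan for the common "IN STOCK" value.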
Access Path Education
You can start your education on the different access paths that are available to the optimizer by reading Oracle's Database Performance Tuning Guide that is provided in each Oracle release's documentation. Before you buy third-party books on any topic, I highly suggest that you read Oracle's documentation first. The importance of this suggestion bears repeating - READ ORACLE's DOCUMENTATION FIRST. Here's a link to the 10G Database Performance Tuning Guide.
You'll need to create an account, but it's free to register and the process is painless. Virtually every topic that I described above is covered in the Database Performance Tuning Guide. It is very important that you read the guide that pertains to the release that you are working with. Each release contains enhancements to the optimizer as well as new features that affect the optimization process.
Oracle Classroom Education
OK, since I'm an ex-Oracle Instructor, you could have predicted that my next recommendation would be to sign up for an Oracle class on SQL performance tuning. Here's a previous blog that provides you with a few hints and tips to obtain the most from your classroom experience.
Oracle Education offers a class that focuses on SQL tuning. Here's an excerpt from the class description provided on the Oracle Education Website:
What you will learn:
--------------------------------------------------------------------------------
This course is designed to give the experienced SQL Developer or DBA a firm foundation in SQL tuning techniques. The participant learns the necessary knowledge and skills to effectively tune SQL in the Oracle Database 10g. They learn about tuning methodology as well as proactive tuning and reactive tuning methods. Students are introduced to the benefits of the new automatic tuning mechanisms available in Oracle Database 10g. On completion of the course they are able to compare and contrast the steps involved to tune manually as in prior releases as well as use the automatic SQL tuning features provided in the current release. Students gain a thorough conceptual understanding of the Oracle Optimizer, and reinforce instructor-led learning with structured hands-on practices. The course uses a series of challenge-level workshops, allowing students to "play, discover, and learn" at their own level and pace. The students learn to use the Oracle diagnostic tools and facilities: Automatic SQL Tuning components, EXPLAIN, SQL Trace and TKPROF, SQL*Plus AUTOTRACE. Students also learn to influence the behavior of the Optimizer by changing the physical schema and modifying SQL statement syntax.
Metalink Notes
Oracle's premier web support service is available to all customers who have current support service contracts. Oracle MetaLink allows customers to log and track service requests. Metalink also allows users to search Oracle's support and bug databases. The website contains a patch and patchset download area, product availability, product life-cycle information and technical libraries containing whitepapers and informational documents. A few of the white papers and notes that pertain to SQL tuning, optimization and access paths are provided below (do a search using the note number on the main page in Metalink to retrieve the note):
     199083.1 - Query Tuning Overview - Lots of good links to other articles.
     398838.1 - Frequently Asked Questions. Very good discussion on the optimizer. Discusses queries not using indexes.
     248971.1 - Query tuning best practices. LOTS of links to other notes. Links discuss a wide range of topics, from parameters that affect optimization to the impact of system statistics on optimization.
     35934.1 - Common issues and misconceptions about the cost based optimizer.
     68735.1 - Diagnostics for query tuning.
     67522.1 - Diagnosing why a query doesn't use an index. Very helpful.
     207434.1 - Tuning Queries - Quick and Dirty Solution.
     372431.1 - Troubleshooting a new query.
     154354.1 - Using FIRST_ROWS for fast query response.
     100229.1 - Measuring Index Selectivity.
     41954.1 - Hash join operations - a little dated but still valid.

Third Party Books
If you want to learn how the optimizer works, I highly suggest that you read Jonathan Lewis's book titled Cost-Based Oracle Fundamentals. It is one of the most educational and informative books I have ever read on the cost-based optimizer. Here's my review of the book for DBAZine. If I could only recommend one book on the Oracle optimizer, Jonathan's book would be it. Jonathan also maintains a blog that focuses on a wide range of topics but does include a lot of discussions on Oracle tuning.
Tom Kyte's Ask Tom Website also provides a lot of information on SQL tuning, access paths and proper coding techniques. One of Tom's trademarks is to use a snippet of code to reinforce the information he is conveying. I'm a big fan of using examples and a big fan of Tom's website.
Articles and Presentations
Article by Don Burleson on cost based optimization. Discusses parameters that affect the cost based optimizer. Statistics, helpful hints and tips.

Advanced CBO Article and Presentation. Advanced discussion on how statistics influence the optimizer, why the optimizer doesn't always take the correct access path. Written By Wolfgang Breitling.

Article on the Oracle 10G Costing model changes and how they influence access paths. Listing of parameters that affect optimization. Author is Kimberly Floss.

Oracle Statistics Article - The effects of system statistics on the optimizer. Very good article that provides lots of details. Written by Jonathan Lewis.
Article on why Oracle won't use an index (when you think it should). Excellent article written by Jonathan Lewis.

Test, Test, Test
Experience pays. You need to spend time "in the seat" learning how to tune. Read the above information and find a database that you can use as a test environment. You need to work with tables that have small numbers of rows and queries that return small result sets. You also need to run queries that access tables with high numbers of rows and return large result sets.
I'll show you how to use hints to change access paths in the next blog. Change the access path with a hint and run the query. Influence the optimizer to take a different access path and join order, and keep a tuning record of the changes and runtimes.
Don't fall into the trap of favoring one access path or join method for all situations. I once overheard a conversation between a developer who had just moved from an online transaction system to the data warehouse team and the data warehouse DBA. The developer was looking at an access path and stated "I hate hash joins." The warehouse DBA stated "Not in this environment you won't". All access paths and join methods have a place in Oracle optimization. It's up to you to learn which ones apply for a given situation.
SQL*PLUS Tips and Tricks
    
    
I thought I’d provide you with a couple of quick SQL*PLUS tips and tricks this week. Some of the features are new in 10G, while others are available in earlier releases. I think you’ll find them all to be very helpful.
What Database Are You Working In?
Working in the wrong database is a common problem for database experts as well as their less experienced counterparts. How many times have YOU found yourself running statements in the wrong environment? Feel free to include me in that not so select group. The operating system command SET can be used in Windows systems to display environment variables. The ENV command can be used to display the environment variables in UNIX. Many seasoned database administrators change their UNIX shell prompt in their .profile to display the current Oracle SID. Displaying the current Oracle SID in the shell's prompt provides a continuous reminder to the DBA of the database they are working in.
GLOGIN.SQL and LOGIN.SQL
Administrators are able to use two operating system files, GLOGIN.SQL and LOGIN.SQL to customize their SQL*PLUS environment (including the prompt). When SQL*PLUS starts, it will look for GLOGIN.SQL in $ORACLE_HOME/SQLPLUS/admin and will execute its contents as soon as the connection to the database is complete.
Once the GLOGIN.SQL file has been executed, the LOGIN.SQL file will be executed. Oracle will look for the LOGIN.SQL file in the current working directory (where you started SQL*Plus) and in the directories listed in the operating system environment variable SQLPATH. Remember that LOGIN.SQL can override the settings made in GLOGIN.SQL. The exact names of these files may be different on some operating systems. Check the Oracle installation and administration guides provided for your operating system for the exact names.
Here's an example of my GLOGIN.SQL file on my PC that displays the time and instance name in my SQL*PLUS prompt. The file also contains a few formatting commands to format SQL*PLUS output. In this example, I'm using a SELECT statement to populate a variable that is displayed in the prompt. I'll show you how to use a SQL*PLUS pre-defined variable to do the same thing later in this blog.
COLUMN file_name FORMAT a44
COLUMN tablespace_name FORMAT a20
COLUMN owner FORMAT a15
COLUMN segment_name FORMAT a20
set lines 132
set pages 100
set termout off
col dbname new_value prompt_dbname
select instance_name dbname from v$instance;
set sqlprompt "&&prompt_dbname> "
set termout on
set time on
The Problem in Releases Prior to 10G
When I first tested this feature many years ago, I thought to myself "hey, this is a pretty cool feature." A fellow administrator walked by and saw that I was changing my prompt. He told me "you'll have to watch that because if you connect to another database, you won't execute the GLOGIN.SQL file again. That means you'll still have the original database name displayed in the prompt but you will be connected to the new database. There's a few other tricks you can do but just putting the display commands in GLOGIN won't work and its actually kind of dangerous."
10G SQL*PLUS comes to the rescue. GLOGIN.SQL and LOGIN.SQL are now executed after each successful connection to a target database. This is in addition to the files being run during the initial execution of the SQL*PLUS tool. Each time you connect to a different database in 10G SQL*PLUS, the prompt will change accordingly. The two demos below will show you the differences between 10G and 9I SQL*PLUS. Both 9I and 10G installations have the script above embedded in the GLOGIN.SQL file.

10G SQL*PLUS DEMO
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 22 07:14:10 2006
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
07:14:11 orcl9i> * Prompt shows database I connected
07:14:17 orcl9i> * to when I activated SQL*PLUS
07:14:18 orcl9i>
07:14:18 orcl9i>
07:14:18 orcl9i>
07:14:18 orcl9i> connect system/maxwell@orcl
Connected.
07:14:26 orcl>
07:14:28 orcl> * Prompt changes when I connect
07:14:28 orcl> * to a different database
07:14:28 orcl>

9I SQL*PLUS DEMO
SQL*Plus: Release 9.2.0.6.0 - Production on Thu Jun 22 07:16:01 2006
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
07:16:02 orcl9i> * Prompt shows database I connected
07:16:03 orcl9i> * to when I activated SQL*PLUS
07:16:03 orcl9i>
07:16:04 orcl9i>
07:16:04 orcl9i>
07:16:04 orcl9i>
07:16:04 orcl9i> connect system/maxwell@orcl.com
Connected.
07:16:14 orcl9i>
07:16:15 orcl9i> * Notice that the prompt doesn't change
07:16:16 orcl9i>
07:16:16 orcl9i> select name from v$database;
NAME
---------
ORCL
07:16:23 orcl9i>

Pre-Defined Variables
Pre-defined SQL*PLUS variables can also be used to customize a user's prompt. Currently, SQL*PLUS provides the following pre-defined variables (several of which are new in 10G):
     _CONNECT_IDENTIFIER - Connection identifier used to establish the database connection
     _DATE - Current date, or a user defined fixed string
     _EDITOR - Specifies the editor that will be used when the EDIT command is executed
     _O_VERSION - Oracle database version
     _O_RELEASE - Full release number of the Oracle Database
     _PRIVILEGE - The privilege level the current user has assigned to them
     _SQLPLUS_RELEASE - Full release number of the SQL*PLUS program being used
     _USER - Account name used to make the connection
For example, the command below could be embedded in GLOGIN.SQL or LOGIN.SQL to display the account used to connect to the database and the connection string:
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER > "
The demo below shows the results:
>SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER > "
SYSTEM@orcl >
SYSTEM@orcl >
SYSTEM@orcl >
I don't care how experienced a DBA you are, you are always in danger of working in the wrong database. It takes but a momentary distraction to lay the groundwork for a catastrophic error. Anything I can do to reduce my anxiety when I execute that "DROP USER… CASCADE" command makes me a more efficient and more productive administrator. Taking five minutes to customize my prompt is well worth the effort when you compare it to the disastrous results that can occur as the result of administering the wrong database. Now I can finally quit executing "SELECT NAME FROM V$DATABASE" to tell me what database I'm in.

Enhancements to the SPOOL Command
The SPOOL command tells SQL*PLUS to send all output to the specified flat file. Think how many times you have used the SPOOL command to save your output for later review. Well, 10G improves the usability of the SPOOL command by adding the following syntax (a short example follows the list):
     APPEND - Appends output data to an existing file. The command will create a new file if the specified file is not found
     CREATE - Creates a new output file and will return an error if the file already exists
     REPLACE - This is the default option. REPLACE will replace an existing file or create it if it is not found
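Here's a quick sketch of the new APPEND option. The file name and query are just examples - any statement you run while the spool is open is appended to the existing file:
SPOOL /tmp/space_report.lst APPEND

SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) free_mb
FROM   dba_free_space
GROUP  BY tablespace_name;

SPOOL OFF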

DBMS_OUTPUT Line Length and Output Limit Increases
Oracle provides the DBMS_OUTPUT package to display debugging information from PL/SQL code. The package was not designed to display reports or format output to standard output. But there are times when developers need to pump out large amounts of diagnostic information to help them debug their programs.
But developers using 10G R1 and earlier versions were frequently constrained by DBMS_OUTPUT's 255 byte single line limit and total session limitation of 1 million bytes. As a result, when using SQL*PLUS to debug their PL/SQL code, developers often received the following error messages:

ORA-20000: ORU-10028: line length overflow, limit of 255 bytes per line
Single line limit of 255 bytes exceeded.
ORA-20000: ORU-10027: buffer overflow, limit of 1000000 bytes
Maximum amount of 1 million bytes per session exceeded.
10G R2 increases the single line length limit to 32,767 bytes, and the overall session limit is removed altogether.
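A quick way to see the new limits in action from SQL*PLUS (assuming a 10G R2 client and database):
SET SERVEROUTPUT ON SIZE UNLIMITED

BEGIN
   -- a single line longer than the old 255 byte limit now displays without ORA-20000
   DBMS_OUTPUT.PUT_LINE(RPAD('x', 1000, 'x'));
END;
/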

Using PUPBLD.SQL
PUPBLD.SQL can be used for more than stopping those irritating "PRODUCT PROFILE TABLE NOT LOADED" messages from occurring when you log on to SQL*PLUS. PUPBLD.SQL creates a product profile table that is read by SQL*PLUS when users log on to the product.
Front-end applications can be designed to prevent users from updating data incorrectly. Other applications don't require database accounts because the user logs on the application and not the database. But there are still numerous applications that require database defined accounts and passwords. What happens if the user logs on to the database using their database account and password using SQL*PLUS? The application code is not available to prevent them from making incorrect or unwarranted changes to the database data.
Administrators can prevent this from happening by inserting rows into the product profile table to disable a SQL*PLUS user's ability to execute:
     SQL*PLUS commands - COPY, EDIT, EXECUTE, EXIT, GET, HOST (or your operating system's alias for HOST, such as $ on VMS and ! on UNIX), QUIT, PASSWORD, RUN, SAVE, SET, SPOOL, START
     SQL statements - ALTER, ANALYZE, AUDIT, CONNECT, CREATE, DELETE, DROP, GRANT, INSERT, LOCK, NOAUDIT, RENAME, REVOKE, SELECT, SET ROLE, SET TRANSACTION, TRUNCATE, UPDATE
     PL/SQL commands - BEGIN, DECLARE
SQL*PLUS reads restrictions from PRODUCT_USER_PROFILE when a user logs on using SQL*PLUS and maintains those restrictions for the duration of the session. To disable a SQL or SQL*PLUS command for a given user, insert a row containing the user's username in the USERID column, the command name in the ATTRIBUTE column, and DISABLED in the CHAR_VALUE column.
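For example, a row similar to the following (inserted while connected as SYSTEM - the user name SCOTT is just an example) disables the HOST command for that user the next time he or she logs on to SQL*PLUS:
INSERT INTO product_user_profile
   (product, userid, attribute, char_value)
VALUES
   ('SQL*Plus', 'SCOTT', 'HOST', 'DISABLED');

COMMIT;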
The Importance of Soft Skills
    
    
I thought I would deviate from my usual discussion on 10G technical topics for one blog. I’d like to talk about non-technical skill sets and the important role they play in our careers.

It used to be that you could get by with a total lack of interpersonal skills in our profession. Well those days are long gone. If you want to succeed, you’ll need to learn how to communicate effectively and play well with others. This blog contains a few pearls of wisdom, and a story or two, to help you become a well-rounded technician that is on the fast track to a successful career.
The DBA's Evolving Role
You have read numerous articles on the changing role of the DBA. How the various database vendors are making their products so easy to administer that, sooner or later, DBAs will no longer be required to be expert technicians (or be required at all). If you have been reading this blog, you'll know that I totally disagree.
It is my opinion that database administrators will always need to be just as technically proficient as they were in the past. I will agree that database environments are becoming easier to administer. Oracle Grid Control allows us to administer, monitor, tune and troubleshoot an Oracle database without having to go as deep technically as we had to in the past. Although we won't have to know as much about the internals as we used to, our skill sets will become much broader in scope.
The database engine is taking on a much more strategic role in most organizations. It provides the mechanism to store physical data along with business rules and executable business logic. The entire application environment (data storage, business rule enforcement, application program storage, communication, system monitoring) is now controlled by the database. Over time, the database engine will store more information related to the understanding of the business, the meaning of the data stored (metadata) and the mechanisms to control and track versions of the database, access programs and related software. As the database's area of influence expands, so does the DBA's. Can any one of us predict what IBM, Oracle and Microsoft have up their sleeves in their next "latest and greatest" release? Not me. But THAT is what makes this job exciting. Our area of technical influence will be expanding, not contracting.
The Importance of Soft Skills
Now that I have expressed my opinion on the expanding role of the DBA, let's talk about the importance of soft skills. Before we begin, I define soft skills as the ability to communicate in both written and verbal forms and the ability to interact with fellow employees in a positive manner. In the past, a technician's lack of interpersonal skills was often overlooked. The more technically proficient the technician was, the more leeway they were given.
Let me provide you with a couple of quick examples. When I first started in this profession, I attended a meeting at a large financial institution that included some pretty high-level representatives from both the business and technical areas. One of the technicians that attended the meeting was a mainframe operating system support technician.
The guy was well known to be very good at his job and as nasty a person as you would ever want to meet. The meeting started and it quickly became apparent that most of the issues being discussed would be about business processes. The O/S expert slammed his pen and pad down on the table and declared "It looks like this is going to be a waste of my time. I have work to do. Call another meeting with me when you need technical advice." He then promptly walked out. Being a junior level programmer, I was in awe. I asked my boss after the meeting who he was. He stated "one of our mainframe gurus, don't act like he does until you get as good as he is."
A dozen years later, I saw a technician raise his hand at an enterprise-wide IT meeting when the CIO asked (rhetorically I think), who was the most important person in the organization. I think the CIO thought the answer would be "the CEO." The techie who raised his hand said "I am." I think every manager attending that meeting shrunk down in his or her seat. When the CIO asked why, he stated, "when my computers go down, all business stops." Just a few months ago, I had a 20-minute conversation with a UNIX admin who never bothered to turn around from her screen to look at me.
Although my examples may be over the top, they show you the mindset that often plagues our profession. I have been guilty of falling into that trap myself. In the past, I could be described as being "somewhat temperamental". I can also say that once you get a reputation for being "somewhat temperamental", it becomes hard to shake.
As the years have gone by, I have migrated from DBA to DBA Unit Manager and now DBA Operations Manager. I have seen too many excellent technicians end up with a mediocre career because they achieved a reputation for "not playing well with others." I talk from experience. If you want to excel as a technician, you will need to be technical, but you will also need to work well with others.
Verbal and Written Communication Skills
I think people read this blog because they take pride in their work and want to become better at their chosen profession. So here's my second piece of non-technical advice. The importance of improving your communication skills cannot be overstated. I don't care how strong of a technician you are, if you can't communicate effectively with your peers, you won't be able to succeed in this profession. In the old days, you might have been able to get by with just your technical skills. That is definitely not the case in today's business world.

Take a look at your last performance appraisal form. I'm betting that most of the criteria you are being judged upon depend upon communications. The key words and phrases to look for are "ability to work in a team environment", "keep supervisors informed", "maintain good communication with the user community", "ensure the content of the communication is at the appropriate level for the intended audience", "provide system and user documentation for projects and system enhancements." I pulled all of the aforementioned phrases verbatim from one of my own recent performance appraisals. I reviewed all of the criteria that I was being evaluated upon and found that almost ninety percent of the items depended upon verbal or written communications.

If you don't have good communication skills, all is not lost. Like anything else, these skills can be learned. I still consider myself to be only a fair writer. I am in awe of people like Craig Mullins (a fellow blogger here on DBAZine) who can just sit down and let the words flow. I often find myself agonizing over every word and sentence.

When I first started working in a corporate environment, my writing skills were terrible. My original career was not database administration, it was construction - a job that didn't require you to excel at written and verbal communications. One on-the-job accident, 9 operations and 11 months of vocational rehabilitation training later, I had a new career as a COBOL programmer. I went from working with a construction crew to working with computer programmers. My first employer was a very large and somewhat stuffy financial institution. When I was employed there, men couldn't leave their floor without wearing their suit coat.
I quickly learned that getting caught wiping your computer screen off with your tie didn't really show your managers that you were good at thinking "outside of the box." That was about the only thing I thought that piece of knotted cloth around my neck was good for. For the first six months, I refused to tie them. Being the non-conformist that I was, I just loosened them up, slipped them off and hung them up.
Saying that my communication skills were rough around the edges when I started my career would be an understatement. But I had the good fortune of having a manager that understood the importance of both verbal and written communications. I would write a memo, she would correct it with her red pen and send it back to me for a rewrite. Many of them had a "Nice Try!" and a smiley face on top. After becoming exasperated because of the numerous rewrites (and seeing all of those smiley faces), I thought I had better improve my writing skills. I read books, practiced writing and became involved with as many company newsletters and related communications as I could. When I asked to join a newsletter, I always started with "I'm not the greatest writer, but I'm trying to learn." I also asked my peers that worked on the newsletter to critique my work. The more I was critiqued, the better I became.
It was the same way with public speaking. My first speech could be described as "somewhat less than stellar." Craig Mullins would gently prod me until he finally convinced me that speaking was something I should be doing from time to time. Craig promised to sit in the back of the room for my first speech and give me hand signals if I was speaking too fast, too slow, too loud or too soft. Halfway through the speech, my knees were knocking and his hands were in constant motion.
I found that like anything else, experience helps. But I will say that my speaking career was not without excitement. I learned that you really shouldn't drink a carbonated beverage wearing a tie mike that is attached to a set of 6 12-foot speakers. When I was done chugging the pop before the speech, I looked around and saw everyone laughing at the noises I had just made. I then found out that a tie mike doesn't help when you try to stifle a burp.
I also learned that some podiums are on wheels and those wheels aren't always locked. I started my first sentence, leaned against the podium and it began to move. I tripped a little trying to stop the podium from moving and ended up heading for the end of the stage at a very rapid rate. It was a raised stage too, about six feet higher than the first row of seats. As I quickly approached the end of that raised stage, I noticed that the people in the first row were making motions just like the extras did in the old Godzilla movies- right before they got stomped on. Lucky for me one of my work buddies in the first row had the good sense to jump to his feet and stop the podium (and me) from killing a few members of my audience. I got over those little snafus and kept plugging away. With each subsequent speech, I started to improve.
The point I am trying to make is that you can improve upon your communication skills. IT shops are no longer evaluating technicians purely on their technical skills. I have seen the soft skill evaluation pendulum swing a little more each year. It is the total package of skills that you bring to the table that you are being evaluated upon. We all know the importance that our technical skill sets have upon our success in this field. But you also need to be well rounded in all of the other skills this profession demands.
Application Design Review Meetings
    
    
Let's continue our discussion on the Non-Technical Art of Being a Successful DBA. The intent of this blog is to help administrators design and standardize on a formalized design review process. The goal of the design review process is to identify and address application design, process flow, program logic and SQL statement problems early in the development lifecycle. Identifying these issues early in the development lifecycle allows them to be more easily addressed than if they were to be identified during later stages.
One of the overlooked duties of an Oracle DBA is to inform, educate and assist the application development staff during the application development process. Although these responsibilities may not be formally assigned to the DBA, the DBA unit often finds that they are providing these services by default. The DBA is often considered to be a seasoned veteran who spends most of their time learning the subtle eccentricities of the Oracle database management system.
It is the DBA’s responsibility to ensure that the overall design and implementation of the new application is proceeding according to expectations. Although application developers may be experts in SQL, procedural languages (Java, PL/SQL, C variations, etc.), they still turn to the DBA for help with standards, procedures, performance design, error recovery and complex SQL.
A continuous and effective dialogue between the DBA unit, system support personnel and application developers is of utmost importance throughout the entire application design and implementation process. One method to foster effective communications is for the DBA unit to create a series of new application design review meetings. These meetings can be organized in a manner that promotes efficient application usage of the Oracle database environment. The design review meetings can be scheduled during logical break points between the different stages of the application development process.
The database administration team should invite representatives from the development and business units to assist in the creation (and enforcement) of the standardized review process. Application development teams will be able to help tailor the design review process to satisfy their specific design and implementation needs. Business users who will be impacted by the production turnover will provide valuable input on implementing new applications for their specific business areas. Customized checklists are created to ensure that all application and business specific issues are addressed during each meeting of the design review process.
It is recommended that the database administration team then communicate to all application development and business areas that any internal applications created without following the standardized review process will not be migrated to production. Having the application teams and business units participate during the creation of the standardized review process allows the DBA team to enforce the policy without being viewed as dictatorial or making rules in a vacuum. In addition, every effort should be made to “sell” the standardized design review process in all communications to application development teams and business units.
The following list of meetings can be used as a starting point in the creation of a structured application design review process:


Initial Overview of Proposed System
The kickoff meeting is held with datacenter operations support managers, operating system administrators, application team leaders, system analysts, end-user management and data administrators to discuss the application’s general purpose and function. This information will allow support technicians to determine the impact the new application will have on existing systems and allow them to begin planning the application design implementation process. The following information should be covered in this meeting:
     General description of purpose and function
     Application scope (enterprise wide application that affects all business units, intra-departmental that affects several business units or departmental)
     General application size (estimated number of programs, data objects)
     Criticality of the application (mission critical, application critical, non-critical)
     Application availability requirements and downtime windows
     Application type (decision support, business intelligence, data warehouse, data mart, online transaction processing)
     Architecture design (web server, application server, N-tier, distributed database environment)
     Advanced features required (replication, advanced queuing)
     Data feeds required from other architectures
     Load processing
     Batch processing
     Online transaction processing
     Development tools used to build front-end application screens
     Third-party tools used to provide Ad-Hoc query access
     Procedural language used to build business logic (Java, PL/SQL)
     Application development methodology to be used (JAD, RAD)
     Number of concurrent users
     Disk storage and data growth estimates
     Highly available architecture discussion (RAC, Oracle Fail Safe, Data Guard, hardware vendor clustering and failover)
     Performance expectations
     Criteria used to judge performance
     Security and auditing requirements
     Hardware platform, O/S preferences/selection and sizing discussion
     Hardware platform, O/S installation, operation and administration
     Division of duties between the DBA, application development and business units


Logical Data Model Review
This meeting is convened as soon as the logical data model is complete. The major emphasis of this meeting is to determine if the logical data model is complete and correct. The application’s process model (if one is available) can also be verified at this time. Volume statistics, object growth rates, purge criteria, referential integrity needs and application-naming conventions are also discussed. Knowing your data beforehand is essential to designing processes to manipulate that data. The following topics are covered in this meeting:
     Determine if the data model is fully documented (entities, attributes, relationships)
     Attributes have correct datatype, length, NULL status, default values
     General discussion of business rules that are to be enforced by database level constraints
§    Not null constraints
§    Check constraints
§    Unique constraints
§    Primary key constraints
§    Foreign key constraints
     Business rules to be enforced by triggers and procedures
     Business rules to be enforced by application code
     Logical/process model comparison
     Volume statistics
     Growth rates and purge criteria


Designing for Performance
This meeting is held with the application development units before any physical DDL is generated by the DBA. Proper transaction design and efficient SQL coding result in fewer performance-oriented database alterations during the later stages of the design and implementation process. Determining predicate usage will allow the DBA to create additional database objects (indexes, materialized views, index organized tables) to increase SQL, and subsequently, application performance. The following information is discussed:
     Normalization vs denormalization
     Table access requirements and predicate usage
     Database objects used to increase SQL performance including:
§    B-Tree Indexes
§    Bitmap Indexes
§    Bitmap join indexes
§    Function-Based indexes
§    Materialized views
§    Index organized tables
§    External tables
     Data partitioning algorithms
§    Range partitioning
§    Hash partitioning
§    List partitioning
§    Composite partitioning
     Process parallelism (parallel query, parallel DML, parallel load)
     Transaction modeling
     Oracle SQL performance features
§    Full table scan vs index access
§    Hash joins
§    Star joins
§    Index skip scans
§    Index fast full scans
§    Cursor sharing (SQL statement reuse and soft parse vs hard parse)
§    Bind variables (see the short example after this list)
     Reinforcement of effective SQL coding techniques
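To reinforce the bind variable and cursor sharing items above, here's a minimal SQL*PLUS sketch using the familiar SCOTT.EMP demo table. Coding the statement with a bind variable lets repeated executions with different department numbers share the same parsed cursor instead of hard parsing a new statement for every literal value:
VARIABLE dept_id NUMBER

EXECUTE :dept_id := 30

SELECT ename, job
FROM   emp
WHERE  deptno = :dept_id;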


Setting up a Successful Test System in Oracle
This meeting is held as soon as the application developers are ready to begin the actual coding process. The ultimate goal of this meeting is for application developers to have a firm understanding of what is required to create and maintain a successful Oracle test environment. Discussions on Oracle utilities, SQL statements, procedural code, Oracle security, batch script testing, benchmarking, testing and monitoring are of prime importance at this time. The DBA must make it clear to all involved that the DBA unit is always available to answer questions and troubleshoot poorly performing SQL throughout the entire application development life cycle. It is imperative that the DBA make every effort to find and correct problems during this stage of the development process. Maintaining a proactive stance instead of a reactive one will always prove to be beneficial when it comes to correcting design problems. There are always enough potential problems lurking in the system for the DBA to solve without adding any additional ones through faulty application design. Topics to be discussed in this meeting include:
     Division of duties between DBA, systems, application development and business units
     Test hardware platform, O/S installation, configuration and administration
     Data feeds required from other architectures
     Data load and update processing and scheduling
     SQL Loader control cards
     Test system availability requirements
     Test system backup and recovery requirements
     Oracle security authorization
     Tools to aid in application development
     Oracle Enterprise Manager toolset
     In-house monitoring tools
     Third-party vendor products
     Benchmarking and performance measurements


Monitoring Performance During Testing
Because this meeting and the previously discussed Setting Up a Successful Test System are closely related, they may be discussed together to save time and provide greater clarity. The DBA and developers need to have an understanding of what information is required to effectively monitor the performance of the Oracle application throughout the entire testing process. The DBA can offer to show developers how to use the various toolsets provided by Oracle (Oracle Enterprise Manager, SQL Trace, explain plan, BSTAT/ESTAT, autotrace, Oracle traces, V$ tables and Statspack) so that developers can play an active role in performance measurement. Suggested discussion topics are as follows:
     Names of all SQL Loader control cards and batch scripts
     Performance measurement goals and expectations
     Determine the test load volume (data volume, concurrent users) required to accurately predict production performance
     Comparison of test system load to estimated production load
     Oracle performance measurement tools. Suggested tools include:
§    Explain
§    SQL*Plus autotrace
§    Oracle SQL trace
§    Statspack
§    Oracle Enterprise Manager with Tuning Pack option
§     V$ performance tables
     Index utilization monitoring via the ALTER INDEX ... MONITORING USAGE statement (see the example after this list)
     Third party performance monitoring tools
     LAN performance monitoring tools
     Operating system performance monitoring tools
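Here's a short sketch of the index utilization monitoring mentioned above. The index name is hypothetical, and V$OBJECT_USAGE only reports on indexes owned by the user you are connected as, so run this as the index owner:
ALTER INDEX orders_status_idx MONITORING USAGE;

-- ...run the test workload, then check whether the index was actually used
SELECT index_name, monitoring, used
FROM   v$object_usage
WHERE  index_name = 'ORDERS_STATUS_IDX';

ALTER INDEX orders_status_idx NOMONITORING USAGE;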
Performance Design Reviews
Information collected from the various performance monitoring tools is discussed at this time. One meeting may be sufficient, but large application development efforts usually require several discussions. If the DBA has maintained good communications throughout the initial stages of application design, there should be few surprises when these meetings are held. SQL tuning and tweaking recommendations are covered in this meeting. Depending on the length of the development process, follow-up meetings can be held to ensure that the application is performing as expected. Some suggested topics include:
     Load testing assessment: Is the load being placed on the test system large enough to accurately predict production performance?
     Review performance statistics collected during the testing process
     Assess SQL coding techniques by reviewing explain plan output for performance critical transactions
§    Index usage
§    Local/join predicates
§    Join methods
§    Subselects
§    View materialization
     Determine if additional Oracle objects need to be created to enhance SQL performance
§    B-Tree Indexes
§    Bitmap Indexes
§    Bitmap join indexes
§    Function-Based indexes
§    Materialized views
§    Index organized tables
§    External tables
Preparation for Production Turnover
This meeting is held to determine if there are any last-minute questions and to make certain that everyone knows what is expected of them during the final turnover process. All units that have participated in the application design or application design review process are invited to attend. To ensure that all steps necessary for a successful migration to production are executed, the use of a standardized Oracle migration checklist is highly recommended. This document will allow the DBAs and developers to concern themselves with topics that are related to this specific turnover rather than spending time on the more mundane turnover tasks that are just as important, but easily forgotten. Having a complete, well thought-out migration checklist produces a less stressful and less error prone production migration process. Topics include:
     Division of duties between DBA, systems, application development and business units
     Production hardware platform, O/S installation, configuration and operation
     Data feeds required from other architectures
     Data load and update processing and scheduling
     SQL Loader control cards
     Backup and recovery
     Oracle security authorization
     DBA forms and turnover procedures
     Contact information and escalation procedures
     Post-production monitoring tools to be used
Post-production Turnover
This final set of meetings is held after the production turnover is complete. Application logic errors and performance problem resolution are the prime topics of discussion. A comparison of the actual performance to the anticipated performance of the application is also discussed. The review and correction process, by its nature, is iterative. The problems are identified, a plan of attack to solve them is agreed upon and additional meetings are scheduled to review the results. As additional problems are identified, they are added to the list of problems to be solved. This iterative process continues until all major performance issues and application logic errors are addressed. The post production turnover meetings should include discussions on:
     Review performance statistics collected (Oracle Enterprise Manager, SQL Trace, explain plan, BSTAT/ESTAT, autotrace, Oracle traces, V$ tables and Statspack)
     Assess SQL coding techniques by reviewing explain plan output for transactions experiencing performance problems
§    Index usage
§    Local/join predicates
§    Join methods
§    Subselects
§    View materialization
     Determine if additional Oracle objects need to be created to enhance SQL performance
§    B-Tree Indexes
§    Bitmap Indexes
§    Bitmap join indexes
§    Function-Based indexes
§    Materialized views
§    Index organized tables
§    External tables


Oracle Database Design Review Meetings - Conclusion
These recommendations are not intended to coerce readers into using the Oracle application design review meeting examples verbatim but to emphasize the importance of a structured approach to the design review process. The seemingly endless combinations of application architectures and software products used for application development may require the creation of a fully customized design review process for each application development project. The overall goal of design review meetings is to ensure the involvement of technical support units during the application’s design and implementation. When Oracle design issues are addressed early in the development lifecycle, problems are minimized and the migration from test to production is more easily accomplished.
Some people suggest that indexes require rebuilding when deleted leaf rows appear or when the index has a suboptimal number of block gets per access. While it is tempting to write a script that rebuilds every index in the schema, bear in mind that your schema may contain many thousands of indexes, and a complete rebuild can be very time consuming.

Hence, we need to develop a method to identify those indexes that will get improved performance with a rebuild. Let’s look at one method for accomplishing this task.

How rare are "bad" indexes? 

You cannot generalize to say that index rebuilding for performance is rare, or even medium rare; it depends on many factors, most importantly the characteristics of the application.
·    In scientific applications (clinical, laboratory) where large datasets are added and removed, the need to rebuild indexes is "common".
·    Conversely, in systems that never update or delete rows, index rebuilding rarely improves performance.
·    In systems that do batch DML jobs, index rebuilding "often" improves SQL performance.
Oracle MetaLink note 122008.1 has the officially authorized script to detect indexes that benefit from rebuilding.  This script flags an index for rebuilding when these conditions are true:
- deleted entries represent 20% or more of the current entries.
- the index depth is more than 4 levels.
Oracle's index rebuilding guidelines appear in Metalink note 77574.1 (dated April 2007), which recommends that indexes be periodically examined to see if they are candidates for a rebuild:
“When an index is skewed, parts of an index are accessed more frequently than others. As a result, disk contention may occur, creating a bottleneck in performance.
It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt.”
Oracle index nodes are not physically deleted when table rows are deleted, nor are the entries removed from the index. Rather, Oracle "logically" deletes the index entry and leaves "dead" nodes in the index tree that may be re-used if another adjacent entry is required.

However, when large numbers of adjacent rows are deleted, it is highly unlikely that Oracle will have an opportunity to re-use the deleted leaf rows, and these represent wasted space in the index. In addition to wasting space, large volumes of deleted leaf nodes will make index fast-full scans run for longer periods.

These deleted leaf nodes can be easily identified by running the IDL.SQL script.


The number of deleted leaf rows

The term "deleted leaf node" refers to the number of index inodes that have been logically deleted as a result of row deletes. Remember that Oracle leaves "dead" index nodes in the index when rows are deleted. This is done to speed up SQL deletes, since Oracle does not have to allocate resources to rebalance the index tree when rows are deleted.

Index height

The height of the index refers to the number of levels that are spawned by the index as a result of row inserts. When a large number of rows are added to a table, Oracle may spawn additional levels of an index to accommodate the new rows.

Hence, an Oracle index may have four levels, but only in those areas of the index tree where the massive inserts have occurred. Oracle indexes can support many millions of entries in three levels, and any Oracle index that has four or more levels would benefit from rebuilding.
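If you want a quick look at index heights without analyzing each index, the BLEVEL column in DBA_INDEXES (branch levels, i.e. the height minus one) is populated by normal statistics gathering. A simple check, sketched below, lists application indexes with four or more levels:
SELECT owner,
       index_name,
       blevel
FROM   dba_indexes
WHERE  owner NOT IN ('SYS','SYSTEM')
AND    blevel >= 3        -- blevel 3 corresponds to a height of 4
ORDER  BY blevel DESC;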

Gets per index access

The number of "gets" per access refers to the amount of logical I/O that is required to fetch a row with the index. As you may know, a logical "get" is not necessarily a physical I/O since much of the index may reside in the Oracle buffer cache. However, any SAP index with a number greater than 10 would probably benefit from an index rebuild.

Unfortunately, Oracle does not make it easy to capture this information. In Oracle we must issue these commands:
ANALYZE INDEX index_name COMPUTE STATISTICS
ANALYZE INDEX index_name VALIDATE STRUCTURE
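The VALIDATE STRUCTURE command populates the INDEX_STATS view, which holds data for only the most recently analyzed index. A query like the one below (the index name is hypothetical) pulls out the height, deleted leaf rows and deleted leaf row percentage discussed above:
ANALYZE INDEX orders_status_idx VALIDATE STRUCTURE;

SELECT name,
       height,                                             -- levels in the b-tree
       lf_rows,                                            -- current leaf rows
       del_lf_rows,                                        -- logically deleted leaf rows
       ROUND(del_lf_rows*100/GREATEST(lf_rows,1),2) pct_deleted
FROM   index_stats;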

After you analyze the report above, you may want to consider rebuilding any index where the height is more than three levels, since three levels will support millions of index entries. Note that Oracle indexes will “spawn” to a fourth level only in areas of the index where a massive insert has occurred, such that 99% of the index has three levels, but the index is reported as having four levels.

We might want to rebuild an index if the “block gets” per access is greater than five, since excessive “blocks gets” indicate a fragmented b-tree structure. Another rebuild condition would be cases where deleted leaf nodes comprise more than 20% of the index nodes. As you may know, you can easily rebuild an Oracle index with the command:

ALTER INDEX index_name REBUILD

The ALTER INDEX index_name REBUILD command is a very safe way to rebuild indexes. Here is the syntax of the command:

alter index index_name
rebuild
tablespace tablespace_name
storage (initial new_initial next new_next freelists new_freelist_number )

Unlike the traditional method where we drop the index and recreate it, the REBUILD command does not require a full table scan of the table, and the subsequent sorting of the keys and rowids. Rather, the REBUILD command will perform the following steps:
1.    Walk the existing index to get the index keys.
2.    Populate temporary segments with the new tree structure.
3.    Once the operation has completed successfully, drop the old tree, and rename the temporary segments to the new index.

As you can see from the steps, you can rebuild indexes without worrying that you will accidentally lose the index. If the index cannot be rebuilt for any reason, Oracle will abort the operation and leave the existing index intact. Only after the entire index has been rebuilt does Oracle transfer the index to the new b-tree.

Most Oracle administrators run this script, and then select the indexes that they would like to rebuild. Note that the TABLESPACE clause should always be used with the ALTER INDEX REBUILD command to ensure that the index is not rebuilt within the default tablespace (usually SYSTEM).

Be aware that while it can be a good idea to move an index into another tablespace, you must have enough room in that tablespace to hold all of the temporary segments required for the index rebuild. For this reason, most Oracle administrators will double-size index tablespaces with enough space for two full index trees.
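Putting those recommendations together, a rebuild that names the target tablespace might look like the following sketch. The index and tablespace names are hypothetical, and the ONLINE keyword (which allows DML against the table while the rebuild runs) is an Enterprise Edition feature:
ALTER INDEX orders_status_idx REBUILD
   TABLESPACE app_indexes
   ONLINE;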

Update:
When can we "prove" a benefit from an index rebuild?  Here, Robin Schumacher proves that an index that is rebuilt in a larger tablespace will contain more index entries be block, and have a flatter structure:
"As you can see, the amount of logical reads has been reduced in half simply by using the new 16K tablespace and accompanying 16K data cache."
In an OracleWorld 2003 presentation titled "Oracle Database 10g: The Self-Managing Database" by Sushil Kumar of Oracle Corporation, Kumar states that the new Automatic Maintenance Tasks (AMT) Oracle10g feature will "automatically detect and re-build sub-optimal indexes."
This Kim Floss article shows the Oracle 10g segment advisor recommending a rebuild of an index: 
“The page lists all the segments (table, index, and so on) that constitute the object under review. The default view ("View Segments Recommended to Shrink") lists any segments that have free space you can reclaim.”

Oracle index rebuild advisor (Source: Oracle Corporation)
How rare are "bad" indexes? 

You cannot generalize to say that index rebuilding for performance is rare, or even medium rare, it depends on many factors, most importantly the characteristics of the application.
·    In scientific applications (clinical, laboratory) where large datasets are added and removed, the need to rebuild indexes is "common".
·    Conversely, in system that never update or delete rows, index rebuilding rarely improves performance.
·    In systems that do batch DML jobs, index rebuilding "often" improves SQL performance.
See Oracle MetaLink note 122008.1 for the officially authorized script to detect indexes that benefit from rebuilding.  This script detects indexes for rebuilding using these rules:  Rebuild the index when these conditions are true:
- deleted entries represent 20% or more of the current entries.
- the index depth is more then 4 levels.
I also have more sophisticated index rebuilding scripts in my book "Oracle Tuning: The Definitive Reference", with a code depot of downloadable diagnostic scripts. 

There are many compelling reasons to manage indexes within Oracle.  In an OLTP system, index space is often greater than the space allocated for tables, and fast row data access is critical for sub-second response time.  As we may know, Oracle offers a wealth of index structures:
·    B-tree indexes – This is the standard tree index that Oracle has been using since the earliest releases.
 
·    Bitmap indexes – Bitmap indexes are used where an index column has a relatively small number of distinct values (low cardinality).  These are super-fast for read-only databases, but are not suitable for systems with frequent updates 
 
·    Bitmap join indexes – This is an index structure whereby data columns from other tables appear in a multi-column index of a junction table.   This is the only create index syntax to employ a SQL-like from clause and where clause.
create bitmap index
   part_suppliers_state
on
   inventory( parts.part_type, supplier.state )
from
   inventory,
   parts,
   supplier
where
   inventory.part_id = parts.part_id
and
   inventory.supplier_id = supplier.supplier_id;

In addition to these index structures we also see interesting use of indexes at runtime.  Here is a sample of index-based access plans:
·    Nested loop joins – This row access method scans an index to collect a series of ROWIDs.
 
·    Index fast-full-scans – This is a “multi-block read” access where the index blocks are accessed via a “db file scattered read” to load index blocks into the buffers.  Please note that this method reads the index blocks in physical order and does not walk the b-tree structure.
 
·    Star joins – The star index has changed in structure several times, originally being a single-concatenated index and then changing to a bitmap index implementation.  STAR indexes are super-fast when joining large read-only data warehouse tables.
 
·    Index combine access – This is an example of the use of the index_combine hint.  This execution plan combines bitmap indexes to quickly resolve a low-cardinality Boolean expression:

select /*+ index_combine(emp, dept_bit, job_bit) */
   ename,
   job,
   deptno,
   mgr
from
   emp
where
   job = 'SALESMAN'
and
   deptno = 30
;
Here is the execution plan that shows the index combine process:

OPERATION
----------------------------------------------------------------------
OPTIONS                        OBJECT_NAME                    POSITION
------------------------------ ---------------------------- ----------
 SELECT STATEMENT
                                                                     2
  TABLE ACCESS
BY INDEX ROWID                 EMP                                   1
    BITMAP CONVERSION
TO ROWIDS                                                            1
      BITMAP AND
        BITMAP INDEX
SINGLE VALUE                   DEPT_BIT                              1
        BITMAP INDEX
SINGLE VALUE                   JOB_BIT                               2
While the internals of Oracle indexing are very complex and open to debate, there are some things that you can do to explore the internal structures of your indexes.  Let’s take a closer look at the method that I use to reveal index structures.
Oracle index rebuilding scripts
Ken Adkins, a respected Oracle author, notes that it is often difficult to pinpoint the exact reason that indexes benefit from a rebuild:
“The DBAs were pulling out their hair until they noticed that the size of the indexes were too large for the amount of data in the tables, and remembered this old “myth”, and decided to try rebuilding the indexes on these tables.
The DELETE with the multiple NOT EXISTS went from running for 2 hours to delete 30,000 records, to deleting over 100,000 records in minutes. Simply by rebuilding the indexes….”
Andrew Kerber describes the script he uses to identify indexes that need rebuilding.  "I eventually wrote a simple query that generates a list of candidates for index rebuilds, and the commands necessary to rebuild the indexes once the tables reached a point where it was necessary. The query reads a table we built called TABLE_MODIFICATIONS that we loaded each night from DBA_TAB_MODIFICATIONS before we ran statistics. Monitoring must be turned on to use the DBA_TAB_MODIFICATIONS table."
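For reference, DML monitoring is enabled per table (and from 10g it is on by default when statistics_level is TYPICAL or ALL), and the in-memory counters can be flushed to dba_tab_modifications on demand.  A minimal example, with a hypothetical table name, before Kerber's query below:

-- Enable DML monitoring on one table (table name is an example):
alter table scott.emp monitoring;

-- Flush the in-memory DML counters to dba_tab_modifications on demand:
exec dbms_stats.flush_database_monitoring_info;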

select
   'exec analyzedb.reorg_a_table4('||''''||rtrim(t.table_owner)||''''||','||''''||
      rtrim(t.table_name)||''''||');',
   t.table_owner||'.'||t.table_name name,
   a.num_rows,
   sum(t.inserts) ins,
   sum(t.updates) upd,
   sum(t.deletes) del,
   sum(t.updates)+sum(t.inserts)+sum(t.deletes) tot_chgs,
   to_char((sum(t.deletes)/(decode(a.num_rows,0,1,a.num_rows)))*100.0,'999999.99') per_del,
   round(((sum(t.updates)+sum(t.inserts)+sum(t.deletes))/(decode(a.num_rows,0,1,a.num_rows))*100.0),2) per_chg
from
   analyzedb.table_modifications t,
   all_tables a
where
   t.timestamp >= to_date('&from_date','dd-mon-yyyy')
and
   t.table_owner = a.owner
and
   t.table_owner not in ('SYS','SYSTEM')
and
   t.table_name = a.table_name
group by
   t.table_owner, t.table_name, a.num_rows
having
   (sum(t.deletes)/(decode(a.num_rows,0,1,a.num_rows)))*100.0 >= 5
order by
   num_rows desc, t.table_owner, t.table_name;


The Debate Continues
Today, a battle is raging between the “academics” who do not believe that indexes should be rebuilt without expensive studies, and the “pragmatists” who rebuild indexes on a schedule because their end-users report faster response times. 
To date, none of the world’s Oracle experts has determined a reliable rule for index rebuilding, and no expert has proven that index rebuilds “rarely” help.  Getting statistically valid “proof” from a volatile production system would be a phenomenal challenge.  In a large production system, it would be a massive effort to trace LIO from specific queries to specific indexes before and after the rebuild.
·     Academic approach - Many Oracle experts claim that indexes rarely benefit from rebuilding, yet none has ever provided empirical evidence that this is the case, or what logical I/O conditions arise in those “rare” cases where indexes benefit from rebuilding.
 
·    Pragmatic approach – Many IT managers force their Oracle DBAs to periodically rebuild indexes because the end-user community reports faster response times following the rebuild.  The pragmatists are not interested in “proving” anything; they are just happy that the end-users are happy.  Even if index rebuilding were proven to be a useless activity, the Placebo Effect on the end-users is enough to justify the task.
It is clear that all 70 of the index metrics interact together in a predictable way.  Some scientist should be able to take this data and reverse-engineer the internal rules for index rebuilding, if any actually exist.  For now, the most any Oracle professional can do is to explore their indexes and learn how the software manages b-tree structures.
When can we "prove" a benefit from an index rebuild?  Here, Robin Schumacher proves that an index that is rebuilt in a larger tablespace will contain more index entries per block and have a flatter structure:
"As you can see, the amount of logical reads has been reduced in half simply by using the new 16K tablespace and accompanying 16K data cache."
In an OracleWorld 2003 presentation titled “Oracle Database 10g: The Self-Managing Database” by Sushil Kumar of Oracle Corporation, Kumar states that the new Automatic Maintenance Tasks (AMT) Oracle10g feature will "automatically detect and re-build sub-optimal indexes."  However, the 10g segment advisor only recommends index rebuilding from a space reclamation perspective, not for performance.
This Kim Floss article shows the Oracle 10g segment advisor recommending a rebuild of an index: 
“The page lists all the segments (table, index, and so on) that constitute the object under review. The default view ("View Segments Recommended to Shrink") lists any segments that have free space you can reclaim.”

Oracle index rebuild advisor (Source: Oracle Corporation)
Inside Oracle b-tree indexes
There are many myths and legends surrounding the use of Oracle indexes, especially the ongoing passionate debate about rebuilding of indexes for improving performance.  Some experts claim that periodic rebuilding of Oracle b-tree indexes greatly improves space usage and access speed, while other experts maintain that Oracle indexes should “rarely” be rebuilt.  Interestingly, Oracle reports that the new Oracle10g Automatic Maintenance Tasks (AMT) will automatically detect indexes that are in need of re-building.  Here are the pros and cons of this highly emotional issue:
·    Arguments for Index Rebuilding – Many Oracle shops schedule periodic index rebuilding, and report measurable speed improvements after they rebuild their Oracle b-tree indexes.  In an OracleWorld 2003 presentation titled “Oracle Database 10g: The Self-Managing Database” by Sushil Kumar of Oracle Corporation, Kumar states that the Automatic Maintenance Tasks (AMT) Oracle10g feature will automatically detect and rebuild sub-optimal indexes.  “AWR provides the Oracle Database 10g a very good 'knowledge' of how it is being used. By analyzing the information stored in AWR, the database can identify the need of performing routine maintenance tasks, such as optimizer statistics refresh, rebuilding indexes, etc. The Automated Maintenance Tasks infrastructure enables the Oracle Database to automatically perform those operations.”
 
·    Arguments against Index Rebuilding – Some Oracle in-house experts maintain that Oracle indexes are super-efficient at space re-use and access speed and that a b-tree index rarely needs rebuilding.  They claim that a reduction in Logical I/O (LIO) should be measurable, and if there were any benefit to index rebuilding, someone would have come up with “provable” rules.
So who is right?  I suspect that both camps are correct.  There is a large body of anecdotal evidence that end-users report faster response times after indexes are rebuilt, yet some experts suspect a Placebo Effect is at work: the end-users, knowing that they have new index trees, report a performance gain when none exists.
Because of their extreme flexibility, Oracle b-tree indexes are quite complex, and to really gather scientific evidence we must examine all of the index metrics.  Getting a meaningful measure of the benefit of an index rebuild in a production environment is very difficult because the system is under heavy load and usage patterns change constantly.  Plus, many IT managers require periodic index re-building because it is a low-risk job and it curries favor from the end-user community. 
Where are the index details?
Most Oracle professionals are aware of the dba_indexes view, which is populated with index statistics when indexes are analyzed.  The dba_indexes view contains a great deal of important information for the SQL optimizer, but there is still more to see.  Oracle also provides an analyze index xxx validate structure command that populates additional statistics into a temporary table called index_stats which, sadly, is overwritten by each successive analyze command.
To get the full picture, we must devise a table structure that will collect data from both sources.  Here is a method that will do the job:
1.    Create a temporary table to hold data from dba_indexes and index_stats
2.    Verify quality of dbms_stats analysis
3.    Populate temporary table from dba_indexes
4.    Validate indexes and send output into temp table
IMPORTANT:  Collecting the index_stats information is very time consuming and expensive and will introduce serious locking issues on production databases. It is strongly recommended that you perform these checks during scheduled downtime, or on a representative test database.

10g note:  Testing against Oracle10g 10.2.0.1.0 on Windows XP SP2 shows that dba_indexes now includes two additional columns (IOT_REDUNDANT_PKEY_ELIM VARCHAR2(3) and DROPPED VARCHAR2(3)), and the scripts below also reference a derived NUM_KEYS NUMBER column (plus a SUM_KEY_LEN column used by the final view).  Unless these columns are defined in the index_details table, the subsequent table population and procedure will fail, so the table definition below includes them.

 Let’s start by creating a table to hold our index data.  I call this table index_details:
drop table index_details;

Create table index_details
(
-- *********  The following is from dba_indexes ******************
OWNER_NAME                                         VARCHAR2(30),
INDEX_NAME                                         VARCHAR2(30),
INDEX_TYPE                                         VARCHAR2(27),
TABLE_OWNER                                        VARCHAR2(30),
TABLE_NAME                                         VARCHAR2(30),
TABLE_TYPE                                         VARCHAR2(11),
UNIQUENESS                                         VARCHAR2(9),
COMPRESSION                                        VARCHAR2(8),
PREFIX_LENGTH                                      NUMBER,
TABLESPACE_NAME                                    VARCHAR2(30),
INI_TRANS                                          NUMBER,
MAX_TRANS                                          NUMBER,
INITIAL_EXTENT                                     NUMBER,
NEXT_EXTENT                                        NUMBER,
MIN_EXTENTS                                        NUMBER,
MAX_EXTENTS                                        NUMBER,
PCT_INCREASE                                       NUMBER,
PCT_THRESHOLD                                      NUMBER,
INCLUDE_COLUMN                                     NUMBER,
FREELISTS                                          NUMBER,
FREELIST_GROUPS                                    NUMBER,
PCT_FREE                                           NUMBER,
LOGGING                                            VARCHAR2(3),
BLEVEL                                             NUMBER,
LEAF_BLOCKS                                        NUMBER,
DISTINCT_KEYS                                      NUMBER,
AVG_LEAF_BLOCKS_PER_KEY                            NUMBER,
AVG_DATA_BLOCKS_PER_KEY                            NUMBER,
CLUSTERING_FACTOR                                  NUMBER,
STATUS                                             VARCHAR2(8),
NUM_ROWS                                           NUMBER,
SAMPLE_SIZE                                        NUMBER,
LAST_ANALYZED                                      DATE,
DEGREE                                             VARCHAR2(40),
INSTANCES                                          VARCHAR2(40),
PARTITIONED                                        VARCHAR2(3),
TEMPORARY                                          VARCHAR2(1),
GENERATED                                          VARCHAR2(1),
SECONDARY                                          VARCHAR2(1),
BUFFER_POOL                                        VARCHAR2(7),
USER_STATS                                         VARCHAR2(3),
DURATION                                           VARCHAR2(15),
PCT_DIRECT_ACCESS                                  NUMBER,
ITYP_OWNER                                         VARCHAR2(30),
ITYP_NAME                                          VARCHAR2(30),
PARAMETERS                                         VARCHAR2(1000),
GLOBAL_STATS                                       VARCHAR2(3),
DOMIDX_STATUS                                      VARCHAR2(12),
DOMIDX_OPSTATUS                                    VARCHAR2(6),
FUNCIDX_STATUS                                     VARCHAR2(8),
JOIN_INDEX                                         VARCHAR2(3),
-- *********  These two columns appear in dba_indexes from 10gR2  *********
IOT_REDUNDANT_PKEY_ELIM                            VARCHAR2(3),
DROPPED                                            VARCHAR2(3),
-- *********  The following is from index_stats  ******************
HEIGHT                                             NUMBER,
BLOCKS                                             NUMBER,
NAMEx                                              VARCHAR2(30),
PARTITION_NAME                                     VARCHAR2(30),
LF_ROWS                                            NUMBER,
LF_BLKS                                            NUMBER,
LF_ROWS_LEN                                        NUMBER,
LF_BLK_LEN                                         NUMBER,
BR_ROWS                                            NUMBER,
BR_BLKS                                            NUMBER,
BR_ROWS_LEN                                        NUMBER,
BR_BLK_LEN                                         NUMBER,
DEL_LF_ROWS                                        NUMBER,
DEL_LF_ROWS_LEN                                    NUMBER,
DISTINCT_KEYSx                                     NUMBER,
MOST_REPEATED_KEY                                  NUMBER,
BTREE_SPACE                                        NUMBER,
USED_SPACE                                         NUMBER,
PCT_USED                                           NUMBER,
ROWS_PER_KEY                                       NUMBER,
BLKS_GETS_PER_ACCESS                               NUMBER,
PRE_ROWS                                           NUMBER,
PRE_ROWS_LEN                                       NUMBER,
OPT_CMPR_COUNT                                     NUMBER,
OPT_CMPR_PCTSAVE                                   NUMBER,
-- *********  Derived columns populated by the update scripts below  *********
NUM_KEYS                                           NUMBER,
SUM_KEY_LEN                                        NUMBER
)
tablespace tools
storage (initial 5k next 5k maxextents unlimited);
(Note: the index_stats table has a column named PCT_USED even though Oracle indexes do not allow changes to this value.)
Now that we have a table that will hold all of the index details, the next step is to populate the table with data from freshly-analyzed indexes.  Remember, you should always run dbms_stats to get current index statistics.  Here is the script.
insert into index_details
(
OWNER_NAME,
INDEX_NAME,
INDEX_TYPE,
TABLE_OWNER,
TABLE_NAME,
TABLE_TYPE,
UNIQUENESS,
COMPRESSION,
PREFIX_LENGTH,
TABLESPACE_NAME,
INI_TRANS,
MAX_TRANS,
INITIAL_EXTENT,
NEXT_EXTENT,
MIN_EXTENTS,
MAX_EXTENTS,
PCT_INCREASE,
PCT_THRESHOLD,
INCLUDE_COLUMN,
FREELISTS,
FREELIST_GROUPS,
PCT_FREE,
LOGGING,
BLEVEL,
LEAF_BLOCKS,
DISTINCT_KEYS,
AVG_LEAF_BLOCKS_PER_KEY,
AVG_DATA_BLOCKS_PER_KEY,
CLUSTERING_FACTOR,
STATUS,
NUM_ROWS,
SAMPLE_SIZE,
LAST_ANALYZED,
DEGREE,
INSTANCES,
PARTITIONED,
TEMPORARY,
GENERATED,
SECONDARY,
BUFFER_POOL,
USER_STATS,
DURATION,
PCT_DIRECT_ACCESS,
ITYP_OWNER,
ITYP_NAME,
PARAMETERS,
GLOBAL_STATS,
DOMIDX_STATUS,
DOMIDX_OPSTATUS,
FUNCIDX_STATUS,
JOIN_INDEX,
IOT_REDUNDANT_PKEY_ELIM,   -- exists in dba_indexes from 10gR2
DROPPED                    -- exists in dba_indexes from 10gR2
)
    select * from dba_indexes
    where owner not like 'SYS%'
;
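As noted above, the statistics in dba_indexes should be fresh before you rely on them.  A typical dbms_stats call to refresh one schema might look like this sketch (the schema name is just an example):

begin
   dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT',                         -- schema name is an example
      estimate_percent => dbms_stats.auto_sample_size,
      cascade          => true                             -- also gather index statistics
   );
end;
/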

Now that we have gathered the index details from dba_indexes, we must loop through iterations of the analyze index xxx validate structure command to populate our table with other statistics.  Here is the script that I use to get all index details.

/*  INDEX_STATS contains one row from the last     */
/*  ANALYZE INDEX ... VALIDATE STRUCTURE command.  */
/*  We need to loop through a validate for each    */
/*  index and populate the table.                  */

DECLARE
v_dynam     varchar2(100);
cursor idx_cursor is
  select owner_name, index_name from index_details;

BEGIN
for c_row in idx_cursor loop
   v_dynam := 'analyze index '||c_row.owner_name||'.'||c_row.index_name||
             ' validate structure';
   execute immediate v_dynam;
   update index_details set
              (HEIGHT, BLOCKS, NAMEx, PARTITION_NAME, LF_ROWS, LF_BLKS,
               LF_ROWS_LEN, LF_BLK_LEN, BR_ROWS, BR_BLKS, BR_ROWS_LEN,
               BR_BLK_LEN, DEL_LF_ROWS, DEL_LF_ROWS_LEN, DISTINCT_KEYSx,
               MOST_REPEATED_KEY, BTREE_SPACE, USED_SPACE, PCT_USED,
               ROWS_PER_KEY, BLKS_GETS_PER_ACCESS, PRE_ROWS, PRE_ROWS_LEN,
               OPT_CMPR_COUNT, OPT_CMPR_PCTSAVE)
       = (select * from index_stats)
     where index_details.owner_name = c_row.owner_name
       and index_details.index_name = c_row.index_name;
   if mod(idx_cursor%rowcount,50)=0 then
     commit;
   end if;
end loop;
commit;

END;
/

-- Populate num_keys with the number of columns in each index
update
   index_details a
set
   num_keys =
   (select
      count(*)
    from
      dba_ind_columns b
    where
       a.owner_name = b.index_owner
    and
       a.index_name = b.index_name
   )
;
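The view below also selects a sum_key_len column (the total length of the index key columns).  Here is a sketch of how it might be populated from dba_ind_columns; the formula is an illustrative assumption, not part of the original scripts:

-- Populate sum_key_len with the combined length of the indexed columns
update
   index_details a
set
   sum_key_len =
   (select
      sum(b.column_length)
    from
      dba_ind_columns b
    where
       a.owner_name = b.index_owner
    and
       a.index_name = b.index_name
   )
;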

After running the scripts above, we should now have complete index details for any index that we desire.  However, with 70 different metrics for each index, it can be confusing to know which columns are the most important.  To make queries easy, I create a view that displays only the columns I find most interesting.  Here is my view.

drop view idx_stats;

Create view idx_stats
as
select
OWNER_NAME                 ,
INDEX_NAME                 ,
INDEX_TYPE                ,
UNIQUENESS                ,
PREFIX_LENGTH             ,
BLEVEL                    ,
LEAF_BLOCKS               ,
DISTINCT_KEYS             ,
AVG_LEAF_BLOCKS_PER_KEY    ,
AVG_DATA_BLOCKS_PER_KEY    ,
CLUSTERING_FACTOR          ,
NUM_ROWS                  ,
PCT_DIRECT_ACCESS         ,
HEIGHT                    ,
BLOCKS                    ,
NAMEx                     ,
PARTITION_NAME            ,
LF_ROWS                   ,
LF_BLKS                   ,
LF_ROWS_LEN               ,
LF_BLK_LEN                ,
BR_ROWS                   ,
BR_BLKS                   ,
BR_ROWS_LEN               ,
BR_BLK_LEN                ,
DEL_LF_ROWS               ,
DEL_LF_ROWS_LEN           ,
DISTINCT_KEYSx            ,
MOST_REPEATED_KEY         ,
BTREE_SPACE               ,
USED_SPACE                ,
PCT_USED                  ,
ROWS_PER_KEY              ,
BLKS_GETS_PER_ACCESS      ,
PRE_ROWS                  ,
PRE_ROWS_LEN              ,
num_keys,
sum_key_len
from
  Index_details;
While most of these column descriptions are self-evident, there are some that are especially important:
·    CLUSTERING_FACTOR – This is one of the most important index statistics because it indicates how well sequenced the index columns are to the table rows.  If clustering_factor is low (about the same as the number of dba_segments.blocks in the table segment), then the index key is in the same order as the table rows and index range scans will be very efficient, with minimal disk I/O.  As clustering_factor increases (up to dba_tables.num_rows), the index key is increasingly out of sequence with the table rows.  Oracle’s cost-based SQL optimizer relies heavily upon clustering_factor to decide whether to use the index to access the table (see the query sketch after this list).                
 
·    HEIGHT  - As an index accepts new rows, the index blocks split.  Once the index  nodes have split to a predetermined maximum level the index will “spawn” into a new level. 
 
·    BLOCKS – This is the number of blocks consumed by the index.  This is dependent on the db_block_size.  In Oracle9i and beyond, many DBAs create b-tree indexes in very large blocksizes (db_32k_cache_size) because the index will spawn less. Robin Schumacher has noted in his book Oracle Performance Troubleshooting notes “As you can see, the amount of logical reads has been reduced in half simply by using the new 16K tablespace and accompanying 16K data cache.  Clearly, the benefits of properly using the new data caches and multi-block tablespace feature of Oracle9i and above are worth your investigation and trials in your own database.“                       
 
·    PCT_USED – This metric is very misleading because it looks identical to the dba_tables pct_used column, but has a different meaning.  Normally, the pct_used threshold is the freelist unlink threshold, while in index_stats pct_used is the percentage of space allocated in the b-tree that is being used.
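As promised in the clustering_factor note above, here is a rough sketch of a query that compares clustering_factor to the table's blocks and num_rows; the classification thresholds are arbitrary assumptions for illustration only:

select
   i.owner,
   i.index_name,
   i.clustering_factor,
   t.blocks,
   t.num_rows,
   case
      when i.clustering_factor <= t.blocks * 2     then 'near blocks   - well sequenced'
      when i.clustering_factor >= t.num_rows * 0.8 then 'near num_rows - poorly sequenced'
      else                                              'in between'
   end  sequencing
from
   dba_indexes i,
   dba_tables  t
where
   i.table_owner = t.owner
and i.table_name  = t.table_name
and i.owner not in ('SYS','SYSTEM')
and t.num_rows > 0;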
 Is there a criterion for index rebuilding?
If we believe the anecdotal reports that index rebuilding improved end-user reported performance, how can we analyze this data and see what the criteria (if any) might be for an index rebuild? 
For example, here are the criteria used by a fellow Oracle DBA who swears that rebuilding indexes with these criteria has a positive effect on his system performance:

-- ***  Only consider when space used is more than 1 block    ***
   btree_space > 8192
   and
-- ***  The number of index levels is > 3  ***
   (height > 3
-- ***  The % being used is < 75%          ***
    or pct_used < 75
-- ***  Deleted > 20% of total             ***
    or (del_lf_rows/(decode(lf_rows,0,1,lf_rows)) *100) > 20)
;
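Wrapped into a query against the index_details table built earlier, that rule of thumb might look like the following sketch (the column names assume the table defined above):

select
   owner_name,
   index_name,
   height,
   btree_space,
   pct_used,
   round(del_lf_rows/decode(lf_rows,0,1,lf_rows)*100,2)  pct_deleted
from
   index_details
where
   btree_space > 8192                                        -- space used is more than one block
and (   height > 3                                           -- more than three index levels
     or pct_used < 75                                        -- less than 75% of the b-tree space used
     or (del_lf_rows/decode(lf_rows,0,1,lf_rows))*100 > 20   -- more than 20% deleted entries
    );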

In reality, I suspect that the rules are far more complicated than this simple formula.  To see the commonality between indexes of a similar nature, you can use the data from your new index_details table to write summary queries like the one below.  Here we see the average number of index blocks, leaf rows and leaf blocks for indexes of different heights.
This gives us a high-level idea of Oracle threshold for spawning an index onto new levels.  We can take this same approach and attempt to answer the following questions:
1 - At what point does an index spawn to another level (height)?  It should be a function of the blocksize, the key length and the number of keys.
2 - When is the number of deleted leaf nodes enough to trigger an index rebuild?  If clustering_factor is low (dba_indexes.clustering_factor ~= dba_segments.blocks), then the rows are added in index order and the index is likely to reuse the deleted leaf nodes.  On the other hand, if clustering_factor is high (dba_indexes.clustering_factor ~= dba_tables.num_rows), and the majority of queries use the index with fast-full scans or index range scans, then a rebuild of the underlying table (to resequence the rows) may be beneficial. 

To illustrate, assume I have an index on the last_name column of a 1,000,000 row table and the clustering_factor is close to the number of blocks, indicating that the rows are in the same sequence as the index. In this case, a bulk delete of all people whose last_name begins with the letter “K” would leave a dense cluster of deleted leaf nodes on adjacent data blocks within the index tablespace.  This large section of space is more likely to be reused than many tiny chunks.
We can also use the data from our detail table to compute our own metrics.  In the example query below, we create a meta-rule for indexes:
·     Dense Full Block Space - This is the index key space (number of table rows * index key length) as a function of the blocksize and free index space.
 
·     Percent of Free Blocks - This is the estimated number of free blocks within the index.
Using these metrics, we can analyze the system and produce some very interesting reports of index internals:
col c1 heading 'Average|Height'            format 99
col c2 heading 'Average|Blocks'            format 999,999
col c3 heading 'Average|Leaf|Rows'         format 9,999,999,999
col c4 heading 'Average|Leaf Row|Length'   format 999,999,999
col c5 heading 'Average|Leaf Blocks'       format 9,999,999
col c6 heading 'Average|Leaf Block|Length' format 9,999,999
select
   height           c1,
   avg(blocks)      c2,
   avg(lf_rows)     c3,
   avg(lf_rows_len) c4,
   avg(lf_blks)     c5,
   avg(lf_blk_len)  c6
from
   index_details
group by
   height
;
        
Average   Average    Average Leaf   Average Leaf      Average   Average Leaf
 Height    Blocks            Rows     Row Length  Leaf Blocks   Block Length
------- ---------  -------------- --------------  -----------  -------------
      1       236              12            234            1          7,996
      2       317          33,804        691,504          106          7,915
      3     8,207       1,706,685     41,498,749        7,901          7,583
      4   114,613      12,506,040    538,468,239      113,628          7,988
As we see, we can compute and spin this data in an almost infinite variety of ways.
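For example, here is a sketch of the two meta-rules described above, computed from index_details; the exact formulas are illustrative assumptions based on the column descriptions, not an official calculation:

select
   owner_name,
   index_name,
   -- "dense full block space": total leaf row length as a share of the allocated b-tree space
   round(lf_rows_len / decode(btree_space,0,1,btree_space) * 100, 2)       pct_dense_key_space,
   -- "percent of free blocks": unused b-tree space expressed in leaf-block equivalents
   round((btree_space - used_space) / decode(lf_blk_len,0,1,lf_blk_len))   est_free_blocks
from
   index_details
order by
   pct_dense_key_space;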
Reader Comments on index rebuilding:
It has been my experience that the percentage of deletes required to affect performance actually goes down as the table gets larger, perhaps because the I/O effect is magnified with more data. 
At my previous job, our observation was that on 20 million row tables and upward as little as 5% deletes, regardless of other factors, would cause sufficient performance degradation to justify an index rebuild.  Admittedly this was a site with fairly tight budget constraints, so we couldn't afford absolute, top of the line hardware.  We also didn't have the luxury of making sure every tablespace was ideally placed for performance, but we did a pretty good job with what we had.  I actually wrote a script that would calculate the % deletes and generate the index rebuild command.

Also, there is the rebuild online option for indexes, which does work, but it will cause noticeable performance degradation if you try to do it during busy times.  I believe it was available in 8i, but it would generate ORA-600 errors when used on busy tables in 8i.
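For reference, the online rebuild syntax looks like this (the index name is just an example):

-- Rebuild while leaving the underlying table available for DML:
alter index scott.emp_last_name_idx rebuild online;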
