Virtual Indexes are another undocumented Oracle feature. As the name suggests, they are pseudo-indexes that do not behave like normal indexes, and they are meant for a very specific purpose.
A virtual index is created in a slightly different manner than a normal index. A virtual index has no segment attached to it, i.e., the DBA_SEGMENTS view will not show an entry for it. Oracle handles such indexes internally, and the few required dictionary tables are updated so that the optimizer is made aware of their presence and can generate an execution plan that considers them.
As per Oracle, this functionality is not intended for standalone usage. It is part of the Oracle Enterprise Manager Tuning Pack (Virtual Index Wizard). The Virtual Index Wizard allows the user to test a potential new index before actually building it in the database. It allows the CBO to evaluate the potential new index for a selected SQL statement by building an explain plan that is aware of it, so the user can determine whether the optimizer would use the index once implemented.
The feature is therefore meant to be driven from Enterprise Manager, not used standalone. I went a bit further and actually tested it from SQL*Plus, using the same feature but without Enterprise Manager.
I do not see much use for Virtual Indexes in a development area, where we can freely create and drop indexes while testing. However, this feature could prove handy if a query or group of queries has to be tested in production (for simulation, or out of urgency!) to determine whether a new index would improve performance, without impacting existing or new sessions.
Below are some attributes of Virtual Indexes.
1. These are permanent and continue to exist unless we drop them.
2. Their creation will not affect existing or new sessions. Only sessions marked for Virtual Index usage will become aware of their existence.
3. Such indexes will be used only when the hidden parameter "_use_nosegment_indexes" is set to true.
4. The rule-based optimizer did not recognize Virtual Indexes when I tested; the CBO does recognize them. In all of my examples, I have used the CBO. However, I did not carry out intensive testing under the RBO, and you may come across exceptions to this observation.
5. The dictionary view DBA_SEGMENTS will not show an entry for Virtual Indexes. The views DBA_INDEXES and DBA_OBJECTS have an entry for them in Oracle 8i; from Oracle 9i onwards, DBA_INDEXES no longer shows Virtual Indexes.
6. Virtual Indexes cannot be altered; attempting to do so throws a "fake index" error!
7. Virtual Indexes can be analyzed using the ANALYZE command or the DBMS_STATS package, but the statistics cannot be viewed (in Oracle 8i, DBA_INDEXES will not show them either). Oracle may be generating artificial statistics and storing them somewhere for later reference.
Creating a Virtual Index
Creating a Virtual Index can be achieved by using the NOSEGMENT clause with the CREATE INDEX command.
e.g.:
SQL> create unique index am304_u1 on am304(col2) nosegment;
Index created.
Parameter _USE_NOSEGMENT_INDEXES
This is a hidden/internal parameter and is therefore undocumented. Such parameters should not be altered for Oracle databases unless Oracle Support advises or recommends that you do so. In our case we make an exception (!), but only at session level. Do not set it for the complete instance.
Setting the "_use_nosegment_indexes" parameter to TRUE enables the optimizer to use virtual indexes.
Examples:
Creating the virtual index:
SQL> create index am301_n1 on am301(col1) nosegment;
Index created.
Checking some dictionary tables:
SQL> select segment_name, segment_type, bytes
2 from dba_segments
3 where segment_name = 'AM301_N1';
no rows selected
SQL> select object_name, object_type, status
2 from dba_objects
3 where object_name = 'AM301_N1';
OBJECT_NAME |OBJECT_TYPE |STATUS
------------------|-----------------|---------------
AM301_N1 |INDEX |VALID
SQL> select index_name, index_type, status
2 from dba_indexes
3 where index_name = 'AM301_N1';
INDEX_NAME |INDEX_TYPE |STATUS
------------------------------|------------|---------------
AM301_N1 |NORMAL |VALID
Working with Virtual Indexes:
SQL> create table am301
2 (col1 number, col2 varchar2(20));
Table created.
SQL> insert into am301 values(dbms_random.random, dbms_random.string('A', 20));
1 row created.
SQL> insert into am301 values(dbms_random.random, dbms_random.string('A', 20));
1 row created.
SQL> insert into am301 values(dbms_random.random, dbms_random.string('A', 20));
1 row created.
SQL> insert into am301 values(dbms_random.random, dbms_random.string('A', 20));
1 row created.
SQL> select * from am301;
COL1 COL2
---------- --------------------
-512599724 aCR_PdFVdSGJLCOLCjJQ
-2.049E+09 qiVUlonc^p^R_X_W_^Xn
-189009085 prTNPqPUod^miAnLXrMA
2082093299 Bq_icbmcpNFNUKDRdMi]
--Though we asked for alphanumeric strings, Oracle also
--inserted special characters in col2. This is a bug,
--resolved in 9i and above.
SQL> insert into am301 select * from am301;
4 rows created.
SQL> insert into am301 select * from am301;
8 rows created.
SQL> insert into am301 select * from am301;
16 rows created.
SQL> alter session set optimizer_mode=first_rows;
Session altered.
SQL> create index am301_n1 on am301(col1) nosegment;
Index created.
SQL> analyze table am301 compute statistics;
Table analyzed.
SQL> analyze index am301_n1 compute statistics;
Index analyzed.
--It is recommended to use dbms_stats package to
--generate statistics. Analyze is used here as an
--example only.
SQL> set autotrace on
SQL> select * from am301 where col1 = 2082093299;
COL1 COL2
---------- --------------------
2082093299 Bq_icbmcpNFNUKDRdMi]
2082093299 Bq_icbmcpNFNUKDRdMi]
..
32 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=1 Card=32 Bytes=864)
1 0 TABLE ACCESS (FULL) OF 'AM301' (Cost=1 Card=32 Bytes=864)
SQL> alter session set "_use_nosegment_indexes" = true;
--set only for the session testing it.
Session altered.
SQL> select * from am301 where col1 = 2082093299;
COL1 COL2
---------- --------------------
2082093299 Bq_icbmcpNFNUKDRdMi]
2082093299 Bq_icbmcpNFNUKDRdMi]
32 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=207 Card=32 Bytes=864)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'AM301' (Cost=207 Card=32 Bytes=864)
2 1 INDEX (RANGE SCAN) OF 'AM301_N1' (NON-UNIQUE) (Cost=7 Card=32)
Trying to alter the virtual index:
SQL> alter index am301_n1 rebuild;
alter index am301_n1 rebuild
*
ERROR at line 1:
ORA-08114: can not alter a fake index
SQL> alter index am301_n1 storage(pctincrease 0);
alter index am301_n1 storage(pctincrease 0)
*
ERROR at line 1:
ORA-08114: can not alter a fake index
Trying to re-create the index as a real one:
SQL> create index am301_n1 on am301(col1);
create index am301_n1 on am301(col1)
*
ERROR at line 1:
ORA-00955: name is already used by an existing object
As the Virtual Index has an entry in some of the dictionary tables, it will prevent the creation of an object with the same name. The alternative is to drop and recreate the Virtual Index as a real index.
Dropping and re-creating again as real index:
SQL> drop index am301_n1;
Index dropped.
SQL> create index am301_n1 on am301(col1);
Index created.
However, a Virtual Index will not prevent the creation of an index with the same column(s).
In the example below, a Virtual Index named DUMMY is created; afterwards, a real index with a different name is created on the same columns. Both indexes will show up in the DBA_OBJECTS listing.
SQL> create index dummy on am310(col1, col2, col3) nosegment;
Index created.
SQL> create index am310_n1 on am310(col1, col2, col3);
Index created.
Tuning PL/SQL Applications for Performance
How PL/SQL Optimizes Your Programs
In releases prior to 10g, the PL/SQL compiler translated your code to machine code without applying many changes for performance. Now, PL/SQL uses an optimizing compiler that can rearrange code for better performance.
You do not need to do anything to get the benefits of this new optimizer. It is enabled by default. In rare cases, if the overhead of the optimizer makes compilation of very large applications take too long, you might lower the optimization by setting the initialization parameter PLSQL_OPTIMIZE_LEVEL=1 instead of its default value 2. In even rarer cases, you might see a change in exception behavior: either an exception that is not raised at all, or one that is raised earlier than expected. Setting PLSQL_OPTIMIZE_LEVEL=0 prevents the code from being rearranged at all.
When to Tune PL/SQL Code
The information in this chapter is especially valuable if you are responsible for:
- Programs that do a lot of mathematical calculations. You will want to investigate the datatypes PLS_INTEGER, BINARY_FLOAT, and BINARY_DOUBLE.
- Functions that are called from PL/SQL queries, where the functions might be executed millions of times. You will want to look at all performance features to make the function as efficient as possible, and perhaps a function-based index to precompute the results for each row and save on query time.
- Programs that spend a lot of time processing INSERT, UPDATE, or DELETE statements, or looping through query results. You will want to investigate the FORALL statement for issuing DML, and the BULK COLLECT INTO and RETURNING BULK COLLECT INTO clauses for queries.
- Older code that does not take advantage of recent PL/SQL language features. (With the many performance improvements in Oracle Database 10g, any code from earlier releases is a candidate for tuning.)
- Any program that spends a lot of time doing PL/SQL processing, as opposed to issuing DDL statements like CREATE TABLE that are just passed directly to SQL. You will want to investigate native compilation. Because many built-in database features use PL/SQL, you can apply this tuning feature to an entire database to improve performance in many areas, not just your own code.
Before starting any tuning effort, benchmark the current system and measure how long particular subprograms take. PL/SQL in Oracle Database 10g includes many automatic optimizations, so you might see performance improvements without doing any tuning.
Guidelines for Avoiding PL/SQL Performance Problems
When a PL/SQL-based application performs poorly, it is often due to badly written SQL statements, poor programming practices, inattention to PL/SQL basics, or misuse of shared memory.
Avoiding CPU Overhead in PL/SQL Code
Make SQL Statements as Efficient as Possible
PL/SQL programs look relatively simple because most of the work is done by SQL statements. Slow SQL statements are the main reason for slow execution.
If SQL statements are slowing down your program:
- Make sure you have appropriate indexes. There are different kinds of indexes for different situations. Your index strategy might be different depending on the sizes of various tables in a query, the distribution of data in each query, and the columns used in the WHERE clauses.
- Make sure you have up-to-date statistics on all the tables, using the subprograms in the DBMS_STATS package.
- Analyze the execution plans and performance of the SQL statements, using:
  - the EXPLAIN PLAN statement
  - the SQL Trace facility with the TKPROF utility
  - the Oracle Trace facility
- Rewrite the SQL statements if necessary. For example, query hints can avoid problems such as unnecessary full-table scans.
Some PL/SQL features also help improve the performance of SQL statements:
- If you are running SQL statements inside a PL/SQL loop, look at the FORALL statement as a way to replace loops of INSERT, UPDATE, and DELETE statements.
- If you are looping through the result set of a query, look at the BULK COLLECT clause of the SELECT INTO statement as a way to bring the entire result set into memory in a single operation.
Make Function Calls as Efficient as Possible
Badly written subprograms (for example, a slow sort or search function) can harm performance. Avoid unnecessary calls to subprograms, and optimize their code:
- If a function is called within a SQL query, you can cache the function value for each row by creating a function-based index on the table in the query. The CREATE INDEX statement might take a while, but queries can be much faster.
- If a column is passed to a function within an SQL query, the query cannot use regular indexes on that column, and the function might be called for every row in a (potentially very large) table. Consider nesting the query so that the inner query filters the results to a small number of rows, and the outer query calls the function only a few times:
BEGIN
-- Inefficient: calls SQRT for every row.
FOR item IN (SELECT DISTINCT(SQRT(department_id)) col_alias FROM employees)
LOOP
dbms_output.put_line(item.col_alias);
END LOOP;
-- Efficient: calls SQRT only once for each distinct value.
FOR item IN
( SELECT SQRT(department_id) col_alias FROM
( SELECT DISTINCT department_id FROM employees)
)
LOOP
dbms_output.put_line(item.col_alias);
END LOOP;
END;
/
If you use OUT or IN OUT parameters, PL/SQL adds some performance overhead to ensure correct behavior in case of exceptions: the subprogram works on a temporary copy, so that if it assigns a value to the OUT parameter and then exits because of an unhandled exception, the actual parameter keeps its original value.
If your program does not depend on OUT parameters keeping their values in such situations, you can add the NOCOPY keyword to the parameter declarations, so the parameters are declared OUT NOCOPY or IN OUT NOCOPY.
This technique can give significant speedup if you are passing back large amounts of data in OUT parameters, such as collections, big VARCHAR2 values, or LOBs.
This technique also applies to member subprograms of object types. If these subprograms modify attributes of the object type, all the attributes are copied when the subprogram ends. To avoid this overhead, you can explicitly declare the first parameter of the member subprogram as SELF IN OUT NOCOPY, instead of relying on PL/SQL's implicit declaration SELF IN OUT.
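As a minimal sketch (the type name and attributes here are invented for illustration), a member procedure that declares SELF explicitly with NOCOPY looks like this:
CREATE TYPE big_obj AS OBJECT (
id NUMBER,
notes VARCHAR2(4000),
MEMBER PROCEDURE set_notes (SELF IN OUT NOCOPY big_obj, p_notes VARCHAR2)
);
/
CREATE TYPE BODY big_obj AS
MEMBER PROCEDURE set_notes (SELF IN OUT NOCOPY big_obj, p_notes VARCHAR2) IS
BEGIN
notes := p_notes; -- attribute is updated in place; no copy of all attributes on exit
END;
END;
/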
Make Loops as Efficient as Possible
Because PL/SQL applications are often built around loops, it is important to optimize the loop itself and the code inside the loop:
- Move initializations or computations outside the loop if possible.
- To issue a series of DML statements, replace loop constructs with FORALL statements.
- To loop through a result set and store the values, use the BULK COLLECT clause on the query to bring the query results into memory in one operation.
- If you have to loop through a result set more than once, or issue other queries as you loop through a result set, you can probably enhance the original query to give you exactly the results you want. Some query operators to explore include UNION, INTERSECT, MINUS, and CONNECT BY.
- You can also nest one query inside another (known as a subselect) to do the filtering and sorting in multiple stages. For example, instead of calling a PL/SQL function in the inner WHERE clause (which might call the function once for each row of the table), you can filter the result set to a small set of rows in the inner query, and call the function in the outer query.
Don't Duplicate Built-in String Functions
PL/SQL provides many highly optimized string functions such as REPLACE, TRANSLATE, SUBSTR, INSTR, RPAD, and LTRIM. The built-in functions use low-level code that is more efficient than regular PL/SQL.
If you use PL/SQL string functions to search for regular expressions, consider using the built-in regular expression functions, such as REGEXP_SUBSTR.
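For illustration, here is a sketch contrasting a hand-rolled character loop with a single call to the built-in REPLACE (the variable names are ours):
DECLARE
s VARCHAR2(100) := 'a-b-c-d';
r VARCHAR2(100);
BEGIN
-- Slow: character-by-character processing in PL/SQL.
FOR i IN 1..LENGTH(s) LOOP
IF SUBSTR(s, i, 1) = '-' THEN
r := r || '_';
ELSE
r := r || SUBSTR(s, i, 1);
END IF;
END LOOP;
-- Fast: one call to the optimized built-in does the same work.
r := REPLACE(s, '-', '_');
END;
/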
Reorder Conditional Tests to Put the Least Expensive First
PL/SQL stops evaluating a logical expression as soon as the result can be determined (known as short-circuit evaluation).
When evaluating multiple conditions separated by AND or OR, put the least expensive ones first. For example, check the values of PL/SQL variables before testing function return values, because PL/SQL might be able to skip calling the functions.
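A minimal sketch of the principle (slow_check stands in for any expensive function and is hypothetical):
DECLARE
sales NUMBER := 0;
FUNCTION slow_check RETURN BOOLEAN IS
BEGIN
RETURN TRUE; -- placeholder for an expensive computation or query
END;
BEGIN
-- Cheap variable test first: when sales > 100 is FALSE,
-- short-circuit evaluation means slow_check is never called.
IF sales > 100 AND slow_check THEN
dbms_output.put_line('Both conditions hold.');
END IF;
END;
/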
Minimize Datatype Conversions
At run time, PL/SQL converts between different datatypes automatically. For example, assigning a PLS_INTEGER variable to a NUMBER variable results in a conversion because their internal representations are different.
Avoiding implicit conversions can improve performance. Use literals of the appropriate types: character literals in character expressions, decimal numbers in number expressions, and so on.
In the example below, the integer literal 15 must be converted to an Oracle NUMBER before the addition. The floating-point literal 15.0 is represented as a NUMBER, avoiding the need for a conversion.
DECLARE
n NUMBER;
c CHAR(5);
BEGIN
n := n + 15; -- converted implicitly; slow
n := n + 15.0; -- not converted; fast
c := 25; -- converted implicitly; slow
c := TO_CHAR(25); -- converted explicitly; still slow
c := '25'; -- not converted; fast
END;
/
Minimizing conversions might mean changing the types of your variables, or even working backward and designing your tables with different datatypes. Or, you might convert data once (such as from an INTEGER column to a PLS_INTEGER variable) and use the PL/SQL type consistently after that.
Use PLS_INTEGER or BINARY_INTEGER for Integer Arithmetic
When you need to declare a local integer variable, use the datatype PLS_INTEGER, which is the most efficient integer type. PLS_INTEGER values require less storage than INTEGER or NUMBER values, and PLS_INTEGER operations use machine arithmetic.
The BINARY_INTEGER datatype is just as efficient as PLS_INTEGER for any new code, but if you are running the same code on Oracle9i or Oracle8i databases, PLS_INTEGER is faster.
The datatype NUMBER and its subtypes are represented in a special internal format, designed for portability and arbitrary scale and precision, not performance. Even the subtype INTEGER is treated as a floating-point number with nothing after the decimal point. Operations on NUMBER or INTEGER variables require calls to library routines.
Avoid constrained subtypes such as INTEGER, NATURAL, NATURALN, POSITIVE, POSITIVEN, and SIGNTYPE in performance-critical code. Variables of these types require extra checking at run time, each time they are used in a calculation.
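A short sketch of the trade-off: the loop bodies below are identical, but every assignment to the NATURALN variable incurs an extra NOT NULL and range check.
DECLARE
slow_ctr NATURALN := 0; -- constrained subtype: checked on every assignment
fast_ctr PLS_INTEGER := 0; -- unconstrained: no extra run-time checks
BEGIN
FOR i IN 1..100000 LOOP
slow_ctr := slow_ctr + 1;
fast_ctr := fast_ctr + 1;
END LOOP;
END;
/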
Use BINARY_FLOAT and BINARY_DOUBLE for Floating-Point Arithmetic
The datatype NUMBER and its subtypes are represented in a special internal format, designed for portability and arbitrary scale and precision, not performance. Operations on NUMBER or INTEGER variables require calls to library routines.
The BINARY_FLOAT and BINARY_DOUBLE types can use native machine arithmetic instructions, and are more efficient for number-crunching applications such as scientific processing. They also require less space in the database.
These types do not always represent fractional values precisely, and handle rounding differently than the NUMBER types. These types are less suitable for financial code where accuracy is critical.
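As a small sketch of the difference, the d suffix below marks BINARY_DOUBLE literals, so the second accumulation runs in native machine arithmetic instead of the NUMBER library routines:
DECLARE
n NUMBER := 0;
d BINARY_DOUBLE := 0;
BEGIN
FOR i IN 1..100000 LOOP
n := n + 0.001; -- NUMBER arithmetic: library calls, exact decimal representation
d := d + 0.001d; -- BINARY_DOUBLE arithmetic: native instructions, binary rounding
END LOOP;
END;
/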
Avoiding Memory Overhead in PL/SQL Code
Be Generous When Declaring Sizes for VARCHAR2 Variables
You might need to allocate large VARCHAR2 variables when you are not sure how big an expression result will be. You can actually conserve memory by declaring VARCHAR2 variables with large sizes, such as 32000, rather than estimating just a little on the high side, such as 256 or 1000. PL/SQL has an optimization that makes it easy to avoid overflow problems and still conserve memory: if you specify a size of 2000 or more characters for a VARCHAR2 variable, PL/SQL waits until you assign the variable, and then allocates only as much storage as needed.
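For instance (the sizes here are illustrative), both declarations below are safe to assign to, but the large one does not pre-allocate its full size:
DECLARE
small_buf VARCHAR2(500); -- under 2000 characters: storage may be allocated up front
large_buf VARCHAR2(32000); -- 2000 or more: storage is allocated at assignment time
BEGIN
large_buf := 'short result'; -- only enough memory for this value is actually used
END;
/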
Group Related Subprograms into Packages
When you call a packaged subprogram for the first time, the whole package is loaded into the shared memory pool. Subsequent calls to related subprograms in the package require no disk I/O, and your code executes faster. If the package is aged out of memory, it must be reloaded if you reference it again.
You can improve performance by sizing the shared memory pool correctly. Make sure it is large enough to hold all frequently used packages but not so large that memory is wasted.
Pin Packages in the Shared Memory Pool
You can "pin" frequently accessed packages in the shared memory pool, using the supplied package DBMS_SHARED_POOL. When a package is pinned, it is not aged out by the least recently used (LRU) algorithm that Oracle normally uses. The package remains in memory no matter how full the pool gets or how frequently you access the package.
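A sketch of pinning a package (DBMS_SHARED_POOL must be installed, typically by running dbmspool.sql, and the package chosen here is just an example):
BEGIN
-- Pin the package so the LRU algorithm does not age it out of the shared pool.
DBMS_SHARED_POOL.KEEP('SYS.DBMS_OUTPUT', 'P'); -- 'P' marks a package/procedure/function
END;
/
-- Later, to allow it to be aged out again:
-- EXEC DBMS_SHARED_POOL.UNKEEP('SYS.DBMS_OUTPUT', 'P');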
Profiling and Tracing PL/SQL Programs
As you develop larger and larger PL/SQL applications, it becomes more difficult to isolate performance problems. PL/SQL provides a Profiler API to profile run-time behavior and to help you identify performance bottlenecks. PL/SQL also provides a Trace API for tracing the execution of programs on the server. You can use Trace to trace the execution by subprogram or exception.
Using The Profiler API: Package DBMS_PROFILER
The Profiler API is implemented as PL/SQL package DBMS_PROFILER, which provides services for gathering and saving run-time statistics. The information is stored in database tables, which you can query later. For example, you can learn how much time was spent executing each PL/SQL line and subprogram.
To use the Profiler, you start the profiling session, run your application long enough to get adequate code coverage, flush the collected data to the database, then stop the profiling session.
The Profiler traces the execution of your program, computing the time spent at each line and in each subprogram. You can use the collected data to improve performance. For instance, you might focus on subprograms that run slowly.
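A minimal profiling session might look like the sketch below; the profiler tables must already exist (they are created by proftab.sql), and my_app.run is a placeholder for the code you want to profile:
DECLARE
status BINARY_INTEGER;
BEGIN
status := DBMS_PROFILER.START_PROFILER('test run 1'); -- begin collecting statistics
my_app.run; -- exercise the code being profiled
status := DBMS_PROFILER.FLUSH_DATA; -- write collected data to the profiler tables
status := DBMS_PROFILER.STOP_PROFILER; -- end the profiling session
END;
/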
Analyzing the Collected Performance Data
The next step is to determine why more time was spent executing certain code segments or accessing certain data structures. Find the problem areas by querying the performance data. Focus on the subprograms and packages that use up the most execution time, inspecting possible performance bottlenecks such as SQL statements, loops, and recursive functions.
Using Trace Data to Improve Performance
Use the results of your analysis to rework slow algorithms. For example, due to an exponential growth in data, you might need to replace a linear search with a binary search. Also, look for inefficiencies caused by inappropriate data structures, and, if necessary, replace those data structures.
Using The Trace API: Package DBMS_TRACE
With large, complex applications, it becomes difficult to keep track of calls between subprograms. By tracing your code with the Trace API, you can see the order in which subprograms execute. The Trace API is implemented as PL/SQL package DBMS_TRACE, which provides services for tracing execution by subprogram or exception.
To use Trace, you start the tracing session, run your application, then stop the tracing session. As the program executes, trace data is collected and stored in database tables.
Controlling the Trace
Tracing large applications can produce huge amounts of data that are difficult to manage. Before starting Trace, you can optionally limit the volume of data collected by selecting specific subprograms for trace data collection.
In addition, you can choose a tracing level. For example, you can choose to trace all subprograms and exceptions, or you can choose to trace selected subprograms and exceptions.
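A sketch of a tracing session follows; the trace output tables come from tracetab.sql, and my_app.run is again a placeholder:
BEGIN
-- Trace all subprogram calls and all exceptions in this session.
DBMS_TRACE.SET_PLSQL_TRACE(DBMS_TRACE.trace_all_calls + DBMS_TRACE.trace_all_exceptions);
my_app.run; -- run the code to be traced
DBMS_TRACE.CLEAR_PLSQL_TRACE; -- stop tracing
END;
/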
Reducing Loop Overhead for DML Statements and Queries (FORALL, BULK COLLECT)
PL/SQL sends SQL statements such as DML and queries to the SQL engine for execution, and SQL returns the result data to PL/SQL. You can minimize the performance overhead of this communication between PL/SQL and SQL by using the PL/SQL language features known collectively as bulk SQL. The FORALL statement sends INSERT, UPDATE, or DELETE statements in batches, rather than one at a time. The BULK COLLECT clause brings back batches of results from SQL. If the DML statement affects four or more database rows, the use of bulk SQL can improve performance considerably.
The assigning of values to PL/SQL variables in SQL statements is called binding. PL/SQL binding operations fall into three categories:
- in-bind: when a PL/SQL variable or host variable is stored in the database by an INSERT or UPDATE statement.
- out-bind: when a database value is assigned to a PL/SQL variable or a host variable by the RETURNING clause of an INSERT, UPDATE, or DELETE statement.
- define: when a database value is assigned to a PL/SQL variable or a host variable by a SELECT or FETCH statement.
Bulk SQL uses PL/SQL collections, such as varrays or nested tables, to pass large amounts of data back and forth in a single operation. This process is known as bulk binding. If the collection has 20 elements, bulk binding lets you perform the equivalent of 20 SELECT, INSERT, UPDATE, or DELETE statements using a single operation. Queries can pass back any number of results, without requiring a FETCH statement for each row.
To speed up INSERT, UPDATE, and DELETE statements, enclose the SQL statement within a PL/SQL FORALL statement instead of a loop construct.
To speed up SELECT statements, include the BULK COLLECT INTO clause in the SELECT statement instead of using INTO.
Using the FORALL Statement
The keyword FORALL lets you run multiple DML statements very efficiently. It can only repeat a single DML statement, unlike a general-purpose FOR loop.
The SQL statement can reference more than one collection, but FORALL only improves performance where the index value is used as a subscript.
Usually, the bounds specify a range of consecutive index numbers. If the index numbers are not consecutive, such as after you delete collection elements, you can use the INDICES OF or VALUES OF clause to iterate over just those index values that really exist.
The INDICES OF clause iterates over all of the index values in the specified collection, or only those between a lower and upper bound.
The VALUES OF clause refers to a collection that is indexed by BINARY_INTEGER or PLS_INTEGER and whose elements are of type BINARY_INTEGER or PLS_INTEGER. The FORALL statement iterates over the index values specified by the elements of this collection.
Example 11-1 Issuing DELETE Statements in a Loop
This FORALL statement sends all three DELETE statements to the SQL engine at once:
CREATE TABLE employees2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS VARRAY(20) OF NUMBER;
depts NumList := NumList(10, 30, 70); -- department numbers
BEGIN
FORALL i IN depts.FIRST..depts.LAST
DELETE FROM employees2 WHERE department_id = depts(i);
COMMIT;
END;
/
DROP TABLE employees2;
Example 11-2 Issuing INSERT Statements in a Loop
The following example loads some data into PL/SQL collections. Then it inserts the collection elements into a database table twice: first using a FOR loop, then using a FORALL statement. The FORALL version is much faster.
CREATE TABLE parts1 (pnum INTEGER, pname VARCHAR2(15));
CREATE TABLE parts2 (pnum INTEGER, pname VARCHAR2(15));
DECLARE
TYPE NumTab IS TABLE OF parts1.pnum%TYPE INDEX BY PLS_INTEGER;
TYPE NameTab IS TABLE OF parts1.pname%TYPE INDEX BY PLS_INTEGER;
pnums NumTab;
pnames NameTab;
iterations CONSTANT PLS_INTEGER := 500;
t1 INTEGER; t2 INTEGER; t3 INTEGER;
BEGIN
FOR j IN 1..iterations LOOP -- load index-by tables
pnums(j) := j;
pnames(j) := 'Part No. ' || TO_CHAR(j);
END LOOP;
t1 := dbms_utility.get_time;
FOR i IN 1..iterations LOOP -- use FOR loop
INSERT INTO parts1 VALUES (pnums(i), pnames(i));
END LOOP;
t2 := dbms_utility.get_time;
FORALL i IN 1..iterations -- use FORALL statement
INSERT INTO parts2 VALUES (pnums(i), pnames(i));
t3 := dbms_utility.get_time;
dbms_output.put_line('Execution Time (secs)');
dbms_output.put_line('---------------------');
dbms_output.put_line('FOR loop: ' || TO_CHAR((t2 - t1)/100));
dbms_output.put_line('FORALL: ' || TO_CHAR((t3 - t2)/100));
COMMIT;
END;
/
DROP TABLE parts1;
DROP TABLE parts2;
Executing this block should show that the loop using FORALL is much faster.
Example 11-3 Using FORALL with Part of a Collection
The bounds of the FORALL loop can apply to part of a collection, not necessarily all the elements:
CREATE TABLE employees2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS VARRAY(10) OF NUMBER;
depts NumList := NumList(5,10,20,30,50,55,57,60,70,75);
BEGIN
FORALL j IN 4..7 -- use only part of varray
DELETE FROM employees2 WHERE department_id = depts(j);
COMMIT;
END;
/
DROP TABLE employees2;
Example 11-4 Using FORALL with Non-Consecutive Index Values
You might need to delete some elements from a collection before using the collection in a FORALL statement. The INDICES OF clause processes sparse collections by iterating through only the remaining elements.
You might also want to leave the original collection alone, but process only some elements, process the elements in a different order, or process some elements more than once. Instead of copying entire elements into new collections, which might use up substantial amounts of memory, the VALUES OF clause lets you set up simple collections whose elements serve as "pointers" to elements in the original collection.
The following example creates a collection holding some arbitrary data, a set of table names. Deleting some of the elements makes it a sparse collection that would not work in a default FORALL statement. The program uses a FORALL statement with the INDICES OF clause to insert the data into a table. It then sets up two more collections, pointing to certain elements from the original collection. The program stores each set of names in a different database table using FORALL statements with the VALUES OF clause.
-- Create empty tables to hold order details
CREATE TABLE valid_orders (cust_name VARCHAR2(32), amount NUMBER(10,2));
CREATE TABLE big_orders AS SELECT * FROM valid_orders WHERE 1 = 0;
CREATE TABLE rejected_orders AS SELECT * FROM valid_orders WHERE 1 = 0;
DECLARE
-- Make collections to hold a set of customer names and order amounts.
SUBTYPE cust_name IS valid_orders.cust_name%TYPE;
TYPE cust_typ IS TABLE OF cust_name;
cust_tab cust_typ;
SUBTYPE order_amount IS valid_orders.amount%TYPE;
TYPE amount_typ IS TABLE OF NUMBER;
amount_tab amount_typ;
-- Make other collections to point into the CUST_TAB collection.
TYPE index_pointer_t IS TABLE OF PLS_INTEGER;
big_order_tab index_pointer_t := index_pointer_t();
rejected_order_tab index_pointer_t := index_pointer_t();
PROCEDURE setup_data IS BEGIN
-- Set up sample order data, including some invalid orders and some 'big' orders.
cust_tab := cust_typ('Company 1','Company 2','Company 3','Company 4', 'Company 5');
amount_tab := amount_typ(5000.01, 0, 150.25, 4000.00, NULL);
END;
BEGIN
setup_data();
dbms_output.put_line('--- Original order data ---');
FOR i IN 1..cust_tab.LAST LOOP
dbms_output.put_line('Customer #' || i || ', ' || cust_tab(i) || ': $' || amount_tab(i));
END LOOP;
-- Delete invalid orders (where amount is null or 0).
FOR i IN 1..cust_tab.LAST LOOP
IF amount_tab(i) is null or amount_tab(i) = 0 THEN
cust_tab.delete(i);
amount_tab.delete(i);
END IF;
END LOOP;
dbms_output.put_line('--- Data with invalid orders deleted ---');
FOR i IN 1..cust_tab.LAST LOOP
IF cust_tab.EXISTS(i) THEN
dbms_output.put_line('Customer #' || i || ', ' || cust_tab(i) || ': $' || amount_tab(i));
END IF;
END LOOP;
-- Since the subscripts of our collections are not consecutive, we use
-- FORALL...INDICES OF to iterate through the actual subscripts, rather than 1..COUNT.
FORALL i IN INDICES OF cust_tab
INSERT INTO valid_orders(cust_name, amount) VALUES(cust_tab(i), amount_tab(i));
-- Now let's process the order data differently. We'll extract 2 subsets
-- and store each subset in a different table.
setup_data(); -- Initialize the CUST_TAB and AMOUNT_TAB collections again.
FOR i IN cust_tab.FIRST .. cust_tab.LAST LOOP
IF amount_tab(i) IS NULL OR amount_tab(i) = 0 THEN
rejected_order_tab.EXTEND; -- Add a new element to this collection.
rejected_order_tab(rejected_order_tab.LAST) := i; -- And record the subscript from the original collection.
END IF;
IF amount_tab(i) > 2000 THEN
big_order_tab.EXTEND; -- Add a new element to this collection.
big_order_tab(big_order_tab.LAST) := i; -- And record the subscript from the original collection.
END IF;
END LOOP;
-- Now it's easy to run one DML statement on one subset of elements, and another DML statement on a different subset.
FORALL i IN VALUES OF rejected_order_tab
INSERT INTO rejected_orders VALUES (cust_tab(i), amount_tab(i));
FORALL i IN VALUES OF big_order_tab
INSERT INTO big_orders VALUES (cust_tab(i), amount_tab(i));
COMMIT;
END;
/
-- Verify that the correct order details were stored.
SELECT cust_name "Customer", amount "Valid order amount" FROM valid_orders;
SELECT cust_name "Customer", amount "Big order amount" FROM big_orders;
SELECT cust_name "Customer", amount "Rejected order amount" FROM rejected_orders;
DROP TABLE valid_orders;
DROP TABLE big_orders;
DROP TABLE rejected_orders;
How FORALL Affects Rollbacks
In a FORALL statement, if any execution of the SQL statement raises an unhandled exception, all database changes made during previous executions are rolled back. However, if a raised exception is caught and handled, changes are rolled back to an implicit savepoint marked before each execution of the SQL statement. Changes made during previous executions are not rolled back. For example, suppose you create a database table that stores department numbers and job titles, as follows. Then, you change the job titles so that they are longer. The second UPDATE fails because the new value is too long for the column. Because we handle the exception, the first UPDATE is not rolled back and we can commit that change.
CREATE TABLE emp2 (deptno NUMBER(2), job VARCHAR2(18));
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(10, 20, 30);
BEGIN
INSERT INTO emp2 VALUES(10, 'Clerk');
INSERT INTO emp2 VALUES(20, 'Bookkeeper'); -- Lengthening this job title causes an exception.
INSERT INTO emp2 VALUES(30, 'Analyst');
COMMIT;
FORALL j IN depts.FIRST..depts.LAST -- Run 3 UPDATE statements.
UPDATE emp2 SET job = job || ' (Senior)' WHERE deptno = depts(j);
-- raises a "value too large" exception
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Problem in the FORALL statement.');
COMMIT; -- Commit results of successful updates.
END;
/
DROP TABLE emp2;
Counting Rows Affected by FORALL with the %BULK_ROWCOUNT Attribute
The cursor attributes SQL%FOUND, SQL%ISOPEN, SQL%NOTFOUND, and SQL%ROWCOUNT return useful information about the most recently executed DML statement.
The SQL cursor has one composite attribute, %BULK_ROWCOUNT, for use with the FORALL statement. This attribute works like an associative array: SQL%BULK_ROWCOUNT(i) stores the number of rows processed by the ith execution of an INSERT, UPDATE or DELETE statement. For example:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(30, 50, 60);
BEGIN
FORALL j IN depts.FIRST..depts.LAST
DELETE FROM emp2 WHERE department_id = depts(j);
-- How many rows were affected by each DELETE statement?
FOR i IN depts.FIRST..depts.LAST
LOOP
dbms_output.put_line('Iteration #' || i || ' deleted ' ||
SQL%BULK_ROWCOUNT(i) || ' rows.');
END LOOP;
END;
/
DROP TABLE emp2;
The FORALL statement and %BULK_ROWCOUNT attribute use the same subscripts. For example, if FORALL uses the range 5..10, so does %BULK_ROWCOUNT. If the FORALL statement uses the INDICES OF clause to process a sparse collection, %BULK_ROWCOUNT has corresponding sparse subscripts. If the FORALL statement uses the VALUES OF clause to process a subset of elements, %BULK_ROWCOUNT has subscripts corresponding to the values of the elements in the index collection. If the index collection contains duplicate elements, so that some DML statements are issued multiple times using the same subscript, then the corresponding elements of %BULK_ROWCOUNT represent the sum of all rows affected by the DML statement using that subscript. (For examples showing how to interpret %BULK_ROWCOUNT when using the INDICES OF and VALUES OF clauses, see the PL/SQL sample programs at http://otn.oracle.com/tech/pl_sql/.)
%BULK_ROWCOUNT is usually equal to 1 for inserts, because a typical insert operation affects only a single row. For the INSERT ... SELECT construct, %BULK_ROWCOUNT might be greater than 1. For example, the FORALL statement below inserts an arbitrary number of rows for each iteration. After each iteration, %BULK_ROWCOUNT returns the number of items inserted:
CREATE TABLE emp_by_dept AS SELECT employee_id, department_id
FROM employees WHERE 1 = 0;
DECLARE
TYPE dept_tab IS TABLE OF departments.department_id%TYPE;
deptnums dept_tab;
BEGIN
SELECT department_id BULK COLLECT INTO deptnums FROM departments;
FORALL i IN 1..deptnums.COUNT
INSERT INTO emp_by_dept
SELECT employee_id, department_id FROM employees
WHERE department_id = deptnums(i);
FOR i IN 1..deptnums.COUNT LOOP
-- Count how many rows were inserted for each department; that is,
-- how many employees are in each department.
dbms_output.put_line('Dept '||deptnums(i)||': inserted '||
SQL%BULK_ROWCOUNT(i)||' records');
END LOOP;
dbms_output.put_line('Total records inserted =' || SQL%ROWCOUNT);
END;
/
DROP TABLE emp_by_dept;
You can also use the scalar attributes %FOUND, %NOTFOUND, and %ROWCOUNT after running a FORALL statement. For example, %ROWCOUNT returns the total number of rows processed by all executions of the SQL statement.
%FOUND and %NOTFOUND refer only to the last execution of the SQL statement. You can use %BULK_ROWCOUNT to infer their values for individual executions. For example, when %BULK_ROWCOUNT(i) is zero, %FOUND and %NOTFOUND are FALSE and TRUE, respectively.
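For example, in this sketch (assuming the emp2 copy of employees from the examples above), SQL%ROWCOUNT gives the grand total across all executions, and a zero in %BULK_ROWCOUNT identifies an execution for which %NOTFOUND would have been TRUE:
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(30, 50, 999); -- 999: a department that matches no rows
BEGIN
FORALL j IN depts.FIRST..depts.LAST
DELETE FROM emp2 WHERE department_id = depts(j);
dbms_output.put_line('Total rows deleted: ' || SQL%ROWCOUNT);
FOR j IN depts.FIRST..depts.LAST LOOP
IF SQL%BULK_ROWCOUNT(j) = 0 THEN
dbms_output.put_line('No rows deleted for dept ' || depts(j));
END IF;
END LOOP;
END;
/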
Handling FORALL Exceptions with the %BULK_EXCEPTIONS Attribute
PL/SQL provides a mechanism to handle exceptions raised during the execution of a FORALL statement. This mechanism enables a bulk-bind operation to save information about exceptions and continue processing.
To have a bulk bind complete despite errors, add the keywords SAVE EXCEPTIONS to your FORALL statement after the bounds, before the DML statement.
All exceptions raised during the execution are saved in the cursor attribute %BULK_EXCEPTIONS, which stores a collection of records. Each record has two fields:
- %BULK_EXCEPTIONS(i).ERROR_INDEX holds the "iteration" of the FORALL statement during which the exception was raised.
- %BULK_EXCEPTIONS(i).ERROR_CODE holds the corresponding Oracle error code.
The values stored by %BULK_EXCEPTIONS always refer to the most recently executed FORALL statement. The number of exceptions is saved in %BULK_EXCEPTIONS.COUNT. Its subscripts range from 1 to COUNT.
You might need to work backward to determine which collection element was used in the iteration that caused an exception. For example, if you use the INDICES OF clause to process a sparse collection, you must step through the elements one by one to find the one corresponding to %BULK_EXCEPTIONS(i).ERROR_INDEX. If you use the VALUES OF clause to process a subset of elements, you must find the element in the index collection whose subscript matches %BULK_EXCEPTIONS(i).ERROR_INDEX, and then use that element's value as the subscript to find the erroneous element in the original collection. (For examples showing how to find the erroneous elements when using the INDICES OF and VALUES OF clauses, see the PL/SQL sample programs at http://otn.oracle.com/tech/pl_sql/.)
If you omit the keywords SAVE EXCEPTIONS, execution of the FORALL statement stops when an exception is raised. In that case, SQL%BULK_EXCEPTIONS.COUNT returns 1, and SQL%BULK_EXCEPTIONS contains just one record. If no exception is raised during execution, SQL%BULK_EXCEPTIONS.COUNT returns 0.
Example 11-5 Bulk Operation That Continues Despite Exceptions
The following example shows how you can perform a number of DML operations, without stopping if some operations encounter errors:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF NUMBER;
-- The zeros in this list will cause divide-by-zero errors.
num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
errors NUMBER;
dml_errors EXCEPTION;
PRAGMA exception_init(dml_errors, -24381);
BEGIN
-- SAVE EXCEPTIONS means don't stop if some DELETEs fail.
FORALL i IN num_tab.FIRST..num_tab.LAST SAVE EXCEPTIONS
DELETE FROM emp2 WHERE salary > 500000/num_tab(i);
-- If any errors occurred during the FORALL SAVE EXCEPTIONS,
-- a single exception is raised when the statement completes.
EXCEPTION
WHEN dml_errors THEN -- Now we figure out what failed and why.
errors := SQL%BULK_EXCEPTIONS.COUNT;
dbms_output.put_line('Number of DELETE statements that failed: ' || errors);
FOR i IN 1..errors LOOP
dbms_output.put_line('Error #' || i || ' occurred during '||
'iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
dbms_output.put_line('Error message is ' ||
SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
END;
/
DROP TABLE emp2;
In this example, PL/SQL raised the predefined exception ZERO_DIVIDE when i equaled 2, 6, and 10. After the FORALL statement, SQL%BULK_EXCEPTIONS.COUNT returned 3, and the contents of SQL%BULK_EXCEPTIONS were (2,1476), (6,1476), and (10,1476). To get the Oracle error message (which includes the code), we negated the value of SQL%BULK_EXCEPTIONS(i).ERROR_CODE and passed the result to the error-reporting function SQLERRM, which expects a negative number. Here is the output:
Number of DELETE statements that failed: 3
Error #1 occurred during iteration #2
Error message is ORA-01476: divisor is equal to zero
Error #2 occurred during iteration #6
Error message is ORA-01476: divisor is equal to zero
Error #3 occurred during iteration #10
Error message is ORA-01476: divisor is equal to zero
Retrieving Query Results into Collections with the BULK COLLECT Clause
Using the keywords BULK COLLECT with a query is a very efficient way to retrieve the result set. Instead of looping through each row, you store the results in one or more collections, in a single operation. You can use these keywords in the SELECT INTO and FETCH INTO statements, and the RETURNING INTO clause.
With the BULK COLLECT clause, all the variables in the INTO list must be collections. The table columns can hold scalar or composite values, including object types. The following example loads two entire database columns into nested tables:
DECLARE
TYPE NumTab IS TABLE OF employees.employee_id%TYPE;
TYPE NameTab IS TABLE OF employees.last_name%TYPE;
enums NumTab; -- No need to initialize the collections.
names NameTab; -- Values will be filled in by the SELECT INTO.
PROCEDURE print_results IS
BEGIN
dbms_output.put_line('Results:');
FOR i IN enums.FIRST .. enums.LAST
LOOP
dbms_output.put_line(' Employee #' || enums(i) || ': ' ||
names(i));
END LOOP;
END;
BEGIN
SELECT employee_id, last_name -- Retrieve data for 10 arbitrary employees.
BULK COLLECT INTO enums, names
FROM employees WHERE ROWNUM < 11;
-- The data has all been brought into memory by BULK COLLECT.
-- No need to FETCH each row from the result set.
print_results;
SELECT employee_id, last_name -- Retrieve approximately 20% of all rows
BULK COLLECT INTO enums, names
FROM employees SAMPLE (20);
print_results;
END;
/
The collections are initialized automatically. Nested tables and associative arrays are extended to hold as many elements as needed. If you use varrays, all the return values must fit in the varray's declared size. Elements are inserted starting at index 1, overwriting any existing elements.
Since the processing of the BULK COLLECT INTO clause is similar to a FETCH loop, it does not raise a NO_DATA_FOUND exception if no rows match the query. You must check whether the resulting nested table or varray is null, or whether the resulting associative array has no elements.
To prevent the resulting collections from expanding without limit, you can use the pseudocolumn ROWNUM to limit the number of rows processed. Or, you can use the SAMPLE clause to retrieve a random sample of rows.
DECLARE
TYPE SalList IS TABLE OF emp.sal%TYPE;
sals SalList;
BEGIN
-- Limit the number of rows to 100.
SELECT sal BULK COLLECT INTO sals FROM emp WHERE ROWNUM <= 100;
-- Retrieve 10% (approximately) of the rows in the table.
SELECT sal BULK COLLECT INTO sals FROM emp SAMPLE (10);
END;
/
You can process very large result sets by fetching a specified number of rows at a time from a cursor, as shown in the following sections.
Examples of Bulk-Fetching from a Cursor
Example 11-6 Bulk-Fetching from a Cursor Into One or More Collections
You can fetch from a cursor into one or more collections:
DECLARE
TYPE NameList IS TABLE OF employees.last_name%TYPE;
TYPE SalList IS TABLE OF employees.salary%TYPE;
CURSOR c1 IS SELECT last_name, salary FROM employees WHERE salary > 10000;
names NameList;
sals SalList;
TYPE RecList IS TABLE OF c1%ROWTYPE;
recs RecList;
PROCEDURE print_results IS
BEGIN
dbms_output.put_line('Results:');
IF names IS NULL OR names.COUNT = 0 THEN
RETURN; -- Don't print anything if collections are empty.
END IF;
FOR i IN names.FIRST .. names.LAST
LOOP
dbms_output.put_line(' Employee ' || names(i) || ': $' ||
sals(i));
END LOOP;
END;
BEGIN
dbms_output.put_line('--- Processing all results at once ---');
OPEN c1;
FETCH c1 BULK COLLECT INTO names, sals;
CLOSE c1;
print_results;
dbms_output.put_line('--- Processing 7 rows at a time ---');
OPEN c1;
LOOP
FETCH c1 BULK COLLECT INTO names, sals LIMIT 7;
EXIT WHEN c1%NOTFOUND;
print_results;
END LOOP;
-- Loop exits when fewer than 7 rows are fetched. Have to
-- process the last few. Need extra checking inside PRINT_RESULTS
-- in case it is called when the collection is empty.
print_results;
CLOSE c1;
dbms_output.put_line('--- Fetching records rather than columns ---');
OPEN c1;
FETCH c1 BULK COLLECT INTO recs;
FOR i IN recs.FIRST .. recs.LAST
LOOP
-- Now all the columns from the result set come from a single record.
dbms_output.put_line(' Employee ' || recs(i).last_name || ': $'
|| recs(i).salary);
END LOOP;
END;
/
Example 11-7 Bulk-Fetching from a Cursor Into a Collection of Records
You can fetch from a cursor into a collection of records:
DECLARE
TYPE DeptRecTab IS TABLE OF dept%ROWTYPE;
dept_recs DeptRecTab;
CURSOR c1 IS
SELECT deptno, dname, loc FROM dept WHERE deptno > 10;
BEGIN
OPEN c1;
FETCH c1 BULK COLLECT INTO dept_recs;
END;
/
Limiting the Rows for a Bulk FETCH Operation with the LIMIT Clause
The optional LIMIT clause, allowed only in bulk FETCH statements, limits the number of rows fetched from the database.
In the example below, with each iteration of the loop, the FETCH statement fetches ten rows (or fewer) into the index-by table empnos. The previous values are overwritten.
DECLARE
TYPE NumTab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
CURSOR c1 IS SELECT empno FROM emp;
empnos NumTab;
rows NATURAL := 10;
BEGIN
OPEN c1;
LOOP
/* The following statement fetches 10 rows (or less). */
FETCH c1 BULK COLLECT INTO empnos LIMIT rows;
EXIT WHEN c1%NOTFOUND;
...
END LOOP;
CLOSE c1;
END;
/
Retrieving DML Results into a Collection with the RETURNING INTO Clause
You can use the BULK COLLECT clause in the RETURNING INTO clause of an INSERT, UPDATE, or DELETE statement:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF employees.employee_id%TYPE;
enums NumList;
TYPE NameList IS TABLE OF employees.last_name%TYPE;
names NameList;
BEGIN
DELETE FROM emp2 WHERE department_id = 30
RETURNING employee_id, last_name BULK COLLECT INTO enums, names;
dbms_output.put_line('Deleted ' || SQL%ROWCOUNT || ' rows:');
FOR i IN enums.FIRST .. enums.LAST
LOOP
dbms_output.put_line('Employee #' || enums(i) || ': ' || names(i));
END LOOP;
END;
/
DROP TABLE emp2;
Using FORALL and BULK COLLECT Together
You can combine the BULK COLLECT clause with a FORALL statement. The output collections are built up as the FORALL statement iterates.
In the following example, the employee_id value of each deleted row is stored in the collection e_ids. The collection depts has 3 elements, so the FORALL statement iterates 3 times. If each DELETE issued by the FORALL statement deletes 5 rows, then the collection e_ids, which stores values from the deleted rows, has 15 elements when the statement completes:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(10,20,30);
TYPE enum_t IS TABLE OF employees.employee_id%TYPE;
TYPE dept_t IS TABLE OF employees.department_id%TYPE;
e_ids enum_t;
d_ids dept_t;
BEGIN
FORALL j IN depts.FIRST..depts.LAST
DELETE FROM emp2 WHERE department_id = depts(j)
RETURNING employee_id, department_id BULK COLLECT INTO e_ids, d_ids;
dbms_output.put_line('Deleted ' || SQL%ROWCOUNT || ' rows:');
FOR i IN e_ids.FIRST .. e_ids.LAST
LOOP
dbms_output.put_line('Employee #' || e_ids(i) || ' from dept #' || d_ids(i));
END LOOP;
END;
/
DROP TABLE emp2;
The column values returned by each execution are added to the values returned previously. If you use a FOR loop instead of the FORALL statement, the set of returned values is overwritten by each DELETE statement.
You cannot use the SELECT ... BULK COLLECT statement in a FORALL statement.
Using Host Arrays with Bulk Binds
Client-side programs can use anonymous PL/SQL blocks to bulk-bind input and output host arrays. This is the most efficient way to pass collections to and from the database server.
Host arrays are declared in a host environment such as an OCI or a Pro*C program and must be prefixed with a colon to distinguish them from PL/SQL collections. In the example below, an input host array is used in a DELETE statement. At run time, the anonymous PL/SQL block is sent to the database server for execution.
DECLARE
...
BEGIN
-- assume that values were assigned to the host array
-- and host variables in the host environment
FORALL i IN :lower..:upper
DELETE FROM employees WHERE department_id = :depts(i);
COMMIT;
END;
/
Writing Computation-Intensive Programs in PL/SQL
The BINARY_FLOAT and BINARY_DOUBLE datatypes make it practical to write PL/SQL programs to do number-crunching, for scientific applications involving floating-point calculations. These datatypes behave much like the native floating-point types on many hardware systems, with semantics derived from the IEEE-754 floating-point standard.
The way these datatypes represent decimal data makes them less suitable for financial applications, where precise representation of fractional amounts is more important than pure performance.
The PLS_INTEGER and BINARY_INTEGER datatypes are PL/SQL-only datatypes that are more efficient than the SQL datatypes NUMBER or INTEGER for integer arithmetic. You can use PLS_INTEGER to write pure PL/SQL code for integer arithmetic, or convert NUMBER or INTEGER values to PLS_INTEGER for manipulation by PL/SQL.
In previous releases, PLS_INTEGER was more efficient than BINARY_INTEGER. Now, they have similar performance, but you might still prefer PLS_INTEGER if your code might be run under older database releases.
Within a package, you can write overloaded versions of procedures and functions that accept different numeric parameters. The math routines can be optimized for each kind of parameter (BINARY_FLOAT, BINARY_DOUBLE, NUMBER, PLS_INTEGER), avoiding unnecessary conversions.
The built-in math functions such as SQRT, SIN, COS, and so on already have fast overloaded versions that accept BINARY_FLOAT and BINARY_DOUBLE parameters. You can speed up math-intensive code by passing variables of these types to such functions, and by calling the TO_BINARY_FLOAT or TO_BINARY_DOUBLE functions when passing expressions to such functions.
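As a sketch of the overloading idea (the package and function names are invented), each version uses arithmetic suited to its parameter type, and the compiler picks the matching one at each call site:
CREATE OR REPLACE PACKAGE fast_math IS
FUNCTION twice (x PLS_INTEGER) RETURN PLS_INTEGER;
FUNCTION twice (x BINARY_DOUBLE) RETURN BINARY_DOUBLE;
END fast_math;
/
CREATE OR REPLACE PACKAGE BODY fast_math IS
FUNCTION twice (x PLS_INTEGER) RETURN PLS_INTEGER IS
BEGIN
RETURN x * 2; -- integer machine arithmetic
END;
FUNCTION twice (x BINARY_DOUBLE) RETURN BINARY_DOUBLE IS
BEGIN
RETURN x * 2.0d; -- native floating-point arithmetic
END;
END fast_math;
/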
Tuning Dynamic SQL with EXECUTE IMMEDIATE and Cursor Variables
Some programs (a general-purpose report writer for example) must build and process a variety of SQL statements, where the exact text of the statement is unknown until run time. Such statements probably change from execution to execution. They are called dynamic SQL statements.
Formerly, to execute dynamic SQL statements, you had to use the supplied package DBMS_SQL. Now, within PL/SQL, you can execute any kind of dynamic SQL statement using an interface called native dynamic SQL. The main PL/SQL features involved are the EXECUTE IMMEDIATE statement and cursor variables (also known as REF CURSORs).
Native dynamic SQL code is more compact and much faster than calling the DBMS_SQL package. The following example declares a cursor variable, then associates it with a dynamic SELECT statement:
DECLARE
TYPE EmpCurTyp IS REF CURSOR;
emp_cv EmpCurTyp;
my_ename VARCHAR2(15);
my_sal NUMBER := 1000;
table_name VARCHAR2(30) := 'employees';
BEGIN
OPEN emp_cv FOR 'SELECT last_name, salary FROM ' || table_name ||
' WHERE salary > :s' USING my_sal;
CLOSE emp_cv;
END;
/
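For a dynamic statement that is executed once rather than fetched through a cursor variable, EXECUTE IMMEDIATE handles the same kind of run-time SQL text; a sketch against the same employees table:
DECLARE
my_sal NUMBER := 1000;
emp_count NUMBER;
BEGIN
EXECUTE IMMEDIATE
'SELECT COUNT(*) FROM employees WHERE salary > :s'
INTO emp_count USING my_sal;
dbms_output.put_line('Employees over threshold: ' || emp_count);
END;
/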
Tuning PL/SQL Procedure Calls with the NOCOPY Compiler Hint
By default, OUT and IN OUT parameters are passed by value. The values of any IN OUT parameters are copied before the subprogram is executed. During subprogram execution, temporary variables hold the output parameter values. If the subprogram exits normally, these values are copied to the actual parameters. If the subprogram exits with an unhandled exception, the original parameters are unchanged.
When the parameters represent large data structures such as collections, records, and instances of object types, this copying slows down execution and uses up memory. In particular, this overhead applies to each call to an object method: temporary copies are made of all the attributes, so that any changes made by the method are only applied if the method exits normally.
To avoid this overhead, you can specify the NOCOPY hint, which allows the PL/SQL compiler to pass OUT and IN OUT parameters by reference. If the subprogram exits normally, the behavior is the same as normal. If the subprogram exits early with an exception, the values of OUT and IN OUT parameters (or object attributes) might still change. To use this technique, ensure that the subprogram handles all exceptions.
The following example asks the compiler to pass IN OUT parameter MY_STAFF by reference, to avoid copying the varray on entry to and exit from the subprogram:
DECLARE
TYPE Staff IS VARRAY(200) OF Employee;
PROCEDURE reorganize (my_staff IN OUT NOCOPY Staff) IS ...
BEGIN
NULL;
END;
/
The following example loads 50,000 records into a local nested table, which is passed to two local procedures that do nothing. A call to the procedure that uses NOCOPY takes much less time.
DECLARE
TYPE EmpTabTyp IS TABLE OF employees%ROWTYPE;
emp_tab EmpTabTyp := EmpTabTyp(NULL); -- initialize
t1 NUMBER;
t2 NUMBER;
t3 NUMBER;
PROCEDURE get_time (t OUT NUMBER) IS
BEGIN t := dbms_utility.get_time; END;
PROCEDURE do_nothing1 (tab IN OUT EmpTabTyp) IS
BEGIN NULL; END;
PROCEDURE do_nothing2 (tab IN OUT NOCOPY EmpTabTyp) IS
BEGIN NULL; END;
BEGIN
SELECT * INTO emp_tab(1) FROM employees WHERE employee_id = 100;
emp_tab.EXTEND(49999, 1); -- copy element 1 into 2..50000
get_time(t1);
do_nothing1(emp_tab); -- pass IN OUT parameter
get_time(t2);
do_nothing2(emp_tab); -- pass IN OUT NOCOPY parameter
get_time(t3);
dbms_output.put_line('Call Duration (secs)');
dbms_output.put_line('--------------------');
dbms_output.put_line('Just IN OUT: ' || TO_CHAR((t2 - t1)/100.0));
dbms_output.put_line('With NOCOPY: ' || TO_CHAR((t3 - t2)/100.0));
END;
/
Restrictions on NOCOPY
The use of NOCOPY increases the likelihood of parameter aliasing. For more information, see "Understanding Subprogram Parameter Aliasing".
Remember, NOCOPY is a hint, not a directive. In the following cases, the PL/SQL compiler ignores the NOCOPY hint and uses the by-value parameter-passing method; no error is generated:
- The actual parameter is an element of an associative array. This restriction does not apply if the parameter is an entire associative array.
- The actual parameter is constrained, such as by scale or NOT NULL. This restriction does not apply to size-constrained character strings. This restriction does not extend to constrained elements or attributes of composite types.
- The actual and formal parameters are records, one or both records were declared using %ROWTYPE or %TYPE, and constraints on corresponding fields in the records differ.
- The actual and formal parameters are records, the actual parameter was declared (implicitly) as the index of a cursor FOR loop, and constraints on corresponding fields in the records differ.
- Passing the actual parameter requires an implicit datatype conversion.
- The subprogram is called through a database link or as an external procedure.
Compiling PL/SQL Code for Native Execution
You can speed up PL/SQL procedures by compiling them into native code residing in shared libraries. The procedures are translated into C code, then compiled with your usual C compiler and linked into the Oracle process.
You can use this technique with both the supplied Oracle packages, and procedures you write yourself. Procedures compiled this way work in all server environments, such as the shared server configuration (formerly known as multi-threaded server) and Oracle Real Application Clusters.
Before You Begin
If you are a first-time user of native PL/SQL compilation, try it first with a test database, before proceeding to a production environment.
Always back up your database before configuring the database for PL/SQL native compilation. If you find that the performance benefit is outweighed by extra compilation time, it might be faster to restore from a backup than to recompile everything in interpreted mode.
Some of the setup steps require DBA authority. You must change the values of some initialization parameters, and create a new directory on the database server, preferably near the data files for the instance. The database server also needs a C compiler; on a cluster, the compiler is needed on each node. Even if you can test out these steps yourself on a development machine, you will generally need to consult with a DBA and enlist their help to use native compilation on a production server.
Contact your system administrator to ensure that you have the required C compiler on your operating system, and find the path for its location. Use a text editor such as vi to open the file $ORACLE_HOME/plsql/spnc_commands, and make sure the command templates are correct. Generally, you should not need to make any changes here, just confirm that the setup is correct.
Determining Whether to Use PL/SQL Native Compilation
PL/SQL native compilation provides the greatest performance gains for computation-intensive procedural operations. Examples of such operations are data warehouse applications, and applications with extensive server-side transformations of data for display. In such cases, expect speed increases of up to 30%.
Because this technique cannot do much to speed up SQL statements called from PL/SQL, it is most effective for compute-intensive PL/SQL procedures that do not spend most of their time executing SQL. You can test to see how much performance gain you can get by enabling PL/SQL native compilation.
It takes longer to compile program units with native compilation than with the default interpreted mode. You might turn off native compilation during the busiest parts of the development cycle, when code is being frequently recompiled.
When you have decided that you will have significant performance gains in database operations using PL/SQL native compilation, Oracle Corporation recommends that you compile the whole database using the NATIVE setting. Compiling all the PL/SQL code in the database means you see the speedup in your own code, and in calls to all the built-in PL/SQL packages.
How PL/SQL Native Compilation Works
If you do not use native compilation, each PL/SQL program unit is compiled into an intermediate form, machine-readable code (m-code). The m-code is stored in the database dictionary and interpreted at run time.
With PL/SQL native compilation, the PL/SQL statements are turned into C code that bypasses all the runtime interpretation, giving faster runtime performance.
PL/SQL uses the command file $ORACLE_HOME/plsql/spnc_commands, and the supported operating system C compiler and linker, to compile and link the resulting C code into shared libraries. The shared libraries are stored inside the data dictionary, so that they can be backed up automatically and are protected from being deleted. These shared library files are copied to the filesystem and are loaded and run when the PL/SQL subprogram is invoked. If the files are deleted from the filesystem while the database is shut down, or if you change the directory that holds the libraries, they are extracted again automatically.
Although PL/SQL program units that just call SQL statements might see little or no speedup, natively compiled PL/SQL is always at least as fast as the corresponding interpreted code. The compiled code makes the same library calls as the interpreted code would, so its behavior is exactly the same.
Format of the spnc_commands File
The spnc_commands file, in the $ORACLE_HOME/plsql directory, contains the templates for the commands to compile and link each program. Some special names such as %(src) are predefined, and are replaced by the corresponding filename. The variable $(ORACLE_HOME) is replaced by the location of the Oracle home directory. You can include comment lines, starting with a # character. The file contains comments that explain all the special notation.
The spnc_commands file contains a predefined path for the C compiler, depending on the particular operating system. (One specific compiler is supported on each operating system.) In most cases, you should not need to change this path, but you might if the system administrator has installed the compiler in another location.
System-Level Initialization Parameters for PL/SQL Native Compilation
The following table lists the initialization parameters you must set before using PL/SQL native compilation. They can be set only at the system level, not by an ALTER SESSION command. You cannot use variables such as ORACLE_HOME in the values; use the full path instead.
Note:
The examples in this section for setting system parameters for PL/SQL native compilation assume a system using a server parameter file (SPFILE).
If you use a text initialization parameter file (PFILE, or initsid.ora), ensure that you change parameters in your initialization parameter file, as indicated in the following table.
Parameter: PLSQL_NATIVE_LIBRARY_DIR
Characteristics: The full path and directory name used to store the shared libraries that contain natively compiled PL/SQL code. In accordance with optimal flexible architecture (OFA) rules, Oracle Corporation recommends that you create the shared library directory as a subdirectory where the data files are located. For security reasons, only the users oracle and root should have write privileges for this directory.
Parameter: PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT
Characteristics: The number of subdirectories in the directory specified by the parameter PLSQL_NATIVE_LIBRARY_DIR. This parameter is optional; use it if the number of natively compiled program units exceeds 15,000. If you need to set this parameter, see Oracle Database Reference for complete details about the initialization parameters and data dictionary views.
Setting Up and Using PL/SQL Native Compilation
To speed up one or more subprograms through native compilation:
1. Set up the PLSQL_NATIVE_LIBRARY_DIR initialization parameter, and optionally the PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT initialization parameter, as described above.
2. Use the ALTER SYSTEM or ALTER SESSION command, or update your initialization file, to set the parameter PLSQL_CODE_TYPE to the value NATIVE.
3. Compile one or more subprograms, using one of these methods:
- Use CREATE OR REPLACE to create or recompile the subprogram.
- Use the ALTER PROCEDURE, ALTER FUNCTION, or ALTER PACKAGE command with the COMPILE option to recompile the subprogram or the entire package. (You can also use the clause PLSQL_CODE_TYPE = NATIVE with the ALTER statements to affect specific subprograms without changing the initialization parameter for the whole session; see the sketch after this list.)
- Drop the subprogram and create it again.
- Run one of the SQL*Plus scripts that creates a set of Oracle-supplied packages.
- Create a database using a preconfigured initialization file with PLSQL_CODE_TYPE=NATIVE. During database creation, the UTLIRP script is run to compile all the Oracle-supplied packages.
To be sure that the process worked, you can check whether a procedure is compiled for native execution by querying the data dictionary views USER_PLSQL_OBJECT_SETTINGS, DBA_PLSQL_OBJECT_SETTINGS, and ALL_PLSQL_OBJECT_SETTINGS. For example, to check the status of the procedure MY_PROC, you could enter:
CREATE OR REPLACE PROCEDURE my_proc AS BEGIN NULL; END;
/
SELECT plsql_code_type FROM user_plsql_object_settings WHERE name = 'MY_PROC';
DROP PROCEDURE my_proc;
The PLSQL_CODE_TYPE column has a value of NATIVE for procedures that are compiled for native execution, and INTERPRETED otherwise.
After the procedures are compiled and turned into shared libraries, they are automatically linked into the Oracle process. You do not need to restart the database, or move the shared libraries to a different location. You can call back and forth between stored procedures, whether they are all interpreted, all compiled for native execution, or a mixture of both.
Dependencies, Invalidation and Revalidation
If an object on which a natively compiled PL/SQL subprogram depends changes, the subprogram is invalidated, such as when a table that it depends on is re-created. The next time the same subprogram is called, the database recompiles the subprogram automatically. Because the PLSQL_CODE_TYPE setting is stored inside the library unit for each subprogram, the automatic recompilation uses this stored setting for code type.
The stored settings are only used when recompiling as part of revalidation. If a PL/SQL subprogram is explicitly compiled through the SQL commands "create or replace" or "alter...compile", the current session setting is used.
The generated shared libraries are stored in the database, in the SYSTEM tablespace. The first time a natively compiled procedure is executed, the corresponding shared library is copied from the database to the directory specified by the initialization parameter PLSQL_NATIVE_LIBRARY_DIR.
Setting Up Databases for PL/SQL Native Compilation
Use the procedures in this section to set up an entire database for PL/SQL native compilation. The performance benefits apply to all the built-in PL/SQL packages, which are used for many database operations.
Creating a New Database for PL/SQL Native Compilation
If you use Database Configuration Assistant, use it to set the initialization parameters required for PL/SQL native compilation, as described in the preceding section, "System-Level Initialization Parameters for PL/SQL Native Compilation".
To find the supported C compiler on your operating system, refer to the table "Precompilers and Tools Restrictions and Requirements" in the installation guide for your operating system. Determine from your system administrator where it is located on your system. You will need to check that this path is correct in the spnc_commands file.
Determine if you have so many PL/SQL program units that you need to set the initialization parameter PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT, and create PL/SQL native library subdirectories if necessary. By default, PL/SQL program units are kept in one directory. If the number of program units exceeds 15,000, the operating system begins to impose performance limits. To work around this problem, Oracle Corporation recommends that you spread the PL/SQL program units among subdirectories.
If you have set up a test database, use this SQL query to determine how many PL/SQL program units you are using:
select count (*) from DBA_PLSQL_OBJECTS;
If the count returned by this query is greater than 15,000, complete the procedure described in the section "Setting Up PL/SQL Native Library Subdirectories".
Modifying an Existing Database for PL/SQL Native Compilation
To natively compile an existing database, complete the following procedure:
Contact your system administrator to ensure that you have the required C compiler on your operating system, and find the path for its location. Use a text editor such as vi to open the file spnc_commands, and make sure the command templates are correct.
As the oracle user, create the PL/SQL native library directory for each Oracle database.
Note:
You must set up PL/SQL libraries for each Oracle database. Shared libraries (.so and .dll files) are logically connected to the database. They cannot be shared between databases. If you set up PL/SQL libraries to be shared, the databases will be corrupted.
Create the directory in a secure place, in accordance with OFA rules, to protect the .so and .dll files from unauthorized access.
In addition, ensure that the compiler executables used for PL/SQL native compilation are writable only by a properly secured user.
The original copies of the shared libraries are stored inside the database, so they are backed up automatically with the database.
Using SQL, set the initialization parameter PLSQL_NATIVE_LIBRARY_DIR to the full path to the PL/SQL native library.
For example, if the path to the PL/SQL native library directory is /oracle/oradata/mydb/natlib, enter the following:
alter system set plsql_native_library_dir='/oracle/oradata/mydb/natlib'
Determine if you need to set the initialization parameter PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT, and create PL/SQL native library subdirectories if necessary.
By default, PL/SQL program units are kept in one directory. However, if the number of program units exceeds 15,000, then the operating system begins to impose performance limits. To work around this problem, Oracle Corporation recommends that you spread the PL/SQL program units among subdirectories.
If you have an existing database that you will migrate to the new installation, or if you have set up a test database, use the following SQL query to determine how many PL/SQL program units you are using:
select count (*) from DBA_PLSQL_OBJECTS;
Create the following stored procedure to confirm that PL/SQL native compilation is enabled:
CREATE OR REPLACE PROCEDURE Hello AS
BEGIN
dbms_output.put_line ( 'This output is from a natively compiled procedure.' );
END Hello;
/
Run the stored procedure:
CALL Hello();
Remember to SET SERVEROUTPUT ON in SQL*Plus before running the procedure. If the program does not return the expected output, contact Oracle Support for assistance.
Recompile all the PL/SQL subprograms in the database. The script $ORACLE_HOME/rdbms/admin/utlirp.sql is typically used here.
Setting Up PL/SQL Native Library Subdirectories
If you need to set up PL/SQL native library subdirectories, use the following procedure:
Create subdirectories sequentially in the form d0, d1, d2, d3 ... d(x-1), where x is the total number of directories. Oracle Corporation recommends that you use a script for this task. For example, you might run a PL/SQL block like the following, save its output to a file, then run that file as a shell script:
BEGIN
FOR j IN 0..999
LOOP
dbms_output.put_line ( 'mkdir d' || TO_CHAR(j) );
END LOOP;
END;
/
Set the initialization parameter PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT to the number of subdirectories you have created. For example, if you created 1000 subdirectories, enter the following SQL statement in SQL*Plus:
alter system set plsql_native_library_subdir_count=1000;
Example 11-8 Compiling a PL/SQL Procedure for Native Execution
alter session set plsql_code_type='NATIVE';
CREATE OR REPLACE PROCEDURE hello_native
AS
BEGIN
dbms_output.put_line('Hello world.');
dbms_output.put_line('Today is ' || TO_CHAR(SYSDATE) || '.');
END;
/
select plsql_code_type from user_plsql_object_settings
where name = 'HELLO_NATIVE';
alter session set plsql_code_type='INTERPRETED';
The procedure is immediately available to call, and runs as a shared library directly within the Oracle process. If any errors occur during compilation, you can see them using the USER_ERRORS view or the SHOW ERRORS command in SQL*Plus.
Limitations of Native Compilation
" Debugging tools for PL/SQL do not handle procedures compiled for native execution.
" When many procedures and packages (typically, over 15000) are compiled for native execution, the large number of shared objects in a single directory might affect system performance.
Real Application Clusters and PL/SQL Native Compilation
Because any node might need to compile a PL/SQL subprogram, each node in the cluster needs a C compiler and correct settings and paths in the $ORACLE_HOME/plsql/spnc_commands file.
When you use PL/SQL native compilation in a Real Application Clusters (RAC) environment, the original copies of the shared library files are stored in the databases, and these files are automatically propagated to all nodes in the cluster. You do not need to do any copying of libraries for this feature to work.
The reason for using a server parameter file (SPFILE) in the examples in this section is to make sure that all nodes of a RAC cluster use the same settings for the parameters that control PL/SQL native compilation.
Setting Up Transformation Pipelines with Table Functions
This section describes how to chain together special kinds of functions known as table functions. These functions are used in situations such as data warehousing to apply multiple transformations to data.
Overview of Table Functions
Table functions are functions that produce a collection of rows (either a nested table or a varray) that can be queried like a physical database table or assigned to a PL/SQL collection variable. You can use a table function like the name of a database table, in the FROM clause of a query, or like a column name in the SELECT list of a query.
A table function can take a collection of rows as input. An input collection parameter can be either a collection type (such as a VARRAY or a PL/SQL table) or a REF CURSOR.
Execution of a table function can be parallelized, and returned rows can be streamed directly to the next process without intermediate staging. Rows from a collection returned by a table function can also be pipelined; that is, iteratively returned as they are produced instead of in a batch after all processing of the table function's input is completed.
Streaming, pipelining, and parallel execution of table functions can improve performance:
" By enabling multi-threaded, concurrent execution of table functions
" By eliminating intermediate staging between processes
" By improving query response time: With non-pipelined table functions, the entire collection returned by a table function must be constructed and returned to the server before the query can return a single result row. Pipelining enables rows to be returned iteratively, as they are produced. This also reduces the memory that a table function requires, as the object cache does not need to materialize the entire collection.
" By iteratively providing result rows from the collection returned by a table function as the rows are produced instead of waiting until the entire collection is staged in tables or memory and then returning the entire collection.
Example 11-9 Example: Querying a Table Function
The following example shows a table function GetBooks that takes a CLOB as input and returns an instance of the collection type BookSet_t. The CLOB column stores a catalog listing of books in some format (either proprietary or following a standard such as XML). The table function returns all the catalogs and their corresponding book listings.
The collection type BookSet_t is defined as:
CREATE TYPE Book_t AS OBJECT ( name VARCHAR2(100), author VARCHAR2(30), abstract VARCHAR2(1000));
/
CREATE TYPE BookSet_t AS TABLE OF Book_t;
/
-- The CLOBs are stored in a table Catalogs:
CREATE TABLE Catalogs ( name VARCHAR2(30), cat CLOB );
Function GetBooks is defined as follows:
CREATE FUNCTION GetBooks(a CLOB) RETURN BookSet_t;
/
The query below returns all the catalogs and their corresponding book listings.
SELECT c.name, Book.name, Book.author, Book.abstract
FROM Catalogs c, TABLE(GetBooks(c.cat)) Book;
Example 11-10 Example: Assigning the Result of a Table Function
The following example shows how the result of a pipelined table function can be queried like a table. (You can also assign the result of a table function to a PL/SQL collection variable; when a table function is called from the SELECT list of a query, you do not need the TABLE keyword.)
CREATE TYPE numset_t AS TABLE OF NUMBER;
/
CREATE FUNCTION f1(x number) RETURN numset_t PIPELINED IS
BEGIN
FOR i IN 1..x LOOP
PIPE ROW(i);
END LOOP;
RETURN;
END;
/
-- pipelined function in from clause
select * from table(f1(3));
Using Pipelined Table Functions for Transformations
A pipelined table function can accept any argument that regular functions accept. A table function that accepts a REF CURSOR as an argument can serve as a transformation function. That is, it can use the REF CURSOR to fetch the input rows, perform some transformation on them, and then pipeline the results out.
For example, the following code sketches the declarations that define a StockPivot function. This function converts a row of the type (Ticker, OpenPrice, ClosePrice) into two rows of the form (Ticker, PriceType, Price). Calling StockPivot for the row ("ORCL", 41, 42) generates two rows: ("ORCL", "O", 41) and ("ORCL", "C", 42).
Input data for the table function might come from a source such as table StockTable:
CREATE TABLE StockTable (
ticker VARCHAR(4),
open_price NUMBER,
close_price NUMBER
);
-- Create the types for the table function's output collection
-- and collection elements
CREATE TYPE TickerType AS OBJECT
(
ticker VARCHAR2(4),
PriceType VARCHAR2(1),
price NUMBER
);
/
CREATE TYPE TickerTypeSet AS TABLE OF TickerType;
/
-- Define the ref cursor type
CREATE PACKAGE refcur_pkg IS
TYPE refcur_t IS REF CURSOR RETURN StockTable%ROWTYPE;
END refcur_pkg;
/
-- Create the table function
CREATE FUNCTION StockPivot(p refcur_pkg.refcur_t) RETURN TickerTypeSet
PIPELINED ... ;
/
Here is an example of a query that uses the StockPivot table function:
SELECT * FROM TABLE(StockPivot(CURSOR(SELECT * FROM StockTable)));
In the query above, the pipelined table function StockPivot fetches rows from the CURSOR subquery SELECT * FROM StockTable, performs the transformation, and pipelines the results back to the user as a table. The function produces two output rows (collection elements) for each input row.
Note that when a CURSOR subquery is passed from SQL to a REF CURSOR function argument as in the example above, the referenced cursor is already open when the function begins executing.
Writing a Pipelined Table Function
You declare a pipelined table function by specifying the PIPELINED keyword. This keyword indicates that the function returns rows iteratively. The return type of the pipelined table function must be a collection type, such as a nested table or a varray. You can declare this collection at the schema level or inside a package. Inside the function, you return individual elements of the collection type.
For example, here are declarations for two pipelined table functions. (The function bodies are shown in later examples.)
CREATE FUNCTION GetBooks(cat CLOB) RETURN BookSet_t
PIPELINED IS ...;
/
CREATE FUNCTION StockPivot(p refcur_pkg.refcur_t) RETURN TickerTypeSet
PIPELINED IS...;
/
Returning Results from Table Functions
In PL/SQL, the PIPE ROW statement causes a table function to pipe a row and continue processing. The statement enables a PL/SQL table function to return rows as soon as they are produced. (For performance, the PL/SQL runtime system provides the rows to the consumer in batches.) For example:
CREATE FUNCTION StockPivot(p refcur_pkg.refcur_t) RETURN TickerTypeSet
PIPELINED IS
out_rec TickerType := TickerType(NULL,NULL,NULL);
in_rec p%ROWTYPE;
BEGIN
LOOP
FETCH p INTO in_rec;
EXIT WHEN p%NOTFOUND;
-- first row
out_rec.ticker := in_rec.Ticker;
out_rec.PriceType := 'O';
out_rec.price := in_rec.OpenPrice;
PIPE ROW(out_rec);
-- second row
out_rec.PriceType := 'C';
out_rec.Price := in_rec.ClosePrice;
PIPE ROW(out_rec);
END LOOP;
CLOSE p;
RETURN;
END;
/
In the example, the PIPE ROW(out_rec) statement pipelines data out of the PL/SQL table function. out_rec is a record, and its type matches the type of an element of the output collection.
The PIPE ROW statement may be used only in the body of pipelined table functions; an error is raised if it is used anywhere else. The PIPE ROW statement can be omitted for a pipelined table function that returns no rows.
A pipelined table function must have a RETURN statement that does not return a value. The RETURN statement transfers the control back to the consumer and ensures that the next fetch gets a NO_DATA_FOUND exception.
Because table functions pass control back and forth to a calling routine as rows are produced, there is a restriction on combining table functions and PRAGMA AUTONOMOUS_TRANSACTION. If a table function is part of an autonomous transaction, it must COMMIT or ROLLBACK before each PIPE ROW statement, to avoid an error in the calling subprogram.
Oracle has three special SQL datatypes that enable you to dynamically encapsulate and access type descriptions, data instances, and sets of data instances of any other SQL type, including object and collection types. You can also use these three special types to create anonymous (that is, unnamed) types, including anonymous collection types. The types are SYS.ANYTYPE, SYS.ANYDATA, and SYS.ANYDATASET. The SYS.ANYDATA type can be useful in some situations as a return value from table functions.
Pipelining Data Between PL/SQL Table Functions
With serial execution, results are pipelined from one PL/SQL table function to another using an approach similar to co-routine execution. For example, the following statement pipelines results from function g to function f:
SELECT * FROM TABLE(f(CURSOR(SELECT * FROM TABLE(g()))));
Parallel execution works similarly except that each function executes in a different process (or set of processes).
Querying Table Functions
Pipelined table functions are used in the FROM clause of SELECT statements. The result rows are retrieved by Oracle iteratively from the table function implementation. For example:
SELECT x.Ticker, x.Price
FROM TABLE(StockPivot( CURSOR(SELECT * FROM StockTable))) x
WHERE x.PriceType='C';
Note:
A table function returns a collection. In some cases, such as when the top-level query uses SELECT * and the query refers to a PL/SQL variable or a bind variable, you may need a CAST operator around the table function to specify the exact return type.
Optimizing Multiple Calls to Table Functions
Multiple invocations of a table function, either within the same query or in separate queries, result in multiple executions of the underlying implementation. By default, there is no buffering or reuse of rows.
For example,
SELECT * FROM TABLE(f(...)) t1, TABLE(f(...)) t2
WHERE t1.id = t2.id;
SELECT * FROM TABLE(f());
SELECT * FROM TABLE(f());
If the function always produces the same result value for each combination of values passed in, you can declare the function DETERMINISTIC, and Oracle automatically buffers rows for it. If the function is not really deterministic, results are unpredictable.
Fetching from the Results of Table Functions
PL/SQL cursors and ref cursors can be defined for queries over table functions. For example:
OPEN c FOR SELECT * FROM TABLE(f(...));
Cursors over table functions have the same fetch semantics as ordinary cursors. REF CURSOR assignments based on table functions do not have any special semantics.
However, the SQL optimizer will not optimize across PL/SQL statements. For example:
DECLARE
r SYS_REFCURSOR;
BEGIN
OPEN r FOR SELECT * FROM TABLE(f(CURSOR(SELECT * FROM tab)));
SELECT * BULK COLLECT INTO rec_tab FROM TABLE(g(r));
END;
/
does not execute as well as:
SELECT * FROM TABLE(g(CURSOR(SELECT * FROM
TABLE(f(CURSOR(SELECT * FROM tab))))));
This is true even ignoring the overhead associated with executing two SQL statements, and assuming that the results can be pipelined between the two statements.
Passing Data with Cursor Variables
You can pass a set of rows to a PL/SQL function in a REF CURSOR parameter. For example, this function is declared to accept an argument of the predefined weakly typed REF CURSOR type SYS_REFCURSOR:
FUNCTION f(p1 IN SYS_REFCURSOR) RETURN ... ;
Results of a subquery can be passed to a function directly:
SELECT * FROM TABLE(f(CURSOR(SELECT empno FROM tab)));
In the example above, the CURSOR keyword is required to indicate that the results of a subquery should be passed as a REF CURSOR parameter.
Because SYS_REFCURSOR is a predefined weak REF CURSOR type, you do not need to first create a REF CURSOR type in a package before you can use it.
To use a strong REF CURSOR type, you still must create a PL/SQL package and declare a strong REF CURSOR type in it. Also, if you are using a strong REF CURSOR type as an argument to a table function, then the actual type of the REF CURSOR argument must match the column type, or an error is generated. Weak REF CURSOR arguments to table functions can only be partitioned using the PARTITION BY ANY clause. You cannot use range or hash partitioning for weak REF CURSOR arguments.
Example 11-11 Example: Using Multiple REF CURSOR Input Variables
PL/SQL functions can accept multiple REF CURSOR input variables:
CREATE FUNCTION g(p1 pkg.refcur_t1, p2 pkg.refcur_t2) RETURN...
PIPELINED ... ;
/
Function g can be invoked as follows:
SELECT * FROM TABLE(g(CURSOR(SELECT employee_id FROM tab),
CURSOR(SELECT * FROM employees)));
You can pass table function return values to other table functions by creating a REF CURSOR that iterates over the returned data:
SELECT * FROM TABLE(f(CURSOR(SELECT * FROM TABLE(g(...)))));
Example 11-12 Example: Explicitly Opening a REF CURSOR for a Query
You can explicitly open a REF CURSOR for a query and pass it as a parameter to a table function:
DECLARE
r SYS_REFCURSOR;
rec ...;
BEGIN
OPEN r FOR SELECT * FROM TABLE(f(...));
-- Must return a single row result set.
SELECT * INTO rec FROM TABLE(g(r));
END;
/
In this case, the table function closes the cursor when it completes, so your program should not explicitly try to close the cursor.
Example 11-13 Example: Using a Pipelined Table Function as an Aggregate Function
A table function can compute aggregate results using the input ref cursor. The following example computes a weighted average by iterating over a set of input rows.
DROP TABLE gradereport;
CREATE TABLE gradereport (student VARCHAR2(30), subject VARCHAR2(30), weight NUMBER, grade NUMBER);
INSERT INTO gradereport VALUES('Mark', 'Physics', 4, 4);
INSERT INTO gradereport VALUES('Mark','Chemistry', 4,3);
INSERT INTO gradereport VALUES('Mark','Maths', 3,3);
INSERT INTO gradereport VALUES('Mark','Economics', 3,4);
CREATE OR REPLACE TYPE gpa AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION weighted_average(input_values SYS_REFCURSOR)
RETURN gpa PIPELINED IS
grade NUMBER;
total NUMBER := 0;
total_weight NUMBER := 0;
weight NUMBER := 0;
BEGIN
-- The function accepts a ref cursor and loops through all the input rows.
LOOP
FETCH input_values INTO weight, grade;
EXIT WHEN input_values%NOTFOUND;
-- Accumulate the weighted average.
total_weight := total_weight + weight;
total := total + grade*weight;
END LOOP;
PIPE ROW (total / total_weight);
-- The function returns a single result.
RETURN;
END;
/
show errors;
-- The result comes back as a nested table with a single row.
-- COLUMN_VALUE is the pseudocolumn that returns the contents of a nested table.
select weighted_result.column_value from
table( weighted_average( cursor(select weight,grade from gradereport) ) ) weighted_result;
Performing DML Operations Inside Table Functions
To execute DML statements, declare a table function with the AUTONOMOUS_TRANSACTION pragma, which causes the function to execute in a new transaction not shared by other processes:
CREATE FUNCTION f(p SYS_REFCURSOR) return CollType PIPELINED IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN NULL; END;
/
During parallel execution, each instance of the table function creates an independent transaction.
Performing DML Operations on Table Functions
Table functions cannot be the target table in UPDATE, INSERT, or DELETE statements. For example, the following statements will raise an error:
UPDATE F(CURSOR(SELECT * FROM tab)) SET col = value;
INSERT INTO f(...) VALUES ('any', 'thing');
However, you can create a view over a table function and use INSTEAD OF triggers to update it. For example:
CREATE VIEW BookTable AS
SELECT x.Name, x.Author
FROM TABLE(GetBooks('data.txt')) x;
The following INSTEAD OF trigger is fired when the user inserts a row into the BookTable view:
CREATE TRIGGER BookTable_insert
INSTEAD OF INSERT ON BookTable
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
...
END;
/
INSERT INTO BookTable VALUES (...);
INSTEAD OF triggers can be defined for all DML operations on a view built on a table function.
Handling Exceptions in Table Functions
Exception handling in table functions works just as it does with regular functions.
Some languages, such as C and Java, provide a mechanism for user-supplied exception handling. If an exception raised within a table function is handled, the table function executes the exception handler and continues processing. Exiting the exception handler takes control to the enclosing scope. If the exception is cleared, execution proceeds normally.
An unhandled exception in a table function causes the parent transaction to roll back.
How PL/SQL Optimizes Your Programs
In releases prior to 10g, the PL/SQL compiler translated your code to m-code without applying many changes for performance. Now, PL/SQL uses an optimizing compiler that can rearrange code for better performance.
You do not need to do anything to get the benefits of this new optimizer. It is enabled by default. In rare cases, if the overhead of the optimizer makes compilation of very large applications take too long, you might lower the optimization by setting the initialization parameter PLSQL_OPTIMIZE_LEVEL=1 instead of its default value 2. In even rarer cases, you might see a change in exception behavior, either an exception that is not raised at all, or one that is raised earlier than expected. Setting PLSQL_OPTIMIZE_LEVEL=0 prevents the code from being rearranged at all.
When to Tune PL/SQL Code
The information in this chapter is especially valuable if you are responsible for:
" Programs that do a lot of mathematical calculations. You will want to investigate the datatypes PLS_INTEGER, BINARY_FLOAT, and BINARY_DOUBLE.
" Functions that are called from PL/SQL queries, where the functions might be executed millions of times. You will want to look at all performance features to make the function as efficient as possible, and perhaps a function-based index to precompute the results for each row and save on query time.
" Programs that spend a lot of time processing INSERT, UPDATE, or DELETE statements, or looping through query results. You will want to investigate the FORALL statement for issuing DML, and the BULK COLLECT INTO and RETURNING BULK COLLECT INTO clauses for queries.
" Older code that does not take advantage of recent PL/SQL language features. (With the many performance improvements in Oracle Database 10g, any code from earlier releases is a candidate for tuning.)
" Any program that spends a lot of time doing PL/SQL processing, as opposed to issuing DDL statements like CREATE TABLE that are just passed directly to SQL. You will want to investigate native compilation. Because many built-in database features use PL/SQL, you can apply this tuning feature to an entire database to improve performance in many areas, not just your own code.
Before starting any tuning effort, benchmark the current system and measure how long particular subprograms take. PL/SQL in Oracle Database 10g includes many automatic optimizations, so you might see performance improvements without doing any tuning.
Guidelines for Avoiding PL/SQL Performance Problems
When a PL/SQL-based application performs poorly, it is often due to badly written SQL statements, poor programming practices, inattention to PL/SQL basics, or misuse of shared memory.
Avoiding CPU Overhead in PL/SQL Code
Make SQL Statements as Efficient as Possible
PL/SQL programs look relatively simple because most of the work is done by SQL statements. Slow SQL statements are the main reason for slow execution.
If SQL statements are slowing down your program:
- Make sure you have appropriate indexes. There are different kinds of indexes for different situations. Your index strategy might be different depending on the sizes of various tables in a query, the distribution of data in each query, and the columns used in the WHERE clauses.
- Make sure you have up-to-date statistics on all the tables, using the subprograms in the DBMS_STATS package.
- Analyze the execution plans and performance of the SQL statements, using:
  - The EXPLAIN PLAN statement
  - The SQL Trace facility with the TKPROF utility
  - The Oracle Trace facility
- Rewrite the SQL statements if necessary. For example, query hints can avoid problems such as unnecessary full-table scans.
Some PL/SQL features also help improve the performance of SQL statements:
" If you are running SQL statements inside a PL/SQL loop, look at the FORALL statement as a way to replace loops of INSERT, UPDATE, and DELETE statements.
" If you are looping through the result set of a query, look at the BULK COLLECT clause of the SELECT INTO statement as a way to bring the entire result set into memory in a single operation.
Make Function Calls as Efficient as Possible
Badly written subprograms (for example, a slow sort or search function) can harm performance. Avoid unnecessary calls to subprograms, and optimize their code:
" If a function is called within a SQL query, you can cache the function value for each row by creating a function-based index on the table in the query. The CREATE INDEX statement might take a while, but queries can be much faster.
" If a column is passed to a function within an SQL query, the query cannot use regular indexes on that column, and the function might be called for every row in a (potentially very large) table. Consider nesting the query so that the inner query filters the results to a small number of rows, and the outer query calls the function only a few times:
BEGIN
-- Inefficient: calls SQRT for every row, then removes duplicates.
FOR item IN (SELECT DISTINCT(SQRT(department_id)) col_alias FROM employees)
LOOP
dbms_output.put_line(item.col_alias);
END LOOP;
-- Efficient: calls SQRT only once for each distinct value.
FOR item IN
( SELECT SQRT(department_id) col_alias FROM
( SELECT DISTINCT department_id FROM employees)
)
LOOP
dbms_output.put_line(item.col_alias);
END LOOP;
END;
/
If you use OUT or IN OUT parameters, PL/SQL adds some performance overhead to ensure correct behavior in case of exceptions (assigning a value to the OUT parameter, then exiting the subprogram because of an unhandled exception, so that the OUT parameter keeps its original value).
If your program does not depend on OUT parameters keeping their values in such situations, you can add the NOCOPY keyword to the parameter declarations, so the parameters are declared OUT NOCOPY or IN OUT NOCOPY.
This technique can give significant speedup if you are passing back large amounts of data in OUT parameters, such as collections, big VARCHAR2 values, or LOBs.
This technique also applies to member subprograms of object types. If these subprograms modify attributes of the object type, all the attributes are copied when the subprogram ends. To avoid this overhead, you can explicitly declare the first parameter of the member subprogram as SELF IN OUT NOCOPY, instead of relying on PL/SQL's implicit declaration SELF IN OUT.
Make Loops as Efficient as Possible
Because PL/SQL applications are often built around loops, it is important to optimize the loop itself and the code inside the loop:
" Move initializations or computations outside the loop if possible.
" To issue a series of DML statements, replace loop constructs with FORALL statements.
" To loop through a result set and store the values, use the BULK COLLECT clause on the query to bring the query results into memory in one operation.
" If you have to loop through a result set more than once, or issue other queries as you loop through a result set, you can probably enhance the original query to give you exactly the results you want. Some query operators to explore include UNION, INTERSECT, MINUS, and CONNECT BY.
" You can also nest one query inside another (known as a subselect) to do the filtering and sorting in multiple stages. For example, instead of calling a PL/SQL function in the inner WHERE clause (which might call the function once for each row of the table), you can filter the result set to a small set of rows in the inner query, and call the function in the outer query.
Don't Duplicate Built-in String Functions
PL/SQL provides many highly optimized string functions such as REPLACE, TRANSLATE, SUBSTR, INSTR, RPAD, and LTRIM. The built-in functions use low-level code that is more efficient than regular PL/SQL.
If you use PL/SQL string functions to search for regular expressions, consider using the built-in regular expression functions, such as REGEXP_SUBSTR.
Reorder Conditional Tests to Put the Least Expensive First
PL/SQL stops evaluating a logical expression as soon as the result can be determined (known as short-circuit evaluation).
When evaluating multiple conditions separated by AND or OR, put the least expensive ones first. For example, check the values of PL/SQL variables before testing function return values, because PL/SQL might be able to skip calling the functions.
Minimize Datatype Conversions
At run time, PL/SQL converts between different datatypes automatically. For example, assigning a PLS_INTEGER variable to a NUMBER variable results in a conversion because their internal representations are different.
Avoiding implicit conversions can improve performance. Use literals of the appropriate types: character literals in character expressions, decimal numbers in number expressions, and so on.
In the example below, the integer literal 15 must be converted to an Oracle NUMBER before the addition. The floating-point literal 15.0 is represented as a NUMBER, avoiding the need for a conversion.
DECLARE
n NUMBER;
c CHAR(5);
BEGIN
n := n + 15; -- converted implicitly; slow
n := n + 15.0; -- not converted; fast
c := 25; -- converted implicitly; slow
c := TO_CHAR(25); -- converted explicitly; still slow
c := '25'; -- not converted; fast
END;
/
Minimizing conversions might mean changing the types of your variables, or even working backward and designing your tables with different datatypes. Or, you might convert data once (such as from an INTEGER column to a PLS_INTEGER variable) and use the PL/SQL type consistently after that.
Use PLS_INTEGER or BINARY_INTEGER for Integer Arithmetic
When you need to declare a local integer variable, use the datatype PLS_INTEGER, which is the most efficient integer type. PLS_INTEGER values require less storage than INTEGER or NUMBER values, and PLS_INTEGER operations use machine arithmetic.
The BINARY_INTEGER datatype is just as efficient as PLS_INTEGER for any new code, but if you are running the same code on Oracle9i or Oracle8i databases, PLS_INTEGER is faster.
The datatype NUMBER and its subtypes are represented in a special internal format, designed for portability and arbitrary scale and precision, not performance. Even the subtype INTEGER is treated as a floating-point number with nothing after the decimal point. Operations on NUMBER or INTEGER variables require calls to library routines.
Avoid constrained subtypes such as INTEGER, NATURAL, NATURALN, POSITIVE, POSITIVEN, and SIGNTYPE in performance-critical code. Variables of these types require extra checking at run time, each time they are used in a calculation.
Use BINARY_FLOAT and BINARY_DOUBLE for Floating-Point Arithmetic
The datatype NUMBER and its subtypes are represented in a special internal format, designed for portability and arbitrary scale and precision, not performance. Operations on NUMBER or INTEGER variables require calls to library routines.
The BINARY_FLOAT and BINARY_DOUBLE types can use native machine arithmetic instructions, and are more efficient for number-crunching applications such as scientific processing. They also require less space in the database.
These types do not always represent fractional values precisely, and handle rounding differently than the NUMBER types. These types are less suitable for financial code where accuracy is critical.
Avoiding Memory Overhead in PL/SQL Code
Be Generous When Declaring Sizes for VARCHAR2 Variables
You might need to allocate large VARCHAR2 variables when you are not sure how big an expression result will be. You can actually conserve memory by declaring VARCHAR2 variables with large sizes, such as 32000, rather than estimating just a little on the high side, such as 256 or 1000. PL/SQL has an optimization that makes it easy to avoid overflow problems and still conserve memory. If you specify a size of 2000 or more characters for a VARCHAR2 variable, PL/SQL waits until you assign the variable, and then allocates only as much storage as needed.
Group Related Subprograms into Packages
When you call a packaged subprogram for the first time, the whole package is loaded into the shared memory pool. Subsequent calls to related subprograms in the package require no disk I/O, and your code executes faster. If the package is aged out of memory, it must be reloaded if you reference it again.
You can improve performance by sizing the shared memory pool correctly. Make sure it is large enough to hold all frequently used packages but not so large that memory is wasted.
Pin Packages in the Shared Memory Pool
You can "pin" frequently accessed packages in the shared memory pool, using the supplied package DBMS_SHARED_POOL. When a package is pinned, it is not aged out by the least recently used (LRU) algorithm that Oracle normally uses. The package remains in memory no matter how full the pool gets or how frequently you access the package.
Profiling and Tracing PL/SQL Programs
As you develop larger and larger PL/SQL applications, it becomes more difficult to isolate performance problems. PL/SQL provides a Profiler API to profile run-time behavior and to help you identify performance bottlenecks. PL/SQL also provides a Trace API for tracing the execution of programs on the server. You can use Trace to trace the execution by subprogram or exception.
Using The Profiler API: Package DBMS_PROFILER
The Profiler API is implemented as PL/SQL package DBMS_PROFILER, which provides services for gathering and saving run-time statistics. The information is stored in database tables, which you can query later. For example, you can learn how much time was spent executing each PL/SQL line and subprogram.
To use the Profiler, you start the profiling session, run your application long enough to get adequate code coverage, flush the collected data to the database, then stop the profiling session.
The Profiler traces the execution of your program, computing the time spent at each line and in each subprogram. You can use the collected data to improve performance. For instance, you might focus on subprograms that run slowly.
Analyzing the Collected Performance Data
The next step is to determine why more time was spent executing certain code segments or accessing certain data structures. Find the problem areas by querying the performance data. Focus on the subprograms and packages that use up the most execution time, inspecting possible performance bottlenecks such as SQL statements, loops, and recursive functions.
Using Trace Data to Improve Performance
Use the results of your analysis to rework slow algorithms. For example, due to an exponential growth in data, you might need to replace a linear search with a binary search. Also, look for inefficiencies caused by inappropriate data structures, and, if necessary, replace those data structures.
Using The Trace API: Package DBMS_TRACE
With large, complex applications, it becomes difficult to keep track of calls between subprograms. By tracing your code with the Trace API, you can see the order in which subprograms execute. The Trace API is implemented as PL/SQL package DBMS_TRACE, which provides services for tracing execution by subprogram or exception.
To use Trace, you start the tracing session, run your application, then stop the tracing session. As the program executes, trace data is collected and stored in database tables.
Controlling the Trace
Tracing large applications can produce huge amounts of data that are difficult to manage. Before starting Trace, you can optionally limit the volume of data collected by selecting specific subprograms for trace data collection.
In addition, you can choose a tracing level. For example, you can choose to trace all subprograms and exceptions, or you can choose to trace selected subprograms and exceptions.
Reducing Loop Overhead for DML Statements and Queries (FORALL, BULK COLLECT)
PL/SQL sends SQL statements such as DML and queries to the SQL engine for execution, and SQL returns the result data to PL/SQL. You can minimize the performance overhead of this communication between PL/SQL and SQL by using the PL/SQL language features known collectively as bulk SQL. The FORALL statement sends INSERT, UPDATE, or DELETE statements in batches, rather than one at a time. The BULK COLLECT clause brings back batches of results from SQL. If the DML statement affects four or more database rows, the use of bulk SQL can improve performance considerably.
The assigning of values to PL/SQL variables in SQL statements is called binding. PL/SQL binding operations fall into three categories:
" in-bind When a PL/SQL variable or host variable is stored in the database by an INSERT or UPDATE statement.
" out-bind When a database value is assigned to a PL/SQL variable or a host variable by the RETURNING clause of an INSERT, UPDATE, or DELETE statement.
" define When a database value is assigned to a PL/SQL variable or a host variable by a SELECT or FETCH statement.
Bulk SQL uses PL/SQL collections, such as varrays or nested tables, to pass large amounts of data back and forth in a single operation. This process is known as bulk binding. If the collection has 20 elements, bulk binding lets you perform the equivalent of 20 SELECT, INSERT, UPDATE, or DELETE statements using a single operation. Queries can pass back any number of results, without requiring a FETCH statement for each row.
To speed up INSERT, UPDATE, and DELETE statements, enclose the SQL statement within a PL/SQL FORALL statement instead of a loop construct.
To speed up SELECT statements, include the BULK COLLECT INTO clause in the SELECT statement instead of using INTO.
Using the FORALL Statement
The keyword FORALL lets you run multiple DML statements very efficiently. It can only repeat a single DML statement, unlike a general-purpose FOR loop.
The SQL statement can reference more than one collection, but FORALL only improves performance where the index value is used as a subscript.
Usually, the bounds specify a range of consecutive index numbers. If the index numbers are not consecutive, such as after you delete collection elements, you can use the INDICES OF or VALUES OF clause to iterate over just those index values that really exist.
The INDICES OF clause iterates over all of the index values in the specified collection, or only those between a lower and upper bound.
The VALUES OF clause refers to a collection that is indexed by BINARY_INTEGER or PLS_INTEGER and whose elements are of type BINARY_INTEGER or PLS_INTEGER. The FORALL statement iterates over the index values specified by the elements of this collection.
Example 11-1 Issuing DELETE Statements in a Loop
This FORALL statement sends all three DELETE statements to the SQL engine at once:
CREATE TABLE employees2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS VARRAY(20) OF NUMBER;
depts NumList := NumList(10, 30, 70); -- department numbers
BEGIN
FORALL i IN depts.FIRST..depts.LAST
DELETE FROM employees2 WHERE department_id = depts(i);
COMMIT;
END;
/
DROP TABLE employees2;
Example 11-2 Issuing INSERT Statements in a Loop
The following example loads some data into PL/SQL collections. Then it inserts the collection elements into a database table twice: first using a FOR loop, then using a FORALL statement. The FORALL version is much faster.
CREATE TABLE parts1 (pnum INTEGER, pname VARCHAR2(15));
CREATE TABLE parts2 (pnum INTEGER, pname VARCHAR2(15));
DECLARE
TYPE NumTab IS TABLE OF parts1.pnum%TYPE INDEX BY PLS_INTEGER;
TYPE NameTab IS TABLE OF parts1.pname%TYPE INDEX BY PLS_INTEGER;
pnums NumTab;
pnames NameTab;
iterations CONSTANT PLS_INTEGER := 500;
t1 INTEGER; t2 INTEGER; t3 INTEGER;
BEGIN
FOR j IN 1..iterations LOOP -- load index-by tables
pnums(j) := j;
pnames(j) := 'Part No. ' || TO_CHAR(j);
END LOOP;
t1 := dbms_utility.get_time;
FOR i IN 1..iterations LOOP -- use FOR loop
INSERT INTO parts1 VALUES (pnums(i), pnames(i));
END LOOP;
t2 := dbms_utility.get_time;
FORALL i IN 1..iterations -- use FORALL statement
INSERT INTO parts2 VALUES (pnums(i), pnames(i));
t3 := dbms_utility.get_time;
dbms_output.put_line('Execution Time (secs)');
dbms_output.put_line('---------------------');
dbms_output.put_line('FOR loop: ' || TO_CHAR((t2 - t1)/100));
dbms_output.put_line('FORALL: ' || TO_CHAR((t3 - t2)/100));
COMMIT;
END;
/
DROP TABLE parts1;
DROP TABLE parts2;
Executing this block should show that the FORALL version is much faster.
Example 11-3 Using FORALL with Part of a Collection
The bounds of the FORALL loop can apply to part of a collection, not necessarily all the elements:
CREATE TABLE employees2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS VARRAY(10) OF NUMBER;
depts NumList := NumList(5,10,20,30,50,55,57,60,70,75);
BEGIN
FORALL j IN 4..7 -- use only part of varray
DELETE FROM employees2 WHERE department_id = depts(j);
COMMIT;
END;
/
DROP TABLE employees2;
Example 11-4 Using FORALL with Non-Consecutive Index Values
You might need to delete some elements from a collection before using the collection in a FORALL statement. The INDICES OF clause processes sparse collections by iterating through only the remaining elements.
You might also want to leave the original collection alone, but process only some elements, process the elements in a different order, or process some elements more than once. Instead of copying the elements into new collections, which might use up substantial amounts of memory, the VALUES OF clause lets you set up simple collections whose elements serve as "pointers" to elements in the original collection.
The following example creates a collection holding some arbitrary data, a set of table names. Deleting some of the elements makes it a sparse collection that would not work in a default FORALL statement. The program uses a FORALL statement with the INDICES OF clause to insert the data into a table. It then sets up two more collections, pointing to certain elements from the original collection. The program stores each set of names in a different database table using FORALL statements with the VALUES OF clause.
-- Create empty tables to hold order details
CREATE TABLE valid_orders (cust_name VARCHAR2(32), amount NUMBER(10,2));
CREATE TABLE big_orders AS SELECT * FROM valid_orders WHERE 1 = 0;
CREATE TABLE rejected_orders AS SELECT * FROM valid_orders WHERE 1 = 0;
DECLARE
-- Make collections to hold a set of customer names and order amounts.
SUBTYPE cust_name IS valid_orders.cust_name%TYPE;
TYPE cust_typ IS TABLE OF cust_name;
cust_tab cust_typ;
SUBTYPE order_amount IS valid_orders.amount%TYPE;
TYPE amount_typ IS TABLE OF NUMBER;
amount_tab amount_typ;
-- Make other collections to point into the CUST_TAB collection.
TYPE index_pointer_t IS TABLE OF PLS_INTEGER;
big_order_tab index_pointer_t := index_pointer_t();
rejected_order_tab index_pointer_t := index_pointer_t();
PROCEDURE setup_data IS BEGIN
-- Set up sample order data, including some invalid orders and some 'big' orders.
cust_tab := cust_typ('Company 1','Company 2','Company 3','Company 4', 'Company 5');
amount_tab := amount_typ(5000.01, 0, 150.25, 4000.00, NULL);
END;
BEGIN
setup_data();
dbms_output.put_line('--- Original order data ---');
FOR i IN 1..cust_tab.LAST LOOP
dbms_output.put_line('Customer #' || i || ', ' || cust_tab(i) || ': $' || amount_tab(i));
END LOOP;
-- Delete invalid orders (where amount is null or 0).
FOR i IN 1..cust_tab.LAST LOOP
IF amount_tab(i) is null or amount_tab(i) = 0 THEN
cust_tab.delete(i);
amount_tab.delete(i);
END IF;
END LOOP;
dbms_output.put_line('--- Data with invalid orders deleted ---');
FOR i IN 1..cust_tab.LAST LOOP
IF cust_tab.EXISTS(i) THEN
dbms_output.put_line('Customer #' || i || ', ' || cust_tab(i) || ': $' || amount_tab(i));
END IF;
END LOOP;
-- Since the subscripts of our collections are not consecutive, we use
-- FORALL...INDICES OF to iterate through the actual subscripts, rather than 1..COUNT.
FORALL i IN INDICES OF cust_tab
INSERT INTO valid_orders(cust_name, amount) VALUES(cust_tab(i), amount_tab(i));
-- Now let's process the order data differently. We'll extract 2 subsets
-- and store each subset in a different table.
setup_data(); -- Initialize the CUST_TAB and AMOUNT_TAB collections again.
FOR i IN cust_tab.FIRST .. cust_tab.LAST LOOP
IF amount_tab(i) IS NULL OR amount_tab(i) = 0 THEN
rejected_order_tab.EXTEND; -- Add a new element to this collection.
rejected_order_tab(rejected_order_tab.LAST) := i; -- And record the subscript from the original collection.
END IF;
IF amount_tab(i) > 2000 THEN
big_order_tab.EXTEND; -- Add a new element to this collection.
big_order_tab(big_order_tab.LAST) := i; -- And record the subscript from the original collection.
END IF;
END LOOP;
-- Now it's easy to run one DML statement on one subset of elements, and another DML statement on a different subset.
FORALL i IN VALUES OF rejected_order_tab
INSERT INTO rejected_orders VALUES (cust_tab(i), amount_tab(i));
FORALL i IN VALUES OF big_order_tab
INSERT INTO big_orders VALUES (cust_tab(i), amount_tab(i));
COMMIT;
END;
/
-- Verify that the correct order details were stored.
SELECT cust_name "Customer", amount "Valid order amount" FROM valid_orders;
SELECT cust_name "Customer", amount "Big order amount" FROM big_orders;
SELECT cust_name "Customer", amount "Rejected order amount" FROM rejected_orders;
DROP TABLE valid_orders;
DROP TABLE big_orders;
DROP TABLE rejected_orders;
How FORALL Affects Rollbacks
In a FORALL statement, if any execution of the SQL statement raises an unhandled exception, all database changes made during previous executions are rolled back. However, if a raised exception is caught and handled, changes are rolled back to an implicit savepoint marked before each execution of the SQL statement. Changes made during previous executions are not rolled back. For example, suppose you create a database table that stores department numbers and job titles, as follows. Then, you change the job titles so that they are longer. The second UPDATE fails because the new value is too long for the column. Because we handle the exception, the first UPDATE is not rolled back and we can commit that change.
CREATE TABLE emp2 (deptno NUMBER(2), job VARCHAR2(18));
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(10, 20, 30);
BEGIN
INSERT INTO emp2 VALUES(10, 'Clerk');
INSERT INTO emp2 VALUES(20, 'Bookkeeper'); -- Lengthening this job title causes an exception.
INSERT INTO emp2 VALUES(30, 'Analyst');
COMMIT;
FORALL j IN depts.FIRST..depts.LAST -- Run 3 UPDATE statements.
UPDATE emp2 SET job = job || ' (Senior)' WHERE deptno = depts(j);
-- raises a "value too large" exception
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Problem in the FORALL statement.');
COMMIT; -- Commit results of successful updates.
END;
/
DROP TABLE emp2;
Counting Rows Affected by FORALL with the %BULK_ROWCOUNT Attribute
The cursor attributes SQL%FOUND, SQL%ISOPEN, SQL%NOTFOUND, and SQL%ROWCOUNT return useful information about the most recently executed DML statement.
The SQL cursor has one composite attribute, %BULK_ROWCOUNT, for use with the FORALL statement. This attribute works like an associative array: SQL%BULK_ROWCOUNT(i) stores the number of rows processed by the ith execution of an INSERT, UPDATE or DELETE statement. For example:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(30, 50, 60);
BEGIN
FORALL j IN depts.FIRST..depts.LAST
DELETE FROM emp2 WHERE department_id = depts(j);
-- How many rows were affected by each DELETE statement?
FOR i IN depts.FIRST..depts.LAST
LOOP
dbms_output.put_line('Iteration #' || i || ' deleted ' ||
SQL%BULK_ROWCOUNT(i) || ' rows.');
END LOOP;
END;
/
DROP TABLE emp2;
The FORALL statement and %BULK_ROWCOUNT attribute use the same subscripts. For example, if FORALL uses the range 5..10, so does %BULK_ROWCOUNT. If the FORALL statement uses the INDICES OF clause to process a sparse collection, %BULK_ROWCOUNT has corresponding sparse subscripts. If the FORALL statement uses the VALUES OF clause to process a subset of elements, %BULK_ROWCOUNT has subscripts corresponding to the values of the elements in the index collection. If the index collection contains duplicate elements, so that some DML statements are issued multiple times using the same subscript, then the corresponding elements of %BULK_ROWCOUNT represent the sum of all rows affected by the DML statement using that subscript. (For examples showing how to interpret %BULK_ROWCOUNT when using the INDICES OF and VALUES OF clauses, see the PL/SQL sample programs at http://otn.oracle.com/tech/pl_sql/.)
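For instance, here is a minimal sketch of the INDICES OF case, reusing the emp2 pattern from the example above; the subscripts of %BULK_ROWCOUNT match the sparse subscripts of the index collection:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(30, 50, 60);
BEGIN
depts.DELETE(2); -- make the collection sparse
FORALL j IN INDICES OF depts
DELETE FROM emp2 WHERE department_id = depts(j);
-- %BULK_ROWCOUNT is sparse as well: only subscripts 1 and 3 exist.
dbms_output.put_line('Subscript 1 deleted ' || SQL%BULK_ROWCOUNT(1) || ' rows.');
dbms_output.put_line('Subscript 3 deleted ' || SQL%BULK_ROWCOUNT(3) || ' rows.');
END;
/
DROP TABLE emp2;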
%BULK_ROWCOUNT is usually equal to 1 for inserts, because a typical insert operation affects only a single row. For the INSERT ... SELECT construct, %BULK_ROWCOUNT might be greater than 1. For example, the FORALL statement below inserts an arbitrary number of rows for each iteration. After each iteration, %BULK_ROWCOUNT returns the number of items inserted:
CREATE TABLE emp_by_dept AS SELECT employee_id, department_id
FROM employees WHERE 1 = 0;
DECLARE
TYPE dept_tab IS TABLE OF departments.department_id%TYPE;
deptnums dept_tab;
BEGIN
SELECT department_id BULK COLLECT INTO deptnums FROM departments;
FORALL i IN 1..deptnums.COUNT
INSERT INTO emp_by_dept
SELECT employee_id, department_id FROM employees
WHERE department_id = deptnums(i);
FOR i IN 1..deptnums.COUNT LOOP
-- Count how many rows were inserted for each department; that is,
-- how many employees are in each department.
dbms_output.put_line('Dept '||deptnums(i)||': inserted '||
SQL%BULK_ROWCOUNT(i)||' records');
END LOOP;
dbms_output.put_line('Total records inserted =' || SQL%ROWCOUNT);
END;
/
DROP TABLE emp_by_dept;
You can also use the scalar attributes %FOUND, %NOTFOUND, and %ROWCOUNT after running a FORALL statement. For example, %ROWCOUNT returns the total number of rows processed by all executions of the SQL statement.
%FOUND and %NOTFOUND refer only to the last execution of the SQL statement. You can use %BULK_ROWCOUNT to infer their values for individual executions. For example, when %BULK_ROWCOUNT(i) is zero, %FOUND and %NOTFOUND are FALSE and TRUE, respectively.
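For example, a minimal sketch along the lines of the earlier emp2 example:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(30, 50, 60);
BEGIN
FORALL j IN depts.FIRST..depts.LAST
DELETE FROM emp2 WHERE department_id = depts(j);
-- %ROWCOUNT totals all three DELETE executions.
dbms_output.put_line('Total rows deleted: ' || SQL%ROWCOUNT);
-- %NOTFOUND reflects only the last execution; infer the others from %BULK_ROWCOUNT.
IF SQL%BULK_ROWCOUNT(depts.LAST) = 0 THEN
dbms_output.put_line('The final DELETE removed no rows.');
END IF;
END;
/
DROP TABLE emp2;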
Handling FORALL Exceptions with the %BULK_EXCEPTIONS Attribute
PL/SQL provides a mechanism to handle exceptions raised during the execution of a FORALL statement. This mechanism enables a bulk-bind operation to save information about exceptions and continue processing.
To have a bulk bind complete despite errors, add the keywords SAVE EXCEPTIONS to your FORALL statement after the bounds, before the DML statement.
All exceptions raised during the execution are saved in the cursor attribute %BULK_EXCEPTIONS, which stores a collection of records. Each record has two fields:
" %BULK_EXCEPTIONS(i).ERROR_INDEX holds the "iteration" of the FORALL statement during which the exception was raised.
" %BULK_EXCEPTIONS(i).ERROR_CODE holds the corresponding Oracle error code.
The values stored by %BULK_EXCEPTIONS always refer to the most recently executed FORALL statement. The number of exceptions is saved in %BULK_EXCEPTIONS.COUNT. Its subscripts range from 1 to COUNT.
You might need to work backward to determine which collection element was used in the iteration that caused an exception. For example, if you use the INDICES OF clause to process a sparse collection, you must step through the elements one by one to find the one corresponding to %BULK_EXCEPTIONS(i).ERROR_INDEX. If you use the VALUES OF clause to process a subset of elements, you must find the element in the index collection whose subscript matches %BULK_EXCEPTIONS(i).ERROR_INDEX, and then use that element's value as the subscript to find the erroneous element in the original collection. (For examples showing how to find the erroneous elements when using the INDICES OF and VALUES OF clauses, see the PL/SQL sample programs at http://otn.oracle.com/tech/pl_sql/.)
If you omit the keywords SAVE EXCEPTIONS, execution of the FORALL statement stops when an exception is raised. In that case, SQL%BULK_EXCEPTIONS.COUNT returns 1, and SQL%BULK_EXCEPTIONS contains just one record. If no exception is raised during execution, SQL%BULK_EXCEPTIONS.COUNT returns 0.
Example 11-5 Bulk Operation That Continues Despite Exceptions
The following example shows how you can perform a number of DML operations, without stopping if some operations encounter errors:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF NUMBER;
-- The zeros in this list will cause divide-by-zero errors.
num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
errors NUMBER;
dml_errors EXCEPTION;
PRAGMA exception_init(dml_errors, -24381);
BEGIN
-- SAVE EXCEPTIONS means don't stop if some DELETEs fail.
FORALL i IN num_tab.FIRST..num_tab.LAST SAVE EXCEPTIONS
DELETE FROM emp2 WHERE salary > 500000/num_tab(i);
-- If any errors occurred during the FORALL SAVE EXCEPTIONS,
-- a single exception is raised when the statement completes.
EXCEPTION
WHEN dml_errors THEN -- Now we figure out what failed and why.
errors := SQL%BULK_EXCEPTIONS.COUNT;
dbms_output.put_line('Number of DELETE statements that failed: ' || errors);
FOR i IN 1..errors LOOP
dbms_output.put_line('Error #' || i || ' occurred during '||
'iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
dbms_output.put_line('Error message is ' ||
SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
END;
/
DROP TABLE emp2;
In this example, PL/SQL raised the predefined exception ZERO_DIVIDE when i equaled 2, 6, and 10. After the FORALL statement, SQL%BULK_EXCEPTIONS.COUNT returned 3, and the contents of SQL%BULK_EXCEPTIONS were (2,1476), (6,1476), and (10,1476). To get the Oracle error message (which includes the code), we negated the value of SQL%BULK_EXCEPTIONS(i).ERROR_CODE and passed the result to the error-reporting function SQLERRM, which expects a negative number. Here is the output:
Number of DELETE statements that failed: 3
Error #1 occurred during iteration #2
Error message is ORA-01476: divisor is equal to zero
Error #2 occurred during iteration #6
Error message is ORA-01476: divisor is equal to zero
Error #3 occurred during iteration #10
Error message is ORA-01476: divisor is equal to zero
Retrieving Query Results into Collections with the BULK COLLECT Clause
Using the keywords BULK COLLECT with a query is a very efficient way to retrieve the result set. Instead of looping through each row, you store the results in one or more collections, in a single operation. You can use these keywords in the SELECT INTO and FETCH INTO statements, and the RETURNING INTO clause.
With the BULK COLLECT clause, all the variables in the INTO list must be collections. The table columns can hold scalar or composite values, including object types. The following example loads two entire database columns into nested tables:
DECLARE
TYPE NumTab IS TABLE OF employees.employee_id%TYPE;
TYPE NameTab IS TABLE OF employees.last_name%TYPE;
enums NumTab; -- No need to initialize the collections.
names NameTab; -- Values will be filled in by the SELECT INTO.
PROCEDURE print_results IS
BEGIN
dbms_output.put_line('Results:');
FOR i IN enums.FIRST .. enums.LAST
LOOP
dbms_output.put_line(' Employee #' || enums(i) || ': ' ||
names(i));
END LOOP;
END;
BEGIN
SELECT employee_id, last_name -- Retrieve data for 10 arbitrary employees.
BULK COLLECT INTO enums, names
FROM employees WHERE ROWNUM < 11;
-- The data has all been brought into memory by BULK COLLECT.
-- No need to FETCH each row from the result set.
print_results;
SELECT employee_id, last_name -- Retrieve approximately 20% of all rows
BULK COLLECT INTO enums, names
FROM employees SAMPLE (20);
print_results;
END;
/
The collections are initialized automatically. Nested tables and associative arrays are extended to hold as many elements as needed. If you use varrays, all the return values must fit in the varray's declared size. Elements are inserted starting at index 1, overwriting any existing elements.
Since the processing of the BULK COLLECT INTO clause is similar to a FETCH loop, it does not raise a NO_DATA_FOUND exception if no rows match the query. You must check whether the resulting nested table or varray is null, or whether the resulting associative array has no elements.
To prevent the resulting collections from expanding without limit, you can use the pseudocolumn ROWNUM to limit the number of rows processed, or the SAMPLE clause to retrieve a random sample of rows:
DECLARE
TYPE SalList IS TABLE OF emp.sal%TYPE;
sals SalList;
BEGIN
-- Limit the number of rows to 100.
SELECT sal BULK COLLECT INTO sals FROM emp WHERE ROWNUM <= 100;
-- Retrieve 10% (approximately) of the rows in the table.
SELECT sal BULK COLLECT INTO sals FROM emp SAMPLE (10);
END;
/
You can process very large result sets by fetching a specified number of rows at a time from a cursor, as shown in the following sections.
Examples of Bulk-Fetching from a Cursor
Example 11-6 Bulk-Fetching from a Cursor Into One or More Collections
You can fetch from a cursor into one or more collections:
DECLARE
TYPE NameList IS TABLE OF employees.last_name%TYPE;
TYPE SalList IS TABLE OF employees.salary%TYPE;
CURSOR c1 IS SELECT last_name, salary FROM employees WHERE salary > 10000;
names NameList;
sals SalList;
TYPE RecList IS TABLE OF c1%ROWTYPE;
recs RecList;
PROCEDURE print_results IS
BEGIN
dbms_output.put_line('Results:');
IF names IS NULL OR names.COUNT = 0 THEN
RETURN; -- Don't print anything if collections are empty.
END IF;
FOR i IN names.FIRST .. names.LAST
LOOP
dbms_output.put_line(' Employee ' || names(i) || ': $' ||
sals(i));
END LOOP;
END;
BEGIN
dbms_output.put_line('--- Processing all results at once ---');
OPEN c1;
FETCH c1 BULK COLLECT INTO names, sals;
CLOSE c1;
print_results;
dbms_output.put_line('--- Processing 7 rows at a time ---');
OPEN c1;
LOOP
FETCH c1 BULK COLLECT INTO names, sals LIMIT 7;
EXIT WHEN c1%NOTFOUND;
print_results;
END LOOP;
-- Loop exits when fewer than 7 rows are fetched. Have to
-- process the last few. Need extra checking inside PRINT_RESULTS
-- in case it is called when the collection is empty.
print_results;
CLOSE c1;
dbms_output.put_line('--- Fetching records rather than columns ---');
OPEN c1;
FETCH c1 BULK COLLECT INTO recs;
FOR i IN recs.FIRST .. recs.LAST
LOOP
-- Now all the columns from the result set come from a single record.
dbms_output.put_line(' Employee ' || recs(i).last_name || ': $'
|| recs(i).salary);
END LOOP;
END;
/
Example 11-7 Bulk-Fetching from a Cursor Into a Collection of Records
You can fetch from a cursor into a collection of records:
DECLARE
TYPE DeptRecTab IS TABLE OF dept%ROWTYPE;
dept_recs DeptRecTab;
CURSOR c1 IS
SELECT deptno, dname, loc FROM dept WHERE deptno > 10;
BEGIN
OPEN c1;
FETCH c1 BULK COLLECT INTO dept_recs;
END;
/
Limiting the Rows for a Bulk FETCH Operation with the LIMIT Clause
The optional LIMIT clause, allowed only in bulk FETCH statements, limits the number of rows fetched from the database.
In the example below, with each iteration of the loop, the FETCH statement fetches ten rows (or fewer) into the index-by table empnos. The previous values are overwritten.
DECLARE
TYPE NumTab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
CURSOR c1 IS SELECT empno FROM emp;
empnos NumTab;
rows NATURAL := 10;
BEGIN
OPEN c1;
LOOP
/* The following statement fetches 10 rows (or less). */
FETCH c1 BULK COLLECT INTO empnos LIMIT rows;
EXIT WHEN c1%NOTFOUND;
...
END LOOP;
CLOSE c1;
END;
/
Retrieving DML Results into a Collection with the RETURNING INTO Clause
You can use the BULK COLLECT clause in the RETURNING INTO clause of an INSERT, UPDATE, or DELETE statement:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF employees.employee_id%TYPE;
enums NumList;
TYPE NameList IS TABLE OF employees.last_name%TYPE;
names NameList;
BEGIN
DELETE FROM emp2 WHERE department_id = 30
RETURNING employee_id, last_name BULK COLLECT INTO enums, names;
dbms_output.put_line('Deleted ' || SQL%ROWCOUNT || ' rows:');
FOR i IN enums.FIRST .. enums.LAST
LOOP
dbms_output.put_line('Employee #' || enums(i) || ': ' || names(i));
END LOOP;
END;
/
DROP TABLE emp2;
Using FORALL and BULK COLLECT Together
You can combine the BULK COLLECT clause with a FORALL statement. The output collections are built up as the FORALL statement iterates.
In the following example, the employee_id value of each deleted row is stored in the collection E_IDS. The collection DEPTS has 3 elements, so the FORALL statement iterates 3 times. If each DELETE issued by the FORALL statement deletes 5 rows, then E_IDS, which stores values from the deleted rows, has 15 elements when the statement completes:
CREATE TABLE emp2 AS SELECT * FROM employees;
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(10,20,30);
TYPE enum_t IS TABLE OF employees.employee_id%TYPE;
TYPE dept_t IS TABLE OF employees.department_id%TYPE;
e_ids enum_t;
d_ids dept_t;
BEGIN
FORALL j IN depts.FIRST..depts.LAST
DELETE FROM emp2 WHERE department_id = depts(j)
RETURNING employee_id, department_id BULK COLLECT INTO e_ids, d_ids;
dbms_output.put_line('Deleted ' || SQL%ROWCOUNT || ' rows:');
FOR i IN e_ids.FIRST .. e_ids.LAST
LOOP
dbms_output.put_line('Employee #' || e_ids(i) || ' from dept #' || d_ids(i));
END LOOP;
END;
/
DROP TABLE emp2;
The column values returned by each execution are added to the values returned previously. If you use a FOR loop instead of the FORALL statement, the set of returned values is overwritten by each DELETE statement.
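For instance, a fragment (reusing the declarations from the example above) showing the FOR loop behavior:
FOR j IN depts.FIRST..depts.LAST LOOP
DELETE FROM emp2 WHERE department_id = depts(j)
RETURNING employee_id BULK COLLECT INTO e_ids;
-- After each iteration, e_ids holds only the rows from this one DELETE.
END LOOP;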
You cannot use the SELECT ... BULK COLLECT statement in a FORALL statement.
Using Host Arrays with Bulk Binds
Client-side programs can use anonymous PL/SQL blocks to bulk-bind input and output host arrays. This is the most efficient way to pass collections to and from the database server.
Host arrays are declared in a host environment such as an OCI or a Pro*C program and must be prefixed with a colon to distinguish them from PL/SQL collections. In the example below, an input host array is used in a DELETE statement. At run time, the anonymous PL/SQL block is sent to the database server for execution.
DECLARE
...
BEGIN
-- assume that values were assigned to the host array
-- and host variables in the host environment
FORALL i IN :lower..:upper
DELETE FROM employees WHERE department_id = :depts(i);
COMMIT;
END;
/
Writing Computation-Intensive Programs in PL/SQL
The BINARY_FLOAT and BINARY_DOUBLE datatypes make it practical to write PL/SQL programs for number-crunching in scientific applications involving floating-point calculations. These datatypes behave much like the native floating-point types on many hardware systems, with semantics derived from the IEEE-754 floating-point standard.
The way these datatypes represent decimal data makes them less suitable for financial applications, where precise representation of fractional amounts is more important than pure performance.
The PLS_INTEGER and BINARY_INTEGER datatypes are PL/SQL-only datatypes that are more efficient than the SQL datatypes NUMBER or INTEGER for integer arithmetic. You can use PLS_INTEGER to write pure PL/SQL code for integer arithmetic, or convert NUMBER or INTEGER values to PLS_INTEGER for manipulation by PL/SQL.
In previous releases, PLS_INTEGER was more efficient than BINARY_INTEGER. Now, they have similar performance, but you might still prefer PLS_INTEGER if your code might be run under older database releases.
Within a package, you can write overloaded versions of procedures and functions that accept different numeric parameters. The math routines can be optimized for each kind of parameter (BINARY_FLOAT, BINARY_DOUBLE, NUMBER, PLS_INTEGER), avoiding unnecessary conversions.
The built-in math functions such as SQRT, SIN, COS, and so on already have fast overloaded versions that accept BINARY_FLOAT and BINARY_DOUBLE parameters. You can speed up math-intensive code by passing variables of these types to such functions, and by calling the TO_BINARY_FLOAT or TO_BINARY_DOUBLE functions when passing expressions to such functions.
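As a minimal illustration (the loop bound and values are arbitrary), integer bookkeeping can stay in PLS_INTEGER while the floating-point work stays in BINARY_DOUBLE, using the fast overload of SQRT:
DECLARE
total BINARY_DOUBLE := 0;
n PLS_INTEGER := 0;
BEGIN
FOR i IN 1..10000 LOOP
n := n + 1; -- integer arithmetic in PLS_INTEGER
total := total + SQRT(TO_BINARY_DOUBLE(i)); -- BINARY_DOUBLE overload of SQRT
END LOOP;
dbms_output.put_line('Counted ' || n || ' terms; sum of square roots = ' || total);
END;
/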
Tuning Dynamic SQL with EXECUTE IMMEDIATE and Cursor Variables
Some programs (a general-purpose report writer for example) must build and process a variety of SQL statements, where the exact text of the statement is unknown until run time. Such statements probably change from execution to execution. They are called dynamic SQL statements.
Formerly, to execute dynamic SQL statements, you had to use the supplied package DBMS_SQL. Now, within PL/SQL, you can execute any kind of dynamic SQL statement using an interface called native dynamic SQL. The main PL/SQL features involved are the EXECUTE IMMEDIATE statement and cursor variables (also known as REF CURSORs).
Native dynamic SQL code is more compact and much faster than calling the DBMS_SQL package. The following example declares a cursor variable, then associates it with a dynamic SELECT statement:
DECLARE
TYPE EmpCurTyp IS REF CURSOR;
emp_cv EmpCurTyp;
my_ename VARCHAR2(15);
my_sal NUMBER := 1000;
table_name VARCHAR2(30) := 'employees';
BEGIN
OPEN emp_cv FOR 'SELECT last_name, salary FROM ' || table_name ||
' WHERE salary > :s' USING my_sal;
CLOSE emp_cv;
END;
/
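For a statement that is not a query, a minimal EXECUTE IMMEDIATE sketch (the statement text and bind value are illustrative):
DECLARE
sql_stmt VARCHAR2(200);
BEGIN
sql_stmt := 'UPDATE employees SET salary = salary * 1.1 WHERE employee_id = :id';
EXECUTE IMMEDIATE sql_stmt USING 100;
END;
/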
Tuning PL/SQL Procedure Calls with the NOCOPY Compiler Hint
By default, OUT and IN OUT parameters are passed by value. The values of any IN OUT parameters are copied before the subprogram is executed. During subprogram execution, temporary variables hold the output parameter values. If the subprogram exits normally, these values are copied to the actual parameters. If the subprogram exits with an unhandled exception, the original parameters are unchanged.
When the parameters represent large data structures such as collections, records, and instances of object types, this copying slows down execution and uses up memory. In particular, this overhead applies to each call to an object method: temporary copies are made of all the attributes, so that any changes made by the method are only applied if the method exits normally.
To avoid this overhead, you can specify the NOCOPY hint, which allows the PL/SQL compiler to pass OUT and IN OUT parameters by reference. If the subprogram exits normally, the behavior is the same as normal. If the subprogram exits early with an exception, the values of OUT and IN OUT parameters (or object attributes) might still change. To use this technique, ensure that the subprogram handles all exceptions.
The following example asks the compiler to pass IN OUT parameter MY_STAFF by reference, to avoid copying the varray on entry to and exit from the subprogram:
DECLARE
TYPE Staff IS VARRAY(200) OF Employee;
PROCEDURE reorganize (my_staff IN OUT NOCOPY Staff) IS ...
BEGIN
NULL;
END;
/
The following example loads 50,000 records into a local nested table, which is passed to two local procedures that do nothing. A call to the procedure that uses NOCOPY takes much less time.
DECLARE
TYPE EmpTabTyp IS TABLE OF employees%ROWTYPE;
emp_tab EmpTabTyp := EmpTabTyp(NULL); -- initialize
t1 NUMBER;
t2 NUMBER;
t3 NUMBER;
PROCEDURE get_time (t OUT NUMBER) IS
BEGIN t := dbms_utility.get_time; END;
PROCEDURE do_nothing1 (tab IN OUT EmpTabTyp) IS
BEGIN NULL; END;
PROCEDURE do_nothing2 (tab IN OUT NOCOPY EmpTabTyp) IS
BEGIN NULL; END;
BEGIN
SELECT * INTO emp_tab(1) FROM employees WHERE employee_id = 100;
emp_tab.EXTEND(49999, 1); -- copy element 1 into 2..50000
get_time(t1);
do_nothing1(emp_tab); -- pass IN OUT parameter
get_time(t2);
do_nothing2(emp_tab); -- pass IN OUT NOCOPY parameter
get_time(t3);
dbms_output.put_line('Call Duration (secs)');
dbms_output.put_line('--------------------');
dbms_output.put_line('Just IN OUT: ' || TO_CHAR((t2 - t1)/100.0));
dbms_output.put_line('With NOCOPY: ' || TO_CHAR((t3 - t2)/100.0));
END;
/
Restrictions on NOCOPY
The use of NOCOPY increases the likelihood of parameter aliasing. For more information, see "Understanding Subprogram Parameter Aliasing".
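As a minimal sketch of that aliasing (the names here are illustrative): if the compiler honors the hint, the formal parameter and the global variable refer to the same collection, so a change through one name is visible through the other:
DECLARE
TYPE NumTab IS TABLE OF NUMBER;
g NumTab := NumTab(1);
PROCEDURE p (x IN OUT NOCOPY NumTab) IS
BEGIN
g(1) := 99; -- update the actual parameter through the global name
dbms_output.put_line(x(1)); -- prints 99 if passed by reference, 1 if copied
END;
BEGIN
p(g);
END;
/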
Remember, NOCOPY is a hint, not a directive. In the following cases, the PL/SQL compiler ignores the NOCOPY hint and uses the by-value parameter-passing method; no error is generated:
" The actual parameter is an element of an associative array. This restriction does not apply if the parameter is an entire associative array.
" The actual parameter is constrained, such as by scale or NOT NULL. This restriction does not apply to size-constrained character strings. This restriction does not extend to constrained elements or attributes of composite types.
" The actual and formal parameters are records, one or both records were declared using %ROWTYPE or %TYPE, and constraints on corresponding fields in the records differ.
" The actual and formal parameters are records, the actual parameter was declared (implicitly) as the index of a cursor FOR loop, and constraints on corresponding fields in the records differ.
" Passing the actual parameter requires an implicit datatype conversion.
" The subprogram is called through a database link or as an external procedure.
Compiling PL/SQL Code for Native Execution
You can speed up PL/SQL procedures by compiling them into native code residing in shared libraries. The procedures are translated into C code, then compiled with your usual C compiler and linked into the Oracle process.
You can use this technique with both the supplied Oracle packages and procedures you write yourself. Procedures compiled this way work in all server environments, such as the shared server configuration (formerly known as multi-threaded server) and Oracle Real Application Clusters.
Before You Begin
If you are a first-time user of native PL/SQL compilation, try it first with a test database, before proceeding to a production environment.
Always back up your database before configuring the database for PL/SQL native compilation. If you find that the performance benefit is outweighed by extra compilation time, it might be faster to restore from a backup than to recompile everything in interpreted mode.
Some of the setup steps require DBA authority. You must change the values of some initialization parameters, and create a new directory on the database server, preferably near the data files for the instance. The database server also needs a C compiler; on a cluster, the compiler is needed on each node. Even if you can test out these steps yourself on a development machine, you will generally need to consult with a DBA and enlist their help to use native compilation on a production server.
Contact your system administrator to ensure that you have the required C compiler on your operating system, and find the path for its location. Use a text editor such as vi to open the file $ORACLE_HOME/plsql/spnc_commands, and make sure the command templates are correct. Generally, you should not need to make any changes here, just confirm that the setup is correct.
Determining Whether to Use PL/SQL Native Compilation
PL/SQL native compilation provides the greatest performance gains for computation-intensive procedural operations. Examples of such operations are data warehouse applications, and applications with extensive server-side transformations of data for display. In such cases, expect speed increases of up to 30%.
Because this technique cannot do much to speed up SQL statements called from PL/SQL, it is most effective for compute-intensive PL/SQL procedures that do not spend most of their time executing SQL. You can test to see how much performance gain you can get by enabling PL/SQL native compilation.
It takes longer to compile program units with native compilation than to use the default interpreted mode. You might turn off native compilation during the busiest parts of the development cycle, where code is being frequently recompiled.
When you have decided that you will have significant performance gains in database operations using PL/SQL native compilation, Oracle Corporation recommends that you compile the whole database using the NATIVE setting. Compiling all the PL/SQL code in the database means you see the speedup in your own code, and in calls to all the built-in PL/SQL packages.
How PL/SQL Native Compilation Works
If you do not use native compilation, each PL/SQL program unit is compiled into an intermediate form, machine-readable code (m-code). The m-code is stored in the database dictionary and interpreted at run time.
With PL/SQL native compilation, the PL/SQL statements are turned into C code that bypasses all the runtime interpretation, giving faster runtime performance.
PL/SQL uses the command file $ORACLE_HOME/plsql/spnc_commands, and the supported operating system C compiler and linker, to compile and link the resulting C code into shared libraries. The shared libraries are stored inside the data dictionary, so that they can be backed up automatically and are protected from being deleted. These shared library files are copied to the filesystem and are loaded and run when the PL/SQL subprogram is invoked. If the files are deleted from the filesystem while the database is shut down, or if you change the directory that holds the libraries, they are extracted again automatically.
Although PL/SQL program units that just call SQL statements might see little or no speedup, natively compiled PL/SQL is always at least as fast as the corresponding interpreted code. The compiled code makes the same library calls as the interpreted code would, so its behavior is exactly the same.
Format of the spnc_commands File
The spnc_commands file, in the $ORACLE_HOME/plsql directory, contains the templates for the commands to compile and link each program. Some special names such as %(src) are predefined, and are replaced by the corresponding filename. The variable $(ORACLE_HOME) is replaced by the location of the Oracle home directory. You can include comment lines, starting with a # character. The file contains comments that explain all the special notation.
The spnc_commands file contains a predefined path for the C compiler, depending on the particular operating system. (One specific compiler is supported on each operating system.) In most cases, you should not need to change this path, but you might if the system administrator has installed the compiler in another location.
System-Level Initialization Parameters for PL/SQL Native Compilation
The following table lists the initialization parameters you must set before using PL/SQL native compilation. They can be set only at the system level, not by an ALTER SESSION command. You cannot use variables such as ORACLE_HOME in the values; use the full path instead.
Note:
The examples in this section for setting system parameters for PL/SQL native compilation assume a system using a server parameter file (SPFILE).
If you use a text initialization parameter file (PFILE, or initsid.ora), ensure that you change parameters in your initialization parameter file, as indicated in the following table.
PLSQL_NATIVE_LIBRARY_DIR
The full path and directory name used to store the shared libraries that contain natively compiled PL/SQL code. In accordance with optimal flexible architecture (OFA) rules, Oracle Corporation recommends that you create the shared library directory as a subdirectory of the directory where the data files are located. For security reasons, only the users oracle and root should have write privileges for this directory.
PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT
The number of subdirectories in the directory specified by PLSQL_NATIVE_LIBRARY_DIR. This parameter is optional; set it if the number of natively compiled program units exceeds 15,000. If you need to set this parameter, see Oracle Database Reference for complete details about the initialization parameters and data dictionary views.
Setting Up and Using PL/SQL Native Compilation
To speed up one or more subprograms through native compilation:
Set up the PLSQL_NATIVE_LIBRARY_DIR initialization parameter, and optionally the PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT initialization parameter, as described above.
Use the ALTER SYSTEM or ALTER SESSION command, or update your initialization file, to set the parameter PLSQL_CODE_TYPE to the value NATIVE.
Compile one or more subprograms, using one of these methods:
" Use CREATE OR REPLACE to create or recompile the subprogram.
" Use the ALTER PROCEDURE, ALTER FUNCTION, or ALTER PACKAGE command with the COMPILE option to recompile the subprogram or the entire package. (You can also use the clause PLSQL_CODE_TYPE = NATIVE with the ALTER statements to affect specific subprograms without changing the initialization parameter for the whole session.)
" Drop the subprogram and create it again.
" Run one of the SQL*Plus scripts that creates a set of Oracle-supplied packages.
" Create a database using a preconfigured initialization file with PLSQL_CODE_TYPE=NATIVE. During database creation, the UTLIRP script is run to compile all the Oracle-supplied packages.
To be sure that the process worked, you can query the data dictionary views USER_PLSQL_OBJECT_SETTINGS, DBA_PLSQL_OBJECT_SETTINGS, and ALL_PLSQL_OBJECT_SETTINGS to see whether a procedure is compiled for native execution. For example, to check the status of the procedure MY_PROC, you could enter:
CREATE OR REPLACE PROCEDURE my_proc AS BEGIN NULL; END;
/
SELECT plsql_code_type FROM user_plsql_object_settings WHERE name = 'MY_PROC';
DROP PROCEDURE my_proc;
The PLSQL_CODE_TYPE column has a value of NATIVE for procedures that are compiled for native execution, and INTERPRETED otherwise.
After the procedures are compiled and turned into shared libraries, they are automatically linked into the Oracle process. You do not need to restart the database, or move the shared libraries to a different location. You can call back and forth between stored procedures, whether they are all interpreted, all compiled for native execution, or a mixture of both.
Dependencies, Invalidation and Revalidation
Recompilation happens automatically when a natively compiled PL/SQL subprogram is invalidated, such as when a table that it depends on is re-created.
If an object on which a natively compiled PL/SQL subprogram depends changes, the subprogram is invalidated. The next time the same subprogram is called, the database recompiles the subprogram automatically. Because the PLSQL_CODE_TYPE setting is stored inside the library unit for each subprogram, the automatic recompilation uses this stored setting for code type.
The stored settings are only used when recompiling as part of revalidation. If a PL/SQL subprogram is explicitly compiled through the SQL commands "create or replace" or "alter...compile", the current session setting is used.
The generated shared libraries are stored in the database, in the SYSTEM tablespace. The first time a natively compiled procedure is executed, the corresponding shared library is copied from the database to the directory specified by the initialization parameter PLSQL_NATIVE_LIBRARY_DIR.
Setting Up Databases for PL/SQL Native Compilation
Use the procedures in this section to set up an entire database for PL/SQL native compilation. The performance benefits apply to all the built-in PL/SQL packages, which are used for many database operations.
Creating a New Database for PL/SQL Native Compilation
If you use Database Configuration Assistant, use it to set the initialization parameters required for PL/SQL native compilation, as described in the preceding section, "System-Level Initialization Parameters for PL/SQL Native Compilation".
To find the supported C compiler on your operating system, refer to the table "Precompilers and Tools Restrictions and Requirements" in the installation guide for your operating system. Determine from your system administrator where it is located on your system. You will need to check that this path is correct in the spnc_commands file.
Determine whether you have so many PL/SQL program units that you need to set the initialization parameter PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT, and create PL/SQL native library subdirectories if necessary. By default, PL/SQL program units are kept in one directory. If the number of program units exceeds 15,000, the operating system begins to impose performance limits. To work around this problem, Oracle Corporation recommends that you spread the PL/SQL program units among subdirectories.
If you have set up a test database, use this SQL query to determine how many PL/SQL program units you are using:
select count (*) from DBA_PLSQL_OBJECTS;
If the count returned by this query is greater than 15,000, complete the procedure described in the section "Setting Up PL/SQL Native Library Subdirectories".
Modifying an Existing Database for PL/SQL Native Compilation
To natively compile an existing database, complete the following procedure:
Contact your system administrator to ensure that you have the required C compiler on your operating system, and find the path for its location. Use a text editor such as vi to open the file spnc_commands, and make sure the command templates are correct.
As the oracle user, create the PL/SQL native library directory for each Oracle database.
Note:
You must set up PL/SQL libraries for each Oracle database. Shared libraries (.so and .dll files) are logically connected to the database. They cannot be shared between databases. If you set up PL/SQL libraries to be shared, the databases will be corrupted.
Create a directory in a secure place, in accordance with OFA rules, to prevent .so and .dll files from unauthorized access.
In addition, ensure that the compiler executables used for PL/SQL native compilation are writable only by a properly secured user.
The original copies of the shared libraries are stored inside the database, so they are backed up automatically with the database.
Using SQL, set the initialization parameter PLSQL_NATIVE_LIBRARY_DIR to the full path to the PL/SQL native library.
For example, if the path to the PL/SQL native library directory is /oracle/oradata/mydb/natlib, enter the following:
alter system set plsql_native_library_dir='/oracle/oradata/mydb/natlib';
Determine if you need to set the initialization parameter PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT, and create PL/SQL native library subdirectories if necessary.
By default, PL/SQL program units are kept in one directory. However, if the number of program units exceeds 15,000, the operating system begins to impose performance limits. To work around this problem, Oracle Corporation recommends that you spread the PL/SQL program units among subdirectories.
If you have an existing database that you will migrate to the new installation, or if you have set up a test database, use the following SQL query to determine how many PL/SQL program units you are using:
select count (*) from DBA_PLSQL_OBJECTS;
Create the following stored procedure to confirm that PL/SQL native compilation is enabled:
CREATE OR REPLACE PROCEDURE Hello AS
BEGIN
dbms_output.put_line ( 'This output is from a natively compiled procedure.' );
END Hello;
/
Run the stored procedure:
CALL Hello();
If the program does not return the expected output, contact Oracle Support for assistance. (Remember to SET SERVEROUTPUT ON in SQL*Plus before running the procedure.)
Recompile all the PL/SQL subprograms in the database. The script $ORACLE_HOME/rdbms/admin/utlirp.sql is typically used here.
Setting Up PL/SQL Native Library Subdirectories
If you need to set up PL/SQL native library subdirectories, use the following procedure:
Create subdirectories sequentially in the form of d0, d1, d2, d3...dx, where x is the total number of directories. Oracle Corporation recommends that you use a script for this task. For example, you might run a PL/SQL block like the following, save its output to a file, then run that file as a shell script:
BEGIN
FOR j IN 0..999
LOOP
dbms_output.put_line ( 'mkdir d' || TO_CHAR(j) );
END LOOP;
END;
/
Set the initialization parameter PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT to the number of subdirectories you have created. For example, if you created 1000 subdirectories, enter the following SQL statement in SQL*Plus:
alter system set plsql_native_library_subdir_count=1000;
Example 11-8 Compiling a PL/SQL Procedure for Native Execution
alter session set plsql_code_type='NATIVE';
CREATE OR REPLACE PROCEDURE hello_native
AS
BEGIN
dbms_output.put_line('Hello world.');
dbms_output.put_line('Today is ' || TO_CHAR(SYSDATE) || '.');
END;
/
select plsql_code_type from user_plsql_object_settings
where name = 'HELLO_NATIVE';
alter session set plsql_code_type='INTERPRETED';
The procedure is immediately available to call, and runs as a shared library directly within the Oracle process. If any errors occur during compilation, you can see them using the USER_ERRORS view or the SHOW ERRORS command in SQL*Plus.
Limitations of Native Compilation
" Debugging tools for PL/SQL do not handle procedures compiled for native execution.
" When many procedures and packages (typically, over 15000) are compiled for native execution, the large number of shared objects in a single directory might affect system performance.
Real Application Clusters and PL/SQL Native Compilation
Because any node might need to compile a PL/SQL subprogram, each node in the cluster needs a C compiler and correct settings and paths in the $ORACLE_HOME/plsql/spnc_commands file.
When you use PLSQL native compilation in a Real Application Clusters (RAC) environment, the original copies of the shared library files are stored in the databases, and these files are automatically propagated to all nodes in the cluster. You do not need to do any copying of libraries for this feature to work.
The reason for using a server parameter file (SPFILE) in the examples in this section is to make sure that all nodes of a RAC cluster use the same settings for the parameters that control PL/SQL native compilation.
Setting Up Transformation Pipelines with Table Functions
This section describes how to chain together special kinds of functions known as table functions. These functions are used in situations such as data warehousing to apply multiple transformations to data.
Overview of Table Functions
Table functions are functions that produce a collection of rows (either a nested table or a varray) that can be queried like a physical database table or assigned to a PL/SQL collection variable. You can use a table function like the name of a database table, in the FROM clause of a query, or like a column name in the SELECT list of a query.
A table function can take a collection of rows as input. An input collection parameter can be either a collection type (such as a VARRAY or a PL/SQL table) or a REF CURSOR.
Execution of a table function can be parallelized, and returned rows can be streamed directly to the next process without intermediate staging. Rows from a collection returned by a table function can also be pipelined; that is, returned iteratively as they are produced, instead of in a batch after all processing of the table function's input is completed.
Streaming, pipelining, and parallel execution of table functions can improve performance:
" By enabling multi-threaded, concurrent execution of table functions
" By eliminating intermediate staging between processes
" By improving query response time: With non-pipelined table functions, the entire collection returned by a table function must be constructed and returned to the server before the query can return a single result row. Pipelining enables rows to be returned iteratively, as they are produced. This also reduces the memory that a table function requires, as the object cache does not need to materialize the entire collection.
" By iteratively providing result rows from the collection returned by a table function as the rows are produced instead of waiting until the entire collection is staged in tables or memory and then returning the entire collection.
Example 11-9 Example: Querying a Table Function
The following example shows a table function GetBooks that takes a CLOB as input and returns an instance of the collection type BookSet_t. The CLOB column stores a catalog listing of books in some format (either proprietary or following a standard such as XML). The table function returns all the catalogs and their corresponding book listings.
The collection type BookSet_t is defined as:
CREATE TYPE Book_t AS OBJECT ( name VARCHAR2(100), author VARCHAR2(30), abstract VARCHAR2(1000));
/
CREATE TYPE BookSet_t AS TABLE OF Book_t;
/
-- The CLOBs are stored in a table Catalogs:
CREATE TABLE Catalogs ( name VARCHAR2(30), cat CLOB );
Function GetBooks is defined as follows:
CREATE FUNCTION GetBooks(a CLOB) RETURN BookSet_t;
/
The query below returns all the catalogs and their corresponding book listings.
SELECT c.name, Book.name, Book.author, Book.abstract
FROM Catalogs c, TABLE(GetBooks(c.cat)) Book;
Example 11-10 Example: Assigning the Result of a Table Function
The result of a table function can be assigned to a PL/SQL collection variable, and when a table function is called from the SELECT list of a query, you do not need the TABLE keyword. The following example defines a pipelined table function and queries it with the TABLE operator in the FROM clause:
CREATE TYPE numset_t AS TABLE OF NUMBER;
/
CREATE FUNCTION f1(x number) RETURN numset_t PIPELINED IS
BEGIN
FOR i IN 1..x LOOP
PIPE ROW(i);
END LOOP;
RETURN;
END;
/
-- pipelined function in from clause
select * from table(f1(3));
Using Pipelined Table Functions for Transformations
A pipelined table function can accept any argument that regular functions accept. A table function that accepts a REF CURSOR as an argument can serve as a transformation function. That is, it can use the REF CURSOR to fetch the input rows, perform some transformation on them, and then pipeline the results out.
For example, the following code sketches the declarations that define a StockPivot function. This function converts a row of the type (Ticker, OpenPrice, ClosePrice) into two rows of the form (Ticker, PriceType, Price). Calling StockPivot for the row ("ORCL", 41, 42) generates two rows: ("ORCL", "O", 41) and ("ORCL", "C", 42).
Input data for the table function might come from a source such as table StockTable:
CREATE TABLE StockTable (
ticker VARCHAR(4),
open_price NUMBER,
close_price NUMBER
);
-- Create the types for the table function's output collection
-- and collection elements
CREATE TYPE TickerType AS OBJECT
(
ticker VARCHAR2(4),
PriceType VARCHAR2(1),
price NUMBER
);
/
CREATE TYPE TickerTypeSet AS TABLE OF TickerType;
/
-- Define the ref cursor type
CREATE PACKAGE refcur_pkg IS
TYPE refcur_t IS REF CURSOR RETURN StockTable%ROWTYPE;
END refcur_pkg;
/
-- Create the table function
CREATE FUNCTION StockPivot(p refcur_pkg.refcur_t) RETURN TickerTypeSet
PIPELINED ... ;
/
Here is an example of a query that uses the StockPivot table function:
SELECT * FROM TABLE(StockPivot(CURSOR(SELECT * FROM StockTable)));
In the query above, the pipelined table function StockPivot fetches rows from the CURSOR subquery SELECT * FROM StockTable, performs the transformation, and pipelines the results back to the user as a table. The function produces two output rows (collection elements) for each input row.
Note that when a CURSOR subquery is passed from SQL to a REF CURSOR function argument as in the example above, the referenced cursor is already open when the function begins executing.
Writing a Pipelined Table Function
You declare a pipelined table function by specifying the PIPELINED keyword. This keyword indicates that the function returns rows iteratively. The return type of the pipelined table function must be a collection type, such as a nested table or a varray. You can declare this collection at the schema level or inside a package. Inside the function, you return individual elements of the collection type.
For example, here are declarations for two pipelined table functions. (The function bodies are shown in later examples.)
CREATE FUNCTION GetBooks(cat CLOB) RETURN BookSet_t
PIPELINED IS ...;
/
CREATE FUNCTION StockPivot(p refcur_pkg.refcur_t) RETURN TickerTypeSet
PIPELINED IS...;
/
Returning Results from Table Functions
In PL/SQL, the PIPE ROW statement causes a table function to pipe a row and continue processing. The statement enables a PL/SQL table function to return rows as soon as they are produced. (For performance, the PL/SQL runtime system provides the rows to the consumer in batches.) For example:
CREATE FUNCTION StockPivot(p refcur_pkg.refcur_t) RETURN TickerTypeSet
PIPELINED IS
out_rec TickerType := TickerType(NULL,NULL,NULL);
in_rec p%ROWTYPE;
BEGIN
LOOP
FETCH p INTO in_rec;
EXIT WHEN p%NOTFOUND;
-- first row
out_rec.ticker := in_rec.ticker;
out_rec.PriceType := 'O';
out_rec.price := in_rec.open_price;
PIPE ROW(out_rec);
-- second row
out_rec.PriceType := 'C';
out_rec.price := in_rec.close_price;
PIPE ROW(out_rec);
END LOOP;
CLOSE p;
RETURN;
END;
/
In the example, the PIPE ROW(out_rec) statement pipelines data out of the PL/SQL table function. out_rec is a record, and its type matches the type of an element of the output collection.
The PIPE ROW statement may be used only in the body of pipelined table functions; an error is raised if it is used anywhere else. The PIPE ROW statement can be omitted for a pipelined table function that returns no rows.
A pipelined table function must have a RETURN statement that does not return a value. The RETURN statement transfers the control back to the consumer and ensures that the next fetch gets a NO_DATA_FOUND exception.
Because table functions pass control back and forth to a calling routine as rows are produced, there is a restriction on combining table functions and PRAGMA AUTONOMOUS_TRANSACTION. If a table function is part of an autonomous transaction, it must COMMIT or ROLLBACK before each PIPE ROW statement, to avoid an error in the calling subprogram.
Oracle has three special SQL datatypes that enable you to dynamically encapsulate and access type descriptions, data instances, and sets of data instances of any other SQL type, including object and collection types. You can also use these three special types to create anonymous (that is, unnamed) types, including anonymous collection types. The types are SYS.ANYTYPE, SYS.ANYDATA, and SYS.ANYDATASET. The SYS.ANYDATA type can be useful in some situations as a return value from table functions.
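A minimal sketch of ANYDATA usage (the value is arbitrary):
DECLARE
d SYS.ANYDATA;
BEGIN
d := SYS.ANYDATA.ConvertNumber(42);
dbms_output.put_line('Encapsulated type: ' || d.GetTypeName);
END;
/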
Pipelining Data Between PL/SQL Table Functions
With serial execution, results are pipelined from one PL/SQL table function to another using an approach similar to co-routine execution. For example, the following statement pipelines results from function g to function f:
SELECT * FROM TABLE(f(CURSOR(SELECT * FROM TABLE(g()))));
Parallel execution works similarly except that each function executes in a different process (or set of processes).
Querying Table Functions
Pipelined table functions are used in the FROM clause of SELECT statements. The result rows are retrieved by Oracle iteratively from the table function implementation. For example:
SELECT x.Ticker, x.Price
FROM TABLE(StockPivot( CURSOR(SELECT * FROM StockTable))) x
WHERE x.PriceType='C';
Note:
A table function returns a collection. In some cases, such as when the top-level query uses SELECT * and the query refers to a PL/SQL variable or a bind variable, you may need a CAST operator around the table function to specify the exact return type.
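For instance, a CAST around the earlier StockPivot call might look like:
SELECT * FROM TABLE(CAST(StockPivot(CURSOR(SELECT * FROM StockTable))
AS TickerTypeSet));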
Optimizing Multiple Calls to Table Functions
Multiple invocations of a table function, either within the same query or in separate queries, result in multiple executions of the underlying implementation. By default, there is no buffering or reuse of rows.
For example,
SELECT * FROM TABLE(f(...)) t1, TABLE(f(...)) t2
WHERE t1.id = t2.id;
SELECT * FROM TABLE(f());
SELECT * FROM TABLE(f());
If the function always produces the same result value for each combination of values passed in, you can declare the function DETERMINISTIC, and Oracle automatically buffers rows for it. If the function is not really deterministic, results are unpredictable.
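A minimal declaration sketch (the function name f_det is hypothetical, reusing the numset_t type from Example 11-10):
CREATE FUNCTION f_det(x NUMBER) RETURN numset_t
DETERMINISTIC PIPELINED IS
BEGIN
FOR i IN 1..x LOOP
PIPE ROW(i);
END LOOP;
RETURN;
END;
/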
Fetching from the Results of Table Functions
PL/SQL cursors and ref cursors can be defined for queries over table functions. For example:
OPEN c FOR SELECT * FROM TABLE(f(...));
Cursors over table functions have the same fetch semantics as ordinary cursors. REF CURSOR assignments based on table functions do not have any special semantics.
However, the SQL optimizer will not optimize across PL/SQL statements. For example:
DECLARE
r SYS_REFCURSOR;
rec_tab ...; -- a collection variable of g's element type
BEGIN
OPEN r FOR SELECT * FROM TABLE(f(CURSOR(SELECT * FROM tab)));
SELECT * BULK COLLECT INTO rec_tab FROM TABLE(g(r));
END;
/
does not execute as well as:
SELECT * FROM TABLE(g(CURSOR(SELECT * FROM
TABLE(f(CURSOR(SELECT * FROM tab))))));
This is so even ignoring the overhead associated with executing two SQL statements and assuming that the results can be pipelined between the two statements.
Passing Data with Cursor Variables
You can pass a set of rows to a PL/SQL function in a REF CURSOR parameter. For example, this function is declared to accept an argument of the predefined weakly typed REF CURSOR type SYS_REFCURSOR:
FUNCTION f(p1 IN SYS_REFCURSOR) RETURN ... ;
Results of a subquery can be passed to a function directly:
SELECT * FROM TABLE(f(CURSOR(SELECT empno FROM tab)));
In the example above, the CURSOR keyword is required to indicate that the results of a subquery should be passed as a REF CURSOR parameter.
A predefined weak REF CURSOR type SYS_REFCURSOR is also supported. With SYS_REFCURSOR, you do not need to first create a REF CURSOR type in a package before you can use it.
To use a strong REF CURSOR type, you still must create a PL/SQL package and declare a strong REF CURSOR type in it. Also, if you are using a strong REF CURSOR type as an argument to a table function, then the actual type of the REF CURSOR argument must match the column type, or an error is generated. Weak REF CURSOR arguments to table functions can only be partitioned using the PARTITION BY ANY clause. You cannot use range or hash partitioning for weak REF CURSOR arguments.
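For instance, a declaration sketch of a parallel-enabled table function with a weak REF CURSOR argument (based on the StockPivot example; the function name is illustrative):
CREATE FUNCTION StockPivot_par(p SYS_REFCURSOR) RETURN TickerTypeSet
PIPELINED PARALLEL_ENABLE (PARTITION p BY ANY) IS ... ;
/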
Example 11-11 Example: Using Multiple REF CURSOR Input Variables
PL/SQL functions can accept multiple REF CURSOR input variables:
CREATE FUNCTION g(p1 pkg.refcur_t1, p2 pkg.refcur_t2) RETURN...
PIPELINED ... ;
/
Function g can be invoked as follows:
SELECT * FROM TABLE(g(CURSOR(SELECT employee_id FROM tab),
CURSOR(SELECT * FROM employees)));
You can pass table function return values to other table functions by creating a REF CURSOR that iterates over the returned data:
SELECT * FROM TABLE(f(CURSOR(SELECT * FROM TABLE(g(...)))));
Example 11-12 Example: Explicitly Opening a REF CURSOR for a Query
You can explicitly open a REF CURSOR for a query and pass it as a parameter to a table function:
DECLARE
r SYS_REFCURSOR;
rec ...;
BEGIN
OPEN r FOR SELECT * FROM TABLE(f(...));
-- Must return a single row result set.
SELECT * INTO rec FROM TABLE(g(r));
END;
/
In this case, the table function closes the cursor when it completes, so your program should not explicitly try to close the cursor.
Example 11-13 Example: Using a Pipelined Table Function as an Aggregate Function
A table function can compute aggregate results using the input ref cursor. The following example computes a weighted average by iterating over a set of input rows.
DROP TABLE gradereport;
CREATE TABLE gradereport (student VARCHAR2(30), subject VARCHAR2(30), weight NUMBER, grade NUMBER);
INSERT INTO gradereport VALUES('Mark', 'Physics', 4, 4);
INSERT INTO gradereport VALUES('Mark','Chemistry', 4,3);
INSERT INTO gradereport VALUES('Mark','Maths', 3,3);
INSERT INTO gradereport VALUES('Mark','Economics', 3,4);
CREATE OR replace TYPE gpa AS TABLE OF NUMBER;
/
CREATE OR replace FUNCTION weighted_average(input_values
sys_refcursor)
RETURN gpa PIPELINED IS
grade NUMBER;
total NUMBER := 0;
total_weight NUMBER := 0;
weight NUMBER := 0;
BEGIN
-- The function accepts a ref cursor and loops through all the input rows.
LOOP
FETCH input_values INTO weight, grade;
EXIT WHEN input_values%NOTFOUND;
-- Accumulate the weighted average.
total_weight := total_weight + weight;
total := total + grade*weight;
END LOOP;
PIPE ROW (total / total_weight);
-- The function returns a single result.
RETURN;
END;
/
show errors;
-- The result comes back as a nested table with a single row.
-- COLUMN_VALUE is a keyword that returns the contents of a nested table.
select weighted_result.column_value from
table( weighted_average( cursor(select weight,grade from gradereport) ) ) weighted_result;
Performing DML Operations Inside Table Functions
To execute DML statements, declare a table function with the AUTONOMOUS_TRANSACTION pragma, which causes the function to execute in a new transaction not shared by other processes:
CREATE FUNCTION f(p SYS_REFCURSOR) return CollType PIPELINED IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN RETURN; END;
/
During parallel execution, each instance of the table function creates an independent transaction.
Performing DML Operations on Table Functions
Table functions cannot be the target table in UPDATE, INSERT, or DELETE statements. For example, the following statements will raise an error:
UPDATE F(CURSOR(SELECT * FROM tab)) SET col = value;
INSERT INTO f(...) VALUES ('any', 'thing');
However, you can create a view over a table function and use INSTEAD OF triggers to update it. For example:
CREATE VIEW BookTable AS
SELECT x.Name, x.Author
FROM TABLE(GetBooks('data.txt')) x;
The following INSTEAD OF trigger is fired when the user inserts a row into the BookTable view:
CREATE TRIGGER BookTable_insert
INSTEAD OF INSERT ON BookTable
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
...
END;
/
INSERT INTO BookTable VALUES (...);
INSTEAD OF triggers can be defined for all DML operations on a view built on a table function.
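A fleshed-out trigger body might simply route the new row to wherever the table function sources its data; here books_tab is a hypothetical backing table:
CREATE TABLE books_tab (Name VARCHAR2(100), Author VARCHAR2(100));

CREATE OR REPLACE TRIGGER BookTable_insert
INSTEAD OF INSERT ON BookTable
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
  -- Persist the inserted view row into the assumed backing table.
  INSERT INTO books_tab (Name, Author) VALUES (:n.Name, :n.Author);
END;
/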
Handling Exceptions in Table Functions
Exception handling in table functions works just as it does with regular functions.
Some languages, such as C and Java, provide a mechanism for user-supplied exception handling. If an exception raised within a table function is handled, the table function executes the exception handler and continues processing. Exiting the exception handler takes control to the enclosing scope. If the exception is cleared, execution proceeds normally.
An unhandled exception in a table function causes the parent transaction to roll back.
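A minimal sketch of the difference (numbers_t and safe_div are illustrative names; the cursor is assumed to return two numeric columns):
CREATE OR REPLACE TYPE numbers_t AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION safe_div(p SYS_REFCURSOR) RETURN numbers_t PIPELINED IS
  a NUMBER;
  b NUMBER;
BEGIN
  LOOP
    FETCH p INTO a, b;
    EXIT WHEN p%NOTFOUND;
    BEGIN
      PIPE ROW (a / b);
    EXCEPTION
      WHEN ZERO_DIVIDE THEN
        PIPE ROW (NULL);  -- handled: execution continues with the next row
    END;
  END LOOP;
  RETURN;  -- an unhandled exception here would instead roll back the parent transaction
END;
/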
Dbms_Profiler Scripts
The following is a grab bag of queries and scripts for working with DBMS_PROFILER and the plsql_profiler_* tables it populates.
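-- Per-line profiler times (in microseconds) joined to the source text for one run: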
SELECT u.unit_name,
d.total_occur,
d.total_time / 1000 microsec,
substr(s.text, 1, 60) plsql_code
FROM plsql_profiler_units u
INNER JOIN plsql_profiler_data d ON (u.runid = d.runid AND
u.unit_number = d.unit_number)
INNER JOIN all_source s ON (s.owner = u.unit_owner AND
s.type = u.unit_type AND s.name = u.unit_name AND
s.line = d.line#)
WHERE u.runid = 53
AND u.unit_name LIKE 'PARSE_CSV%'
ORDER BY u.unit_number, d.line#;
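-- List all profiler runs recorded so far: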
SELECT runid, run_date, run_comment, run_total_time
FROM plsql_profiler_runs
ORDER BY runid;
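-- Raw per-line statistics for a single run: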
SELECT u.runid,
u.unit_number,
u.unit_type,
u.unit_owner,
u.unit_name,
d.line#,
d.total_occur,
d.total_time,
d.min_time,
d.max_time
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid
AND u.unit_number = d.unit_number
WHERE u.runid = 1
ORDER BY u.unit_number, d.line#;
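-- Print the source of a profiled unit, line by line: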
SELECT line || ' : ' || text
FROM all_source
WHERE owner = 'MY_SCHEMA'
AND type = 'PROCEDURE'
AND name = 'DO_SOMETHING';
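-- Assorted DBMS_PROFILER API calls; each function returns a status code (0 on success):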
SELECT dbms_profiler.internal_version_check FROM dual;
DECLARE
i PLS_INTEGER;
BEGIN
i := dbms_profiler.flush_data;
i := dbms_profiler.pause_profiler;
i := dbms_profiler.resume_profiler;
END;
/
SELECT dbms_profiler.pause_profiler FROM dual;
SELECT dbms_profiler.resume_profiler FROM dual;
ROLLUP_RUN dbms_profiler.rollup_run(run_number IN NUMBER);
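-- Demo: profile a simple procedure, then roll its per-line times up to the unit level: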
CREATE OR REPLACE PROCEDURE proc1 IS vd VARCHAR2(5);
BEGIN
FOR i IN 1 .. 100 LOOP
SELECT dummy INTO vd FROM dual;
END LOOP;
END proc1;
/
DECLARE v_run NUMBER;
BEGIN
dbms_profiler.start_profiler('test', 'test1', v_run);
proc1;
dbms_profiler.stop_profiler;
dbms_profiler.rollup_run(v_run);
END;
/
ROLLUP_UNIT dbms_profiler.rollup_unit(run_number IN NUMBER, unit IN NUMBER);
-- executes the following code
UPDATE plsql_profiler_units
SET total_time = (SELECT SUM(total_time)
FROM plsql_profiler_data
WHERE runid = run_number
AND unit_number = unit);
SELECT * FROM plsql_profiler_units;
exec dbms_profiler.rollup_unit(8, 3);
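-- Total time per unit (in microseconds), summed across a range of runs: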
SELECT u.runid, u.unit_type, u.unit_name, sum(d.total_time) / 1000 microsec
FROM plsql_profiler_units u
INNER JOIN plsql_profiler_data d ON (u.runid = d.runid AND
u.unit_number = d.unit_number)
WHERE u.runid between 53 and 57
AND u.unit_name LIKE 'PARSE_CSV%'
GROUP BY u.runid, u.unit_type, u.unit_name
ORDER BY u.runid, u.unit_name;
http://radino.eu/tag/dbms_profiler/
Statement_Tracer_for_Oracle : http://www.aboves.com/downloads/ -- ask the IT team to install it (free tool)
http://www.orafaq.com/wiki/Scripts
Your DBA may have to install the profiler in your database. The procedure for
installing this package is simple:
1. cd $ORACLE_HOME/rdbms/admin
2. Using SVRMGRL, connect as SYS or INTERNAL
3. Run profload.sql
In order to actually use the profiler after that, you will need to have the profiling tables installed. You can install these once per database, but I recommend each developer have their own copy. Fortunately, the DBMS_PROFILER package is built with invoker's rights and unqualified table names, so we can install the tables in each schema and the profiler package will use them. The reason you each want your own tables is so that you only see the results of your own profiling runs, not those of your coworkers. To get a copy of the profiling tables in your schema, run $ORACLE_HOME/rdbms/admin/proftab.sql in SQL*Plus. After you run proftab.sql, you'll need to run profrep.sql as well. This script creates views and packages to operate on the profiler tables in order to generate reports; it is found in $ORACLE_HOME/plsql/demo/profrep.sql. You should run this in your schema as well, after creating the tables.
I like to keep a small script around to reset these tables (clear them out). After I've done a run or two and have analyzed the results, I run this script to reset the tables. I have the following in a script I call profreset.sql:
delete from plsql_profiler_data;
delete from plsql_profiler_units;
delete from plsql_profiler_runs;
Finally, to answer the question: $ORACLE_HOME/plsql/demo/profsum.sql is what generates the report.
set echo off
set linesize 5000
set trimspool on
set serveroutput on
set termout off
column owner format a11
column unit_name format a14
column text format a21 word_wrapped
column runid format 9999
column secs format 999.99
column hsecs format 999.99
column grand_total format 9999.99
column run_comment format a11 word_wrapped
column line# format 99999
column pct format 999.9
column unit_owner format a11
spool profsum.out
/* Clean out rollup results, and recreate */
update plsql_profiler_units set total_time = 0;
execute prof_report_utilities.rollup_all_runs;
prompt =
prompt =
prompt ====================
prompt Total time
select grand_total/1000000000 as grand_total
from plsql_profiler_grand_total;
prompt =
prompt =
prompt ====================
prompt Total time spent on each run
select runid,
substr(run_comment,1, 30) as run_comment,
run_total_time/1000000000 as secs
from (select a.runid, sum(a.total_time) run_total_time, b.run_comment
from plsql_profiler_units a, plsql_profiler_runs b
where a.runid = b.runid group by a.runid, b.run_comment )
where run_total_time > 0
order by runid asc;
prompt =
prompt =
prompt ====================
prompt Percentage of time in each module, for each run separately
select p1.runid,
substr(p2.run_comment, 1, 20) as run_comment,
p1.unit_owner,
decode(p1.unit_name, '', '',
substr(p1.unit_name,1, 20)) as unit_name,
p1.total_time/1000000000 as secs,
TO_CHAR(100*p1.total_time/p2.run_total_time, '999.9') as percentage
from plsql_profiler_units p1,
(select a.runid, sum(a.total_time) run_total_time, b.run_comment
from plsql_profiler_units a, plsql_profiler_runs b
where a.runid = b.runid group by a.runid, b.run_comment ) p2
where p1.runid=p2.runid
and p1.total_time > 0
and p2.run_total_time > 0
and (p1.total_time/p2.run_total_time) >= .01
order by p1.runid asc, p1.total_time desc;
column secs form 9.99
prompt =
prompt =
prompt ====================
prompt Percentage of time in each module, summarized across runs
select p1.unit_owner,
decode(p1.unit_name, '', '', substr(p1.unit_name,1, 25)) as unit_name,
p1.total_time/1000000000 as secs,
TO_CHAR(100*p1.total_time/p2.grand_total, '99999.99') as percentage
from plsql_profiler_units_cross_run p1,
plsql_profiler_grand_total p2
order by p1.total_time DESC;
prompt =
prompt =
prompt ====================
prompt Lines taking more than 1% of the total time, each run separate
select p1.runid as runid,
p1.total_time/10000000 as Hsecs,
p1.total_time/p4.grand_total*100 as pct,
substr(p2.unit_owner, 1, 20) as owner,
decode(p2.unit_name, '', '', substr(p2.unit_name,1, 20)) as unit_name,
p1.line#,
( select p3.text
from all_source p3
where p3.owner = p2.unit_owner and
p3.line = p1.line# and
p3.name=p2.unit_name and
p3.type not in ( 'PACKAGE', 'TYPE' )) text
from plsql_profiler_data p1,
plsql_profiler_units p2,
plsql_profiler_grand_total p4
where (p1.total_time >= p4.grand_total/100)
AND p1.runID = p2.runid
and p2.unit_number=p1.unit_number
order by p1.total_time desc;
prompt =
prompt =
prompt ====================
prompt Most popular lines (more than 1%), summarize across all runs
select p1.total_time/10000000 as hsecs,
p1.total_time/p4.grand_total*100 as pct,
substr(p1.unit_owner, 1, 20) as unit_owner,
decode(p1.unit_name, '', '',
substr(p1.unit_name,1, 20)) as unit_name,
p1.line#,
( select p3.text from all_source p3
where (p3.line = p1.line#) and
(p3.owner = p1.unit_owner) AND
(p3.name = p1.unit_name) and
(p3.type not in ( 'PACKAGE', 'TYPE' ) ) ) text
from plsql_profiler_lines_cross_run p1,
plsql_profiler_grand_total p4
where (p1.total_time >= p4.grand_total/100)
order by p1.total_time desc;
execute prof_report_utilities.rollup_all_runs;
prompt =
prompt =
prompt ====================
prompt Number of lines actually executed in different units (by unit_name)
select p1.unit_owner,
p1.unit_name,
count( decode( p1.total_occur, 0, null, 0)) as lines_executed ,
count(p1.line#) as lines_present,
count( decode( p1.total_occur, 0, null, 0))/count(p1.line#) *100
as pct
from plsql_profiler_lines_cross_run p1
where (p1.unit_type in ( 'PACKAGE BODY', 'TYPE BODY',
'PROCEDURE', 'FUNCTION' ) )
group by p1.unit_owner, p1.unit_name;
prompt =
prompt =
prompt ====================
prompt Number of lines actually executed for all units
select count(p1.line#) as lines_executed
from plsql_profiler_lines_cross_run p1
where (p1.unit_type in ( 'PACKAGE BODY', 'TYPE BODY',
'PROCEDURE', 'FUNCTION' ) )
AND p1.total_occur > 0;
prompt =
prompt =
prompt ====================
prompt Total number of lines in all units
select count(p1.line#) as lines_present
from plsql_profiler_lines_cross_run p1
where (p1.unit_type in ( 'PACKAGE BODY', 'TYPE BODY',
'PROCEDURE', 'FUNCTION' ) );
spool off
set termout on
edit profsum.out
set linesize 131
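The next query lists a unit's source for one profiler run (bind :A3 is the runid, :A5 the unit name) and flags lines that were never executed; the marker text in the DECODE is just an illustrative placeholder for whatever highlighting you prefer.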
SELECT s.line line,
       DECODE(p.occured,
              0,
              '/* NOT EXECUTED */ ' ||
                substr(s.text, 1, length(s.text) - 1),
              s.text) text
FROM (SELECT d.TOTAL_OCCUR occured, u.unit_type, d.line#
FROM plsql_profiler_units u, plsql_profiler_data d
WHERE d.RUNID = u.runid
AND d.UNIT_NUMBER = u.unit_number
--AND d.TOTAL_OCCUR > 0
AND u.runid = :A3
AND u.unit_name = :A5) p,
user_source s
WHERE p.line#(+) = NVL(s.line, NULL)
AND p.unit_type(+) = NVL(s.TYPE, NULL)
AND s.NAME = :A5
ORDER BY s.TYPE, s.NAME, s.line;
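A simpler per-line report for a single unit: occurrence counts and times per source line. Since total_time is stored in nanoseconds, dividing by 1000000 yields milliseconds; the runid (5), owner, and unit name are placeholders to adjust.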
select s.line "Line",
p.total_occur "Occur",
p.total_time "Msec",
s.text "Text"
from all_source s,
(select u.unit_owner,
u.unit_name,
u.unit_type,
d.line#,
d.total_occur,
d.total_time / 1000000 total_time
from plsql_profiler_data d, plsql_profiler_units u
where u.runid = 5
and u.runid = d.runid
and u.unit_number = d.unit_number) p
where s.owner = p.unit_owner(+)
and s.name = p.unit_name(+)
and s.type = p.unit_type(+)
and s.line = p.line#(+)
and s.name = 'DO_SOMETHING'
and s.owner = 'USER'
order by s.line;
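The trigger below piggybacks on the profiler to track memory consumption: every row inserted into plsql_profiler_data also captures the session's current PGA usage into the spare1 column that proftab.sql provides. Note it requires SELECT access on SYS.v_$process and SYS.v_$session.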
CREATE OR REPLACE TRIGGER bri_prof_memory
BEFORE INSERT
ON plsql_profiler_data
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
v_used_mem NUMBER;
BEGIN
SELECT p.pga_used_mem
INTO v_used_mem
FROM SYS.v_$process p, SYS.v_$session s
WHERE p.addr = s.paddr AND s.audsid = USERENV ('sessionid');
:new.spare1:=v_used_mem;
END bri_prof_memory;
/