SQL Plan Management, a new feature of Oracle 11g, complements (and is intended to eventually replace) stored outlines by providing a new plan stability capability in Oracle 11g Enterprise Edition.
Each time you execute a SQL statement, Oracle converts its text to a numeric value by applying a hashing algorithm, then checks whether the statement is already present in the shared pool.
If the statement is in the shared pool, Oracle reuses it together with its execution plan (a soft parse). If the statement is not in the shared pool, Oracle has to do a hard parse.
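A sketch of how cursor reuse can be observed in V$SQL, assuming you tag the statement with a comment so it is easy to find (the table name and comment here are hypothetical):

```sql
SELECT /* parse_demo */ COUNT(*) FROM employees;

-- Run the statement above several times, then check the shared pool.
-- PARSE_CALLS grows with each execution, but after the first (hard) parse
-- the cached cursor and its plan are simply reused (soft parses).
SELECT sql_id, hash_value, parse_calls, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* parse_demo */%';
```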
Oracle's CBO can generate a better-optimized execution plan if it knows the values of the filter predicates up front, that is, if the values are literals and not bind variables.
When you execute a SQL statement with bind variables, the value of the filter predicate is unknown at parse time.
In Oracle 8i, the CBO generates one execution plan, regardless of the value bound to “:a”.
In Oracle 9i and 10g, the CBO waits until the cursor is opened, peeks at the value of the bind variable, and then optimizes the SQL.
In Oracle 11g, the CBO has a new feature called “adaptive cursor sharing”, which is discussed further below.
Bind variable peeking is when Oracle's CBO waits until it gets the value of the bind variable and then optimizes the SQL. But, and this is very important, peeking happens only during the hard-parse phase of the SQL.
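A minimal SQL*Plus sketch of the consequence of peeking, using a hypothetical emp table: the plan is chosen when the first value is peeked, and later executions reuse it via soft parse:

```sql
VARIABLE deptno NUMBER

EXEC :deptno := 10
-- Hard parse: the CBO peeks at :deptno = 10 and optimizes for that value.
SELECT * FROM emp WHERE department_id = :deptno;

EXEC :deptno := 99
-- Soft parse: the cached cursor and its plan are reused; :deptno is NOT
-- peeked again, even if 99 would have warranted a different plan.
SELECT * FROM emp WHERE department_id = :deptno;
```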
When to use bind variables
In an OLTP system = YES
When you execute many statements per second = YES
Data warehouse = NO
Data mining = NO
Month-end reports = NO
PARALLEL PROCESSING
Ensure the system has enough shared memory. GoldenGate processes run on the same host as the Oracle instance; each Extract or Replicat process can require upwards of 25-50 MB of system shared memory. This leaves less memory available for the Oracle DBMS, especially the SGA.
Use parallel Replicat groups on the target system to reduce latency through parallelism. Consider parallel Extract groups for tables that are fetch-intensive (for example, those that trigger SQL procedures).
Group tables that have referential integrity relationships with each other into the same Extract-Replicat pair.
Pair each Replicat with its own trail and corresponding Extract process.
When using parallel Replicats, configure each one to process a different portion of the overall data.
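The guidelines above can be sketched as two Replicat parameter files, each paired with its own trail and handling a different portion of the data; all object and user names here are hypothetical:

```
-- rep1.prm: tables related by referential integrity stay together
REPLICAT rep1
USERID ggadmin, PASSWORD ********
ASSUMETARGETDEFS
MAP src.orders,      TARGET tgt.orders;
MAP src.order_items, TARGET tgt.order_items;

-- rep2.prm: an unrelated portion of the data, read from a separate trail
REPLICAT rep2
USERID ggadmin, PASSWORD ********
ASSUMETARGETDEFS
MAP src.customers, TARGET tgt.customers;
```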
PASSTHRU PARAMETER
Consider using this parameter when no filtering, conversion, or mapping is required and you are using a data pump.
In pass-through mode, the Extract process does not look up table definitions, either from the database or from a data definitions file. Pass-through mode increases the throughput of the data pump, because all of the functionality that looks up object definitions is bypassed.
This saves database fetches, improving performance.
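A sketch of a data pump Extract parameter file in pass-through mode (host, trail, and object names are hypothetical):

```
-- pump1.prm: data pump Extract with no filtering, conversion, or mapping
EXTRACT pump1
PASSTHRU
RMTHOST tgthost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE src.*;
```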
INSERTAPPEND
A new GoldenGate 10.4 feature.
Use it for large transactions.
It appends records at the end of the table rather than performing more costly inserts into other areas of the table.
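A sketch of a Replicat parameter file using INSERTAPPEND for a large, insert-heavy table (names are hypothetical):

```
-- repbig.prm: Replicat for a table loaded in large transactions
REPLICAT repbig
USERID ggadmin, PASSWORD ********
ASSUMETARGETDEFS
INSERTAPPEND
MAP src.big_table, TARGET tgt.big_table;
```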
To reduce bandwidth requirements: use the compression options of the RMTHOST parameter to compress data before it is sent across the network. Weigh the benefits of compression against the CPU resources that are required to perform it.
To increase the TCP/IP packet size: use the TCPBUFSIZE option of the RMTHOST parameter to increase the size of the TCP socket buffer that Extract maintains. By increasing the size of the buffer, you can send larger packets to the target system. Consult with Network Support before setting TCPBUFSIZE.
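Both options are set on the RMTHOST parameter of the sending Extract; a sketch with a hypothetical host name and buffer size:

```
-- In the primary or data pump Extract parameter file:
RMTHOST tgthost, MGRPORT 7809, COMPRESS, TCPBUFSIZE 100000
```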
Use SQL Arrays
The BATCHSQL parameter will increase the performance of Replicat. BATCHSQL causes Replicat to create arrays for similar SQL statements and apply them at an accelerated rate. Normally, Replicat applies one SQL statement at a time.
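Enabling array processing is a one-line change in the Replicat parameter file; a sketch with hypothetical names:

```
-- repbatch.prm: let Replicat batch similar statements into SQL arrays
REPLICAT repbatch
USERID ggadmin, PASSWORD ********
ASSUMETARGETDEFS
BATCHSQL
MAP src.*, TARGET tgt.*;
```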
Use the CHECKPOINTSECS parameter in Extract or Replicat. Increasing it makes checkpoints less frequent, which improves performance but increases the amount of data to be reprocessed if the process fails; keep transaction logs available in case reprocessing is needed.
Use the GROUPTRANSOPS parameter to increase the number of SQL operations grouped into a single Replicat transaction; this reduces I/O to the checkpoint file and the checkpoint table.
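Both parameters go in the process parameter file; the values below are illustrative only, and the trade-offs are noted inline:

```
REPLICAT reptune
USERID ggadmin, PASSWORD ********
ASSUMETARGETDEFS
CHECKPOINTSECS 20    -- fewer checkpoints, but more potential reprocessing on failure
GROUPTRANSOPS 2000   -- larger grouped target transactions, less checkpoint I/O
MAP src.*, TARGET tgt.*;
```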
Data Filtering and Conversion:
Use primary Extract for data capture only.
Use a data pump on the source to perform filtering and thereby send
less data over the network.
Alternatively, use Replicat for conversion and, if the network can
handle large amounts of data, also for filtering.
Enterprise Manager
A graphical all-purpose tool that can be used to identify when a spike occurred, drill down to the cause, and examine ADDM recommendations. The benefit of a graphical representation of performance data is visible (pun intended). Data visualizations display any skew directly.
Automatic Database Diagnostic Monitor (ADDM)
An expert system that automatically identifies and recommends solutions for many instance-wide performance problems. Best used for longer-duration performance problems (that is, problems that are continuous or ongoing for a large proportion of the snapshot interval). The symptoms and problems are available by running the ADDM report, and through Enterprise Manager.
Active Session History (ASH)
An all-purpose tool providing data that is useful when investigating system-wide problems, shorter-duration spikes, or smaller-scoped problems (for example, for a specific user, or SQL, or a module/action). The advantage of using ASH data when compared to other diagnostic information is that the data is of a finer granularity. This allows you to look at a problem to identify how the symptoms "build up," or allows you to determine exactly which resources are involved and who is using them. The ASH data can be queried directly or accessed via a targeted ASH report.
Automatic Workload Repository (AWR)
Instance-wide summary data that is used when ADDM is not able to identify the problem in the system, and the problem is of longer duration. Also used to verify the ADDM analysis. The data can be queried directly but is most often accessed via the AWR instance report.
Statspack (SP)
Instance-wide summary data used to manually diagnose performance problems. You should use SP when you are not licensed for the Diagnostics Pack, and so can't use ADDM or AWR.
SQL trace
This traces the execution flow (resource utilization, execution plan, and waits) by SQL statement. The information can be used to examine the flow and resource utilization for a specific user, feature, or SQL statement identified as problematic.
Statistics are somewhat fallible in that they are seldom 100 percent accurate, but in most cases they do sufficiently indicate what was intended. Be sure you understand what each statistic represents and the units used (there is a big difference between microseconds and centiseconds).
Time-breakdown statistics (Time Model) make it significantly easier to determine the type of operations that are consuming resources in the database.
DB time: Time spent by all user processes in the database (that is, non-idle wait time + CPU time).
DB CPU: Time spent by all user processes on the CPU, in Oracle code. On most systems, the majority of time will be spent in DB CPU, SQL execute elapsed time, or PL/SQL execution elapsed time (and possibly Java). Time spent in parse and connection management should be low; if these account for a high percentage of DB time, a problem exists in the relevant area. You can use this data to correlate with the Top 5 Timed Events and the Load Profile.
Database time (DB time) is an important time-based statistic: it measures the total time spent in the database by active sessions (that is, foreground user processes either actively working or actively waiting in a database call). DB time includes CPU time, I/O time, and other non-idle wait time.
Because DB time represents the sum of the time that all sessions spend in database calls, it can easily exceed the elapsed wall-clock time.
The objective of tuning an Oracle system could be stated as reducing the time that users spend in performing actions in the database, or simply reducing DB time.
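The time model statistics behind DB time can be queried directly; V$SYS_TIME_MODEL reports values in microseconds, so the query below converts them to seconds:

```sql
SELECT stat_name, ROUND(value / 1e6, 1) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU',
                     'sql execute elapsed time',
                     'parse time elapsed')
ORDER  BY seconds DESC;
```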
Wait time is artificially inflated when the host is CPU bound because the wait time includes the actual time spent waiting (for example, waiting for a disk I/O), as well as the time spent by the process in the OS run-queue waiting to be rescheduled.
Therefore, when the host is CPU bound, it is important to reduce CPU utilization before addressing wait-related problems, because otherwise you may be addressing the wrong problem.
You can use ASH data to estimate DB time when the actual DB time is not available—for example, if a session has exited. Because ASH samples active sessions every second, you can estimate DB time (in seconds) to be the number of ASH samples counted.
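A sketch of that estimate: since ASH takes roughly one sample per second per active session, the sample count approximates DB time in seconds.

```sql
-- Approximate DB time per session over the last 10 minutes:
SELECT session_id, COUNT(*) AS est_db_time_secs
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '10' MINUTE
GROUP  BY session_id
ORDER  BY est_db_time_secs DESC;
```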
V$OSSTAT is OS-related resource utilization data that the Oracle server collects. The statistics available vary by platform. You can use V$OSSTAT to determine CPU utilization (BUSY_TICKS and IDLE_TICKS), and also compare this to the host's CPU utilization statistics. Also look for high OS_CPU_WAIT_TIME, which may indicate the host is CPU bound.
V$OSSTAT statistics can be compared with the Time Model statistics, for example to determine how much of the total CPU used on the host is attributable to this instance: DB CPU / BUSY_TICKS.
Note that the units of these two statistics differ: DB CPU is reported in microseconds, while BUSY_TICKS is in hundredths of a second.
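A sketch of the comparison, converting DB CPU (microseconds) into the same hundredths-of-a-second unit; note that the OS statistic is named BUSY_TICKS in early 10g and BUSY_TIME in later releases, so the query matches both:

```sql
SELECT os.value                                       AS host_busy,
       ROUND(tm.value / 10000)                        AS db_cpu,
       ROUND(100 * (tm.value / 10000) / os.value, 1)  AS pct_of_host_cpu
FROM   v$osstat os, v$sys_time_model tm
WHERE  os.stat_name IN ('BUSY_TICKS', 'BUSY_TIME')
AND    tm.stat_name = 'DB CPU';
```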
In 10g, each wait event (V$SYSTEM_EVENT) is classified into one of nine wait classes: Application, Commit, Concurrency, Configuration, Network, Other, System I/O, User I/O, and Idle. The class names are reasonably self-explanatory except Other, which is a catchall bucket for wait events that should not ever contribute any significant wait time.
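The breakdown by class can be seen directly from V$SYSTEM_EVENT (TIME_WAITED is in centiseconds):

```sql
SELECT wait_class, SUM(time_waited) AS time_waited_cs
FROM   v$system_event
WHERE  wait_class <> 'Idle'
GROUP  BY wait_class
ORDER  BY time_waited_cs DESC;
```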
Adaptive Cursor Sharing (ACS) is a new feature introduced in Oracle 11g Release 1. The idea behind it is to improve the execution plans for statements with bind variables. The CBO has been enhanced to allow multiple execution plans to be used for a single statement with bind variables, without hard parsing the SQL.
There is no special way to configure ACS: it is on by default, and of course there is a hidden initialization parameter to turn it off if needed. A key role in deciding whether ACS will be used for a particular statement is played by:
Two new columns in V$SQL view (IS_BIND_SENSITIVE and IS_BIND_AWARE)
Three new views (V$SQL_CS_HISTOGRAM, V$SQL_CS_SELECTIVITY, V$SQL_CS_STATISTICS)
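A sketch of checking those V$SQL columns for a statement tagged with a hypothetical comment, after it has been run with bind values of different selectivities:

```sql
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware,
       executions, plan_hash_value
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* acs_demo */%';
```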
Oracle's data dictionary provides information that Oracle needs to perform its tasks. The data dictionary is stored in table structures owned by SYS. Their content is exposed through dictionary views.
Dynamic performance views (the so-called V$ views) are part of Oracle's data dictionary.