One obvious method of reducing pings between instances is to isolate the transactions that use a specific data set to a specific server in the RAC cluster.
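One common way to implement this isolation is with RAC services that prefer a single instance. A minimal sketch, assuming a database named PROD with instances PROD1 and PROD2 and a hypothetical order-entry workload (all names are invented for illustration):

    # Create a service whose preferred instance is PROD1; PROD2 serves
    # only as a failover target, so order-entry blocks normally stay in
    # one instance's cache instead of being pinged across the interconnect
    srvctl add service -d PROD -s orders_svc -r PROD1 -a PROD2
    srvctl start service -d PROD -s orders_svc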
RAC Hot Block Issue
- Some of the good RAC practices, to put it quite frankly, waste disk and memory space to improve data sharing and dispersal characteristics.
- An example of an efficient RAC object is one that is used by only a single instance at a time. To achieve this singularity of use, the rows-per-block (RPB) of the data object must be reduced (DDL sketches for this and the following tips appear after these notes).
- For high-insert objects, pre-allocate extents to avoid dynamic space management, and assign the allocated extents to specific instances. This avoids inter-instance block transfers during insert activity from multiple nodes.
- Use reverse-key indexes for indexes that may become right-hand indexes due to high insert rates. This removes the capability to use index range scans, so use them only when required.
- Design indexes such that the clustering factor is as close to the number of used blocks as possible.
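A few hedged DDL sketches of the techniques above (the ORDERS table, index name, and sizes are illustrative assumptions, not from the article):

    -- Reduce rows per block by reserving most of each block as free space
    ALTER TABLE orders PCTFREE 90 PCTUSED 5;

    -- Or cap rows per block outright: MINIMIZE RECORDS_PER_BLOCK records
    -- the current maximum rows-per-block as a hard limit, so it is
    -- typically run after seeding an empty table with the desired
    -- number of rows per block
    ALTER TABLE orders MINIMIZE RECORDS_PER_BLOCK;

    -- Pre-allocate an extent for a specific instance ahead of heavy inserts
    ALTER TABLE orders ALLOCATE EXTENT (SIZE 100M INSTANCE 2);

    -- Spread right-hand insert contention on a sequence-driven key;
    -- note this disables index range scans on the indexed column
    CREATE INDEX orders_id_rix ON orders (order_id) REVERSE;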
Hash partitioning
- Instead of having a 100 GB tablespace to back up, you have 100 one-GB tablespaces (each tablespace spends less time in backup mode, which reduces the amount of potential extra redo and reduces the amount of manual recovery you need to do if the instance fails during backup). The same is true of restores.
- Say you do a join on the hash partition key -- we can do parallel partition-wise elimination on the join.
- Hash partitioning gives you all of the admin features of range partitions and many of the partition elimination/query features as well.
- With a hash partition, we are attempting to achieve an EVEN distribution of data across all of the partitions while at the same time supporting partition elimination and other features.
- With hashing, all rows with the same key by definition hash to the same partition -- that is the very essence of hash partitioning.
- You would just drop/truncate the oldest partition, and if you do this with the option to maintain the indexes, it will not impose any sort of rebuild.
- With hash partitions, you want to hash on something that is almost unique (or at least has lots of values), and you want a power of two: 2, 4, 8, 16, 32, 64, 128, ... partitions. 50 is not going to work (you'll always get a bell shape, with the partitions at the front and end having the least data and the ones in the middle having the most). A minimal example follows.
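A minimal sketch of a hash-partitioned table honoring the power-of-two rule; the SALES table and its columns are illustrative assumptions:

    -- 16 partitions (a power of two) hashed on a high-cardinality key,
    -- so rows distribute evenly and joins on CUST_ID can be done
    -- partition-wise
    CREATE TABLE sales (
      cust_id   NUMBER       NOT NULL,
      sale_date DATE         NOT NULL,
      amount    NUMBER(10,2)
    )
    PARTITION BY HASH (cust_id)
    PARTITIONS 16;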
Oracle GoldenGate Best Practices and Tips
- PARALLEL PROCESSING: Ensure the system has enough shared memory. GoldenGate runs as an Oracle process, and each Extract or Replicat process requires upwards of 25-50 MB of system shared memory, which means less memory for the Oracle DBMS, especially the SGA. Use parallel Replicat groups on the target system to reduce latency through parallelism. Consider parallel Extract groups for tables that are fetch intensive (e.g., those that trigger SQL procedures). Group tables that have referential integrity to each other in the same Extract-Replicat pair. Pair each Replicat with its own trail and corresponding Extract process. When using parallel Replicats, configure each one to process a different portion of the overall data.
- PASSTHRU PARAMETER: Consider this parameter if no filtering, conversion, or mapping is required and you're using DATAPUMP. In pass-through mode, the Extract process does not look up table definitions, either from the database or from a data-definitions file. Pass-through mode increases the throughput of the data pump because all of the functionality that looks up object definitions is bypassed, saving database fetches and improving performance. (Several of these parameters appear in the sketch after these notes.)
- INSERTAPPEND: A new GoldenGate 10.4 feature. Use it for large transactions; it puts records at the end of the table rather than doing a more costly insert into other areas of the table.
- To reduce bandwidth requirements: Use the compression options of the RMTHOST parameter to compress data before it is sent across the network. Weigh the benefits of compression against the CPU resources that are required to perform the compression.
- To increase the TCP/IP packet size: Use the TCPBUFSIZE option of the RMTHOST parameter to increase the size of the TCP socket buffer that Extract maintains. By increasing the size of the buffer, you can send larger packets to the target system. Consult with Network Support before setting TCPBUFSIZE.
- Use SQL arrays: The BATCHSQL parameter will increase the performance of Replicat. BATCHSQL causes Replicat to create arrays for similar SQL statements and apply them at an accelerated rate. Normally, Replicat applies one SQL statement at a time.
- Use CHECKPOINTSECS in Extract or Replicat: increasing it makes checkpoints less frequent but increases the amount of data to be reprocessed if the process fails, so keep transaction logs available in case of reprocessing.
- Use GROUPTRANSOPS: it increases the number of SQL operations grouped into a single Replicat transaction, which reduces I/O to the checkpoint file and checkpoint table.
- Data filtering and conversion: Use the primary Extract for data capture only. Use a data pump on the source to perform filtering and thereby send less data over the network. Alternatively, use Replicat for conversion and, if the network can handle large amounts of data, also for filtering.
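To make several of these parameters concrete, here is a hedged sketch of a data-pump Extract and a Replicat parameter file. The group names, host, trail path, schema, and credentials are invented for illustration, and the numeric values are starting points rather than recommendations:

    -- Data-pump Extract: nothing to filter or map, so PASSTHRU applies
    EXTRACT pump1
    RMTHOST tgthost, MGRPORT 7809, COMPRESS, TCPBUFSIZE 100000
    RMTTRAIL /ggs/dirdat/rt
    PASSTHRU
    TABLE hr.*;

    -- Replicat: array-based apply, larger grouped transactions, and
    -- append-style inserts for large transactions (10.4+)
    REPLICAT rep1
    USERID ggsuser, PASSWORD ggspass
    ASSUMETARGETDEFS
    BATCHSQL
    GROUPTRANSOPS 2000
    CHECKPOINTSECS 15
    INSERTAPPEND
    MAP hr.*, TARGET hr.*;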
Setting up Oracle's Change Data Capture
Call a stored procedure over a database link
- In a distributed transaction -- one in which "DATABASE A" calls "DATABASE B" -- only "DATABASE A" may commit. The reason: "DATABASE B" has no mechanism for coordinating with "DATABASE A" on the commit. We need to do a two-phase commit to ensure that when the transaction commits, any work performed on DATABASE A is committed *as well as* any work on DATABASE B.
- This is the reason we do not allow ANY node other than the "parent" node of the transaction to issue a commit or rollback in a distributed transaction. The other nodes simply do not have control over the other possible nodes in the transaction. (A sketch follows.)
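A small sketch of the restriction, assuming a link named B_LINK from DATABASE A to DATABASE B and a hypothetical remote procedure UPDATE_STOCK:

    -- Run on DATABASE A, the parent of the distributed transaction
    BEGIN
      update_stock@b_link(p_item_id => 42);                      -- remote work
      UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;  -- local work
      COMMIT;  -- only the parent commits; Oracle coordinates the 2PC
    END;
    /
    -- A COMMIT issued inside update_stock on DATABASE B would instead
    -- raise ORA-02064: distributed operation not supported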
GC Buffer Busy Waits in RAC: Finding Hot Blocks
- Here’s a handy little query I made up the other day to quickly digest any of the segment statistics from the AWR and grab the top objects for the cluster, reporting on each instance. (A sketch in the same spirit follows these notes.)
- Any time you see heavy concurrency problems during inserts on table data blocks, there should always be one first place to look: space management. Since ancient versions of OPS it has been a well-known fact that freelists are the enemy of concurrency.
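The article's own query is not reproduced in the annotation above, so the following is a hedged sketch in the same spirit: it sums the GC buffer busy deltas recorded in the AWR segment statistics per object and instance (the snapshot bounds are bind-variable placeholders):

    SELECT o.owner,
           o.object_name,
           s.instance_number,
           SUM(s.gc_buffer_busy_delta) AS gc_buffer_busy
    FROM   dba_hist_seg_stat s
           JOIN dba_hist_seg_stat_obj o
             ON  o.dbid     = s.dbid
             AND o.ts#      = s.ts#
             AND o.obj#     = s.obj#
             AND o.dataobj# = s.dataobj#
    WHERE  s.snap_id BETWEEN :begin_snap AND :end_snap
    GROUP  BY o.owner, o.object_name, s.instance_number
    ORDER  BY gc_buffer_busy DESC;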
Oracle ASSM Performance
- Cons of ASSM:
  § Slow for full-table scans: Several studies have shown that large-table full-table scans (FTS) will run longer with ASSM than with traditional freelist management; ASSM FTS tablespaces are consistently slower than freelist FTS operations. This implies that ASSM may not be appropriate for decision-support systems and warehouse applications unless partitioning is used with Oracle Parallel Query.
  § Slower for high-volume concurrent inserts: Numerous experts have conducted studies showing that tables with high-volume bulk loads perform faster with traditional multiple freelists.
  § ASSM will influence index clustering: For row-ordered tables, ASSM can adversely affect the clustering_factor for indexes. Bitmap freelists are less likely to place adjacent rows on physically adjacent data blocks, which can raise the clustering_factor and reduce the cost-based optimizer's propensity to favor an index range scan.
- Pros of ASSM:
  § Varying row sizes: ASSM is better than a static pctused. The bitmaps make ASSM tablespaces better at handling rows with wide variations in row length.
  § Reducing buffer busy waits: ASSM will remove buffer busy waits better than using multiple freelists. When a table has multiple freelists, all purges must be parallelized to reload the freelists evenly; ASSM has no such limitation.
  § Great for Real Application Clusters: The bitmap freelists remove the need to define multiple freelist groups for RAC and provide overall improved freelist management over traditional freelists. (A DDL sketch follows.)
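For reference, a minimal sketch contrasting an ASSM tablespace with a manually managed one; file names, sizes, and the FREELISTS settings are illustrative:

    -- ASSM: bitmap-managed segment space (no FREELISTS/PCTUSED tuning)
    CREATE TABLESPACE assm_ts
      DATAFILE '/u01/oradata/assm_ts01.dbf' SIZE 500M
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT AUTO;

    -- Manual: freelist-managed; concurrency is tuned per segment
    CREATE TABLESPACE manual_ts
      DATAFILE '/u01/oradata/manual_ts01.dbf' SIZE 500M
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT MANUAL;

    CREATE TABLE t (x NUMBER)
      TABLESPACE manual_ts
      STORAGE (FREELISTS 4 FREELIST GROUPS 2);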
A UML Profile for Data Modeling
- Unfortunately, data modeling is not yet covered by the Unified Modeling Language (UML), even though persistence-related issues are clearly an important aspect of object-oriented software projects.
- The good news is that the Object Management Group (OMG) issued an RFP for an official UML Data Modeling Profile in December 2005.
- This page summarizes the data modeling profile for UML Class Diagrams that I apply in Agile Database Techniques, The Object Primer 3rd Edition, and Refactoring Databases. First, some important definitions:
- Logical data models (LDMs). LDMs are used to explore either the conceptual design of a database or the detailed data architecture of your enterprise. LDMs depict the logical data entities, typically referred to simply as data entities, the data attributes describing those entities, and the relationships between the entities.
- Physical data models (PDMs). PDMs are used to design the internal schema of a database, depicting the data tables, the data columns of those tables, and the relationships between the tables.
- Conceptual data models. These models are typically used to explore domain concepts with project stakeholders. Conceptual data models are often created as the precursor to LDMs or as alternatives to LDMs.
Meaning of Oracle Key Statistics
- Statistics are somewhat fallible in that they are seldom 100 percent accurate, but in most cases they do sufficiently indicate what was intended. Be sure you understand what each statistic represents and the units used (there is a big difference between microseconds and centiseconds).
- Time-breakdown statistics (Time Model) make it significantly easier to determine the type of operations that are consuming resources in the database.
- DB time: Time spent by all user processes in the database (that is, non-idle wait time + CPU time).
- DB CPU: Time spent by all user processes on the CPU, in Oracle code. On most systems, the majority of time will be spent in DB CPU, SQL execute elapsed time, or PL/SQL execution elapsed time (and possibly Java). Time spent in parse and connection management should be low, so if the levels indicate a high percentage of DB time, a problem exists in the relevant area. You can use this data to correlate with Top 5 Timed Events and Load Profile.
- Database time (DB time) is an important time-based statistic: it measures the total time spent in the database by active sessions (that is, foreground user processes either actively working or actively waiting in a database call). DB time includes CPU time, I/O time, and other non-idle wait time.
- Because DB time represents the sum of the time that all sessions spend in database calls, it can easily exceed the elapsed wall-clock time.
- The objective of tuning an Oracle system could be stated as reducing the time that users spend in performing actions in the database, or simply reducing DB time.
- Wait time is artificially inflated when the host is CPU bound because the wait time includes the actual time spent waiting (for example, waiting for a disk I/O), as well as the time spent by the process in the OS run-queue waiting to be rescheduled.
- Therefore, when the host is CPU bound, it is important to reduce CPU utilization before addressing wait-related problems, because otherwise you may be addressing the wrong problem.
- You can use ASH data to estimate DB time when the actual DB time is not available (for example, if a session has exited). Because ASH samples active sessions every second, you can estimate DB time (in seconds) as the number of ASH samples counted (see the sketches after these notes).
- V$OSSTAT is OS-related resource utilization data that the Oracle server collects. The statistics available vary by platform. You can use V$OSSTAT to determine CPU utilization (BUSY_TICKS and IDLE_TICKS), and also compare this to the host's CPU utilization statistics. Also look for high OS_CPU_WAIT_TIME, which may indicate the host is CPU bound.
- V$OSSTAT statistics can be compared with the Time Model statistics, for example to determine how much of the total CPU used on the host is attributable to this instance: DB CPU / BUSY_TICKS.
- In 10g, each wait event (V$SYSTEM_EVENT) is classified into one of nine wait classes: Application, Commit, Concurrency, Configuration, Network, Other, System I/O, User I/O, and Idle. The class names are reasonably self-explanatory except Other, which is a catchall bucket for wait events that should not ever contribute any significant wait time.
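Two hedged sketches of the estimates described above, using standard view names. The first approximates DB time from ASH sample counts (one active-session sample per second); the second computes the instance's share of host CPU, converting DB CPU (microseconds) and BUSY_TICKS (centiseconds) to seconds. On later releases the V$OSSTAT statistic may be named BUSY_TIME:

    -- Estimated DB time (seconds) over the last 10 minutes, from ASH
    SELECT COUNT(*) AS est_db_time_secs
    FROM   v$active_session_history
    WHERE  sample_time > SYSDATE - 10/1440;

    -- Fraction of total host CPU consumed by this instance
    SELECT dbcpu.secs / os.secs AS instance_cpu_fraction
    FROM   (SELECT value / 1e6 AS secs
            FROM   v$sys_time_model
            WHERE  stat_name = 'DB CPU') dbcpu,
           (SELECT value / 100 AS secs
            FROM   v$osstat
            WHERE  stat_name = 'BUSY_TICKS') os;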
Adaptive Cursor Sharing
- Adaptive Cursor Sharing is a new feature starting from Oracle version 11g Release 1. The idea behind it is to improve the execution plans for statements with bind variables. The CBO has been enhanced to allow multiple execution plans to be used for a single statement with bind variables, without hard parsing the SQL.
- There is no special way to configure ACS. It is on by default, and of course there is a hidden initialization parameter to turn it off if needed. Playing a key role in deciding whether ACS will be used for a particular statement are two new columns in the V$SQL view (IS_BIND_SENSITIVE and IS_BIND_AWARE) and three new views (V$SQL_CS_HISTOGRAM, V$SQL_CS_SELECTIVITY, V$SQL_CS_STATISTICS). (A query sketch follows.)
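A minimal sketch of checking those V$SQL flags for a statement of interest (the &sql_id substitution variable is a placeholder):

    -- Child cursors flagged bind-sensitive become bind-aware once the
    -- optimizer decides different bind values deserve different plans
    SELECT sql_id,
           child_number,
           is_bind_sensitive,
           is_bind_aware,
           executions
    FROM   v$sql
    WHERE  sql_id = '&sql_id';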
Bind variables and bind variable peeking | Somewhere in between
- Each time you execute this statement, Oracle will convert its text to ASCII values and apply a hashing algorithm over it; then it will check whether this SQL statement is already present in the SHARED POOL.
- If the statement is in the SHARED POOL, Oracle will reuse (soft parse) it together with its execution plan. If the statement is not in the SHARED POOL, Oracle will have to do a hard parse.
- Oracle's CBO can generate a more optimized execution plan if it knows up front the values of the filter predicates, meaning if the values are literals and not bind variables.
- In Oracle 8i, the CBO will generate one execution plan, regardless of the input of “:a”. In Oracle 9i and 10g, the CBO will wait until the cursor is opened, bind the value from the bind variable, and then optimize the SQL. In Oracle 11g, the CBO has a new feature called “adaptive cursor sharing”, which will be discussed in another post.
- Bind variable peeking is when Oracle's CBO waits until it gets the value for the bind variable and then optimizes the SQL. But, and this is very important, this is done in the hard-parsing phase of the SQL.
- When to use bind variables: in an OLTP system = YES; when you execute many statements per second = YES; data warehouse = NO; data mining = NO; end-of-month reports = NO. (A minimal example follows.)
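A minimal SQL*Plus sketch of the bind-variable pattern recommended for OLTP; the EMP table and DEPTNO values are illustrative:

    -- One shared cursor serves every execution: the text hashes to the
    -- same value, so repeated runs soft-parse instead of hard-parsing
    VARIABLE dept NUMBER
    EXEC :dept := 10
    SELECT ename, sal FROM emp WHERE deptno = :dept;
    EXEC :dept := 20
    SELECT ename, sal FROM emp WHERE deptno = :dept;   -- reuses the plan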
Top 10 Backup and Recovery Best Practices
Oracle Diagnostic Tools
- Enterprise Manager: A graphical all-purpose tool that can be used to identify when a spike occurred, drill down to the cause, and examine ADDM recommendations. The benefit of a graphical representation of performance data is visible (pun intended): data visualizations display any skew directly.
- Automatic Database Diagnostic Monitor (ADDM): An expert system that automatically identifies and recommends solutions for many instance-wide performance problems. Best used for longer-duration performance problems (that is, problems that are continuous or ongoing for a large proportion of the snapshot interval). The symptoms and problems are available by running the ADDM report and through Enterprise Manager.
- Active Session History (ASH): An all-purpose tool providing data that is useful when investigating system-wide problems, shorter-duration spikes, or smaller-scoped problems (for example, for a specific user, SQL statement, or module/action). The advantage of using ASH data compared to other diagnostic information is that the data is of a finer granularity. This allows you to look at a problem to identify how the symptoms "build up", or allows you to determine exactly which resources are involved and who is using them. The ASH data can be queried directly or accessed via a targeted ASH report.
- Automatic Workload Repository (AWR): Instance-wide summary data that is used when ADDM is not able to identify the problem in the system and the problem is of longer duration. Also used to verify the ADDM analysis. The data can be queried directly but is most often accessed via the AWR instance report.
- Statspack (SP): Instance-wide summary data used to manually diagnose performance problems. Use SP when you are not licensed for the Diagnostics Pack and so cannot use ADDM or AWR.
- SQL trace: Traces the execution flow (resource utilization, execution plan, and waits) by SQL statement. The information can be used to examine the flow and resource utilization for a specific user, feature, or SQL statement identified as problematic. (The standard report scripts for several of these tools appear after these notes.)
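For reference, the standard report scripts for several of these tools ship under $ORACLE_HOME/rdbms/admin and can be run from SQL*Plus; each prompts interactively for snapshot ranges and report options:

    -- Run from SQL*Plus as a privileged user; ? expands to $ORACLE_HOME
    @?/rdbms/admin/awrrpt.sql    -- AWR instance report
    @?/rdbms/admin/addmrpt.sql   -- ADDM findings and recommendations
    @?/rdbms/admin/ashrpt.sql    -- targeted ASH report
    @?/rdbms/admin/spreport.sql  -- Statspack report (if SP is installed)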
Non-flash interface for Oracle Support web site
Tuning and Optimizing RHEL for Oracle 9i and 10g Databases (Red Hat Enterprise Linux - ...
Can not start Webcache component of OMS
- failed to start a managed process after the maximum retry limit Log: /home/oracle/product/10.2.0/oms10g/opmn/logs/WebCache~WebCache~1
- failed to start a managed process after the maximum retry limit Log: /home/oracle/product/10.2.0/oms10g/opmn/logs/WebCache~WebCacheAdmin~1
- b. Log in as the root user and execute the following command: OMS_HOME/webcache/bin/webcache_setuser.sh setroot <username> (Note: in place of <username>, give the user name you use to start OMS.)
Julian Dyke Presentations