Contents contributed and discussions participated by Dariusz Owczarek


RAC Hot Block Issue - 0 views

  • One obvious method of reducing pings between instances is to isolate the transactions that use a specific data set to a specific server in the RAC cluster.
  • Some of the good RAC practices, to put it quite frankly, waste disk and memory space to improve data sharing and dispersal characteristics.
  • An example of an efficient RAC object is one that is used by only a single instance at a time. To achieve this singularity of use, the rows-per-block (RPB) of the data object must be reduced.
  • For high insert objects, pre-allocate extents to avoid dynamic space management. Assign allocated extents to specific instances. This avoids inter-instance block transfers during insert activity from multiple nodes.
  • Use reverse-key indexes for indexes that may become right-hand (right-growing) indexes due to high insert rates. This removes the ability to use index range scans, so use it only when required (see the sketch after this list).
  • Design indexes such that the clustering factor is as close to the number of used blocks as is possible.
  • Compartmenting Transactions to Specific Nodes - rows per block (RPB)
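    A minimal sketch of the techniques above, assuming a hypothetical hot table HOT_TABLE with a sequence-populated key SEQ_ID (object names and sizes are illustrative, not from the source):

      -- Lower rows per block for newly formatted blocks (existing rows keep
      -- their layout until the table is rebuilt, e.g. with ALTER TABLE ... MOVE):
      ALTER TABLE hot_table PCTFREE 90;

      -- Pre-allocate an extent to a specific RAC instance for high-insert activity:
      ALTER TABLE hot_table ALLOCATE EXTENT (SIZE 100M INSTANCE 2);

      -- Reverse-key index for a right-growing, sequence-populated key:
      CREATE INDEX hot_table_seq_ix ON hot_table (seq_id) REVERSE;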

Hash partitioning - 0 views

  • Instead of having a 100 gig tablespace to back up, you have 100 1-gig tablespaces (each tablespace spends less time in backup mode, reduces the amount of potential extra redo, and reduces the amount of manual recovery you need to do if the instance fails during backup). Same with restores.
  • every single admin option you do to a partition applies to a hash partition
  • Say you do a join on the hash partition key -- we can do parallel partition wise elimination on the join
  • Hash partitioning -- all of admin features of range partitions and many of the partition elimination/query features as well.
  • With a hash partition, we are attempting to achieve an EVEN distribution of data across all of the partitions while at the same time supporting partition elimination and other features (see the sketch after this list).
  • you want an ALMOST unique or unique value to hash on
  • with hashing -- all rows with the same key by definition hash to the same partition -- that is the very essence of hash partitioning
  • you would just drop/truncate the oldest partition and if you do this with the option to maintain the indexes, it'll not impose any sort of rebuild
  • with hash partitions, you want to hash on something that is almost unique (or at least has lots of values) and into powers of 2 you want 2, 4, 8, 16, 32, 64, 128, .... partitions. 50 is not going to work (you'll always get a bell shape with the partitions at the front and end having the least and the ones in the middle having the most)
  • Hash partitioning
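    A minimal sketch of the above (the ORDERS table and its columns are hypothetical): hash partitioning on a nearly unique key into a power-of-two number of partitions.

      CREATE TABLE orders (
        order_id    NUMBER       NOT NULL,
        customer_id NUMBER,
        order_date  DATE,
        amount      NUMBER(10,2)
      )
      PARTITION BY HASH (order_id)   -- nearly unique key => even spread of rows
      PARTITIONS 16;                 -- a power of two (2, 4, 8, 16, ...), not e.g. 50

    Joining two tables that are hash partitioned the same way on the join key enables the parallel partition-wise processing mentioned above.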

Oracle GoldenGate Best Practices and Tips - 0 views

  • PARALLEL PROCESSING Ensure the system has enough shared memory. GoldenGate runs as an Oracle process. Each Extract or Replicat process requires upwards of 25-50 MB of system shared memory. This means less memory for the Oracle DBMS, especially the SGA. Use parallel Replicat groups on the target system to reduce latency through parallelism. Consider parallel Extract groups for tables that are fetch-intensive (e.g., those that trigger SQL procedures). Group tables that have referential integrity (R.I.) relationships to each other in the same Extract-Replicat pair. Pair each Replicat with its own trail and corresponding Extract process. When using parallel Replicats, configure each one to process a different portion of the overall data.
  • PASSTHRU PARAMETER Consider using this parameter if there is no filtering, conversion or mapping required and you’re using DATAPUMP. In pass-through mode, the Extract process does not look up table definitions, either from the database or from a data definitions file. Pass-through mode increases the throughput of the data pump, because all of the functionality that looks up object definitions is bypassed. This saves database fetches to improve performance.
  • INSERTAPPEND A new GoldenGate 10.4 feature. Use it for large transactions: it puts records at the end of the table rather than doing a more costly insert into other areas of the table.
  • To reduce bandwidth requirements: Use compression options of the RMTHOST parameter to compress data before it is sent across the network. Weigh the benefits of compression against the CPU resources that are required to perform the compression.
  • To increase the TCP/IP packet size: Use the TCPBUFSIZE option of the RMTHOST parameter to increase the size of the TCP socket buffer that Extract maintains. By increasing the size of the buffer, you can send larger packets to the target system. Consult with Network Support before setting TCPBUFSIZE.
  • Use SQL Arrays The BATCHSQL parameter will increase the performance of Replicat. BATCHSQL causes Replicat to create arrays for similar SQL statements and apply them at an accelerated rate. Normally, Replicat applies one SQL statement at a time.
  • Use CHECKPOINTSECS in Extract or Replicat: increasing it means less frequent checkpoints but more data to reprocess if the process fails, so keep transaction logs available in case of reprocessing.
  • Use GROUPTRANSOPS: it increases the number of SQL operations grouped into each Replicat transaction and reduces I/O to the checkpoint file and checkpoint table (see the parameter-file sketch below).
  • Data Filtering and Conversion: Use primary Extract for data capture only. Use a data pump on the source to perform filtering and thereby send less data over the network. Alternatively, use Replicat for conversion and, if the network can handle large amounts of data, also for filtering.
  • Oracle GoldenGate Best Practices and Tips
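    A sketch of how several of these parameters fit together, assuming a hypothetical data-pump Extract PMP1 and Replicat REP1 (host name, port, trail paths, credentials and schema mappings are all placeholders):

      -- Data-pump Extract parameter file: pass-through mode, compressed send,
      -- larger TCP socket buffer (check with Network Support before setting).
      EXTRACT pmp1
      PASSTHRU
      RMTHOST tgthost, MGRPORT 7809, COMPRESS, TCPBUFSIZE 1000000
      RMTTRAIL ./dirdat/rt
      TABLE app.*;

      -- Replicat parameter file: array-based apply and larger grouped transactions.
      REPLICAT rep1
      USERID ggadmin, PASSWORD ggadmin
      ASSUMETARGETDEFS
      BATCHSQL
      GROUPTRANSOPS 2000
      MAP app.*, TARGET app.*;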

Setting up Oracle's Change Data Capture - 0 views

  • Setting up Oracle's Change Data Capture

Call a stored procedure over a database link - 1 views

  • In a distributed transaction -- one in which "DATABASE A" calls "DATABASE B" -- only "DATABASE A" may commit. The reason: "DATABASE B" has no mechanism for co-ordinating with "DATABASE A" on the commit. We need a two-phase commit to ensure that when the transaction commits, any work performed on DATABASE A is committed *as well as* any work on DATABASE B.
  • This is the reason we do not allow ANY node other than the "parent" node of the transaction to issue a commit or rollback in a distributed transaction. The other nodes simply do not have control over the other possible nodes in the transaction (see the sketch below).
  • Call a stored procedure over a database link
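    A minimal PL/SQL sketch of the rule above (the DB_B database link, BILLING_PKG procedure and LOCAL_ORDERS table are hypothetical):

      BEGIN
         UPDATE local_orders                    -- work on DATABASE A
            SET status = 'SHIPPED'
          WHERE order_id = 42;

         billing_pkg.post_invoice@db_b(p_order_id => 42);   -- work on DATABASE B via the link

         COMMIT;   -- issued only on the parent node; the two-phase commit
                   -- protocol then commits the work on both databases together
      END;
      /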

Oracle ASSM Performance - 0 views

  • Cons of ASSM: (1) Slow for full-table scans: several studies have shown that large-table full-table scans (FTS) will run longer with ASSM than with traditional freelist-managed segments; ASSM FTS tablespaces are consistently slower than freelist FTS operations. This implies that ASSM may not be appropriate for decision support systems and warehouse applications unless partitioning is used with Oracle Parallel Query. (2) Slower for high-volume concurrent inserts: numerous studies show that tables with high-volume bulk loads perform faster with traditional multiple freelists. (3) ASSM will influence index clustering: for row-ordered tables, ASSM can adversely affect the clustering_factor for indexes. Bitmap freelists are less likely to place adjacent rows on physically adjacent data blocks, which raises the clustering_factor and lowers the cost-based optimizer's propensity to favor an index range scan. (A tablespace sketch follows this list.)
  • Pros of ASSM: (1) Varying row sizes: ASSM is better than a static pctused; the bitmaps make ASSM tablespaces better at handling rows with wide variations in row length. (2) Reducing buffer busy waits: ASSM will remove buffer busy waits better than using multiple freelists. When a table has multiple freelists, all purges must be parallelized to reload the freelists evenly, and ASSM has no such limitation. (3) Great for Real Application Clusters: the bitmap freelists remove the need to define multiple freelist groups for RAC and provide overall improved freelist management over traditional freelists.
  • Oracle ASSM Performance pros and cons
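    A minimal sketch contrasting the two segment space management modes discussed above (datafile paths, sizes and object names are hypothetical):

      CREATE TABLESPACE data_assm
        DATAFILE '/u01/oradata/db/data_assm01.dbf' SIZE 1G
        EXTENT MANAGEMENT LOCAL
        SEGMENT SPACE MANAGEMENT AUTO;          -- ASSM: bitmap-managed free space

      CREATE TABLESPACE data_manual
        DATAFILE '/u01/oradata/db/data_manual01.dbf' SIZE 1G
        EXTENT MANAGEMENT LOCAL
        SEGMENT SPACE MANAGEMENT MANUAL;        -- traditional freelist management

      -- With manual management, insert concurrency is tuned per segment:
      CREATE TABLE bulk_load_target (
        id      NUMBER,
        payload VARCHAR2(200)
      )
      TABLESPACE data_manual
      STORAGE (FREELISTS 4 FREELIST GROUPS 2);  -- multiple freelists / groups (e.g. for RAC)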

A UML Profile for Data Modeling - 0 views

  • Unfortunately data modeling is not yet covered by the Unified Modeling Language (UML), even though persistence-related issues are clearly an important aspect of object-oriented software projects.
  • The good news is that the Object Management Group (OMG) issued an RFP for an official UML Data Modeling Profile in December 2005.
  • This page summarizes the data modeling profile for UML Class Diagrams that I apply in Agile Database Techniques, The Object Primer 3rd Edition, and Refactoring Databases. First, some important definitions:
  • Logical data models (LDMs).  LDMs are used to explore either the conceptual design of a database or the detailed data architecture of your enterprise.  LDMs depict the logical data entities, typically referred to simply as data entities, the data attributes describing those entities, and the relationships between the entities.
  • Physical data models (PDMs).  PDMs are used to design the internal schema of a database, depicting the data tables, the data columns of those tables, and the relationships between the tables.
  • Conceptual data models.  These models are typically used to explore domain concepts with project stakeholders.  Conceptual data models are often created as the precursor to LDMs or as alternatives to LDMs.

GC Buffer Busy Waits in RAC: Finding Hot Blocks - 0 views

  • Here’s a handy little query I made up the other day to quickly digest any of the segment statistics from the AWR and grab the top objects for the cluster, reporting on each instance (a sketch along these lines appears below).
  • Any time you see heavy concurrency problems during inserts on table data blocks there should always be one first place to look: space management. Since ancient versions of OPS it has been a well-known fact that freelists are the enemy of concurrency.
  • GC Buffer Busy Waits in RAC: Finding Hot Blocks
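    The post's own query is not reproduced in the annotation above; the following is a rough sketch along the same lines, using the AWR views DBA_HIST_SEG_STAT and DBA_HIST_SEG_STAT_OBJ to rank segments by gc buffer busy waits per instance:

      SELECT *
        FROM (SELECT o.owner,
                     o.object_name,
                     o.object_type,
                     s.instance_number,
                     SUM(s.gc_buffer_busy_delta) AS gc_buffer_busy
                FROM dba_hist_seg_stat s
                JOIN dba_hist_seg_stat_obj o
                  ON o.dbid = s.dbid
                 AND o.obj# = s.obj#
                 AND o.dataobj# = s.dataobj#
               GROUP BY o.owner, o.object_name, o.object_type, s.instance_number
               ORDER BY SUM(s.gc_buffer_busy_delta) DESC)
       WHERE ROWNUM <= 10;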

Meaning of Oracle Key Statistics - 0 views

  • Statistics are somewhat fallible in that they are seldom 100 percent accurate, but in most cases they do sufficiently indicate what was intended. Be sure you understand what each statistic represents and the units used (there is a big difference between microseconds and centiseconds).
  • Time-breakdown statistics (Time Model) make it significantly easier to determine the type of operations that are consuming resources in the database.
  • DB time: Time spent by all user processes in the database (that is, non-idle wait time + CPU time).
  • DB CPU: Time spent by all user processes on the CPU, in Oracle code. On most systems, the majority of time will be spent in DB CPU, SQL execute elapsed time, or PL/SQL execution elapsed time (and possibly Java). Time spent in parse and connection management should be low, so if the levels indicate a high percentage of DB time, a problem exists in the relevant area. You can use this data to correlate with Top 5 Timed Events and Load Profile.
  • Database time (DB time) is an important time-based statistic: it measures the total time spent in the database by active sessions (that is, foreground user processes either actively working or actively waiting in a database call). DB time includes CPU time, I/O time, and other non-idle wait time.
  • Because DB time represents the sum of the time that all sessions spend in database calls, it can easily exceed the elapsed wall-clock time.
  • The objective of tuning an Oracle system could be stated as reducing the time that users spend in performing actions in the database, or simply reducing DB time.
  • Wait time is artificially inflated when the host is CPU bound because the wait time includes the actual time spent waiting (for example, waiting for a disk I/O), as well as the time spent by the process in the OS run-queue waiting to be rescheduled.
  • Therefore, when the host is CPU bound, it is important to reduce CPU utilization before addressing wait-related problems, because otherwise you may be addressing the wrong problem.
  • You can use ASH data to estimate DB time when the actual DB time is not available—for example, if a session has exited. Because ASH samples active sessions every second, you can estimate DB time (in seconds) to be the number of ASH samples counted.
  • V$OSSTAT is OS-related resource utilization data that the Oracle server collects. The statistics available vary by platform. You can use V$OSSTAT to determine CPU utilization (BUSY_TICKS and IDLE_TICKS), and also compare this to the host's CPU utilization statistics. Also look for high OS_CPU_WAIT_TIME, which may indicate the host is CPU bound.
  • V$OSSTAT statistics can be compared with the Time Model statistics, for example to determine how much of the total CPU used on the host is attributable to this instance: DB CPU / BUSY_TICKS.
  • Note that the units for these two statistics differ (DB CPU is in microseconds, the OS ticks in centiseconds); see the sketch after this list.
  • In 10g, each wait event (V$SYSTEM_EVENT) is classified into one of nine wait classes: Application, Commit, Concurrency, Configuration, Network, Other, System I/O, User I/O, and Idle. The class names are reasonably self-explanatory except Other, which is a catchall bucket for wait events that should not ever contribute any significant wait time.
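    A minimal sketch of the DB CPU / BUSY_TICKS comparison, including the unit conversion noted above (DB CPU is reported in microseconds, the OS statistics in centiseconds); depending on release the V$OSSTAT statistic is named BUSY_TICKS or BUSY_TIME:

      SELECT tm.db_cpu_microsec / 1e6  AS db_cpu_seconds,
             os.busy_ticks / 100       AS host_busy_cpu_seconds,
             ROUND(100 * (tm.db_cpu_microsec / 1e6) / (os.busy_ticks / 100), 1)
                                       AS pct_of_host_cpu_used_by_instance
        FROM (SELECT value AS db_cpu_microsec
                FROM v$sys_time_model
               WHERE stat_name = 'DB CPU') tm,
             (SELECT value AS busy_ticks
                FROM v$osstat
               WHERE stat_name IN ('BUSY_TICKS', 'BUSY_TIME')
                 AND ROWNUM = 1) os;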

Bind variables and bind variable peeking | Somewhere in between - 0 views

  • Each time you execute a statement, Oracle converts its text to ASCII values and applies a hashing algorithm over them; then it checks whether this SQL statement is already present in the SHARED POOL.
  • If the statement is in the SHARED POOL, Oracle will reuse (soft parse) it together with its execution plan. If the statement is not in the SHARED POOL, Oracle will have to do a hard parse.
  • Oracle's CBO can generate a more optimized execution plan if it knows the values of the filter predicates up front, that is, if the values are literals and not bind variables.
  • When you execute an SQL with bind variables, the value for the filter predicate is unknown.
  • In Oracle 8i, the CBO will generate one execution plan, regardless of the input of “:a”. In Oracle 9i/10g, the CBO will wait until the cursor is opened, bind the value from the bind variable and then optimize the SQL. In Oracle 11g, the CBO has a new feature called “adaptive cursor sharing”, which will be discussed in another post.
  • Bind variable peeking is when Oracle’s CBO waits until he gets the value for the bind variable and then optimizes the SQL. But, this is very important: this is done in the hard parsing phase of the SQL.
  • When to use bind variables: OLTP system = YES; many statements executed per second = YES; data warehouse = NO; data mining = NO; end-of-month reports = NO. (A sketch follows.)
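    A minimal SQL*Plus sketch (the ORDERS table, STATUS column and values are hypothetical) contrasting a literal with a bind variable; with the bind variable the statement text stays constant, so it can be shared in the shared pool, and 9i/10g peek at the value only during the hard parse:

      VARIABLE status VARCHAR2(10)
      EXEC :status := 'RARE'

      -- Literal: the CBO sees the value directly at optimization time.
      SELECT COUNT(*) FROM orders WHERE status = 'RARE';

      -- Bind variable: one shared cursor; the plan is built at hard parse time
      -- using the peeked value of :status and reused for later executions.
      SELECT COUNT(*) FROM orders WHERE status = :status;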

Oracle Diagnostic Tools - 0 views

  • Enterprise Manager A graphical all-purpose tool that can be used to identify when a spike occurred, drill down to the cause, and examine ADDM recommendations. The benefit of a graphical representation of performance data is visible (pun intended). Data visualizations display any skew directly.
  • Automatic Database Diagnostic Monitor (ADDM) An expert system that automatically identifies and recommends solutions for many instance-wide performance problems. Best used for longer-duration performance problems (that is, problems that are continuous or ongoing for a large proportion of the snapshot interval). The symptoms and problems are available by running the ADDM report, and through Enterprise Manager.
  • Active Session History (ASH) An all-purpose tool providing data that is useful when investigating system-wide problems, shorter-duration spikes, or smaller-scoped problems (for example, for a specific user, or SQL, or a module/action). The advantage of using ASH data when compared to other diagnostic information is that the data is of a finer granularity. This allows you to look at a problem to identify how the symptoms "build up," or allows you to determine exactly which resources are involved and who is using them. The ASH data can be queried directly or accessed via a targeted ASH report (see the sketch after this list).
  • Automatic Workload Repository (AWR) Instance-wide summary data that is used when ADDM is not able to identify the problem in the system, and the problem is of longer duration. Also used to verify the ADDM analysis. The data can be queried directly but is most often accessed via the AWR instance report.
  • Statspack (SP) Instance-wide summary data used to manually diagnose performance problems. You should use SP when you are not licensed for the Diagnostics Pack, and so can't use ADDM or AWR.
  • SQL trace This traces the execution flow (resource utilization, execution plan, and waits) by SQL statement. The information can be used to examine the flow and resource utilization for a specific user, feature, or SQL statement identified as problematic.
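    A brief sketch of how these tools are typically driven from SQL*Plus; the report scripts are the standard ones under $ORACLE_HOME/rdbms/admin, and the ASH query with its 15-minute window is illustrative only:

      @?/rdbms/admin/awrrpt.sql     -- AWR instance report
      @?/rdbms/admin/addmrpt.sql    -- ADDM findings and recommendations
      @?/rdbms/admin/ashrpt.sql     -- targeted ASH report

      -- ASH can also be queried directly, e.g. top SQL over the last 15 minutes:
      SELECT sql_id, COUNT(*) AS samples
        FROM v$active_session_history
       WHERE sample_time > SYSDATE - 15/1440
       GROUP BY sql_id
       ORDER BY samples DESC;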

Top 10 Backup and Recovery Best Practices - 0 views

  • 1. Turn on block checking
  • 2. Turn on block tracking when using RMAN backups (if running 10g)
  • 3. Duplex log groups and members and have more than one archive log dest
  • 4. When backing up the database, use the 'check logical' parameter
  • 5. Test your backup
  • 6. Have each datafile in a single backup piece
  • 7. Maintain your RMAN catalog/controlfile
  • 8. Prepare for loss of controlfiles
  • 9. Test your recovery
  • 10. Do not specify 'delete all input' when backing up archivelogs
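    A sketch of a few of the items above (file paths are hypothetical and the parameter values shown are the 10.2-style settings); the RMAN commands appear as comments because they run from the RMAN prompt rather than SQL*Plus:

      -- 1. Block checking / checksums
      ALTER SYSTEM SET db_block_checksum = TYPICAL;
      ALTER SYSTEM SET db_block_checking = MEDIUM;

      -- 2. Block change tracking for faster RMAN incremental backups (10g)
      ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
        USING FILE '/u01/oradata/db/block_change_tracking.f';

      -- 8. Prepare for loss of controlfiles
      -- RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

      -- 4. and 6. Logical checking, one datafile per backup piece
      -- RMAN> BACKUP CHECK LOGICAL DATABASE FILESPERSET 1 PLUS ARCHIVELOG;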

Adaptive Cursor Sharing - 0 views

  • Adaptive Cursor Sharing is a new feature starting from Oracle version 11g Release 1. The idea behind it is to improve the execution plans for statements with bind variables. The CBO has been enhanced to allow multiple execution plans to be used for a single statement with bind variables, without hard parsing the SQL.
  • There is no special way to configure ACS: it is on by default, and of course there is a hidden initialization parameter to turn it off if needed. A key role in deciding whether ACS will be used for a particular statement is played by two new columns in the V$SQL view (IS_BIND_SENSITIVE and IS_BIND_AWARE) and three new views (V$SQL_CS_HISTOGRAM, V$SQL_CS_SELECTIVITY, V$SQL_CS_STATISTICS); see the query sketch below.
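    A minimal sketch of checking those V$SQL columns for one statement (the sql_id value is hypothetical):

      SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware,
             plan_hash_value, executions
        FROM v$sql
       WHERE sql_id = '7b2twsn8vgfsq'
       ORDER BY child_number;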

Tuning and Optimizing RHEL for Oracle 9i and 10g Databases (Red Hat Enterprise Linux - ... - 0 views

  • This article is a step-by-step guide for tuning and optimizing Red Hat Enterprise Linux on x86 and x86-64 platforms running Oracle 9i (32-bit/64-bit) and Oracle 10g (32-bit/64-bit) standalone and RAC databases.

Can not start Webcache component of OMS - 0 views

  • failed to start a managed process after the maximum retry limit Log: /home/oracle/product/10.2.0/oms10g/opmn/logs/WebCache~WebCache~1
  • failed to start a managed process after the maximum retry limit Log: /home/oracle/product/10.2.0/oms10g/opmn/logs/WebCache~WebCacheAdmin~1
  • Log in as the root user and execute the following command: OMS_HOME/webcache/bin/webcache_setuser.sh setroot <username>. Note: in place of <username>, give the user name you use to start OMS.
  • Cannot start the WebCache component of OMS: opmnctl returns "failed to start a managed process after the maximum retry limit". Applies to Enterprise Manager Grid Control versions 10.2.0.1 to 10.2.0.5; can occur on any platform.

SQL Plan Management - 0 views

  • The SQL Plan Management feature of Oracle 11g complements (replaces?) stored outlines by providing a new plan stability capability in Oracle 11g Enterprise Edition (a brief usage sketch follows).
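    A brief, hedged sketch of how the feature is typically exercised (the sql_id is hypothetical):

      -- Capture baselines automatically for statements that repeat:
      ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

      -- Or load plans for one statement from the cursor cache:
      DECLARE
        n PLS_INTEGER;
      BEGIN
        n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '7b2twsn8vgfsq');
      END;
      /

      -- Review captured baselines:
      SELECT sql_handle, plan_name, enabled, accepted
        FROM dba_sql_plan_baselines;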

RAC One Node tips - 0 views

  • This instance relocation uses a new feature dubbed Oracle Omotion.
  • This is a similar approach to instance relocation that was first introduced by Savantis Systems with their DB-Switch invention, an offshoot of the Database Area Network (DAN) approach.
  • In Oracle RAC One Node, it appears that the Omotion software component uses VMware for the high speed instance relocation.  See here for details on how Oracle instance relocation works using DAN and SAN technology.
  • In Oracle 11g R2, we see a new feature dubbed "RAC One Node". RAC One Node claims to be multiple instances of RAC running on a single node in a cluster, with a fast "instance relocation" feature in cases of catastrophic server failure.

Oracle Database Health Recommendations Catalog - 0 views

  • Oracle Database: Availability, Configure, Install, Patching, Performance, Security
  • Enterprise Manager: Configure, Patching
  • This catalog contains a complete list of all health recommendations available in My Oracle Support.