
Wednesday, 23 January 2013

Calculating Response Time and Statistics


As many books describe, the response time R corresponding to a SQL trace file is the sum of the elapsed time spent in database calls (the e values) at recursive call depth 0 (dep=0), plus the sum of the ela values of all inter database call wait events. Wait time collected while servicing a database call is rolled up into the e parameter of the database call that caused the wait. The classification of wait events therefore matters in the calculation of R: time spent waiting on intra database call wait events (for all calls, including recursive ones) must not be added to R, since the e values of the database calls already include that wait time, and adding it again would double-count it. Database calls are written to the trace file upon completion, which is why WAIT entries for intra database call wait events appear before the PARSE, EXEC, and FETCH entries that generated them. Runtime statistics, such as consistent reads, physical writes, and db block gets, at recursive call depths other than zero are rolled up into the PARSE, EXEC, and FETCH calls at recursive call depth 0, and, just like the ela values of intra database call wait events, they must not be counted twice. To thoroughly understand how an extended SQL trace profiler accounts for a resource, we first need to understand this mechanism properly. Only then can we calculate the total response time metrics and pin down the root cause of a poorly performing SQL statement.
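In other words, R = (sum of e over dep=0 calls) + (sum of ela over inter database call wait events). The following is a minimal sketch of how a profiler might apply that rule to a raw 10046 trace file. It is an illustration, not a complete profiler: the line formats are simplified, and the set of event names treated as inter database call waits here is an assumption; a real profiler classifies every event that appears in the file.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TraceResponseTime {
    // Simplified shapes of 10046 trace lines, e.g.
    //   FETCH #1:c=0,e=980,p=2,cr=14,cu=6,mis=0,r=1,dep=0,og=1,tim=...
    //   WAIT #1: nam='SQL*Net message from client' ela= 2794 ...
    private static final Pattern CALL = Pattern.compile(
            "^(PARSE|EXEC|FETCH) #\\d+:.*?e=(\\d+).*?dep=(\\d+)");
    private static final Pattern WAIT = Pattern.compile(
            "^WAIT #\\d+: nam='([^']+)' ela=\\s*(\\d+)");
    // Assumed classification: only these are counted as inter database call waits.
    private static final Set<String> INTER_CALL_EVENTS = new HashSet<String>(Arrays.asList(
            "SQL*Net message from client",
            "SQL*Net message to client"));

    public static void main(String[] args) throws Exception {
        long r = 0; // response time in trace units (microseconds in 9i and later)
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            Matcher call = CALL.matcher(line);
            if (call.find()) {
                // Count e only at dep=0; recursive e values are already rolled up.
                if (Integer.parseInt(call.group(3)) == 0) {
                    r += Long.parseLong(call.group(2));
                }
                continue;
            }
            Matcher wait = WAIT.matcher(line);
            // Intra database call wait time is already inside e; add inter-call waits only.
            if (wait.find() && INTER_CALL_EVENTS.contains(wait.group(1))) {
                r += Long.parseLong(wait.group(2));
            }
        }
        in.close();
        System.out.println("R = " + r);
    }
}

Run against a trace file (for example, java TraceResponseTime ora_1234.trc), it prints the accounted response time; comparing that figure with the session's wall-clock time shows how much time the trace file fails to account for.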


Database call statistics at recursive call depths other than zero (internal calls and implicit cursors) are rolled up into the statistics at recursive call depth 0. To determine the total number of db block gets in a trace file, we therefore need to consider only the cu parameter values of PARSE, EXEC, and FETCH entries with dep=0; the database call parameter cu (for current read) corresponds to the statistic db block gets. The fact that the total number of db block gets as determined by querying V$SESSTAT was also nine confirms that database call statistics at lower levels are rolled up into the statistics at recursive call depth 0. This is how the figures are calculated and cross-checked against V$SESSTAT; with a trace file in hand, you can build a complete picture of the response time metrics of a problematic SQL statement.
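As an illustration, consider three hypothetical dep=0 entries (the values are invented, chosen so the arithmetic mirrors the figure of nine mentioned above):

PARSE #1:c=0,e=310,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=1000000001
EXEC #1:c=0,e=120,p=0,cr=1,cu=3,mis=0,r=0,dep=0,og=1,tim=1000000450
FETCH #1:c=0,e=980,p=2,cr=14,cu=6,mis=0,r=1,dep=0,og=1,tim=1000001600

Summing the cu values of these dep=0 entries gives 0 + 3 + 6 = 9 db block gets. Any cu values on dep=1 or deeper entries in the same file would be ignored, because they are already included in these rolled-up figures.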


 

Tuesday, 11 December 2012

Implicit Connection Cache



The Implicit Connection Cache, called ICC in Oracle RAC, is an advanced JDBC 3.0-compliant connection cache implementation for DataSource that can point to different underlying databases. The cache is enabled by invoking setConnectionCachingEnabled(true) on an OracleDataSource and is created when the first connection is requested from that OracleDataSource (see the sketch after the list below). The ICC creates and maintains physical connections to the database and wraps them in logical connections. One cache is enough to service all connection requests, although any number of caches can be created; typically, more than one cache is created only when more than one DataSource must be accessed. While the ICC creates and maintains the physical connections, the Connection Cache Manager creates the cache and routes connection requests to it. The ICC provides a number of advantages:

It can be used with both the thin and OCI drivers.

OCI clients can register to receive notifications about RAC high availability events and respond when they occur.

During DOWN event processing, OCI terminates affected connections at the client.

It removes connections from the OCI connection pool and the OCI session pool; the session pool maps each session to a physical connection in the connection pool, and there can be multiple sessions per connection.

It fails over the connection if Transparent Application Failover (TAF) has been configured; if TAF is not configured, the client simply receives an error.

OCI does not currently handle UP events.
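Here is a minimal setup sketch; the URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import oracle.jdbc.pool.OracleDataSource;

public class IccSetup {
    public static void main(String[] args) throws Exception {
        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//dbhost:1521/orcl"); // placeholder
        ods.setUser("scott");
        ods.setPassword("tiger");
        ods.setConnectionCachingEnabled(true);  // turn the ICC on
        ods.setConnectionCacheName("MYCACHE");  // optional; useful with the cache manager

        Connection conn = ods.getConnection();  // the first request creates the cache
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
        conn.close(); // returns the logical connection to the cache, not the server
    }
}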

There is a one-to-one mapping between an OracleDataSource instance and its cache. When the application calls the close() method on a connection acquired through the datasource, the connection is returned to the cache for reuse rather than being physically closed. On the next request, the cache either returns an existing connection or creates a new one.

The connection cache supports all properties specified by the JDBC 3.0 connection pool specification. Support for these properties allows an application to fine-tune the cache and maximize performance, as in the sketch below.
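A minimal tuning sketch; the property names are the ones the ICC recognizes (MinLimit, MaxLimit, InactivityTimeout, and so on), but the values here are purely illustrative:

import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;

public class IccTuning {
    public static OracleDataSource tunedDataSource() throws java.sql.SQLException {
        Properties cacheProps = new Properties();
        cacheProps.setProperty("InitialLimit", "5");          // pre-create 5 connections
        cacheProps.setProperty("MinLimit", "5");              // never shrink below 5
        cacheProps.setProperty("MaxLimit", "25");             // never grow beyond 25
        cacheProps.setProperty("InactivityTimeout", "300");   // retire connections idle for 300 s
        cacheProps.setProperty("ValidateConnection", "true"); // test each connection on checkout

        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//dbhost:1521/orcl");   // placeholder
        ods.setUser("scott");
        ods.setPassword("tiger");
        ods.setConnectionCachingEnabled(true);
        ods.setConnectionCacheProperties(cacheProps);
        return ods;
    }
}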

It also supports a mechanism to recycle and refresh stale connections, which helps retire old physical connections.
Only one cache manager is present per virtual machine (VM) to manage all the caches, and the Oracle Connection Cache Manager provides a rich set of APIs to manage the connection cache.
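A minimal sketch of the cache manager APIs; the cache name MYCACHE matches the one set on the OracleDataSource above, and the method names are those documented for oracle.jdbc.pool.OracleConnectionCacheManager:

import oracle.jdbc.pool.OracleConnectionCacheManager;

public class IccAdmin {
    public static void main(String[] args) throws Exception {
        // Exactly one manager instance exists per VM.
        OracleConnectionCacheManager mgr =
                OracleConnectionCacheManager.getConnectionCacheManagerInstance();
        System.out.println("checked out: " + mgr.getNumberOfActiveConnections("MYCACHE"));
        System.out.println("available:   " + mgr.getNumberOfAvailableConnections("MYCACHE"));
        // Recycle stale physical connections without disturbing checked-out ones.
        mgr.refreshCache("MYCACHE", OracleConnectionCacheManager.REFRESH_INVALID_CONNECTIONS);
    }
}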

It provides a connection cache callback mechanism. The callback feature lets users define cache behavior when a connection is returned to the cache, when an abandoned connection is handled, and when a connection is requested but none is available in the cache.

public boolean handleAbandonedConnection(OracleConnection oracleConnection, Object o): This function is called when a connection is abandoned.

public void releaseConnection(OracleConnection oracleConnection, Object o): This function is called when a connection is released.

This mechanism gives the application the ability to define the cache's behavior when these events occur, as in the sketch below.
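A minimal callback sketch, assuming the oracle.jdbc.pool.OracleConnectionCacheCallback interface of that release; the user object passed at registration time arrives as the second argument:

import oracle.jdbc.OracleConnection;
import oracle.jdbc.pool.OracleConnectionCacheCallback;

public class MyCacheCallback implements OracleConnectionCacheCallback {
    public boolean handleAbandonedConnection(OracleConnection conn, Object userObject) {
        System.out.println("abandoned connection seen by " + userObject);
        return true; // returning true lets the cache reclaim the connection
    }

    public void releaseConnection(OracleConnection conn, Object userObject) {
        System.out.println("connection released by " + userObject);
    }
}

Registration is done per connection; if I recall the API correctly, it looks roughly like ((OracleConnection) conn).registerConnectionCacheCallback(new MyCacheCallback(), "order-entry", OracleConnection.ALL_CONNECTION_CALLBACKS).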

It supports user-defined connection attributes that determine which connections are retrieved from the cache. The attributes are name-value pairs and are not validated by the implicit connection cache.
Two methods can retrieve connections based on these attributes:

getConnection(java.util.Properties cachedConnectionAttributes)
getConnection(java.lang.String user, java.lang.String passwd, java.util.Properties cachedConnectionAttributes)
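A minimal sketch of attribute-based retrieval; the attribute name and value (TXN_TYPE/reporting) are invented, which is fine because the ICC does not validate them:

import java.sql.Connection;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.pool.OracleDataSource;

public class IccAttributes {
    public static void main(String[] args) throws Exception {
        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//dbhost:1521/orcl"); // placeholder
        ods.setUser("scott");
        ods.setPassword("tiger");
        ods.setConnectionCachingEnabled(true);

        Properties attrs = new Properties();
        attrs.setProperty("TXN_TYPE", "reporting");

        // No connection carries these attributes yet, so the cache hands out a
        // new one; tag it before closing so later requests can be matched to it.
        Connection conn = ods.getConnection(attrs);
        ((OracleConnection) conn).applyConnectionAttributes(attrs);
        conn.close(); // goes back to the cache carrying the attributes

        // A later request with the same attributes is matched to that connection.
        Connection again = ods.getConnection(attrs);
        again.close();
    }
}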

Thursday, 1 November 2012

What is SSD? How to get Good Performance from SSD?



Memory-based disks, or solid state disks (SSDs), are based on the concept of RAM disks, but they are considerably more stable. An SSD is very similar to a standard disk drive and, for most practical purposes, behaves like one: to the host system, an SSD is a disk drive. But an SSD does not store data on magnetic disk media; instead, it stores data on high density arrays of high speed DRAM chips. This eliminates the inherent mechanical delays of spinning a hard disk and positioning the read/write heads to execute an I/O request, and by eliminating those latencies, SSDs achieve access times much faster than conventional disk drives.

With most vendors, SSD performance is fast and reliable. An SSD has an integral battery-powered hard disk drive and associated software that continuously back up its contents; at any moment, typically 81 percent of the data on the SSD is already backed up to the hard disk. During a power failure, the batteries maintain power long enough to back up the rest. In some implementations, backing up the contents onto disk is handled at the hardware level, further enhancing performance and reliability.

For database applications, SSDs provide a viable option for enhancing performance by eliminating variable seek times without compromising availability. Most vendors implement custom versions of the SSD concept; for instance, Sun Microsystems has PrestoServe, a high speed static memory-based storage medium backed up by lithium batteries. In typical Oracle implementations, small but heavily accessed files, such as online redo logs and undo datafiles, can be placed on SSDs, as in the sketch below.
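A minimal sketch of that placement, assuming a hypothetical SSD-backed mount point /ssd and a user with the ALTER DATABASE privilege; the group number, file path, and size are illustrative only:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RedoOnSsd {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "dba_user", "password"); // placeholders
        Statement stmt = conn.createStatement();
        // Add an online redo log group whose member lives on the SSD mount.
        stmt.execute("ALTER DATABASE ADD LOGFILE GROUP 4 ('/ssd/redo04a.log') SIZE 512M");
        stmt.close();
        conn.close();
    }
}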

Most implementations of SSDs incorporate highly resilient fault monitoring during regular operation, including continuous header checking and data-retention monitoring. However, talk to your vendor's technical personnel and confirm that such checks really are continuous in your case. Also arrange for the vendor's technical personnel to visit your site and perform data-integrity checks at regular intervals, at least every three or four months; if possible, purchase tools from the vendor so you can run such tests in house, more often if necessary. For more assistance with performance tuning, kindly check our remote DBA support services or contact us directly.