How to tune your Oracle database's performance
Handy 101 explainer for DBAs
It's the call that every Oracle database admin (DBA) dreads: the "too slow" call. Users, or their managers, may argue that online performance is so bad that it stops staff from doing their jobs. Batch jobs such as billing runs may run too slowly. Customers may be walking away from an Oracle-driven ecommerce site because pages are taking too long to refresh.
If your database performance isn’t passing muster, there are things that you can do to make it better. This is a handy guide to possible problems, and how to fix them.
Oracle has its own performance improvement model: an incremental process designed to test for different performance problems, and then measure the results. To use the model properly, an Oracle DBA will need benchmarks to measure the improvement against.
Setting performance targets for the database is crucial if a DBA is to understand what to aim for. Expressing them in easily understandable terms (such as the number of customer interactions served per second, or the length of time taken for a billing run) will give the DBA a suitable target.
Similarly, you have to know where you're coming from in order to know where you're going. Establishing firm baselines for existing performance is an important step. Measure database performance at both peak and off-peak times, to get a sense of current capabilities.
Once you have established these parameters, you'll be able to begin testing for some of the common performance problems often found in Oracle databases.
To make performance tuning easier, it's often useful to talk to the database, and ask it what it's doing within a user session. It will be in one of three states: idle, processing, or waiting. You can query the database's current state using a view called V$SESSION. This can give you some clues about where the problem lies.
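As a sketch, this is the sort of query you might run (assuming you have privileges to read the dynamic performance views):

```sql
-- Summarise what user sessions are doing right now: are they idle
-- (INACTIVE), or active -- and if active, what class of event are
-- they waiting on?
SELECT status, wait_class, COUNT(*) AS sessions
FROM   v$session
WHERE  type = 'USER'
GROUP  BY status, wait_class
ORDER  BY sessions DESC;
```

A pile of INACTIVE sessions while users complain of slowness points at the application; ACTIVE sessions stuck in the 'User I/O' wait class point at the database's storage.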
For example, if an application's users are complaining that it's slow, but you notice that the database is idle for large periods of time during that application's sessions, then the database simply has no work to do, possibly because the application itself is having performance issues.
Another useful tool is Oracle's Automatic Database Diagnostic Monitor, which is self-diagnostic software designed to highlight performance and other issues in the database. This explores data stored in the Automatic Workload Repository (AWR), which takes a regular snapshot of performance statistics from the database.
Toad for Oracle, one of the more popular – if not the most popular – third party DBA tools for Oracle, has an entire section dedicated to diagnostics, called Spotlight on Oracle. This provides real-time diagnostics in visual form, making them easy to digest. It is good for pinpointing and resolving bottlenecks, and for peering into database processes.
Whether or not you use tools such as these, there are some common problems that plague Oracle databases. Here are some things to try, which may help streamline performance:
Tune your SQL
Poorly written SQL queries can take their toll on database performance. It's possible to tune SQL manually, but Oracle also includes tools to help you with this. Its query optimizer includes an automatic SQL tuning facility, accessed via the SQL Tuning Advisor.
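As a hedged sketch, the SQL Tuning Advisor can be driven from PL/SQL via the DBMS_SQLTUNE package (the SQL_ID and task name below are placeholders for a statement you've already identified):

```sql
-- Create and run a tuning task for one SQL statement.
DECLARE
  l_task_name VARCHAR2(128);
BEGIN
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                   sql_id    => 'abc123def4567',   -- placeholder SQL_ID
                   task_name => 'tune_slow_stmt');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'tune_slow_stmt');
END;
/

-- Read the advisor's recommendations:
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_slow_stmt') FROM dual;
```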
Toad for Oracle has several SQL tuning features, including the Quest SQL Optimizer, and it lets you access that, along with Oracle's SQL Tuning Advisor, from its dashboard. The SQL Optimizer analyses SQL or PL/SQL statements that are already running, and explores options for improving performance. It will also optimise your indexes.
V$SESSION can also give you insights into what the database is doing. Its SQL_ID column shows you the ID of the SQL statement each session is currently running (and the PREV_SQL_ID column shows the one before that).
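A minimal sketch of putting SQL_ID to work, joining to V$SQL to see the statement text and how expensive it has been so far:

```sql
-- Which statements are active user sessions running, and what have
-- those statements cost to date?
SELECT s.sid, s.sql_id, q.executions, q.elapsed_time, q.sql_text
FROM   v$session s
JOIN   v$sql q ON  q.sql_id       = s.sql_id
              AND  q.child_number = s.sql_child_number
WHERE  s.type   = 'USER'
AND    s.status = 'ACTIVE';
```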
Be selective here. The SQL statements to look for are high-load ones, which are good candidates to run through the SQL tuning engine. The ADDM can help you to find these.
Improve table management
How you manage your tables can have a big effect on the performance of your Oracle database. Table partitioning is a method generally used to simplify tables in large, complex databases, by splitting them into smaller, more manageable pieces. But partitioning can also have a positive effect on database performance. The Oracle database recognizes partitions and optimizes SQL statements to access only the partitions relevant to the query, in a process known as 'partition pruning'.
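A sketch of range partitioning (the orders table and its columns are hypothetical):

```sql
-- Partition a large table by date so that date-bounded queries only
-- touch the relevant partitions.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p2014 VALUES LESS THAN (DATE '2015-01-01'),
  PARTITION p2015 VALUES LESS THAN (DATE '2016-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- Partition pruning means this query can read partition p2015 alone:
SELECT SUM(amount)
FROM   orders
WHERE  order_date BETWEEN DATE '2015-01-01' AND DATE '2015-06-30';
```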
Also, keep a lookout for full table scans, in which an SQL query reads every single block in a table. While full table scans can be useful in some batch-style applications, such as decision support, they can bring online transaction processing-focused databases to their knees. In many cases, a SQL statement may be running a full table scan when it could just as easily use a far faster index-based search.
This applies to small tables, but even more so to long tables. Long tables are not cached, and they have a high-water mark greater than five blocks. The high-water mark is the highest block that has ever contained data, and it matters because a full table scan will read every block up to the high-water mark – even if many of those blocks no longer contain rows.
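To check whether a statement is doing a full scan, one approach is EXPLAIN PLAN (the table and predicate here are placeholders):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM orders WHERE customer_id = 42;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- A 'TABLE ACCESS FULL' line in the plan output is the tell-tale;
-- with a suitable index on customer_id you would hope to see an
-- INDEX RANGE SCAN instead.
```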
Tablespace management choices can also affect database performance. In particular, if you see lots of recursive SQL statements, it could be a sign that the system is spending too long on space management activities, such as allocating extents (an extent being a contiguous collection of data blocks, allocated for storing some specific information).
If you're seeing lots of recursive SQL executed by SYS (the built-in account, which owns the Oracle data dictionary), then it's a sign that this extent allocation may be taking up too much resource at the data dictionary level. Switching to locally managed tablespaces, in which extent allocation is tracked in bitmaps in the datafiles themselves, may help to solve the problem.
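A sketch of creating a locally managed tablespace (the file path and sizes are placeholders):

```sql
-- Extent allocation is tracked in bitmaps in the datafile itself,
-- rather than via recursive SQL against the data dictionary.
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL
  UNIFORM SIZE 1M;
```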
Make sure that your connections are stateful
Your database might be performing well, but what if your application isn't? When they access a database, applications have to make connections. The problem emerges when they make and drop a connection each time they access a table. Connections should be maintained between accesses, but sometimes – especially if an intermediary middleware program is used – this is overlooked. It has a catastrophic effect on performance, so finding and correcting an application that behaves this way represents a huge win.
Reduce serialization where necessary
Serialization happens when transactions are forced to execute one after another, as though no other users were modifying the database, rather than running concurrently. A little serialization might not be a bad thing, but too much will clog your system. If lots of data blocks are being serialized in the buffer cache, it might be a sign that the database needs more rollback segments relative to the number of users, so that it can support multiple changes to the database at once.
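One place to look for this kind of contention is V$WAITSTAT, which breaks buffer waits down by block class:

```sql
-- Waits on undo (rollback) header blocks suggest too few undo
-- segments for the number of concurrent writers.
SELECT class, count, time
FROM   v$waitstat
WHERE  class LIKE 'undo%'
ORDER  BY time DESC;
```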
Design your database I/O properly
If your application isn't causing problems, maybe your disk-based storage is. How you spread your datafiles across different physical disks can have an important effect on performance. A common mistake is to concentrate on disk space when configuring your disk architecture, rather than looking at I/O bandwidth.
You can find some important clues here in the V$SESSION_EVENT view. If a session is waiting for a disk to send data to the buffer, it'll show up as either "db file sequential read" or "db file scattered read" (depending on whether you're running an index-based read or a full table scan). If you're seeing a lot of these, check your I/O bandwidth.
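A sketch of checking this instance-wide via V$SYSTEM_EVENT (the per-session equivalents live in V$SESSION_EVENT):

```sql
-- How much time has the instance spent on single-block (index) reads
-- versus multi-block (full-scan) reads?
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('db file sequential read', 'db file scattered read')
ORDER  BY time_waited DESC;
```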
Check your statistics
Migration errors are a common cause of poor database performance for Oracle DBAs. If a table schema doesn't make it from a development environment to a production environment, or if it isn't transferred properly from an older version of the database, then indexes and statistics – the information about key database parameters – can go missing.
If this happens, then it can be difficult for the database to create well-performing execution plans (these are the various methods of getting data out of the database, based on the structure of an SQL query).
You can avoid this by exporting the schema statistics, using Oracle's DBMS_STATS package. This gives you the ability to review, import, and export schema statistics for a database, so that the query optimizer can do its work properly.
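A hedged sketch of the round trip (the schema and statistics-table names are placeholders):

```sql
-- In the source database: stage the schema statistics in a regular
-- table that can be exported and imported like any other data.
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'STATS_STAGE');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_STAGE');
END;
/

-- In the target database, after transporting the STATS_STAGE table:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_STAGE');
END;
/
```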
Ultimately, optimizing Oracle database performance is a mixture of trial, error, and some informed decision making with the help of its built-in tools. There are also other, third party tools to help shift the balance even further towards informed action. Dell's Toad includes PL/SQL profiling, session information and performance metric display, and the ability to test PL/SQL performance under simulated user loads. It will also help to identify the heaviest SQL loads on the system.
Armed with these tools and techniques, you should be able to stay ahead of the curve when it comes to Oracle performance. Regularly tuning the database should keep it meeting predefined performance targets, and keep those panicky user phone calls at bay.
The Register is running a series of Oracle DBA workshop articles in association with Dell Software. Interested in learning more? Check out Dell whitepapers.