How not to let your Oracle database spin out of control
Growing pains? Who needs 'em!
Oracle databases are a lot like children. They toddle along quite nicely, being polite and well behaved, until suddenly, they hit a massive growth spurt. They become teenagers and can make their administrators' lives a living hell. They double in size overnight, sprout schemas in weird places, and everything gets a lot more complex. They need a lot more intensive management. The database administrator becomes like a parent, trying to keep a monster under control.
If your business is doing well (hey! lots of customer orders) or, in some cases, even if it isn't (the product is awful and customer support tickets treble), the chances are that the database is going to get bigger over time. You may not see the massive growth spurts that most teenagers encounter, but your database is still going to need some steering. Here's how to be prepared.
It's important to understand how the database size is increasing. There are various ways to do that. One is to use a tool to monitor database statistics on a regular basis. Look for issues such as memory overuse, I/O constraints, and bottlenecks in CPU usage. You can use the database monitoring screen in a system like Toad for Oracle for this. One common tactic used by savvy DBAs is to monitor tablespace sizes in Oracle.
You can do this using the statistics snapshot feature inside the database's Automatic Workload Repository (AWR). That'll give you the statistics you need to monitor tablespace growth over time. You can access them from inside a script using the DBA_HIST_SEG_STAT view. You can also use the historical WRI$_OPTSTAT_TAB_HISTORY table to track table size, and V$DATAFILE queries for database size.
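As a rough sketch, queries along these lines report current tablespace usage and segment growth over AWR snapshots. The view names are standard Oracle data-dictionary objects, but note that querying the DBA_HIST_* views requires the Diagnostics Pack licence, and your exact joins may vary:

```sql
-- Current allocation vs free space, per tablespace
SELECT df.tablespace_name,
       df.mb          AS allocated_mb,
       NVL(fs.mb, 0)  AS free_mb
FROM  (SELECT tablespace_name, ROUND(SUM(bytes)/1048576) mb
       FROM   dba_data_files GROUP BY tablespace_name) df
LEFT JOIN
      (SELECT tablespace_name, ROUND(SUM(bytes)/1048576) mb
       FROM   dba_free_space GROUP BY tablespace_name) fs
  ON df.tablespace_name = fs.tablespace_name
ORDER BY df.mb DESC;

-- Segment growth across AWR snapshots (Diagnostics Pack required)
SELECT o.owner,
       o.object_name,
       sn.end_interval_time,
       ROUND(s.space_used_total/1048576) AS used_mb
FROM   dba_hist_seg_stat     s
JOIN   dba_hist_seg_stat_obj o  ON o.obj#    = s.obj#
                               AND o.dataobj# = s.dataobj#
JOIN   dba_hist_snapshot     sn ON sn.snap_id = s.snap_id
ORDER  BY o.owner, o.object_name, sn.end_interval_time;
```

Run on a schedule, the second query gives you a time series per segment, which is what you need to spot the fast growers before they become a problem.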
Those tables are going to get pretty big after a while, and that will cause problems. You'll start seeing marked decreases in performance on full table scans (where every block of a table is read). On very large tables, statements such as UPDATE can put the database under severe strain by generating large volumes of undo and redo.
When you write to a database table, Oracle records the changes in the redo log buffer, holding them there until they are flushed to the online redo logs: the files that let you replay those transactions in the event of a database problem. But if you're running a large parallel UPDATE, say, that buffer can fill faster than it drains, and sessions will start stalling while they wait for space in it.
Once you have identified your largest and fastest-growing tablespaces (and the largest may not be the fastest growing), you can go about selectively breaking them apart to make them more manageable. That's where partitioning comes in.
Partitioning breaks a table up into several smaller pieces, and you can do it in four ways:
Lists This breaks the table into partitions based on discrete values in a column you select. An example would be breaking up a sales table by country, and then perhaps by sub-region.
Ranges This partitioning method breaks up the table according to a range of values, with time-based series such as dates being the most obvious.
Hashes Hash partitioning spreads rows roughly evenly across a set number of partitions, which is useful when the size disparity between ranges or lists would be too great.
Composites These partitions break up a tablespace by one method first, and then subsequently by another. For example, you might partition a sales table by month first (range) and then by product category (list).
You can also partition table indexes, to make those even faster to search.
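To make the options concrete, here is a sketch of the composite scheme described above: a hypothetical sales table range-partitioned by month and list-subpartitioned by country, plus a local index that is partitioned along with it. Table, column, and partition names are illustrative:

```sql
CREATE TABLE sales (
    sale_id    NUMBER,
    sale_date  DATE,
    country    VARCHAR2(2),
    amount     NUMBER(12,2)
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY LIST (country)
SUBPARTITION TEMPLATE (
    SUBPARTITION uk    VALUES ('GB'),
    SUBPARTITION us    VALUES ('US'),
    SUBPARTITION other VALUES (DEFAULT)
)
(
    PARTITION sales_2015_01 VALUES LESS THAN (DATE '2015-02-01'),
    PARTITION sales_2015_02 VALUES LESS THAN (DATE '2015-03-01')
);

-- A LOCAL index is split into one index partition per table partition
CREATE INDEX sales_date_ix ON sales (sale_date) LOCAL;
```

Queries that filter on sale_date can then prune down to a single month's partition rather than scanning the whole table.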
Dell's Toad, which is probably the most popular third-party Oracle DBA tool, helps you to keep track of your partitions from within its Schema Browser. It indicates whether a table is partitioned, and lets you inspect each partition from a dropdown menu.
Information lifecycle management
Partitions should help you to make your database faster and easier to manage from the start, but you can use them to take further measures. For example, placing them across different physical disk spindles and I/O controllers should help to improve performance by reducing read bottlenecks, although this is likely to yield better results for hash partitions, where rows aren't grouped together in the same partition by a commonly searchable key.
You can also use partitions to manage the lifecycle of that information more effectively. For example, if you range partition by date, producing a partition for your sales per month means that you can begin to move your older partitions to lower-tiered storage (slower disk drives) over time. That can help to rein in your database size.
When moved to lower-tiered storage, this data is seen as getting 'colder': it's still accessible, but that access will be slower. It can also be compressed more aggressively, on the basis that it won't be read as often, in another activity known as compression tiering.
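For a hypothetical sales table partitioned by month, moving a cold month onto slower storage and compressing it in one pass looks something like this. The tablespace name is an assumption (a tablespace called COLD_TS on cheaper disk would need to exist already), and the ROW STORE COMPRESS ADVANCED clause requires the Advanced Compression option:

```sql
ALTER TABLE sales MOVE PARTITION sales_2015_01
      TABLESPACE cold_ts
      ROW STORE COMPRESS ADVANCED;

-- Moving a partition invalidates its local index partition,
-- so rebuild it (here alongside the data, on the cold tier)
ALTER INDEX sales_date_ix REBUILD PARTITION sales_2015_01
      TABLESPACE cold_ts;
```

Because only the one partition moves, the current month's data stays on fast storage and untouched.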
A new feature in Oracle 12c, Automatic Data Optimisation, lets you set up compression policies to handle this tiering automatically.
All of this assumes that your partitions are indeed being accessed relatively infrequently. You can check those assumptions about how often partitions are accessed using Heat Map, another new 12c feature that lets you scan visually for the most-accessed data.
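Heat Map has to be switched on before ADO policies will do anything. A minimal sketch, assuming a hypothetical sales table and an illustrative 90-day threshold (both features need Enterprise Edition, and ADO sits under the Advanced Compression licence):

```sql
-- Start tracking segment- and block-level access patterns
ALTER SYSTEM SET heat_map = ON SCOPE = BOTH;

-- Compress any partition that has gone 90 days without being accessed
ALTER TABLE sales ILM ADD POLICY
      ROW STORE COMPRESS ADVANCED SEGMENT
      AFTER 90 DAYS OF NO ACCESS;
```

From then on the database evaluates the policy in its maintenance windows and compresses qualifying partitions without the DBA lifting a finger.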
Effective management and optimisation
After structuring your database properly, you can take still more measures to keep it in line. One of them is memory management, which can suffer when dealing with large databases and complex queries. Enabling Automatic Memory Management (AMM) lets Oracle balance the SGA and PGA for you, rather than you tuning each memory pool by hand.
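Switching AMM on is a matter of setting two initialisation parameters; the sizes below are purely illustrative and should be matched to your server. MEMORY_MAX_TARGET is a static parameter, so both changes go to the spfile and only take effect after a restart:

```sql
ALTER SYSTEM SET memory_max_target = 8G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 6G SCOPE = SPFILE;
```

With MEMORY_TARGET set, Oracle shifts memory between the SGA and PGA as the workload demands, up to the MEMORY_MAX_TARGET ceiling.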
More broadly, though, a 'single pane of glass' system will help to give you visibility into key areas of database performance, such as transaction times. Think about a dedicated third-party tool for this, such as Toad, which can present a range of real-time performance feeds in an easy-to-read visual format.
That's the thing about databases that continue to grow: even after you've set them up for success, applying proper partitioning and ILM practices, they need constant nurturing and monitoring.
There's always the danger that a database may begin to behave unexpectedly if different underlying aspects (such as tablespace sizes) grow faster than you expected. So a watchful eye and a powerful toolset are an important complement to robust architecture and design. ®