Original URL: https://www.theregister.com/2007/11/19/database_performance_management_soa/

Tune your service oriented applications with SQL

Bottleneck, meet database

By Dan Clarke

Posted in Channel, 19th November 2007 17:02 GMT

Performance is one of the more insidious problems a developer can face. We never seem to have time for it, with all the work of just getting an application functionally correct.

The problem is that performance is a non-functional requirement, and as with all non-functional requirements the specification is often either vague or non-existent. It can be difficult to test performance until late in the development cycle and - trust me when I say this - late in the development cycle is not a good place to discover performance problems.

With Oracle fresh in our minds thanks to OpenWorld, I want to look at the performance problem with regard to the database. Profiling and algorithmic complexity are - of course - important, but as the world becomes more distributed and service oriented we should consider if our performance dollars can be better spent improving our SQL queries. Does this sound crazy?

According to author and industry expert Gavin Powell, order-of-magnitude application performance increases can be obtained simply by tuning the SQL and correctly optimizing the data access path.

This is an incredible figure, and it raises the question: can developers afford to keep leaving SQL expertise to the DBA? After all, if such massive gains are regularly possible, then doesn't that mean a lot of developers are producing pretty questionable SQL? And we can't argue that it doesn't matter: scaling up the database is certainly not cheap.

Making matters worse, in our connected world performance bottlenecks are far more likely to be discovered. In the past, barriers to integration such as obscure APIs and thick clients kept information within the application, and only the most stubborn external aggregators could access it.

These days, web services and service oriented architectures (SOAs) mean we are no longer protected from scalability and performance issues. Increasing aggregation of information is pushing demands on software service infrastructure and any bottlenecks in the system will be exposed sooner or later. If the system grinds to a halt then the chances are it could spell project failure - and that does not look good on the CV!

We need to accept that application performance needs to be explicitly managed. It's good that we implement use cases and write tests that demonstrate functional correctness. Unfortunately, functional tests that use minimally populated databases do not tell us anything about the performance characteristics of a system under real world data volumes with parallel processes making simultaneous demands on the database.

If the requirements process does not provide performance information, then an alternative is to perform tests with different loads and different-sized datasets. The goal is to identify areas where performance should be investigated and to make statements like “response in 500ms when there are 10,000 providers and 1,000 simultaneous users”.
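
Generating realistic volumes for such tests need not be laborious. As a minimal sketch - assuming a hypothetical providers table, and using Oracle's CONNECT BY LEVEL row-generator idiom since Oracle is on our minds - a single statement can populate 10,000 rows:

    -- Hypothetical table for load testing.
    CREATE TABLE providers (
        id    NUMBER PRIMARY KEY,
        name  VARCHAR2(100)
    );

    -- Oracle row-generator idiom: CONNECT BY LEVEL against DUAL
    -- produces one row per level, here 10,000 of them.
    INSERT INTO providers (id, name)
    SELECT level, 'Provider ' || level
    FROM dual
    CONNECT BY level <= 10000;

    COMMIT;

Bump the limit up by a factor of ten or a hundred and rerun the test suite, and you quickly learn which queries degrade gracefully and which do not.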

Once the performance characteristics of the system have been established, we can set about addressing the problems. Traditionally, profiling is used to identify where the CPU cycles are spent; here I suggest that the low-hanging fruit may be the SQL.

If most of the response time is being spent in the database then start by considering the following.

Is the data model appropriate for the application's needs? If the model is over-normalized or otherwise inappropriate, then more costly relational joins must be used, rendering the SQL more complicated and more expensive to process. It's not a hard and fast rule, but chances are that if you have to do complex joins to recover basic information then you should look at your data model.
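
To make that concrete, here is a hypothetical, aggressively normalized schema - the customers, customer_contacts, contacts, and phone_numbers tables are invented for illustration - in which a customer's phone number lives three joins away:

    -- Over-normalized: fetching one phone number needs three joins.
    SELECT p.phone_number
    FROM customers c
    JOIN customer_contacts cc ON cc.customer_id = c.customer_id
    JOIN contacts ct          ON ct.contact_id  = cc.contact_id
    JOIN phone_numbers p      ON p.contact_id   = ct.contact_id
    WHERE c.customer_id = 42;

    -- After folding the primary phone number back into the
    -- customers table, the same information is a single-table read.
    SELECT phone_number
    FROM customers
    WHERE customer_id = 42;

The names are made up; the point is the shape of the query, not the schema itself.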

How much of the information brought back by a select is actually used? A full table scan of a large data set will hamstring the database, and it's a real waste of both network traffic and database performance if the functional code uses only a small part of the data brought back. WHERE clauses should reference indexed columns as much as possible; this may even allow the database to compute a result set using only information it already holds in memory.
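
As a minimal sketch - again on a hypothetical orders table - selecting only the columns you need and filtering on indexed columns gives the optimizer the option of answering the query from the index alone:

    -- Composite index covering both the filter and the selected
    -- columns, so the query can be satisfied without touching
    -- the table at all.
    CREATE INDEX idx_orders_cust ON orders (customer_id, status, order_id);

    -- Instead of SELECT * and filtering in application code:
    SELECT order_id, status
    FROM orders
    WHERE customer_id = 42
      AND status = 'OPEN';

Whether the optimizer actually takes that path depends on its statistics and the data distribution, which is where explain comes in.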

If a query can be identified as problematic, then the explain facility - EXPLAIN PLAN in Oracle, with equivalents in many other databases - allows its execution to be evaluated. The output of explain shows the access path the database will use and where it expects to spend its time while processing the query, giving developers the feedback they need to restructure it and perhaps achieve more efficient execution.
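
In Oracle the incantation looks like this: EXPLAIN PLAN populates a plan table without actually running the query, and DBMS_XPLAN.DISPLAY formats the result for reading.

    -- Ask Oracle how it would execute the query (nothing runs yet).
    EXPLAIN PLAN FOR
    SELECT order_id, status
    FROM orders
    WHERE customer_id = 42;

    -- Format and display the plan just produced.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

An INDEX RANGE SCAN in the output is usually good news; a TABLE ACCESS FULL on a large table is the flag to start asking questions.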

If the functionality is badly specified then this sort of engineering can be a losing battle. I always think of this when search engines return thousands of results: if I don't find what I want in the first few pages I try another search. For example, a search for hotels in London on Expedia takes about 10 seconds and returns 27 pages. An alternative interface and use case could have resulted in less work for their servers and better information for me.

So, do you know the difference between an inner and an outer join? Does your stomach turn when you see regular expressions doing work that a WHERE clause should? Can you look at two queries that return the same information and know which the database will execute more efficiently? If not, then maybe brushing up your SQL skills could save a fortune on the deployment cost of your next application.
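
On that first question, a minimal refresher using the hypothetical customers and orders tables from earlier: an inner join returns only the rows that match in both tables, while an outer join keeps the unmatched rows from one side, padded with NULLs.

    -- Inner join: only customers who have at least one order.
    SELECT c.customer_id, o.order_id
    FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id;

    -- Left outer join: every customer, with NULL order columns
    -- for those who have never ordered.
    SELECT c.customer_id, o.order_id
    FROM customers c
    LEFT OUTER JOIN orders o ON o.customer_id = c.customer_id;

If the difference between those two surprised you, the revision will pay for itself.®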