My struggles in understanding and learning about Object-Oriented design, and the tools and knowledge I've taken from them.

Wednesday, November 14, 2012

5 Steps To Building An Optimized Application

Anyone who has built a database-backed application used by more than a dozen people has probably encountered this situation: the database is built and normalized, the code is impeccable (or at least reasonably free of defects), and the application meets business requirements... BUT IT'S TOO SLOW! Unbelievably slow. The phone calls start coming in. You get emails from disgruntled users, bewildered teammates, and a boss who's about 5 minutes from kicking you to the curb.

Your palms sweat, your heart races, you think about how much of your current lifestyle can be sustained on unemployment benefits, and you wonder how this could possibly happen after all the blood, sweat, tears, and considerable brain power you put into building the masterpiece that has now been rendered sludgy. You doubt yourself and your skills, and reconsider how smart you really are. You move forward in the diagnostic process, but the voices in the back of your head have already started telling you that you don't have the chops to do this job well, and that you have no chance of fixing this.

Problems like these are amplified if the application is used during off-hours, while the programming or database professional should be sleeping (but is perhaps engaged in an online gaming session instead). That pager goes off, or that phone rings, and any chance you had of getting a decent night's sleep goes out the window, along with your users' confidence in your skills. One thing is clear: this will not look good on your annual performance review.

It's enough to make anyone run screaming for the hills, build a small cottage, and keep bees for the rest of their days. But there is a solution, and like anything else, it requires discipline, rigor, and foresight.

Before I go into my 5 big reasons applications have performance issues, I want to say something about premature optimization. Premature optimization is an anti-pattern: combing through code or design and changing it before the code is even done. Don't do it. All premature optimization will net you is obscure design and confusing code. Build a quality application first. Learn the code and the database, understand them thoroughly as they were designed, know how the system performs under optimal and typical circumstances, and then go back and find optimization opportunities.

Here are the big reasons I've found for the nightmare scenario where your impeccably beautiful code and database are lousy performers:

1. No system specs or baseline - If you don't know how fast a system should respond, how fast it responded yesterday, last week, and last month, and how fast it is responding today, then you're walking blindfolded into a gunfight. You have no idea how fast it should be going, and therefore no idea what it means to correct the problem. Even if your customer or project stakeholders can't provide details about how the system should perform, you can still establish a baseline and use common sense. Collect data about how long the important functions within your application take, along with how long database reads and modifications take; use SQL Profiler or its equivalent to get specific details. Understand which areas of your application are most frequently used, and which are presently the slowest. One strategy I use for collecting performance data is a Logger class (I've used log4net in the past, but now I use my own custom class) with a debug mechanism: wherever I want performance metrics, I put in debug statements that record timings. When all is well in the application and the metrics have been collected and analyzed, the system's debug flag is turned off. When the application starts to perform poorly, I flip the debug flag on to determine where the application is exceeding its baseline. At times I've found that a grumpy user was simply less patient than typical users; other times I've found that performance was way out of line with the baseline, and further diagnosis was needed.
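Here's a rough sketch of that kind of timing logger. The names (Logger, DebugEnabled, Timed) are illustrative, not my production class:

```csharp
using System;
using System.Diagnostics;

public static class Logger
{
    // Flip this on when the application starts to misbehave.
    public static bool DebugEnabled { get; set; }

    public static void Debug(string message)
    {
        if (DebugEnabled)
            Console.WriteLine("{0:u} DEBUG {1}", DateTime.Now, message);
    }

    // Times an operation and logs the elapsed milliseconds when debug is on.
    public static T Timed<T>(string operation, Func<T> work)
    {
        var stopwatch = Stopwatch.StartNew();
        T result = work();
        stopwatch.Stop();
        Debug(string.Format("{0} took {1} ms", operation, stopwatch.ElapsedMilliseconds));
        return result;
    }
}

// Usage: wrap the calls you want baseline numbers for, e.g.
// var orders = Logger.Timed("LoadOrders", () => repository.GetOrders(customerId));
```

Run it during normal operation to capture your baseline, and compare against those numbers when the complaints start rolling in.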

2. Too many database roundtrips - I build my code atomically, and I normalize my database as much as is reasonably appropriate. The combination of these two things tends to result in many database roundtrips, which can bring performance to a crawl even when the application isn't being used by lots of people. Depending on your database connection strategy, reasonable fixes to this problem may be fairly straightforward. But sometimes they're not. Sometimes you need to cache data, and sometimes you need to assign more responsibility to stored procedures. Sometimes you need to re-evaluate whether a collection of database modifications REALLY TRULY needs to be wrapped inside a transaction. Sometimes you need to create a service that manages database interaction for clients, as opposed to having each client talk to the database directly. (A small caching sketch follows at the end of this item.)

Another thing to think about with database roundtrips is your level of normalization. Well-normalized databases look beautiful, but if you're not careful, they can result in poorly performing code, especially if you're inclined to create lots of bridge tables, and then create tables that have foreign key references to those bridge tables. I used to think that was a good design, but performance problems forced me to rethink my cavalier use of bridge tables.
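As promised, here's a rough sketch of caching a small, rarely-changing lookup table so repeated reads cost one roundtrip instead of one per call. The names and the 30-minute expiry are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;

public static class StateCodeCache
{
    private static Dictionary<string, string> _states;
    private static DateTime _loadedAt;
    private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(30);

    public static string GetStateName(string code)
    {
        // Reload at most once per MaxAge; every other call is served from memory.
        if (_states == null || DateTime.Now - _loadedAt > MaxAge)
        {
            _states = LoadStatesFromDatabase();
            _loadedAt = DateTime.Now;
        }
        return _states[code];
    }

    private static Dictionary<string, string> LoadStatesFromDatabase()
    {
        // Placeholder for a single SELECT that pulls the whole lookup table.
        return new Dictionary<string, string> { { "NY", "New York" }, { "CA", "California" } };
    }
}
```

Note this sketch isn't thread-safe; in a multi-user application you'd want a lock or a proper caching facility around the reload.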

3. Lack of Indexes and Statistics in the database - Sometimes the problem is that simple. A missing index on a foreign key, or perhaps a missing clustered index, can really kill performance. If you don't understand indexes and statistics, please google them now. (FYI: I didn't use to believe that manually-created statistics could make a huge difference to database performance. It turned out I was wrong.) If you have no idea why indexes and statistics could impact your application's performance, you need to learn why...now. Also, consider index fragmentation when analyzing indexes; sometimes indexes are too fragmented. Sometimes the fill factor is worth analyzing as well. The fill factor is an index setting (with a server-wide default) that specifies how full each index page is left when the index is built or rebuilt. If you're working on an OLAP database (a data warehouse), a fill factor of 90-95% is fine. In a heavily used OLTP (production) database, consider going below 80% (I won't make a firm recommendation, because I've never been adequately convinced that going down to a 50% fill factor is useful, but some people advocate it). Of course, if you've got no indexes at all in your database, it's only a matter of time (and not much of it) before you start suffering performance problems, so map out your database indexes during the design phase. A good rule of thumb: always have a clustered index/primary key (some people don't advocate this on bridge/link tables, but that's another article), plus non-clustered indexes on foreign keys and on any column you'll use to look up data. I advocate surrogate primary keys over natural keys, and discourage the use of natural keys.
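When fragmentation is the suspect, a quick query against SQL Server's dynamic management views will point you at the worst offenders. Here's a rough sketch; the connection string and the 30% threshold are placeholder assumptions:

```csharp
using System;
using System.Data.SqlClient;

class FragmentationReport
{
    static void Main()
    {
        // Placeholder connection string; point it at your own database.
        const string connectionString = "Server=.;Database=MyDatabase;Integrated Security=true";
        const string sql = @"
            SELECT i.name, s.avg_fragmentation_in_percent
            FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
            JOIN sys.indexes i ON s.object_id = i.object_id AND s.index_id = i.index_id
            WHERE s.index_id > 0                         -- skip heaps
              AND s.avg_fragmentation_in_percent > 30    -- common rebuild threshold
            ORDER BY s.avg_fragmentation_in_percent DESC;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1:F1}% fragmented",
                        reader.GetString(0), reader.GetDouble(1));
            }
        }
    }
}
```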

4. Design - Sometimes your design isn't as good as you think it is. I use Linq as my primary means of database interaction, and I recently discovered the "Repository" design pattern. It's better than what I had been using, and it significantly improved performance in an application I was working on. Another example: a few weeks back I found that databinding a web ListBox was slow. I dug in, and eventually concluded that databinding ListBoxes is simply slow, no matter what you feed them. I replaced the ListBox with a DataGrid, and performance improved significantly. That problem goes to show that the bottleneck is not always in the database.
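For anyone unfamiliar with the pattern, here's a bare-bones sketch of a repository wrapping a LINQ query source. The Customer type and the IQueryable source are illustrative assumptions; in practice the source would be something like a LINQ to SQL DataContext's table:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string LastName { get; set; }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
    IList<Customer> GetByLastName(string lastName);
}

public class CustomerRepository : ICustomerRepository
{
    // In production this would come from a DataContext (e.g. context.Customers).
    private readonly IQueryable<Customer> _customers;

    public CustomerRepository(IQueryable<Customer> customers)
    {
        _customers = customers;
    }

    public Customer GetById(int id)
    {
        return _customers.SingleOrDefault(c => c.Id == id);
    }

    public IList<Customer> GetByLastName(string lastName)
    {
        // Queries live in one place, so they can be profiled and tuned
        // without touching every caller.
        return _customers.Where(c => c.LastName == lastName).ToList();
    }
}
```

The performance payoff is that every query sits behind one interface, so when the profiler flags a slow lookup, there's exactly one place to fix it.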

5. Lack of server capacity - This is something you should always be aware of before implementing. If the current server can't support lots of users, the application needs to be moved to a server, or distributed across servers, that can. But be careful not to be too quick to blame server capacity; I've seen professionals blame super-servers for performance issues that were, in my opinion, clearly not server-related. By all means, though, restart services or reboot before you panic too much.

By having a handle on your system's baseline, database roundtrips, indexes, design, and server capacity, you can avoid a lot of the pain that comes with being responsible for a system's performance. Being able to demonstrate which of these factors is causing a performance problem, or preferably, being able to demonstrate that none of them is the problem, is the best way to keep the problem-solving process moving along, and moreover, to instill confidence in your customers that you are the right person for the job. That confidence goes a long way in the high-stress situations that arise when your application is not performing.

Of course, other factors play a role in poor performance, like network issues, hardware failure, and other external forces, but these are my top 5, and the most important for software and database professionals to understand. If you have other good ones, feel free to comment.
