InfoWorld: Make performance and scalability testing continuous ... or else
Bad developer habits die hard. If you don't load test as you go, you're asking for a nasty surprise at the end of your project.
Scalability and performance are not the same thing. Developers used to say, "Hey, it works on my machine." Now they say, "Hey, it was fast on my machine." But it's different in production or under load. Unbelievably, most organizations still fail to do adequate performance or load testing, nor have they integrated performance and scalability testing into their development process.
This is insane. You don't build a bridge, then try to add load-bearing capabilities at the end of the project -- but most software projects try to do exactly that. Even projects that claim to be "agile" treat performance and scalability as the thing they push all the risk onto at the end. About the only thing worse than this silly behavior is going through the motions of "early optimization" for a routine or two and acting as if that meant something.
It doesn't have to be this way. You can write simple integration tests with basic performance assertions. The simplest would be to compare the epoch time at the beginning and the end of the routine, then check that the difference falls within expectations. Of course this can't be very precise because all but real-time operating systems have nondeterministic performance characteristics, but you usually know the thing should return in less than a few seconds.
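As a minimal sketch of that idea, here is a Python helper that runs a routine and asserts its elapsed time stays within a budget. The `fetch_report` routine and the two-second budget are hypothetical stand-ins; this uses `time.monotonic()` rather than raw epoch time, since a monotonic clock isn't affected by system clock adjustments.

```python
import time

def assert_fast_enough(fn, budget_seconds):
    """Run fn and fail if it exceeds the time budget."""
    start = time.monotonic()
    result = fn()
    elapsed = time.monotonic() - start
    assert elapsed < budget_seconds, (
        f"took {elapsed:.3f}s, budget was {budget_seconds}s")
    return result

def fetch_report():
    # Hypothetical routine under test; sleep stands in for real work.
    time.sleep(0.05)
    return "report"

# A generous bound: scheduling is nondeterministic, so this only
# catches gross regressions, not fine-grained slowdowns.
assert_fast_enough(fetch_report, budget_seconds=2.0)
```

Dropped into a continuously run integration suite, an assertion like this flags a performance regression the day it is introduced instead of at the end of the project.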
You can read the rest of my article "Make performance and scalability testing continuous ... or else" over at InfoWorld.
Still wondering "Which Freaking Database Should I Use?" You can hear my opinion on NoSQL and more at the Triangle Java Users Group February 18 at the Red Hat headquarters in Raleigh. To sign up and learn more click here.
If you have other inquiries about this article, scalability, or Open Software Integrators generally, contact firstname.lastname@example.org.