Our industry has learned that if we deliver intermediate results and revisit functional requirements, we can avoid delivering the wrong system late. We have learned that if we perform unit and functional tests on a regular basis, we will deliver systems with fewer bugs. Yet although we are concerned with the performance of our applications, we rarely test for performance until the application is nearly complete. Can the lessons of iterative, automated, and continuous testing that we've applied to functional requirements apply to performance as well?

Today, we may argue over whether a build that completes with unit testing should run on an hourly, daily, or weekly basis. We may argue over 100% coverage versus 50% coverage. We may argue and discuss and ponder the specific details of the process. But we all pretty much agree that performing automated builds, completed with unit testing, on a regular schedule is a best practice. Yet our arguments regarding performance testing tend to be limited to, don't...
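To make the parallel concrete, here is a minimal sketch (not from the article) of what folding a performance check into the same automated suite that already runs the functional unit tests might look like. The function under test, process_order(), and the 50 ms budget are hypothetical placeholders.

```python
# Hypothetical sketch: a timing budget expressed in the same unit-test style
# as a functional assertion, so it can run in the same scheduled build.
import time
import unittest


def process_order(order):
    # Placeholder for the real application code under test.
    return {"id": order["id"], "status": "processed"}


class OrderTests(unittest.TestCase):
    def test_functional(self):
        # The familiar functional assertion.
        result = process_order({"id": 42})
        self.assertEqual(result["status"], "processed")

    def test_performance_budget(self):
        # The same style of assertion applied to a timing budget, so a
        # performance regression surfaces in the regular build rather than
        # only when the application is nearly complete.
        start = time.perf_counter()
        process_order({"id": 42})
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 0.050, "exceeded the assumed 50 ms budget")


if __name__ == "__main__":
    unittest.main()
```

The point of the sketch is not the particular threshold; it is that a performance expectation can be checked automatically and continuously, just as a functional expectation is.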