Indeed, the evidence for the eight-hour day, five-day week has been around, and in practice, since 1926:
- Productivity varies over the course of the workday, with the greatest productivity occurring in the first four to six hours. After enough hours, productivity approaches zero; eventually it becomes negative.
- Productivity is hard to quantify for knowledge workers.
- Five-day weeks of eight-hour days maximize long-term output in every industry that has been studied over the past century. What makes us think that our industry is somehow exempt from this rule?
- At 60 hours per week, the loss of productivity caused by working longer hours overwhelms the extra hours worked within a couple of months.
- Continuous work reduces cognitive function by 25% for every 24 hours. Multiple consecutive all-nighters have a severe cumulative effect.
- Error rates climb with hours worked, and especially with loss of sleep. Eventually the odds catch up with you, and catastrophe occurs. When schedules are tight and budgets are big, is this a risk you can really afford to take?
When Henry Ford famously adopted a 40-hour work week in 1926, he was bitterly criticized by members of the National Association of Manufacturers. But his experiments, which he'd been conducting for at least 12 years, showed him clearly that cutting the workday from ten hours to eight hours — and the work week from six days to five days — increased total worker output and reduced production cost. Ford spoke glowingly of the social benefits of a shorter work week, couched firmly in terms of how increased time for consumption was good for everyone. But the core of his argument was that reduced shift length meant more output.

So what is it about all this that affects the software industry so much? Commonly, projects are planned on the flawed assumption that there is a fixed amount of work to be done - a common mistake known as the "lump-of-labour fallacy". Agile methodologies such as Scrum avoid making this assumption; while that doesn't eliminate the end-of-iteration crunch, it does cap the crunch time to a percentage of the iteration. Learning is often planned for inadequately or not at all, yet it can take up to 70% of the time to deliver a project (see "The Secret Sauce Of Software Development").
So, if we (as managers) know this is wrong, why does it keep happening? The author offers his viewpoint:
Managers decide to crunch because they want to be able to tell their bosses "I did everything I could." They crunch because they value the butts in the chairs more than the brains creating games. They crunch because they haven't really thought about the job being done or the people doing it. They crunch because they have learned only the importance of appearing to do their best, instead of really doing their best. And they crunch because, back when they were programmers or artists or testers or assistant producers or associate producers, that was the way they were taught to get things done.

Esther Derby has a different viewpoint - that is, we fail to plan for what could go wrong:
We go through stages of understanding the problem—we gather requirements, develop analysis models, and then design software solutions. We develop plans to build and deploy the solution. We come up with a well-ordered set of actions that will lead us logically and inevitably to the goal. And then we skip an important step. We don’t sit down and think about what could go wrong. We learn about weakness in our plan and design approach as we go. Discovering oversights by running into walls costs money, causes delays, and can compromise quality.

Inevitably, it seems that the factors that bring about crunch time are entirely human. What methods have readers used to combat the crunch-time phenomenon? Is it simply a human facet of engineering, or is it something that is wholly unnecessary?