Recently I had trouble with an order that I placed on Amazon. I emailed the seller to let them know that they only sent one of the two items that I had ordered. It wasn’t that big of a deal to me; I assumed that they would fix things eventually.
I got the standard automated response that they would be in touch soon to help me with my problem.
And then an hour later I received an email with profuse apologies for not getting back to me fast enough. That felt odd. An hour is not a long time to wait for the problem that I had. I figured that I’d be happy if they resolved my problem within a couple of weeks.
I’m sure there are people that would be incensed at waiting an hour, but not me. And there are plenty of people just like me that would be perfectly happy with a two-week turnaround time.
So, I ignored the email and went about my day. About an hour later I received another email from them. This time apologetic about my missing item. But, the email had an odd request, something I wasn’t expecting.
They asked me to take a picture of the product that I did receive, the package it came in, and the packing slip.
This was a $5 purchase!
They wanted me to take the time to take three separate pictures and attach them to an email before they could assess my situation further. By this time I had already thrown away the packaging and wasn’t about to dig it out of the trash.
I was flabbergasted. Why in the world would they ask me to do this? What exactly would the picture help them do? Help determine if I’m lying?
To me this was simply a matter of trust. They had all the information on their end to look up my order and verify it. So they either believe me and send me the missing item. Or, they need to call me a liar.
I followed up asking why they needed multiple pictures. Point-blank asked them if they thought I was lying. And told them that taking all these pictures and filling out the paperwork was a waste of my time. I’d rather just go on Amazon and order another one from another seller; it’s only $5.
We went back and forth over several emails. They insisted that they needed the pictures. Refused to explain why. They kept sending the same inquiry, time and again, almost as if a robot was on the other end. They never even acknowledged my question “why?”
Ultimately they closed my case without doing anything for me. All within a few hours of my initial complaint.
All along the way they made sure to send emails informing me that they would be in touch quickly. That I wouldn’t have to wait long.
It seems to me that they were more worried about how fast they could resolve my problem. And by resolve I mean how fast they could close my complaint.
Is expediency expected?
Common sense seems to suggest that people want responses quickly. And in some cases people do care about timeliness.
But there are many cases where people don’t care about the turnaround time. And regardless of the expected turnaround time, the one thing that people do want is to legitimately have their problem resolved.
People want people that care.
It’s tempting to fall into the trap of believing that speed matters universally. It doesn’t; it matters far less often than you would think.
How many times have you had a request for an urgent change to the software that you support? How many times do you get things updated quickly, only to find that the change goes unnoticed, unused?
What’s the last feature you developed where turnaround time was a deal breaker?
How many times have you sacrificed something to be expedient?
Turnaround == results?
Unfortunately, turnaround time is something that’s easy to measure. Results on the other hand aren’t. In many environments it’s tempting to measure turnaround time and use it as a substitute for measuring results. Results that are often intangible.
Turnaround time becomes a surrogate measure of success.
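The gap between the surrogate and the real result is easy to see in a toy sketch. The case data below is invented for illustration: a team that closes complaints quickly, whether or not anything was actually fixed, still posts a great turnaround number.

```python
from statistics import mean

# Invented support cases: (hours_to_close, problem_actually_resolved)
cases = [
    (3, False),  # closed fast, nothing fixed -- like the $5 order
    (4, False),
    (2, True),
    (5, False),
]

avg_turnaround = mean(hours for hours, _ in cases)
resolution_rate = sum(resolved for _, resolved in cases) / len(cases)

print(f"Average turnaround: {avg_turnaround:.1f} hours")  # 3.5 -- looks great
print(f"Actually resolved:  {resolution_rate:.0%}")       # 25% -- another story
```

The first number is what ends up on the wall; the second one rarely gets measured at all.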
It seems that the company I was working with measured turnaround time. It’s very possible that they treat many of their customers exactly the same way that they treated me. Expeditiously but without a care about the outcome. I’m sure they’ve got great turnaround time.
But obviously turnaround time doesn’t tell the whole story. According to their issue tracking system, they had amazing turnaround time resolving my problem.
How do you judge the success of your software development projects? What do you measure? What do you display in your radiators? What do you pat yourself on the back for?
Chances are you measure turnaround time. We see it extolled in many of the “modern” development practices. Just to name a few: velocity tracking, burn-down charts, story points, planning poker, sprint planning, time-boxing and continuous everything. It’s all about time, and often about minimizing time.
It’s even in the Agile manifesto: “Deliver working software frequently” and “working software is the primary measure of progress.” These two ideas combined are a recipe for negligence.
We may quickly develop software, and we may quickly release it as working software. But, what impact does that software have? Have we simply delivered working software, quickly, that doesn’t provide much value?
Are our customers stuck wondering why we don’t believe them and just ship them the $5 replacement?
Offsetting the Inadequacy
Now you might be thinking it’s still important to roll up turnaround time: to measure it on a case-by-case basis, aggregate the data, and use that as a metric to pat yourself on the back. And that you’ll be safe so long as you put other metrics in place to complement turnaround time.
Unfortunately, there aren’t many metrics that can tell you what you would need to know. What you need to know is if people are satisfied. It’s not easy to meaningfully measure how people feel.
You could ask customers how they feel. I’ve seen the automated emails that ask me if I was happy with an inquiry. A simple yes or no answer, often a link to click to reply. It’s easy to set this up.
But then a problem arises when you aggregate the responses. How do you roll up yes and no responses? How do you aggregate how many people feel, and tie it back to anything meaningful?
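A toy sketch, with invented numbers, shows how little a rolled-up yes/no rate tells you. Here “satisfaction” looks decent even though 90% of customers never answered at all:

```python
# Invented survey replies: 1 = "yes, satisfied", 0 = "no"
customers = 100                             # customers who were sent the survey
responses = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]  # only 10 bothered to reply

satisfaction = sum(responses) / len(responses)  # 60% -- of *respondents* only
response_rate = len(responses) / customers      # 10% -- the silent 90% vanish

print(f"Reported satisfaction: {satisfaction:.0%}")
print(f"Response rate: {response_rate:.0%}")
```

The rollup can’t say whether the silent 90% were content, indifferent, or already gone.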
A yes/no question often lacks context. And customers are often irrational. An unhappy customer isn’t necessarily unhappy because of something you did wrong. So you have to figure out how to incorporate this into your measurements.
On the flip side of things, some customers won’t tell you when they’re unhappy, or even happy. They may simply not respond to your request. Or they may be intimidated for whatever reason, not wanting to upset somebody, so they don’t reply honestly.
Last year I remember calling the phone company about a problem I had with my bill. I received a survey after the call. I wasn’t happy, so I gave low ratings.
Within minutes I had a phone call from a supervisor, asking me about the situation. That’s not necessarily a bad thing. But, what was problematic was that the supervisor told me that the support rep was in tears.
I felt bad. The support rep that I talked to couldn’t do anything about my problem. My rating had nothing to do with the support rep’s performance. It had to do with the fact that I was unhappy about what had happened with my bill. And I was also unhappy that I had to call multiple times to get the problem resolved.
I answered honestly and that backfired.
And if you think this—being told that you made someone cry—wouldn’t affect how most customers respond to surveys, you might want to think again. This type of interaction does affect how people give ratings in the future.
It makes me angry that a company would not be able to factor in that I was upset with the result and not the person that was trying to help.
So, another problem is how you react to these responses.
These problems are inherent with measuring and aggregating statistics about performance. It’s so often disconnected from reality that it’s useless at best. And more likely than not leading to undesirable consequences.
Measuring turnaround time is more likely to result in a support rep being berated for trying to help than it is to make a customer happy.
What else do you measure as an indication of progress in developing software? Is it sufficient to remove the problems inherent in measuring time, to close the gap between measuring time and judging results?
What’s the biggest project you’re working on right now? What’s the value of the software to users? To the business? To you? Do you know? How could you find out? Will measuring speed of delivery have an impact on this value?
You Get What You Aim For
No matter what industry you’re in, no matter what type of work you’re doing, you shouldn’t measure turnaround time. Don’t put it up on the wall, as tempting as that might be.
You can calculate it if you want, on a case-by-case basis, perhaps use it to find things that have been neglected. But when you’re dealing with human beings you need to understand what matters to individual human beings.
Turnaround time is often not that important. If you prioritize it, it’s what you’ll focus on. You’ll end up thinking you’re doing well when you’re probably not.
When you put it up on walls in front of everybody, you foster the mentality that speed is universally important. You’ll likely find yourself incentivizing fast turnaround time at the expense of results.
It’s far more worthwhile to develop the ability to understand what matters to individuals. To develop individual relationships with individual customers. If you focus on this, you’re much more likely to make your customers happy and be successful as a business.
In the process you’re going to find out that making this a reality requires decentralizing the responsibility, and authority, to understand what customers value.
No number on the wall will outperform the mentality that value is subjective. And in business, success is predicated upon creating things that people appreciate. Do the math.
When it comes to software, do you really care how long it takes to make? How fast you roll out features? Or would you rather know that people are satisfied with the software? Software that’s providing tremendous value to the organization. Value that you’re aware of, focused on, and working to maximize.
About the Author
As a consultant, Wes helps people eradicate emotional blind spots with technology and produce rapid results. His career has been a journey. He started out as a software developer. In working closely with customers, he realized there are many needs beyond the software itself that nobody was taking care of. Those are the needs he addresses today, whether or not technology is involved. Along the journey Wes has had a passion for sharing knowledge. With 15 courses and counting, he has helped thousands of people improve. He works with both Pluralsight and O’Reilly. He’s been a speaker at countless local meetups, community organizations, webinars and conferences. And he speaks professionally to help organizations improve.