Barry, I regret not seeing this sooner; I LIKED your response. And it forced me to look back at what I had said back then. (Sometimes I enjoy reading what I wrote about as much as watching myself on video, meaning not so much, but this time it was not too bad.)
What I don't understand is where I have supposedly counter-argued my own point. I gave a litany of examples of where you could look at performance, right after my personal ad-lib section on "what I would do with a survey, and is a customer also like me?" (where I ask how much you really get out of surveys if you don't get full participation in them). My litany of examples is about thinking inside an organization about what to do to improve or organize, and you can do that without conducting a "survey proper," a thing you send out, wait to come back, and compile. My examples can come from incoming complaints from end customers, and also from internal incidents that are reported. If you are counting incoming complaints as the voice of a customer who had an experience and calling that a "survey," then I catch your drift, but that is not a "survey" you send out; that's reacting to a problem that was reported.
What I would not want is for you to take from this that I think surveys mean nothing, which is why I mentioned that I didn't bring science to my answer. I have no doubt there is something I can learn from you, whether about voluntary outbound surveys or other ways to mine information.
At the same time, I am reminded of He Said/She Said instances when what a customer had to say did not reflect fact. For example, a customer can report that they did not receive something, while internally everything indicates the item was in fact delivered. A good customer-facing posture doesn't let you assume that what the customer identifies as truth doesn't match reality (we're not calling a customer a LIAR). But it's within the realm of possibility that the customer who reported non-receipt actually threw away the very thing he DID receive, and then called in to complain that he never got it. And this very thing has happened in my customer service experience.
What the customer reports is indicative of SOME kind of problem, but NOT necessarily one tied to my organization's performance. In a case like that, what the customer says about my service is not quite a "bible," whether he called in to complain or took the time to complete a survey for someone to compile. The "bible" is what really happened.
Even "garbage" data of any kind informs, and it can lead to an understanding of some greater problem: you could find out that a customer has employees who throw things away, or that the customer has bad rapport somewhere, or some other thing outside the unit of performance being observed. And there are ways to get past those episodes.
It would make sense to me that you would have expertise in how to WEED OUT factors like these, given your background. But I still haven't found where you begin to disagree with me. There is definitely a wealth of information to be had (or gleaned) from a customer in an infinite number of situations (including our peers, bosses, and employee team members). I get that on my worst day. What I don't see, even on my best day, is how something that is not true is worth factoring into a performance metric.
Thanks a lot, Barry.