After looking at last quarter's survey results, I started to get concerned about the validity of our high "customer satisfaction score." Although our responses have been fairly consistent, the 4% drop in the satisfaction rate between Q3 2008 and Q1 2009 got me a little worried. I started to re-examine the data, and it appears there are many factors that should be considered before taking the "drop" in "satisfaction" at face value.
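One sanity check before digging in: with sample sizes in the tens of thousands, a four-point swing is far too large to be explained by sampling noise alone, so if the drop isn't a genuine decline it has to come from bias or from a shift in who responds. Here's a rough two-proportion check; the respondent counts are real, but the 90% and 86% satisfaction rates are purely illustrative placeholders:

```python
from math import sqrt

# Respondent counts come from the surveys; the satisfaction rates are
# HYPOTHETICAL placeholders chosen only to illustrate a 4-point drop.
n1, p1 = 30272, 0.90   # Q3 2008
n2, p2 = 15682, 0.86   # Q1 2009

# Pooled two-proportion z-test: could a 4-point gap be chance alone?
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"z ~ {z:.1f}")   # comes out around 13; anything above ~2 rules out chance
```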
The first flag was what appears to be a drop in survey participation over the past few quarters. The first installment of the survey (Q3 2008) drew 30,272 respondents. Participation has since dropped by nearly 50%: 15,823 respondents in Q4 2008 and 15,682 respondents in Q1 2009.
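For reference, here is the participation math behind that "nearly 50%", using the respondent counts above:

```python
respondents = {"Q3 2008": 30272, "Q4 2008": 15823, "Q1 2009": 15682}

first = respondents["Q3 2008"]
for quarter in ("Q4 2008", "Q1 2009"):
    count = respondents[quarter]
    change = (count - first) / first * 100
    print(f"{quarter}: {count:,} respondents ({change:.1f}% vs. the first run)")
# Both later runs come in roughly 48% below the Q3 2008 installment.
```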
Here are a few possible explanations for this “apparent” drop in participation over time:
1) The novelty of the first survey encouraged more participation.
-Even though roughly 100 million people see the "What's New" page with every update, and we only show the survey to 10% of that population, there is a self-selection bias in any opt-in survey: only a small fraction of the people who see the link choose to respond (see the rough numbers right after this list).
-The further we move away from the Firefox 3 launch, the lower the enthusiasm for the product. I'm not saying that people have stopped loving Firefox 3; I'm just saying that the further we get from the launch, the less likely people are to be willing to take a survey to tell us they love the product. (Its awesomeness, in essence, has become the status quo.)
2) The first survey was released with both 3.0.2 and 3.0.3 (released a day later), which may have inadvertently increased the number of people who saw the survey link and, consequently, may have artificially doubled participation in the survey's first release.
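To put the self-selection point in perspective, here is the rough exposure math from the numbers above (100 million "What's New" page views per update, survey shown to 10% of them):

```python
whats_new_views = 100_000_000   # approximate page views per update
sampling_rate = 0.10            # share of viewers who are shown the survey link
respondents = 30272             # first installment, our largest turnout

exposed = whats_new_views * sampling_rate
response_rate = respondents / exposed * 100
print(f"~{exposed:,.0f} people saw the link; {response_rate:.2f}% responded")
# Roughly 0.3% of the people who saw the link opted in, which is a thin,
# self-selected slice of the user base rather than a representative sample.
```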
I think option #2 is really the only one that fully explains the numbers in this context. It's easy to see how (self-selecting) individuals could have taken the survey twice and inflated our initial "customer satisfaction score." And since the past two surveys have been consistent in both participation and responses, I think it may be worth throwing out the data from the first survey entirely.
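A quick way to see why the double-release theory is plausible: if the overlapping 3.0.2/3.0.3 releases roughly doubled exposure in the first run, halving that run's count should land near the later quarters, and it does:

```python
q3_2008, q4_2008, q1_2009 = 30272, 15823, 15682

# If the back-to-back releases roughly doubled Q3 participation,
# the "true" Q3 turnout would be about half the recorded count.
adjusted_q3 = q3_2008 / 2
print(f"Adjusted Q3 2008: ~{adjusted_q3:,.0f}")       # ~15,136
print(f"Q4 2008: {q4_2008:,}  Q1 2009: {q1_2009:,}")  # 15,823 and 15,682
# The adjusted figure sits within a few hundred of the later quarters,
# which is consistent with (though doesn't prove) double counting.
```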
Other concerns about the survey came from looking at the responses to certain questions.
For example: "How long have you been using Firefox?"
Another red flag was the number of people who started but did not complete the third installment of the survey. The survey released in December '08 had a total of 37 "abandoned" surveys; the most recent installment was abandoned 5,419 times. Why the huge jump in the abandonment rate? I think it comes down to two changes we made to the survey:
1) We increased the number of questions from 8 to 10.
2) We reformatted the survey so that the questions were split equally on two pages.
I think both of these changes, though well-intentioned, really took us away from the main goal of this program: to create a simple, short survey that efficiently gives us a better picture of all of our customers.
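Here is the abandonment math, assuming the respondent counts above refer to completed surveys:

```python
# Completed responses (from the participation numbers above) and
# abandoned surveys for the two most recent installments.
runs = {
    "Q4 2008 (before redesign)": {"completed": 15823, "abandoned": 37},
    "Q1 2009 (after redesign)":  {"completed": 15682, "abandoned": 5419},
}

for name, r in runs.items():
    started = r["completed"] + r["abandoned"]
    rate = r["abandoned"] / started * 100
    print(f"{name}: {rate:.1f}% of started surveys were abandoned")
# Roughly 0.2% abandonment jumps to roughly 26% after the redesign.
```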
To review, here are the takeaways:
1) The Q3 2008 results and "Satisfaction Score" are probably inflated and should be disregarded.
2) The surveys tend to be taken mostly by long-term, loyal Firefox users. We need to find a way to get responses from a more diverse cross-section of Firefox users.
3) We need to pay attention to the design of our survey to ensure we don’t scare users off with the number of questions/layout of the survey.
So, what next? I think our next step is to take a step back and try to get a better understanding of what the data we have collected over the past nine months *really* means. That also means figuring out what the metric we have been calling our "Customer Satisfaction Score" is actually measuring. Is it a customer loyalty score? Is it an enthusiasm score? At this point, I'm not really sure, mainly because I don't know the extent of the biases in our data.
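If we want to start putting numbers on that bias, one option (not something we do today, just a sketch) is post-stratification: use the answers to "How long have you been using Firefox?" to weight each tenure bucket up or down until the sample matches whatever we believe the real mix of new and long-time users to be. Every figure below is made up purely to show the mechanics:

```python
# Hypothetical survey counts by tenure bucket (satisfied, total respondents)
# and a guessed population mix; every number here is illustrative only.
survey = {
    "< 6 months":  (1200, 1500),
    "6-24 months": (4500, 5000),
    "> 24 months": (8600, 9182),
}
population_share = {"< 6 months": 0.35, "6-24 months": 0.35, "> 24 months": 0.30}

raw = sum(s for s, _ in survey.values()) / sum(n for _, n in survey.values())

# Reweight each bucket's satisfaction rate by its assumed population share.
weighted = sum(population_share[b] * (s / n) for b, (s, n) in survey.items())

print(f"raw satisfaction:        {raw:.1%}")
print(f"reweighted satisfaction: {weighted:.1%}")
# If newer users really are underrepresented and less enthusiastic, the
# reweighted score will come in below the raw one.
```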
It would be great to hear your thoughts and suggestions: what can we do differently? How do we get a better sample of all types of Firefox users? How can we try to minimize bias?