Replicate Google’s Panda Questionnaire: Processing

As we all know, Google’s Panda update aimed to improve the quality of results returned in search. As Will recently explained on SEOmoz’s Whiteboard Friday, questionnaires collecting users’ opinions of a page or a whole site can help establish an outside view of page (or site) quality. These survey results can be a useful tool for persuading clients or site owners that changes need to be made, based on a number of quality factors.

Running the Survey:

The questions were chosen to glean a measure of how users felt about the quality of a page. We collected the data using Smartsheet and used Mechanical Turk to recruit web users to answer the questions. For this survey, respondents didn’t need any particular demographic characteristics beyond being familiar with looking at websites. Administered this way, the questionnaire can be made available to any number of respondents, depending on time and budget. Respondents were asked to answer ‘yes’, ‘no’ or ‘don’t know’ to the following:

  • Would you trust information from this website?
  • Is this website written by experts?
  • Would you give this site your credit card details?
  • Do the pages on this site have obvious errors?
  • Does the website provide original content or info?
  • Would you recognise this site as an authority?
  • Does this website contain insightful analysis?
  • Would you consider bookmarking pages on this site?
  • Are there excessive adverts on this website?
  • Could pages from this site appear in print?

Getting Answers:

The responses were downloaded as a CSV file and processed in Excel. As these were fixed-response questions, we did a frequency count of each question’s responses. The quickest way to do this in Excel is to build a pivot table (Insert > Pivot Table) for each question: select the data, then drag the question field into both the Row Labels and Values areas of the Pivot Table Field List.



This summarised data can then be copied out and used to calculate percentages for each question.
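If you’d rather skip the pivot tables, the same frequency counts and percentages can be produced directly from the downloaded CSV. Here’s a minimal sketch in Python; the column headings (`"Trust"`, `"Experts"`, etc.) are hypothetical and would need to match however your questions are labelled in the export:

```python
import csv
from collections import Counter

def response_percentages(csv_path, question_columns):
    """Tally 'yes'/'no'/'don't know' answers per question and
    convert each tally to a percentage of that question's responses."""
    counts = {q: Counter() for q in question_columns}
    totals = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for q in question_columns:
                # Normalise case/whitespace so 'Yes' and 'yes' count together
                answer = row[q].strip().lower()
                counts[q][answer] += 1
                totals[q] += 1
    return {
        q: {answer: 100.0 * n / totals[q] for answer, n in counts[q].items()}
        for q in question_columns
    }
```

This replicates the pivot-table step in one pass over the file, giving you a percentage breakdown per question ready for the colour-coded table below.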

Presenting the Results to Encourage Action:

The processed results should be presented in a way that clearly identifies where there are problems. The best way to do this is a table with some formatting. The coloured cell within each question shows how the majority of respondents answered and whether that is a good or a bad sign for quality. Check through the responses and consider what they mean as measures of quality: is a majority response of ‘yes’ a good thing, given the question? Green shows where the majority of responses is positive for quality, yellow cells show where there is little or no difference between responses, and red shows where the majority of responses indicates an area of concern. For example:



Colour-coding responses is an easy way to see at a glance where there are problems. In the example above, 70% of respondents answered ‘No’ to ‘Would you give this site your credit card details?’, which is potentially a big problem for ecommerce sites. A stacked bar or bar chart would be inappropriate for these results, because a particular answer (‘yes’ or ‘no’) doesn’t consistently indicate an area performing well or an area of concern. The table can be accompanied by notes explaining what the responses mean and suggesting follow-up actions.
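The colour-coding rule described above can be sketched in code. This assumes you record, per question, whether a ‘yes’ majority reflects well on quality (true for the trust question, false for the excessive-adverts question); the `margin` threshold for calling a result mixed is an assumption, not something prescribed by the survey:

```python
def traffic_light(pct_yes, pct_no, yes_is_positive, margin=5.0):
    """Classify a question's result as 'green', 'yellow' or 'red'.

    pct_yes / pct_no  -- percentage of respondents answering yes / no
    yes_is_positive   -- True if a 'yes' majority is good for quality
                         (e.g. 'Would you trust this website?'); False
                         if it flags a problem (e.g. 'Are there
                         excessive adverts on this website?')
    margin            -- percentage-point gap below which the responses
                         count as mixed (yellow)
    """
    if abs(pct_yes - pct_no) < margin:
        return "yellow"  # little or no difference between responses
    majority_is_yes = pct_yes > pct_no
    good = majority_is_yes == yes_is_positive
    return "green" if good else "red"
```

For the credit-card example above, `traffic_light(20, 70, yes_is_positive=True)` returns `"red"`: the majority said ‘no’ to a question where ‘yes’ would have been the quality-positive answer.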

Collecting users’ opinions is a fast, easy, and inexpensive means of getting some authentic feedback from outside your site. This is a potentially powerful tool when trying to bring about change to pages which may be problematic and may help to improve quality overall.

  1. Robert

    Loved the article, some great insights on how to ensure that any website is perceived as a quality resource.

Do you know of any similar services to Mechanical Turk? The service currently only accepts US submissions.

    reply >
  2. Will Critchlow

    @Robert I use for a UK-friendly interface to mechanical turk.

    reply >
  3. Donald

Very important to mention: state in your Smartsheet instructions that Mechanical Turk workers are only allowed to respond once. Otherwise you may get 20 responses from the same person. See below.

    How do I know that tasks submitted to Amazon Mechanical Turk are done by 20 different individuals?
    I submitted a Website review where each row was a different job and I wanted 20 different reviews by 20 different people.

    Hi Donald,

A tip for monitoring which workers are answering your HITs is to add a column to your sheet titled "Worker ID".
*Don't include this column in your instructions/answer picker in the HIT form; it will auto-populate with each worker's unique ID.

You can include in your instructions that a worker may only answer once and that you will reject all work from anyone who answers more than once.
You can then sort on the Worker ID column to quickly spot any worker who has answered more than one HIT.

    Hope this helps!

    reply >
  4. I don't understand why the response "no" for advertising is in red.

The credit card question isn't useful for some kinds of site, I think.

    reply >
Thank you for this analysis. I have a website whose traffic dropped after Panda; I'll run this analysis and share the results here ;)

    reply >
  6. Fantastic idea.

Have you thought about asking Distilled readers who implement this to share their results? Someone could then aggregate those results into a reference that helps others judge whether a website is likely to be seen as low quality by its visitors.

    Although the technology behind Panda is intriguing, I have this sinking feeling that sites trying to rank highly in the SERPs may end up implementing banal and unoriginal site designs so as not to incur the wrath of the Panda.

    reply >
