Developing a comprehensive feedback strategy

When I joined Citizens Advice, I was presented with the challenge of building, developing and refining their feedback strategy.

To understand the situation, I looked at what data we were already gathering and analysing. There was a variety of surveys dotted around the website collecting varied data sets. Some of the results were being analysed, but most sat gathering dust; we simply didn’t have the resource to follow up on all the responses. At the bottom of each web page was a star rating asking the user to rate how helpful the advice had been, from 1 to 5 stars. There were also a couple of internal email addresses where staff and volunteers could report content or website issues to the team. So to summarise, we had the following:

  1. Surveys
  2. Star ratings
  3. Email addresses for feedback

Once I understood what was currently in place, I wanted to know what the team wanted from their feedback system, so I set them a tweet challenge. A tweet challenge is a great way to get really focussed, succinct ideas from your team. Here’s what I asked them:

I’d be massively grateful if you could indulge me in a tweet challenge! In 140 characters, please tell me what you want from the new Citizens Advice feedback system.

We got a fantastic response. Here’s an example of just a few of the tweets we got:

I want to know what proportion of feedback is positive and what is negative. We currently don’t differentiate.

Would Net Promoter Score be a more helpful measurement for Citizens Advice?

The star ratings are ineffective, but we need something like that to get a quick indication of page performance. We need a simple indicator.

I want a system that processes, monitors and auto-responds to feedback with minimal involvement from us, while also alerting us to the most important things.

I would like an automatic red flag for pages that get poor/high volume of feedback. Then we’d be able to address them immediately.

This activity was followed by a workshop to discuss and expand on all the tweets I had collected. Our ideas grew into large, complex and exciting solutions, but we quickly realised that time and resource constraints meant we would need to strip back to the fundamentals of our feedback strategy. So we asked ourselves a question that got two very simple and unanimous answers:

Why do we need feedback?

  1. To know if we are helping people
  2. To identify areas of improvement

Immediately we had identified that we wanted a quality measurement (to identify areas of improvement) and a satisfaction measurement (to know if we are helping people). This was simple, but very real progress. We next agreed upon an objective that would guide us through the project ahead:

Let’s develop a simple feedback strategy to understand whether we are providing helpful advice and to identify areas in need of iteration and improvement.

We then wrote down where we wanted to be in a year’s time:

  • We’ll know how satisfied clients and advisors are with our online advice
  • We’ll have a better understanding of the quality of our advice
  • We’ll automatically flag potential issues
  • We’ll have a feedback dashboard to visualise this data
  • We’ll analyse trends in feedback
  • We’ll have a better way to close the feedback loop

The first thing we did was replace the star rating. It was ineffective because a lot of the feedback was 3 stars, which didn’t really tell us very much – it was neither positive nor negative. We replaced it with a simple yes/no question: “Did this advice help?” This gives us our user satisfaction measurement. If a user tells us that the advice did not help, then we ask an additional question: “Why wasn’t this advice helpful?” We made it a closed-ended question by allowing the user to select one of four reasons why the advice wasn’t helpful, but also allowed for an open-ended answer by offering a free text box too. It looks like this:

Did this advice help?
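
Since the follow-up only appears after a “No”, each response has a simple shape. Here’s a minimal sketch of how one response could be modelled – the field names and example values are mine, and the post doesn’t list the four closed-ended reasons, so treat those as placeholders:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AdviceFeedback:
    """One response to the 'Did this advice help?' question on a page."""
    page_url: str                  # page the feedback was left on
    helped: bool                   # the simple yes/no satisfaction measurement
    reason: Optional[str] = None   # one of the four closed-ended reasons (only asked when helped is False)
    comment: Optional[str] = None  # optional free-text explanation


def validate(feedback: AdviceFeedback) -> None:
    """The follow-up answers only make sense when the advice did not help."""
    if feedback.helped and (feedback.reason or feedback.comment):
        raise ValueError("Follow-up question is only shown when the answer is No")


# Hypothetical example response - the URL and reason label are illustrative only.
validate(AdviceFeedback(
    page_url="/benefits/universal-credit",
    helped=False,
    reason="hard to understand",
    comment="I couldn't find the form I needed",
))
```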

We then tidied up our survey so that we only use one survey across the whole site. It’s now much easier to get a big-picture view of the site because all the survey results are in one place. We use the survey to get our quality measurement by asking the user what sort of feedback they would like to give. The options are “criticism”, “suggestions for improvement”, or “errors, mistakes or broken links”. We monitor the last option to measure the quality of our advice. (We are considering adding “praise” as an option, but it is much more likely for a user to leave feedback when they are feeling disgruntled.)
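
To give a feel for the quality measurement, here’s a rough sketch of how the share of error reports could be calculated from the survey’s first question. It assumes a simple export of (month, feedback type) pairs, and the data below is made up:

```python
from collections import Counter

ERROR_OPTION = "errors, mistakes or broken links"

# Hypothetical survey export: (month, feedback type) pairs.
responses = [
    ("2018-01", "criticism"),
    ("2018-01", ERROR_OPTION),
    ("2018-01", "suggestions for improvement"),
    ("2018-02", ERROR_OPTION),
    ("2018-02", "criticism"),
]


def error_report_rate(responses, month):
    """Share of a month's survey responses that report errors, mistakes or broken links."""
    in_month = [kind for m, kind in responses if m == month]
    if not in_month:
        return 0.0
    return Counter(in_month)[ERROR_OPTION] / len(in_month)


for month in ("2018-01", "2018-02"):
    print(month, f"{error_report_rate(responses, month):.0%}")
```

Whether you track the raw count of error reports or their share of all responses is a design choice; the point is to end up with one number per month that should trend downwards.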

The survey used to have multiple free text fields that were difficult to analyse and in which users were repeating the same information multiple times. We now have just one free text field which makes it much easier to analyse.

Survey Monkey screengrab
The first question on the survey allows us to measure the quality of our content

These small changes made a big impact on the feedback we were gathering. However, it was still a time consuming task for user researchers and content designers to sift through all the feedback. Even though we had reduced the three free text fields on the survey to just one, it was still an overwhelming amount of information to extract meaningful feedback from.

We needed to surface the feedback in a more meaningful way, one that suited how the team worked. So, using Google Data Studio, our data scientist pulled all the insights into one place. And here’s how it looks:

Screenshot of satisfaction measurement
This page of our feedback explorer shows the satisfaction measurement – the percentage of Yes/No answers to the question at the bottom of each page “Did this advice help?”. We can choose to look at the percentage for the whole site, site sections, or drill down to individual pages to see how they perform in relation to the rest of the site.
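
As an illustration of what sits behind that view, here’s a minimal sketch of the aggregation, assuming the responses arrive as (page URL, yes/no) pairs. The URLs and figures are invented, and the real dashboard is built in Google Data Studio rather than Python:

```python
from collections import defaultdict

# Hypothetical "Did this advice help?" responses: (page URL, helped) pairs.
responses = [
    ("/benefits/universal-credit", True),
    ("/benefits/universal-credit", False),
    ("/debt/bailiffs", True),
    ("/debt/bailiffs", True),
]


def satisfaction_by_page(responses):
    """Percentage of Yes answers per page - the satisfaction measurement."""
    yes = defaultdict(int)
    total = defaultdict(int)
    for page, helped in responses:
        total[page] += 1
        yes[page] += helped
    return {page: yes[page] / total[page] for page in total}


def satisfaction_by_section(responses):
    """The same measurement rolled up to site sections (the first path segment)."""
    rolled_up = [(page.strip("/").split("/")[0], helped) for page, helped in responses]
    return satisfaction_by_page(rolled_up)


print(satisfaction_by_page(responses))     # per-page drill-down
print(satisfaction_by_section(responses))  # site-section view
```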

 

Screenshot of the satisfaction measurement breakdown
This page of our feedback explorer shows the satisfaction measurement broken down. If a user says that the advice was not helpful, then we ask them why. Again, we can do this for the whole site, site sections or individual pages.

 

Screenshot of the feedback explorer
This page of our feedback explorer shows the quality measurement that we get from our survey data. We monitor how many reports of errors, mistakes or broken links we get, and try to reduce that number month on month. We measure the sentiment of the user’s message using text analysis.
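
The post doesn’t say which text-analysis tool scores the sentiment, so purely as an illustration, here’s how a free-text message could be labelled with NLTK’s off-the-shelf VADER analyser; the ±0.05 thresholds are VADER’s conventional defaults, not something specific to Citizens Advice:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# The VADER lexicon needs downloading once.
nltk.download("vader_lexicon", quiet=True)

analyser = SentimentIntensityAnalyzer()


def sentiment_label(message: str) -> str:
    """Label a free-text survey message as positive, negative or neutral."""
    compound = analyser.polarity_scores(message)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"


print(sentiment_label("This page was really clear and helped me sort out my claim"))
print(sentiment_label("The link to the form is broken and the advice is out of date"))
```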

Finally, we had the email feedback to deal with. Staff and volunteers could email a group account, which was difficult to manage and keep on top of. We decided that the best approach was to pull these emails into a ticketing system. With direction from our Service Desk team, we implemented FreshService ticketing to manage all feedback. This is worth a blog post in its own right, so I’ll save the details for another day!

So where are we now based on where we wanted to be a year ago?

  • We now know how satisfied clients and advisors are with our online advice
  • We have a better understanding of the quality of our advice
  • We use Google Data Studio to visualise this data
  • We can now close the feedback loop with FreshService ticketing functionality
  • We don’t yet automatically flag potential issues
  • We don’t yet analyse trends in feedback

Our feedback strategy is a work in progress that we strive to continually improve.