Case study | Improving search

Thanks to https://www.makeuseof.com/tag/search-engine-using-today/ for the image

Citizens Advice aims to provide the advice people need for the problems they face and to improve the policies and practices that affect people’s lives. They provide free, independent, confidential and impartial advice to everyone on their rights and responsibilities.

The problem: “Search is broken”

Advisers have been reporting problems with search. They used to be able to find what they were looking for, but it is becoming increasingly difficult to surface the information they need.  

Advisers need to find specific, detailed information at a moment’s notice to give the right advice to a client. Giving correct advice could mean that people don’t lose their benefits or their job, or aren’t declared bankrupt.

 

Adviser search vs public search

Citizens Advice has 2 core user groups: the public and advisers. They use search in very different ways:

Advisers need to find information quickly to help their clients. They predominantly rely on search to recover information that they know exists, whereas the public use search to discover information about an issue affecting them. The public predominantly reach our advice pages via Google search, and thus rely less on internal search.

  • 8% of public sessions use internal site search, while 48% of adviser sessions do.
  • 25% of the public refine their search, compared with 45% of advisers.

Refinement was clearly an issue for advisers. The high number of refined searches suggested that advisers could not find what they needed on their first attempt and had to perform multiple searches to get there.
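For illustration, here’s a minimal sketch of how figures like these can be derived from session-level search logs. The records and field names below are made up purely to show the calculation; they are not our actual analytics schema.

```python
from collections import defaultdict

# Hypothetical session records: the user group and the searches made in each session.
sessions = [
    {"group": "adviser", "searches": ["baliff", "bailiff powers of entry"]},
    {"group": "adviser", "searches": ["pip appeal"]},
    {"group": "public", "searches": []},
    {"group": "public", "searches": ["redundancy pay"]},
]

stats = defaultdict(lambda: {"sessions": 0, "searched": 0, "refined": 0})
for session in sessions:
    group = stats[session["group"]]
    group["sessions"] += 1
    if session["searches"]:
        group["searched"] += 1
    if len(session["searches"]) > 1:
        # A second search in the same session counts as a refinement (secondary search).
        group["refined"] += 1

for name, group in stats.items():
    print(
        name,
        f"search usage: {group['searched'] / group['sessions']:.0%}",
        f"refinement rate: {group['refined'] / max(group['searched'], 1):.0%}",
    )
```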

Setting an objective

We quickly agreed that our goal should be to reduce search refinement. This would mean that advisers were finding the right content faster, saving time and allowing them to focus on the client. Saving time in advice sessions could even enable advisers to see more clients as a result. As a team, we defined our objective:

“To provide quick, clear, relevant, device agnostic search results”

Our core KPI was to reduce the number of secondary searches by advisers by 30% in 6 months. It was a bit of a ‘finger in the air’ number, but I convinced the team that having an unambiguous target was important to drive us forward. We would know in 6 months whether the number had been wildly inaccurate, and could then set a more accurate goal for the following 6 months.

We had a clear focus, and this would guide how we prioritised the workload throughout the project.

Gathering ideas

I organised a workshop for the team. Armed with the web stats we had already gathered, some anecdotal information from advisers, and our combined industry knowledge, we created a list of hypotheses about what we could do to improve search. We all had some fairly clear ideas, many of which overlapped, confirming our suspicions that they were solid, educated suggestions. From these we prioritised the following:

  1. Spellcheck: replace ‘did you mean’ with ‘showing results for’  
  2. Improve the design
  3. Improve the metadata
  4. Give better guidance on the ‘no results’ page

We built a search dashboard to track our KPI and some other core health metrics. We didn’t want to reduce secondary searches only to find out that we had negatively impacted our bounce rate. We added a survey to the bottom of the search results page where advisers could leave us more detailed feedback. This, we hoped, would add more depth and detail to the anecdotal information we already had.

Spellcheck: replace ‘did you mean’ with ‘showing results for’

A lot of secondary searches were performed due to spelling mistakes. If a user typed ‘Baliff’ we would show a page with the message “Did you mean Bailiff?”. The user would then have to click the link to ‘Bailiff’ to reach their search results. The new journey takes the user straight to the results page for ‘Bailiff’ and offers the message “Showing results for Bailiff. Search instead for Baliff.” It’s a mental model that users of Google Search are already familiar with.
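A minimal sketch of the new journey, assuming the search backend exposes a spell-correction function; the function names and copy here are illustrative, not our actual implementation:

```python
def run_search(query, search, correct_spelling):
    """Search using the auto-corrected query, but let the user opt back in to the original."""
    corrected = correct_spelling(query)  # e.g. "Baliff" -> "Bailiff"
    if corrected and corrected.lower() != query.lower():
        results = search(corrected)
        # The 'search instead for' link simply re-runs the search with the raw query.
        notice = f"Showing results for {corrected}. Search instead for {query}."
        return results, notice
    return search(query), None
```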

Improve the design

The search results page used an out-of-date stylesheet that was not in keeping with the rest of the website. It also had a block of text before the results that unnecessarily explained what was in them. The results were split into 3 tabbed sections, and our stats told us that only 2% of users were clicking on these. Social sharing buttons were available on the results page, and these too were rarely used. We decided that a simpler approach would make the results page much easier to read and scan. It also wasn’t visually clear whether a result was an index page or an advice page, so we wanted to distinguish these with a sitelinks design pattern.

Screenshots of the search results page redesign (desktop)

 

Improve the metadata

A lot of the metadata had been poorly written and was keyword-stuffed. We didn’t have the resource to manually rewrite all the metadata on the site, so we selected our top 100 adviser search terms and rewrote the metadata for those pages. In doing so, we created new guidelines and training on how to write good metadata going forward. We bandied around the idea of automatically removing metadata from all pages over a certain age, but decided it would be too large a change when we were seeking to improve rankings incrementally. A big change would be jarring for advisers who had become accustomed to expecting certain results from certain search terms.

The content designers were encouraged to think of the meta description as a normal line of content, designed to tell the public and advisers what the page is about. They should use the same language as the rest of the content and show users that the page will answer their need. They shouldn’t try to list everything the page covers – just explain the main topic in a way that doesn’t repeat the title.

 

Give better guidance on the ‘no results’ page

Our no results page was unhelpful and busy:

Screenshot of the old ‘no results’ page

Screenshot of the redesigned ‘no results’ page

The new design clearly explains that there are no results, and offers a better onward journey.

So, what happened?

Our KPI was to reduce the number of secondary searches by advisers by 30% in 6 months; we reduced them by approximately 8%. We were relatively happy with the result.

However, we continued to receive complaints about poor search results.

The eye-opening moment came when we undertook further observational research. We discovered that secondary searches were not necessarily a sign that advisers couldn’t find what they were looking for; rather, they were a way of navigating to a section of the site before searching in more detail. It was a bit of a blow to learn that hitting our KPI was not indicative of success.

What did we learn?

Did we agree on our KPI too early? Perhaps.

Should we have done more upfront research? Definitely.

The quantitative stats were telling us that secondary searches were an issue. We reached our own conclusion about why. More research would have revealed that the issue was with the information architecture, not the search results. We should have investigated the anecdotal information from advisers in more detail.

We undertook some really great work, and have definitely made the experience of using search better. However, the problem remains that the results are simply not good enough.

Cue phase 2

So, what’s on our radar for phase 2 of the search improvements project? Setting a new KPI for one! We’ll also be looking more closely at the search engine itself. Here are a few of the things we are considering:

  1. Edit the weightings of our content (see the sketch after this list)
  2. Add filtering functionality
  3. Improve our synonyms dictionary
  4. Investigate our usage of stop words
  5. Offer an alternate way of exploring the advice
  6. Display related topics and related results
  7. Offer search as you type functionality
  8. Auto remove metadata from pages over a certain age
  9. New search engine
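To give a flavour of items 1, 3 and 4, here is a rough, hypothetical sketch of the kind of tuning we have in mind, written as plain Python structures purely for illustration; we haven’t committed to a particular search engine or syntax yet.

```python
# Hypothetical examples of the tuning ideas above -- the real configuration
# will depend on whichever search engine we end up using.

# 1. Weight some fields more heavily than the body text when ranking results.
field_weights = {"title": 3.0, "meta_description": 2.0, "body": 1.0}

# 3. Map adviser shorthand and common misspellings onto the terms used in our content.
synonyms = {
    "esa": ["employment and support allowance"],
    "pip": ["personal independence payment"],
    "baliff": ["bailiff"],
}

# 4. Candidate stop words to strip from queries before matching.
stop_words = ["the", "a", "an", "of", "for", "and"]
```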

Watch this space for an update!


Developing a comprehensive feedback strategy

When I joined Citizens Advice, I was presented with the challenge of building, developing and refining their feedback strategy.

To understand the situation I looked at what data we were already gathering and analysing. There was a variety of surveys dotted around the website collecting varied data sets. Some of the results were being analysed, but most were sat gathering dust. We simply didn’t have the resource to follow up on all the responses. At the bottom of each web page was a star rating asking the user to rate how helpful the advice had been, from 1 to 5 stars. There were a couple of internal email addresses where staff and volunteers could report various content or website issues to the team. So to summarise, we had the following:

  1. Surveys
  2. Star ratings
  3. Email addresses for feedback

Having established what was currently in place, I wanted to understand what the team wanted from their feedback system, so I set them a tweet challenge. A tweet challenge is a great way to get really focussed, succinct ideas from your team. Here’s what I asked them:

“I’d be massively grateful if you could indulge me in a tweet challenge! In 140 characters, please tell me what you want from the new Citizens Advice feedback system.”

We got a fantastic response. Here are just a few of the tweets we received:

I want to know what proportion of feedback is positive and what is negative. We currently don’t differentiate.

 

Would Net Promoter Score be a more helpful measurement for Citizens Advice?

 

The star ratings are ineffective, but we need something like that to get a quick indication of page performance. We need a simple indicator.

 

I want a system that processes, monitors and auto responds to feedback with minimal involvement from us, while also alerting us to the most important things.

 

I would like an automatic red flag for pages that get poor feedback or a high volume of feedback. Then we’d be able to address them immediately.

This activity was followed by a workshop to discuss and expand on all the tweets I had collected. Our ideas grew into large, complex and exciting solutions. We quickly realised that time and resource constraints meant we would need to strip back to the fundamentals of the feedback strategy we required. So we asked ourselves a question that got two very simple and unanimous answers:

Why do we need feedback?

  1. To know if we are helping people
  2. To identify areas of improvement

Immediately we had identified that we wanted a quality measurement (to identify areas of improvement) and a satisfaction measurement (to know if we are helping people). This was simple, but very real progress. We next agreed upon an objective that would guide us through the project ahead:

Let’s develop a simple feedback strategy to understand whether we are providing helpful advice and identify areas in need of iteration and improvement

We wrote down where we wanted to be in a year’s time:

  • We’ll know how satisfied clients and advisers are with our online advice
  • We’ll have a better understanding of the quality of our advice
  • We’ll automatically flag potential issues
  • We’ll have a feedback dashboard to visualise this data
  • We’ll analyse trends in feedback
  • We’ll have a better way to close the feedback loop

The first thing we did was replace the star rating. It was ineffective because a lot of the feedback was 3 stars, which didn’t really tell us very much – it was neither positive nor negative. We replaced it with a simple yes/no question: “Did this advice help?” This gives us our user satisfaction measurement. If a user tells us that the advice did not help, then we ask an additional question: “Why wasn’t this advice helpful?” We made it a closed-ended question by allowing the user to select one of 4 reasons why the advice wasn’t helpful, but also allowed for an open-ended answer by offering a free text box too. It looks like this:

Did this advice help?
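Here’s a minimal sketch of the question flow and the data we capture from it. The reasons listed are invented for illustration; the live component offers its own four options plus the free text box.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative reasons only -- the live component offers its own four options.
REASONS = [
    "It wasn't relevant to my situation",
    "It was too complicated",
    "It didn't have enough detail",
    "I'm not sure what to do next",
]

@dataclass
class AdviceFeedback:
    page_url: str
    helped: bool                   # answer to "Did this advice help?"
    reason: Optional[str] = None   # one of REASONS, only asked when helped is False
    comment: Optional[str] = None  # optional free text, only asked when helped is False

def record_answer(page_url: str, helped: bool,
                  reason: Optional[str] = None,
                  comment: Optional[str] = None) -> AdviceFeedback:
    """Capture a single answer; satisfied users skip the follow-up question."""
    if helped:
        return AdviceFeedback(page_url, True)
    return AdviceFeedback(page_url, False, reason, comment)
```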

We then tidied up our survey so that we only use one survey across the whole site. It’s now much easier to get a big-picture view of the site because all the survey results are in one place. We use the survey to get our quality measurement by asking the user what sort of feedback they would like to give. The options are “criticism”, “suggestions for improvement”, or “errors, mistakes or broken links”. We monitor the last option to measure the quality of our advice. (We are considering adding “praise” as an option, though users are much more likely to leave feedback when they are feeling disgruntled.)

The survey used to have multiple free text fields that were difficult to analyse and in which users were repeating the same information multiple times. We now have just one free text field which makes it much easier to analyse.

Survey Monkey screengrab
The first question on the survey allows us to measure the quality of our content

These small changes made a big impact on the feedback we were gathering. However, it was still a time-consuming task for user researchers and content designers to sift through all the feedback. Even though we had reduced the three free text fields on the survey to just one, it was still an overwhelming amount of information to extract meaningful feedback from.

We needed to expose the feedback in a more meaningful way, one that suited how the team worked. So, using Google Data Studio, our data scientist pulled all the insights into one place. Here’s how it looks:

Screenshot of satisfaction measurement
This page of our feedback explorer shows the satisfaction measurement – the percentage of Yes/No answers to the question at the bottom of each page “Did this advice help?”. We can choose to look at the percentage for the whole site, site sections, or drill down to individual pages to see how they perform in relation to the rest of the site.
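Behind that view is a simple calculation: the percentage of ‘yes’ answers among all answers to the question, at whatever level we’re looking at. A rough sketch with made-up records (the sections and page URLs below are invented for the example):

```python
# Made-up answers to "Did this advice help?" -- (site section, page, helped?)
answers = [
    ("benefits", "/benefits/pip", True),
    ("benefits", "/benefits/pip", False),
    ("debt", "/debt/bailiffs", True),
    ("debt", "/debt/bailiffs", True),
]

def satisfaction(records):
    """Percentage of 'yes' answers in a list of (section, page, helped) records."""
    return 100 * sum(1 for r in records if r[2]) / len(records)

print(f"whole site: {satisfaction(answers):.0f}%")
for section in sorted({r[0] for r in answers}):
    section_answers = [r for r in answers if r[0] == section]
    print(f"{section}: {satisfaction(section_answers):.0f}%")
```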

 

Screenshot of the satisfaction measurement breakdown
This page of our feedback explorer shows the satisfaction measurement broken down. If a user says that the advice was not helpful, then we ask them why. Again, we can do this for the whole site, a site section or individual pages.

 

Screenshot of the feedback explorer
This page of our feedback explorer shows the quality measurement that we get from our survey data. We monitor how many reports of errors, mistakes or broken links we get, and try to reduce that number month on month. We measure the sentiment of the user’s message using text analysis.
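The text analysis itself can be as simple as an off-the-shelf sentiment scorer. As an illustration only (NLTK’s VADER here is a stand-in rather than a record of our actual tooling, and the responses are made up), this is roughly the shape of the quality measurement: error reports counted per month, plus a sentiment score for each free-text comment.

```python
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Made-up survey responses: (month, feedback type, free-text comment)
responses = [
    ("2018-01", "errors, mistakes or broken links", "The benefits calculator link is broken."),
    ("2018-01", "suggestions for improvement", "Really clear page, just needs an example letter."),
    ("2018-02", "criticism", "I couldn't find anything about council tax arrears."),
]

# Quality measurement: error reports per month, which we try to reduce month on month.
error_reports = Counter(month for month, kind, _ in responses
                        if kind == "errors, mistakes or broken links")
print(error_reports)

# Sentiment of each free-text comment (the compound score runs from -1 to +1).
for _, _, comment in responses:
    print(sia.polarity_scores(comment)["compound"], comment)
```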

Finally, we had the email feedback to deal with. Staff and volunteers could email a group account, which was difficult to manage and keep on top of. We decided that the best approach was to pull these emails into a ticketing system. With direction from our Service Desk team, we implemented FreshService ticketing to manage all feedback. This is worth a blog post in its own right, so I’ll save the details for another day!

So where are we now, compared with where we wanted to be a year ago?

  • We now know how satisfied clients and advisers are with our online advice
  • We have a better understanding of the quality of our advice
  • We use Google Data Studio to visualise this data
  • We can now close the feedback loop with FreshService ticketing functionality
  • We don’t yet automatically flag potential issues
  • We don’t yet analyse trends in feedback

Our feedback strategy is a work in progress that we strive to continually improve.

 
