Why are you here?
Quick poll: How many websites do you visit and do absolutely nothing on?
Time’s up. Not many?
Websites are there for users to do something with them. Put simply, users come to your site with a motive: to get stuff done.
Yet I’ve seen many eye tracking reports from several outfits that appear to have a test plan based on some variation of “take a look at this screen”. That’s like being asked to go to some site you’ve never heard of and just sit there. When was the last time you did that? Sure, it might validate some visual priorities or a superficial reaction to the brand in the creative, but it gives little insight into the bigger picture: whether users understand your site and brand, why they’re there, and their ability to get things done. That’s where such studies miss the real value of eye tracking – understanding the user in the context of achieving the user’s task.
Let’s take a look at this very clever video below to illustrate how much a task can influence a result:
Strong safety messages aside, you won’t need to conduct an eye tracking study to predict that the results will be very different. Take the two instructions away and then ask the user: “take a look at this video”, and you’ll likely get a different conclusion again.
When planning eye tracking studies, one of the very first questions I ask the client is: “What do we want users to do?”. Working in the marketing industry, their initial answer will invariably be “get our customers to buy stuff”. It’s not much of a start, but at least they know where I’m going. From there we can drill down into the types of goals and objectives we would like users to achieve, which form the basis of our test cases, then measure against what users are actually doing. Ultimately we want to ask: As big, bright and flashy as the “Buy now” button is on your home page, are people buying in to your site?
“So what?”
Time and time again, I see eye tracking outfits generate heatmaps with absolutely no context or task. Okay, clients may look at the pretty heatmap that their whizz-bang bit of technology generated. Fine. That’s useful for some quick attention overviews. But after a little while of staring, they’ll eventually say “So what?”, followed inevitably by “And how is this making me money?”
This is what gives eye tracking a bad name. Clients pay top dollar and put design on hold for some pretty heatmap pictures…but so what? Quite a few clients I’ve spoken to who have seen or heard of eye tracking say “pretty pictures but so what?”. These are the same people who have seen reports in the past that are based purely on attention. That’s where words like “expensive” and “superficial” start to be associated with eye tracking studies.
More than meets the eye
When done properly, empirical eye tracking allows you to establish not only what grabs people’s attention, but why. This gives us more insight into what drives our users to do things, and from there we can more precisely work out the best way to guide them to complete their task: cut out the bits users ignore, pull advertising that doesn’t work and objectively get the most bang for the marketing buck. And that’s where the value is.
Banner blindness
Since Jakob Nielsen published his quintessential article on banner blindness, several reports have sprung up to counter the initial research. These “banner blindness denialists” have conducted their own eye tracking research (usually sponsored by display ad companies) to show how much attention users give to (typically their own) display ads and banners.
Their findings are usually in the form of “95% of users fixate on the banner within 10 seconds of the page loading”. But the question is: What were users trying to do?
Let’s compare the two eye tracking results from a quick study below:
This highlights the dangers of eye tracking (and the bad rap it’s getting). Context, methodology and tasks (or the lack of them) can be used for both good and evil. We must ensure that eye tracking provides insight into the user in the context of helping them achieve their goals.
Rise of the machines
We’re starting to see automated eye tracking simulators or “attention simulators” spring up, such as 3M VAS, Feng-GUI or Attention Wizard. Although it’s still early days for these companies and there are some fundamental questions about their accuracy, their algorithms will no doubt improve over time. Admittedly, they’ll probably end up as pretty accurate attention simulators.
But the one thing they won’t be able to do is understand a person’s motive – specifically, attention in the context of a given task. That’s where a real eye tracking study will have the edge in terms of insight and, ultimately, ROI. Unless they’ve got some complex AI algorithm to deal with things like “interest” or “emotion”, I’m not putting my Tobii unit on eBay just yet.
Do they “get it”?
At the end of the day, we are all motivated to “do” something on a site. The secret sauce of valuable user studies, to quote a usability sage, is to find out whether users “get it”. In other words:
- Visual attention in the context of what they are trying to achieve
- Visual cognition and mental models (i.e. familiarity)
- What gets filtered into working memory (i.e. learning how it works)
- How visual elements get prioritised and how they react to key actions
- Their ability to complete a task