Fundamental Attribution Error
We judge others on their personality, but we judge ourselves on the situation.
In website testing, this can come back to haunt us because we ignore the feedback of other team members based on our view of them. “Of course they found the website difficult to use, they’re such a negative person,” or “Little wonder they feel the site needs more information, they always ask far too many questions”.
But your own performance, whether positive or negative, will be based on the situation – so you’ll place more weight on your own findings because in your mind they are based on nothing more than the task at hand.
In-Group Bias
We prefer people in our in-group.
While website testing needs to be done across a range of departments and ability levels, it should be overseen by a neutral party to ensure equal weighting is given to feedback from all departments. Otherwise, comments and concerns will be ignored or overstated based on whether the person making them was in the same department as the tester or was part of the out-group.
“Of course they flagged up that the information was thin – they’re in finance, they don’t understand that a website is all about visual elements.”
Bandwagon Effect
Ideas, fads and beliefs grow as more people adopt them.
Strangely, given the Bandwagon Effect, best practice when it comes to UX has remained remarkably steady over the years.
Repeated user testing performed by the Nielsen Norman Group shows that most UX guidelines from the early 90s still hold true today – while fads have come and gone, they do seem to fade away if they don’t present an advantage over the status quo.
However, where the bandwagon effect can come into play is with internal fads. It only takes one or two people within a group to develop an idea for it to take hold, and if that happens within a design agency – for example – it can lead to numerous websites being produced that all feature the same poorly thought-out fad based on nothing more than a cool idea.
Groupthink
Due to a desire for conformity and harmony in groups, we make irrational decisions, often to minimise conflict.
This ties in to the Bandwagon Effect – if a few people within your team have decided that your new website should have a fancy but unusable feature, it won’t be long before the whole team agrees just to maintain the peace.
Halo Effect
If you see a person as having one notable trait, that impression will spill over into how you judge their other traits – for better or worse.
This extends to brands and, in turn, their websites. Testing has shown that users who hit a slow website from a big brand use the slowness to reinforce their existing views – “wow, this site must be really popular today because it’s quite slow”. In the same test, a slow site from an unknown brand was taken as a sign that its hosting must be poor.
False Consensus Effect
We believe more people agree with us than is actually the case.
The impact of this bias is all too real and obvious. As soon as we spot something when testing a site, we assume everyone else will see the same thing – whether that be positive or negative.
This bias can lead us to ignore an issue that needs attention, or to waste time and money solving an issue that isn’t as important as we think. Always remember – you are not your customer! Test to see whether an issue really is an issue before heading down a blind alley.
Curse of Knowledge
Once we know something we assume everyone else knows it too.
This bias perfectly sums up the need for user testing, especially if you’re trying something unusual like a new navigation style, or developing software that will require the user to go through a learning curve.
Designers and developers often assume the user will be able to quickly and easily figure out this brave new world without direction, forgetting that the only reason they don’t need direction is because the thing being tested is their own invention.
Spotlight Effect
We overestimate how much people are paying attention to our behaviour and appearance.
While this may not be a major one for user testing, it’s worth bearing in mind when getting hung up on web changes. We’ve seen user tests where buttons set in the brand colours were less obvious to users, yet internal comms teams worried that switching the buttons to a non-brand colour would alienate or confuse users by introducing unrecognised colours.
In reality, customers rarely know what your colour palette includes and will spend more time thanking you for addressing the poor buttons than chastising you for going off-brand.
Remembering that your users won’t be paying as much attention to you as you think will enable you to focus on fixing issues in the best way rather than allowing for customer concerns that likely don’t exist.
Availability Heuristic
We rely on immediate examples that come to mind when making judgements.
This ties in with Jakob Nielsen’s Law of the Internet User Experience – users spend most of their time on other sites.
They base their expectations on the examples that come to mind, and as those are typically the sites they visit most often or most recently, it’s important to stick with widely used conventions and best practices.
Naive Realism
We believe that we observe objective reality and that other people are irrational, uninformed or biased.
How many times have you flagged something up with marketing – whether on your website or in printed work – only to be told “Well, you may be reading it that way, but there’s no way anyone else will be”?
That’s because the person you’re flagging the issue with is displaying naive realism – you’re being irrational, they’re being realistic and sensible. In reality, if one person has raised a concern, chances are others will feel the same way.
It’s one of the reasons why you can find 85% of a site’s issues with just five users. But if you’re the person collecting feedback, don’t assume someone is ill-informed or wrong to flag something up. It will cause you to miss things that need to be dealt with.
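The 85%-with-five-users figure comes from Nielsen and Landauer’s problem-discovery model, in which each test user uncovers a fixed proportion of a site’s usability issues. A quick sketch of the arithmetic, assuming their typical per-user discovery rate of 31%:

```python
# Nielsen & Landauer's problem-discovery model: the share of usability
# issues uncovered by n test users is 1 - (1 - L)^n, where L is the share
# a single user finds (roughly 0.31 in their studies; assumed here).
def problems_found(n_users: int, per_user_rate: float = 0.31) -> float:
    return 1 - (1 - per_user_rate) ** n_users

for n in (1, 3, 5, 15):
    print(f"{n:>2} users -> {problems_found(n):.0%} of issues found")
```

With five users the model lands at roughly 85%, and the curve flattens quickly after that – which is why small test groups are enough to surface most issues.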
Dunning-Kruger Effect
The less you know, the more confident you are. The more you know, the less confident you are.
This is quite often seen when conducting user testing. Older users, for example, tend to feel they’ve performed poorly in user tests because they don’t trust their abilities online, while younger users frequently over-estimate their skills.
This is why observing users is important – you need to be able to tally their perceived performance with their actual performance.
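One way to make that tally concrete – sketched here with invented session data – is to record each participant’s self-rated success alongside their actual task completion and look at the gap:

```python
# Hypothetical test sessions (invented data): each pairs a participant's
# self-rated success (0-10) with the fraction of tasks they actually completed.
sessions = [
    {"user": "A", "self_rating": 3, "tasks_completed": 0.9},  # distrusts their skills
    {"user": "B", "self_rating": 9, "tasks_completed": 0.4},  # overestimates them
]

def calibration_gap(session: dict) -> float:
    """Positive means overconfident, negative means underconfident."""
    return session["self_rating"] / 10 - session["tasks_completed"]

for s in sessions:
    print(s["user"], f"{calibration_gap(s):+.1f}")
```

A consistently negative gap flags an older user underselling a solid performance; a large positive gap flags the overconfident participant whose self-report shouldn’t be taken at face value.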
The other danger of the Dunning-Kruger Effect is that if the people heading up the project don’t know as much as they think – usually because they’re basing their thinking on feelings rather than data – they will be more confident in their incorrect assumptions.
Anchoring
We rely heavily on the first piece of information introduced when making decisions.
This bias is actually a useful technique when it comes to reducing cognitive load for the user.
Charity websites can raise more in donations by suggesting a recommended amount rather than leaving it to the user to decide.
Sites with a long sign-up process can set expectations by letting the user know how many steps they’ll need to go through.
And during a sale, showing the old price alongside the new one anchors users’ estimates of how much the item is actually worth and of the value they’re getting from the sale.
In the absence of other information, people rely heavily on anchoring in order to shape the decisions they make when using a product or service.
Good anchors help users set their expectation for what’s normal or exceptional, lower the cognitive cost of decision making, and can even increase the perceived value of a product.
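As a trivial illustration of the sale-price anchor (the product figures and currency here are invented for the example), showing the old price next to the new one lets the user see the saving without doing the maths themselves:

```python
def sale_label(old_price: float, new_price: float, currency: str = "£") -> str:
    """Render a sale price next to its anchor (the old price) with the % saved."""
    saving_pct = round((old_price - new_price) / old_price * 100)
    return (f"Was {currency}{old_price:.2f}, now {currency}{new_price:.2f} "
            f"(save {saving_pct}%)")

print(sale_label(80.00, 60.00))  # Was £80.00, now £60.00 (save 25%)
```

The anchor (£80.00) does the work: the same £60.00 shown on its own carries no sense of a bargain.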
Confirmation Bias
We tend to find and remember information that confirms our perceptions.
This is especially important to be aware of when conducting user testing, as the data you collect while running tests will only be useful if you treat all of it equally.
Even with data to back up your UX views, confirmation bias makes it easy to simply ignore the stats that don’t agree with you. As they say, there are lies, damned lies, and statistics.
Being aware of confirmation bias gives you a greater chance of taking a step back and asking “Is this what the data says, or am I just looking for numbers that support what I already thought?”
Belief Bias
We judge an argument’s strength by how plausible its conclusion seems to us.
This has a similar impact to Naive Realism in that we discount others’ views for fallacious reasons.
In this instance, we may discount another person’s view on our website if their concern doesn’t seem plausible to us.
Where that runs into difficulty is that plausibility can only ever be judged within our own frame of reference. If we find it implausible that a web user would encounter a certain issue on our site, we’ll discount the argument of the colleague raising the concern – even if they have examples and data to back it up.
Your view isn’t based on what they’re bringing to the table, but instead on whether you find it a valid concern to raise.
Status Quo Bias
We tend to prefer things to stay the same. Changes from the baseline are perceived as a loss.
This may well explain why user testing results have remained largely the same since the early 90s – unless users can see a real benefit to changing the way websites work, it’s less cognitive strain to continue doing the same things, even if a potential new solution would actually improve their overall online experience.
Sunk Cost Fallacy
We invest more in things that have cost us something rather than altering our investments, even if we face negative outcomes.
This is an important one to be aware of when flagging up possible issues late in the day. We’ve heard of web projects that have encountered an unexpected bump highlighted by user testing – or even a passing comment from a colleague – that management chose to ignore because of the implications it had on the overall project.
In some instances it was fine, but in others it led to noticeable flaws in the website that could have been fixed but were ignored due to the sunk-cost fallacy.
Framing Effect
We often draw different conclusions from the same information depending on how it’s framed.
The Framing Effect is one to bear in mind when reviewing data, and is potentially best demonstrated by looking at bounce rate (which is a pretty meaningless metric anyway).
Let’s say a page has an 80% bounce rate. Depending on the purpose of that page, this could mean that 80% of your visitors were immediately unimpressed by your site and decided to leave straight away, or that 80% of your visitors found the information they were looking for instantly (e.g. contact details) and left your site happy.
Knowing how the data is framed will help you make the best use of that data.
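One way to keep that framing explicit in your reporting – sketched here with made-up page types and thresholds – is to pair each page’s bounce rate with the expectation for its purpose:

```python
# Hypothetical expected bounce rates per page purpose (invented thresholds):
# a contact page that answers one question quickly is allowed to bounce most
# visitors, while the same figure on a landing page deserves a closer look.
EXPECTED_BOUNCE = {"contact": 0.85, "landing": 0.40}

def interpret_bounce(page_type: str, bounce_rate: float) -> str:
    expected = EXPECTED_BOUNCE[page_type]
    verdict = "within expectations" if bounce_rate <= expected else "worth investigating"
    return f"{page_type}: {bounce_rate:.0%} bounce, {verdict} (expected <= {expected:.0%})"

print(interpret_bounce("contact", 0.80))
print(interpret_bounce("landing", 0.80))
```

The same 80% figure comes out as healthy for the contact page and as a red flag for the landing page, which is exactly the frame-dependence the bias describes.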
Authority Bias
We trust, and are more often influenced by, the opinion of authority figures.
This can be easily distilled as “Just because your boss (or a user experience practitioner) tells you something, doesn’t make it right!”. In fact, any UX professional worth their salt will give the same answer when asked whether a website’s problem should go with option A or option B – test it!
Being an expert in UX and user testing means knowing what issues to look for, spotting basic flaws that can be found on countless sites, and knowing how to set up and conduct tests in a way that will give genuine insight. What it doesn’t mean is that they know the answer without running tests.
Similarly, your boss may be hugely experienced in their industry, but that doesn’t mean their view on the content or structure of your company website is correct. The saying “You are not your customer” is just as relevant to company bigwigs as it is to anyone else – sometimes more so!
Tachypsychia
Our perception of time shifts depending on trauma, drug use and physical exertion.
The perception of time also shifts with mental exertion – an overly complicated site may seem to take longer to load or to perform basic tasks than a site that’s easier to use. The takeaway? Making your site as easy and intuitive as possible will make your users feel their visit was fast and smooth.