With all the usability testing software, services and books available, we no longer have any excuse not to do usability testing early and often. I went to a workshop with Nate Bolt, author of Remote Research, at IA Summit 2012 to find out how. This is the second post (of as many as I can find the time to do) about what I learned at the Summit.
The timing was great; I had a long list of things I needed to test back on my desk at work. Although there’s a list of possible tools on GCPEDIA, I wasn’t sure which one to use for what. I had limited time, no budget and was already overwhelmed before I’d begun.
Tools I’ve used
During the workshop, I had a chance to test out Usabilla.com, and since then my team has used Optimal Workshop‘s suite of products. We’ve also done tests with paper prototypes, which is quick and easy but doesn’t necessarily mimic the digital screen the content will eventually appear on.
I’ve also used TechSmith’s Morae and had a product demo of usertesting.com. IntuitionHQ’s blog is helpful whether you use that tool or not (I haven’t yet). I haven’t tried wufoo.com either, but one of the other workshop participants showed me the admin interface with all its fancy charts and graphs, and I really want to try it. She was taking her iPad out to the campus where she worked, having students and teachers (her target audience) complete tests, then breezing back into her office to review the results with her team and decide what changes to make to her site. Now that’s agile!
So, how can you choose a tool?
Tools like Usabilla.com and Chalkmark are really only good for testing one-step processes, where you can test simple wireframes or screenshots of live (or demo) pages. Loop11 and others that use live sites can test multi-step processes. The resulting data is easy to read but varies somewhat between tools, so I highly recommend trying out as many as you can to make sure you get what you need out of them.
First off, let me say that none of these tools replaces moderated testing, which can be done remotely over the ’net or sitting right beside someone. If you have the good fortune of having a real usability test lab, you can’t beat Morae for its ease of use or its functionality for processing data after testing.
The nice thing about doing testing remotely is that you can catch people as they are about to use your website in their native environment. In this way, it’s more like ethnographic research than usability testing. You can start by asking them what they were about to do, then watch them as they attempt to complete the task they came to do.
To demonstrate, Nate started the workshop by getting us to do a simple intercept survey from his website using ethn.io, then immediately calling a survey respondent on the phone to request participation in a quick usability test. Personally, I would find it a bit creepy if someone called me 1 minute after filling in a survey online, but he said that’s never been an issue.
Of course the technology failed in his demo, which is the same thing that happened to my team back in the office. We wanted to send a link to the online test software to our colleagues, but we couldn’t access the platform from inside our network. Sigh.
There are also services available where you can use a company, like usertesting.com, to review your website for you. They have a panel of users to choose from (by demographics such as age, location, income bracket or education) or you can recruit your own. The drawback is that these are frequent web users, and unless you do the recruiting, they don’t necessarily map to your target audience. The output of the tool is quite thorough – hours and hours of video of people using your website. You can even pay the company to analyze the data for you, in which case perhaps you’re better off doing your own moderated testing with real users. I haven’t yet taken the time to find out whether there are privacy issues with collecting so much data about people, but I imagine a well-worded consent form would be needed.
None of the tools I’ve used so far includes the entire workflow of recruiting, screening potential participants, having them sign a consent form, testing, then capturing follow-up information. You need to figure that out yourself before you get started.
Testing the test
Inge De Bleecker did a presentation later in the week on crowdsourcing remote unmoderated usability testing, highlighting the advantage of these hosted solutions: they are just plain cheap and easy. Some might consider that a drawback, as it’s just as easy to misinterpret results and create skewed test plans if you don’t know what you’re doing.
During the workshop, we were given time to test out a number of tools, and I tried Usabilla.com. My test plan consisted of only one task, but within minutes of sending the test link out over Twitter, about a quarter of the participants got back to me to say they had messed up the test. I should have tested the test. Oops. Even so, because so many people completed the test, I got enough data to confirm that the goal itself could be completed. I would definitely use this approach again, especially where I wanted to validate something I’d uncovered elsewhere with more people.
Within an hour I had registered for a demo, learned the tool, set up the test, recruited participants, gotten nearly 30 completed tests, and figured out how to view and analyze the results. A screenshot of the tool for the task I tested is shown below.
Back at the office, we observed participants as they completed 4 tasks in Chalkmark on a laptop that we carried around to their workstations. We asked them to think aloud as they made their way through the tasks, and we took notes on what they said. The most useful part was hearing what they had to say.
Unfortunately, we weren’t able to analyze the results by audience segment within the software, even though the opening question asked participants to self-identify with segments that matched our personas. Next time we’ll record whether or not each user successfully completed the task while we’re observing.
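Until a tool supports it, segment breakdowns are easy enough to compute by hand from your own observation notes or an exported results file. Here’s a minimal sketch in Python, assuming you recorded each participant’s self-identified segment and task success yourself; the field and function names are illustrative, not any tool’s actual export format:

```python
from collections import defaultdict

def completion_by_segment(rows):
    """Tally task completion per audience segment.

    Each row is a dict with a self-identified 'segment' and a
    'completed' flag recorded by the observer. These field names
    are assumptions for illustration only.
    """
    totals = defaultdict(lambda: [0, 0])  # segment -> [completed, attempted]
    for row in rows:
        totals[row["segment"]][1] += 1
        if row["completed"]:
            totals[row["segment"]][0] += 1
    return {seg: (done, tried) for seg, (done, tried) in totals.items()}

# Example: a few observations matched against two personas
sample = [
    {"segment": "student", "completed": True},
    {"segment": "student", "completed": False},
    {"segment": "teacher", "completed": True},
]
print(completion_by_segment(sample))
```

Even a rough tally like this would have let us compare completion rates across personas instead of only looking at the aggregate numbers.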
The bottom line
While none of these replace moderated testing with real users, they are, in my opinion:
- Most useful when used with/in between moderated tests.
- Useful for expanding the number of people tested for the same tasks tested in moderated tests.
- Really good for simple tests or when comparing 2 options.
- Better than not doing any testing at all.
- Cheap, easy and an interesting way to learn about testing.
The hardest part
The hardest part of using any of these tools was coming up with the right questions. You don’t want to inadvertently skew your data with a poor test method – for example, by ordering questions or choosing words in a way that influences the results.
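One small guard against ordering effects is to vary the task order from participant to participant, so no single sequence biases the results. None of the tools above did this for me, but it’s easy to sketch; the function and task names here are my own, purely for illustration:

```python
import random

def randomized_order(tasks, participant_id):
    """Return a per-participant shuffle of the task list.

    Seeding the shuffle with the participant id keeps the
    ordering reproducible, so you can note in your records
    which sequence each person actually saw.
    """
    rng = random.Random(participant_id)
    order = list(tasks)
    rng.shuffle(order)
    return order

tasks = ["find contact page", "locate FAQ", "start application"]
print(randomized_order(tasks, participant_id=7))
```

With a handful of participants per ordering, a task that only fails when it comes last starts to stand out from a task that fails everywhere.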
I found it tough not to fall victim to my own assumptions, but over time I think I’ll figure out how to build my hypotheses into my test plans without them pre-defining the outcome.