Sunday was the first day of Usability In Practice, a three-day intensive boot camp on how to run user tests. I have been trying to keep up with my activities in Washington, D.C. by posting on Twitter as well as here on my blog.
Hoa Loranger kicked things off by covering the foundations of usability. She explained that you and your colleagues have a very different experience than your users, which makes it very difficult to predict their needs. This is the basis of user-centered design. She covered five dimensions of usability as a quality criterion – learnability, efficiency of use, memorability, errors, and subjective satisfaction. The relationship between design and usability is like the relationship between writers and editors. The Discount Usability method focuses on qualitative rather than quantitative tests, providing faster results with fewer resources.
Hoa then introduced the user testing methodology. This is a simple way to collect first-hand data from real users. It is a simple feedback loop – plan your user tests, conduct the tests, analyze your findings, present your findings, and finally modify your designs and retest.
Janelle Estes then took over and walked us through most of the methodology: how to plan your study, recruit participants, write your tasks, choose your location, and observe and take notes.

When planning your study, decide exactly what you will be testing, which metrics to collect, and how to identify your target users. When recruiting participants, don't underestimate the amount of time it takes to find them. A screener, or script of questions, is a great way to screen possible recruits in or out. Once they are chosen, send a confirmation letter to your users, including information about the incentives they will receive for showing up. Schedule your sessions with both their time and your time in mind.

When writing your tasks, keep them focused on the goals of the test session. You can use first-impression questions, exploratory tasks, and directed tasks. When choosing the location, keep the user, the tester, and the observer in mind. Should it be in-house in a conference room, or in a usability lab? Be sure that your setup can capture screen activity, audio and video, timing, and notes. Pilot your test before running it with real users, and on the day of the tests be ready to take notes in a notebook or spreadsheet.
Jakob Nielsen then wrapped up the day with a presentation on the Usability Toolbox. He discussed a number of different sources of data and techniques to improve your site or application, and noted that this improvement work can fit into any systems development lifecycle. Jakob also walked through expert review methods. The first is the Heuristic Evaluation – a way for experts to examine the interface. The second is the Guidelines Inspection – a way to inspect the site relative to a list of guidelines. An interesting point he brought up was the expected versus actual results of a Likert scale. When implementing subjective satisfaction surveys on a 1-to-7 Likert scale, the mean is usually about 4.9, not the nominal midpoint of 4. Human nature is not to give a 1 or a 2, which effectively turns the 1-to-7 scale into a 5-point scale running from 3 to 7. Very interesting.
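To make that Likert point concrete, here is a minimal sketch (my own illustration, not something from the workshop) comparing the nominal midpoint of a 1-to-7 scale with the midpoint you get if respondents avoid the bottom of the scale and effectively answer only 3 through 7 – which lands close to the roughly 4.9 mean Jakob reported.

```python
# Nominal midpoint of a 1..7 Likert scale: average of all seven points.
nominal_mean = sum(range(1, 8)) / 7  # = 4.0

# Assumption (for illustration): respondents rarely give a 1 or 2, so
# answers effectively span only 3..7 - a 5-point scale in disguise.
effective_mean = sum(range(3, 8)) / 5  # = 5.0

# The effective midpoint (5.0) sits near the ~4.9 mean reported in
# practice, while the naive expectation (4.0) does not.
print(nominal_mean, effective_mean)
```

In other words, if you interpret a 4 out of 7 as "neutral," you will misread real survey data; the practical neutral point sits closer to 5.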
So that wrapped up Day 1 – lots of great information. Tomorrow will cover conducting the tests and then analyzing, reporting, and presenting the results.