Monthly Archives: April 2009

Usability Week 2009 – Day 3

Day 3 was the final day of the Usability In Practice 3 Day Camp. The presenters wrapped up how to report your findings by reviewing our homework; reviewing the bad Findings Report was just as informative as seeing the good one. They also covered paper prototyping, field studies, how to finance usability studies, the cost-benefit analysis of your work, and successful usability programs.

Paper prototyping is a low-fidelity, cheap and easy way to try lots of different ideas for new designs without spending a lot of time building them. It is also great for setting expectations that this is not a polished product, so the critique stays much more focused. Paper prototypes are great for reviewing navigation, workflow, layout, content, terminology, labels, and naming conventions. Testing runs very similarly, except your users will use a pen instead of a mouse and keyboard, and you will have an additional person in the room playing the part of the computer.

Field studies are a great way to get lots of information about your customers. These studies show how their environment affects how they use your product. Artifacts, or the things around your users, give you clues to their habits, tools, and distractions. Be sure to tell your subjects not to clean up before you arrive! And if you are going to observe someone like a call center employee, don’t let their manager give you a demonstration instead. They will definitely use the application differently, if at all. Field studies will provide lots more information, so be prepared to record all the data in different ways.

Jakob Nielsen presented some great numbers on usability testing. Excluding employee time, the only out-of-pocket cost of usability testing is participant incentives. Usability budgets for big projects at major corporations average around 10% of the project’s budget. What you should expect to see is an increase in conversion rate, a decrease in bounce rate, an increase in community participation, and an increase in clickthrough rates.
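A back-of-envelope sketch of how those kinds of numbers combine into a cost-benefit case (every figure below is hypothetical, invented for illustration, and not from the course):

```python
# Back-of-envelope usability cost-benefit calculation.
# All figures are hypothetical placeholders; plug in your own.
incentives_per_user = 75       # only out-of-pocket cost if staff time is excluded
users_tested = 5
test_cost = incentives_per_user * users_tested  # 375

monthly_visitors = 50_000
baseline_conversion = 0.020
improved_conversion = 0.023    # assumed lift after fixing top findings
revenue_per_conversion = 40

# Extra monthly revenue attributable to the conversion-rate lift
monthly_gain = (monthly_visitors
                * (improved_conversion - baseline_conversion)
                * revenue_per_conversion)
print(f"Test cost: ${test_cost}, monthly gain: ${monthly_gain:,.0f}")
```

Even with modest assumed lifts, the gain dwarfs the incentive cost, which is the shape of the argument Jakob was making.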

Kara wrapped up the session with a discussion of usability programs. The focus was on building user-centered design and testing into your projects from the beginning. This means that usability needs to fit within the existing project structure within an organization. This can happen in centralized, decentralized, or matrix organizations. Having a centralized repository for all usability documentation will instill a culture of knowledge management and continuous improvement. Start small, make management your ally, and you will be great!

This was a really great session. I walked away with a lot of information on how to conduct usability tests, and how to introduce this continuous improvement methodology into our organization. It is nice to move on to a new topic, however. I think our lecturers did a great job, but they were getting tired of seeing the same faces day in and day out. A couple of us even noticed that they were cutting Q&A sessions short to avoid some of the participants who could not seem to stay on track. Unfortunately, that limited other people from asking pertinent and intelligent questions. Looking back, there was only a brief discussion on usability guidelines, and I was hoping we would have spent more time on it. Still, it left me looking forward to the sessions over the rest of the week.

Hopefully, some of my friends from Usability Week 2009 are reading my blog now. Any comments yet, usabilitists?

Usability Week 2009 – Day 2

Today started off reviewing our homework. We had to write an objective and 3 to 5 tasks to review the inmod web site. We spent the first half hour reviewing the tasks in small groups. I am always surprised when working in small groups how easy it is for people to take the group off track.

The big topic for session 2 was about conducting the user test. You need to make sure that not only you are prepared, but everyone involved is prepared. The steps of a user test session are Introductions, Run the test session, Debrief the user, and Prepare for the next session. You should prepare the participant for what to expect, and make them comfortable. Stay as neutral as possible during the session. Get any final feedback you may need from the user, and answer any questions they may have. Then reset the computer and get your notes ready for the next test.

After all the sessions, it is time to analyze all of your new data. You organize your data into Findings with supporting details, assign each finding a Priority, make Recommendations based on the findings, and then cycle your changes into the next development cycle. All of this information should be included in a Report of your Findings. You should try to provide a short, informal report within the first 24 hours, and a more detailed, formal report within two weeks. Your reports should focus on the positive (what worked well) as well as the negative (what needs improvement). You should also formally present your findings to your client. Keep your meetings short. Leverage the video you took and include quotes from the users.
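One way to keep findings, priorities, and recommendations organized for the report is a small data model. The field names and priority labels below are my own sketch, not an NN/g template:

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    HIGH = 1      # blocks task completion
    MEDIUM = 2    # causes delay or confusion
    LOW = 3       # cosmetic issue

@dataclass
class Finding:
    summary: str            # one-line statement of what happened
    details: list[str]      # supporting observations and user quotes
    priority: Priority
    recommendation: str     # design change tied to this finding
    positive: bool = False  # report what worked well, too

# Hypothetical example entry
findings = [
    Finding("Users missed the site search box",
            ["4 of 5 participants scrolled past it",
             '"Where do I search?" (participant 3)'],
            Priority.HIGH,
            "Move search to the top-right corner"),
]

# The high-priority items lead the informal 24-hour report
urgent = [f for f in findings if f.priority is Priority.HIGH]
```

Sorting the list by priority gives you the skeleton of both the quick informal report and the formal two-week write-up.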

Jakob presented variants to the user testing methodology. You can test more than one user at a time if it makes sense (like husband and wife, co-workers, etc.). When you can’t go to the user and the user can’t come to you, remote testing is one of the last possibilities. You can also test more than one site (either two separate designs, or competing sites). Sometimes you may want to follow users over an extended period of time, so diaries or videos can be used. Eye tracking is a new technology that is very useful with video recording.

Based on the studies that NN/g has run, they have found that the ROI per user tested is maximized at 4 users. They recommend testing 5 users to be sure that you are safe in your results. Beyond 5 users, the number of new findings flattens out while the cost continues to increase incrementally.
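That flattening follows the published Nielsen and Landauer model, where the share of problems found by n users is 1 − (1 − L)^n, with an average per-user problem-discovery rate L of about 31%. A minimal sketch (the 31% figure is Nielsen's cross-project average; your own rate may differ):

```python
# Expected fraction of usability problems found as a function of
# test users, per the model findings(n) = 1 - (1 - L)^n, where
# L ~ 0.31 is Nielsen & Landauer's average discovery rate per user.

def problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Expected fraction of problems uncovered by n_users."""
    return 1 - (1 - discovery_rate) ** n_users

for n in range(1, 9):
    print(f"{n} users: {problems_found(n):.0%} of problems found")
```

Five users uncover roughly 85% of the problems, and each additional user adds less than the one before, which is exactly the diminishing-returns curve described above.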

You also need to be very careful when testing users with special needs. Disabled users and low-literacy users should be tested with simpler tasks and shorter sessions. Senior citizens love the attention and are overly polite, but should also be given fewer tasks, and you should expect more time for the introduction and wrap-up of the session. Testing children is also very different from testing adults. Testing in schools is ideal, if you can get permission. Shorter sessions and co-discovery methods make testing easier. International testing can be much more expensive, as it requires translators or much more travel. Hardware testing works similarly to software testing, but you need the real product to test.

Jakob closed this session with a discussion of the ethics of user research. You need to remember that these are people that you are testing. You need to treat them with respect and dignity. The rule of thumb is that you should treat them as you would want to be treated.

After today’s session, a few of us went to a sports bar to watch the North Carolina / Michigan State game. The three of us that went all had ties to Michigan. Shawn and his wife live there now, Rebecca has bounced back and forth between Detroit and Chicago, and I used to live there when I was very young. Naturally, we were all cheering for Michigan State. After 3 minutes of the game, Michigan State was down 15 points, and they never recovered.

Usability Week 2009 – Day 1

Sunday was the first day in a 3 day intensive boot camp on how to run User Tests called Usability In Practice. I have been trying to keep up with my activities in Washington, D.C. by posting on Twitter as well as here on my blog.

Hoa Loranger kicked things off by covering the foundations of usability. She explained that you and your colleagues have a very different experience than your users do, which makes it very difficult to predict their needs. This is the basis of user-centered design. She covered 5 dimensions of usability as a quality criterion – learnability, efficiency of use, memorability, errors, and subjective satisfaction. The relationship between design and usability is like the relationship between writers and editors. The Discount Usability method focuses on qualitative rather than quantitative tests, providing faster methods with fewer resources.

Hoa then introduced the user testing methodology. This is a simple way to collect first-hand data from real users. It is a simple feedback loop – plan your user tests, conduct the tests, analyze your findings, present your findings, and finally modify your designs and retest.

Janelle Estes then took over and walked us through most of the methodology. She covered how to plan your study, how to recruit your participants, how to write your tasks, how to choose your location, and how to observe and take notes. When planning your study, you need to decide exactly what you will be testing, what metrics to collect, and how to identify your target users.

When recruiting participants, don’t underestimate the amount of time it takes to find them. A screener, or script of questions, is a great way to screen possible recruits in or out. Once chosen, send a confirmation letter to your users, and include information about the incentives they will receive for showing up. Schedule your sessions with both their time and your time in mind.

When writing your tasks, keep them focused on the goals of the test session. You can have first impression questions, exploratory tasks, and directed tasks. When choosing the location, you need to keep the user, the tester, and the observer in mind. Should it be in-house in a conference room, or in a usability lab? Be sure that your setup can capture the screen, audio and video, timing, and notes. Pilot your test before running it with real users, and be sure that on the day of the tests you are ready to take notes in a notebook or spreadsheet.
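A screener boils down to a predicate over a recruit's answers. As a sketch (the criteria and answer keys here are invented for illustration, not from the course):

```python
# A screener as a simple yes/no predicate over a recruit's answers.
# The keys and thresholds are hypothetical examples of screening criteria.

def qualifies(answers: dict) -> bool:
    """Opt a recruit in (True) or out (False) based on screener answers."""
    return (answers.get("uses_product_weekly", False)   # must be a real user
            and 25 <= answers.get("age", 0) <= 65       # target demographic
            and not answers.get("works_in_ux", True))   # exclude industry insiders

recruits = [
    {"uses_product_weekly": True, "age": 34, "works_in_ux": False},
    {"uses_product_weekly": False, "age": 41, "works_in_ux": False},
]
accepted = [r for r in recruits if qualifies(r)]
```

Note that missing answers default to the disqualifying value, so an incomplete screener errs on the side of opting people out.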

Jakob Nielsen then wrapped up the day presenting information on the Usability Toolbox. He discussed a number of different sources of data and techniques to improve your site or application. Improvement of your site can be fit into any systems development lifecycle. Jakob also walked through Expert Review methods. The first method is a Heuristic Evaluation – a way for experts to examine the interface. The second method is a Guidelines Inspection – a way to inspect the site relative to a list of guidelines. An interesting point he brought up was the expected vs. actual results of a Likert scale. When implementing subjective satisfaction surveys with a Likert scale of 1 to 7, the observed mean is usually around 4.9, not the midpoint of 4. Human nature is to avoid giving a 1 or 2, which effectively compresses the 1-to-7 scale into a 3-to-7 one. Very interesting.
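You can see the compression effect with made-up data: if respondents avoid the bottom of a 1-to-7 scale, the observed mean drifts above the nominal midpoint of 4. A tiny sketch with hypothetical ratings:

```python
# Hypothetical satisfaction ratings on a 1-7 Likert scale.
# Respondents rarely give a 1 or 2, so the effective range is 3-7
# and the mean drifts toward ~4.9 rather than the nominal midpoint of 4.
ratings = [3, 4, 4, 4, 5, 5, 5, 6, 6, 7]  # invented sample
mean = sum(ratings) / len(ratings)

# Rescale against the effective 3-7 range (3 -> 0.0, 7 -> 1.0)
# so satisfaction scores stay comparable across studies:
normalized = [(r - 3) / 4 for r in ratings]
print(f"Mean rating: {mean}")
```

The practical takeaway is to benchmark against the empirical mean (around 4.9), not the arithmetic midpoint, when judging whether a satisfaction score is good.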

So that wrapped up Day 1. Lots of great information. Tomorrow will cover conducting the tests, and analyzing, reporting, and presenting the results.

Usability Week, Washington D.C. – Day 0 – Cherry Blossom Festival

In preparation for the Usability Week conference, I checked into The Omni Shoreham Hotel the day before. To my pleasant surprise, Saturday was the same day as the Cherry Blossom Festival. It was a very windy day, it was late in the afternoon, and there were hundreds of thousands of people in town to enjoy the cherry blossoms. None of these factors lent themselves to a peaceful photo session down along the Tidal Basin. But it was the last of the 4-day peak bloom period of the trees, so I decided to take the Metro down to the Smithsonian and walk along the Tidal Basin.

When I got off the Metro, I had to fight the tens of thousands of people trying to get back down into the subway. Once I got through the crowd, and made my way past the Smithsonian and the Washington Monument, I finally reached the Tidal Basin. The walkway along the tidal basin was absolutely packed with people. It reminded me of the WaterFire Festival I attended last October in Providence, Rhode Island, except there were more people here in Washington D.C.

I spent two to three hours walking along the basin, and took a lot of photos. I posted them on my Flickr account, and you can see my cherry blossom photos there.

Amit had recommended that I eat at Tono Sushi on Connecticut Avenue. It is only a block away from the Omni Hotel, so I decided to give it a try. The food was excellent, and the seafood was very fresh. I particularly enjoyed the Baked Salmon Roll, which was something new that I tried.