Whole-team User Research

In the 15 or so years I've been designing interactive products, I have always tried to involve the intended audience (users) in the course of product development. A recent article by Jared Spool on UIE.com articulates something I've long felt but rarely had the data to support: give the rest of the team deep exposure to users as well. I'm in complete agreement with this approach, as long as sensitivity to team structure and dynamics is understood and managed well. Resistance to conducting research (broadly speaking, from field research to usability testing) can come from unexpected quarters, but the usual suspects include Sales (who fear losing control of the relationship they have with their clients), Technology (who see it as something that will slow down development time), and Product (who've invested too much time in their MRD and PRD documentation). The Design team, too, has its share of people who fear user input will impinge on their freedom to create, among other reasons.

I started my career at Morningstar working on Windows-based software, where our team used a waterfall process and testing was difficult. We had to wait until the product was nearly done, since that was the only point at which it could stand up long enough for us to test it with users. Even then the feedback was invaluable, but we rarely had many of the front-line developers on hand to observe. It didn't help that much of the development staff was from mainland China and had a more limited command of English.

Later on at Razorfish, I learned valuable lessons in how not to run user research. One of the failures of the agency model is how often work is done in an assembly-line fashion, with little room for iteration and with teams separated from each other. My team did excellent work, but too often we weren't coupled tightly enough with the IA, Design, and Tech teams to provide user feedback in a timely manner. Research was also trimmed or eliminated from pitch budgets, since it always appeared as a line item.

At Yahoo!, we had a much more tightly integrated UX team, but we weren't physically or organizationally close to Tech. In between us was a firewall of Project Managers (necessary in any large endeavor, I might add). This inhibited our ability to get everyone on board, although we did manage to get some of the front-end developers to attend usability tests.

Right now I'm at betaworks, and the situation is different still. Teams here are very small and fast-moving. I don't have a team of my own, everyone is super-busy, and it's hard to find time to squeeze in anything beyond core job responsibilities. There are vastly different outlooks on which roles are critical, and the products serve very different markets (although they all have UIs). In a startup environment, there is a lot of pressure to maintain a clear vision and course in the face of all manner of obstacles, users included.

Herein lies the challenge: how to integrate user feedback into a team's work so that it's not considered unusual or easily sidelined. A few takeaways:

  • Convince teams that what feels counter-intuitive is actually better: spending more time up front understanding users, planning, and designing makes the development cycle shorter (and saves tons of time post-launch).
  • Convince teams that this is as important a part of the process of developing interactive products as continually testing for and fixing bugs.
  • As Spool points out, every discipline needs to be involved to avoid the kind of infighting that comes about when people are basing decisions on assumptions about user goals, not empirical evidence.