Whole-team User Research

In the 15 or so years I've been designing interactive products, I have always tried to involve the intended audience (users) in the course of product development. A recent article by Jared Spool on UIE.com articulates something different that I've felt but rarely had the data to support: involve the rest of the team in deep exposure to users. I'm in complete agreement with this approach, as long as sensitivity to team structure and dynamics is understood and managed well. Resistance to conducting research (broadly speaking, from field research to usability testing) can come from unexpected quarters, but the usual suspects include Sales (who fear losing control of the relationships they have with their clients), Technology (who see it as something that will slow down development), and Product (who've invested too much time in their MRD and PRD documentation). The Design team, too, has its share of those who fear user input will impinge on their freedom to create, among other reasons.

I started my career at Morningstar working on Windows-based software, where our team used a waterfall process and testing was difficult. We had to wait until the product was nearly done, which was the only point at which it could stand up long enough for us to test it with users. Even at that late stage the feedback was invaluable, but we rarely had many of the front-line developers on hand to observe. Also, much of the development staff were from mainland China and had a more limited command of English.

Later on at Razorfish, I learned valuable lessons in how not to run user research. One of the failures of the agency model is how often work is done in an assembly-line fashion, with little room for iteration and with teams that are separated from each other. My team did excellent work, but too often we weren't tightly coupled enough with the IA, Design, and Tech teams to provide user feedback in a timely manner. User research was also trimmed or eliminated from pitch budgets, since it always appeared as a discrete line item.

At Yahoo!, we had a much more tightly integrated UX team, but we weren't physically or organizationally close to Tech. In between us was a firewall of Project Managers (necessary in any large endeavor, I might add). This inhibited our ability to get everyone on board, although we did manage to get some of the front-end developers to attend usability tests.

Right now I'm at betaworks, and the situation is different still. Here teams are very small and fast-moving. I don't have a team, everyone is super-busy, and it's hard to find time to squeeze in anything other than core job responsibilities. There are vastly different outlooks on what roles are critical, and the products serve very different markets (although they all have UIs). In a startup environment, there is a lot of pressure to maintain a clear vision and course in the face of all manner of obstacles, users included.

Here lies the challenge: how to integrate user feedback into a team's work so that it's not considered unusual and can't be sidelined. A few takeaways:

  • Convince teams that something that feels counter-intuitive is actually better. Spending more time on understanding users, planning, and designing actually makes the development cycle shorter (and saves a great deal of time post-launch).
  • Convince teams that this is as important a part of the process of developing interactive products as continually testing for and fixing bugs.
  • As Spool points out, every discipline needs to be involved to avoid the kind of infighting that comes about when people are basing decisions on assumptions about user goals, not empirical evidence.

Design decisions for Chartbeat.com

The chartbeat dashboard recently underwent its first major revision since launching a year ago. Below is a copy of a post I wrote for the chartbeat blog giving some background on the redesign. The team has a strong vision for chartbeat, and to bolster that vision I led some quick-and-clean research into how current and prospective users view chartbeat. Our plan included heuristic evaluation, in-person usability reviews, and group-based cognitive walkthroughs, all to provide insight and momentum. The team arrived at a set of first principles to guide development, and we quickly settled into an iterative design-build-test cycle, where the fidelity of each step evolved as we built out the site. I've highlighted some of the core principles and selected design decisions below:

Structure the site around user goals

chartbeat is a tool for front-line workers, not just internal analytics teams. We designed the layout around a set of use cases, so users could walk through the data in a logical fashion, understand causality, and take action. The triad of panels at top allows users to understand "how many people, how did they get here, and what are they looking at?" in a snap. We also heard that our users love the kinetic nature of chartbeat, so we extended that a bit with a Matrix-like stream of raw hits in the right-most column.

Data should be appropriately dense, clear, and actionable

Data should be rich and deep, without compromising ease of use and clarity. As an example, the tree map in v.1 was challenging for users: they liked the intent, but it was difficult to interpret. Sites with extremely low or high traffic, or with few pages, skewed the chart so that it was impossible to analyze. We decided to use a small range of fixed sizes, ensuring the display of most pages, and to use dots to represent visitors. Larger numbers and page titles increase legibility, while isolating the page modules with white space makes it easier to read them as units. We also standardized and gave meaning to the range of colors we used, so users can more easily associate meaning across the panels.
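
To make the sizing scheme concrete, here's a minimal sketch of how a page's visitor count might map to one of a few fixed module sizes, with dots standing in for visitors. This is purely illustrative; the names, thresholds, and grouping factor are my assumptions, not the actual chartbeat implementation.

```typescript
// Hypothetical sketch of the fixed-size module logic described above.
// The real chartbeat code isn't public; all names and thresholds are illustrative.

type ModuleSize = "small" | "medium" | "large";

interface PageModule {
  title: string;
  size: ModuleSize; // one of a small range of fixed sizes
  dots: number;     // each dot represents a group of visitors
}

const VISITORS_PER_DOT = 5; // assumed grouping factor

function buildPageModule(title: string, visitors: number): PageModule {
  // A small range of fixed sizes keeps low- and high-traffic pages
  // from skewing the layout the way the old tree map did.
  const size: ModuleSize =
    visitors < 20 ? "small" : visitors < 100 ? "medium" : "large";

  return {
    title,
    size,
    dots: Math.max(1, Math.round(visitors / VISITORS_PER_DOT)),
  };
}

// Example: a page with 42 concurrent visitors renders as a medium
// module with 8 dots.
console.log(buildPageModule("/blog/redesign", 42));
```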

Everything should be on a single page

A key interaction design challenge chartbeat faces is letting users drill into richer data without resorting to traditional hierarchical navigation schemes. We came up with the notion of "pivoting" around a selected data element, where the entire page changes to reflect just that element. This way, chartbeat can serve as a site-level analysis tool and easily shift to isolate a page with a single click.
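
A minimal sketch of how such a pivot interaction might be wired up (again illustrative, not the actual chartbeat code): a single piece of state determines whether the dashboard shows the whole site or one page, and every panel re-renders from that state.

```typescript
// Illustrative sketch of a single-page "pivot" interaction; not the
// actual chartbeat implementation. One piece of state drives every panel.

type Pivot =
  | { kind: "site" }                 // default: whole-site view
  | { kind: "page"; path: string };  // pivoted onto one page

let pivot: Pivot = { kind: "site" };

// Each panel knows how to render itself for either scope.
const panels: Array<(p: Pivot) => void> = [
  (p) => console.log("traffic panel:", p.kind === "site" ? "all pages" : p.path),
  (p) => console.log("referrers panel:", p.kind === "site" ? "all pages" : p.path),
];

function render() {
  panels.forEach((panel) => panel(pivot));
}

// Clicking a data element pivots the entire dashboard onto it,
// with no hierarchical navigation: one click in, one click out.
function pivotTo(path: string) {
  pivot = { kind: "page", path };
  render();
}

function pivotBack() {
  pivot = { kind: "site" };
  render();
}

pivotTo("/blog/redesign"); // isolate a single page
pivotBack();               // return to the site-level view
```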

Use historical data as context, but keep it a real-time tool

Users love the real-time aspect of chartbeat data, but a unanimous request was for some context for what they're viewing. Some historical data existed in the previous version, hidden behind a tab at the top of the page. Hidden, too, was a powerful replay feature that lets users isolate events and walk through them ("Tivo for your website," as Tony likes to say). By bringing replay to the fore with a trend chart, we signaled to users that historical data is available. When users pivot on data, we also show a thin historical chart that can be expanded for more detail.

We've been doing some follow-up visits with users to understand their long-term usage and how chartbeat fits into their workflows. One big finding has been that the clarity of the data brought many new features to the foreground for users, giving them new reasons to use chartbeat.

A Real Nudge

William Poundstone unravels a great example of how businesses use "nudges" to steer customers toward choices they otherwise might not make.

"Puzzles, anchors, stars, and plowhorses; those are a few of the terms consultants now use when assembling a menu (which is as much an advertisement as anything else). “A star is a popular, high-profit item—in other words, an item for which customers are willing to pay a good deal more than it costs to make,” Poundstone explains. “A puzzle is high-profit but unpopular; a plowhorse is the opposite, popular yet unprofitable. Consultants try to turn puzzles into stars, nudge customers away from plowhorses, and convince everyone that the prices on the menu are more reasonable than they look.”

More at: http://nymag.com/restaurants/features/62498/

Behavioral Economics and User Experience

A short but interesting article in the WSJ goes into the use of "nudges" and social pressure to encourage people to modify their behavior. The basic idea is that people don't always behave rationally or in their own best interest. While this wasn't big news to the rest of the world, apparently it is for many mainstream economists, who continue (or did, until recently) to believe that markets are always efficient because people will always carefully weigh choices and make the best one. One of the bits that stood out for me was the use of social pressure to modify electricity use in Sacramento. By making neighbors' power use known, the utility found that customers will actually lower their own to meet or beat their neighbors'. When we think people are watching us, it turns out, our behavior is quite different. A lot of this is covered in "Nudge," a great read on the subject.

This kind of feedback, along with ordering choices and playing off people's tendency to overvalue free things, can be a useful tool in designing user experiences. We've been exploring this at Chartbeat (betaworks), and I've been wanting to leverage it more in fund-raising at my children's school (where we've already had some success using social media).

Facial Recognition

Wired Science has a short discussion of how humans recognize and process facial characteristics, and why we sometimes stare at people with facial deformities. An evolutionary response causes our brain to momentarily stumble when we see people who don't have symmetrical features:

To decide, your eyes sweep over the person’s face, retrieving only parts, mainly just his nose and eyes. Your brain will then try to assemble those pieces into a configuration that you know something about.

When the pieces you supply match nothing in the gallery of known facial expressions, when you encounter a person whose nose, mouth or eyes are distorted in a way you have never encountered before, you instinctively lock on. Your gaze remains riveted, and your brain stays tuned for further information.

“When a face is distorted, we have no pattern to match that,” Rosenberg said. “All primates show this [staring] at something very different, something they have not evolved to see. They need to investigate further. ‘Are they one of us or not?’ In other species, when an animal looks very different, they get rejected.”

Some of this response might be applicable to interface design, one would think. How do we respond to interfaces that aren't symmetrical or don't fit a recognizable pattern? Are the same processes at work? Some studies show that better-designed interfaces are perceived as more usable than functionally identical but poorly designed ones.

Judgement

Douglas Bowman posted today about his decision to leave Google, where he was a Lead Visual Designer. He sounds torn: on the one hand, it is a fast-paced environment where you can define an entirely new practice with the ability to affect millions of users. On the other hand, it is a strongly engineering-driven culture, where decisions are made based on hard data. I've encountered workplaces that fit that description, and almost every other one as well. Whether it's engineers, MBAs, marketers, or even other designers, an environment that doesn't understand and respect the role of design can be extremely hard to work in. Douglas is right in identifying what has always been a deal-breaker for me: if top management doesn't get it, forget it. I don't expect CEOs to have degrees in design, but they should have an appreciation that it is more than styling, more than just touching up at the end of a product development process. I'm willing to work with a CEO who is willing to be brought around, too, but it's almost hopeless when your role is neither understood nor well utilized (true for anyone, but I'm focused on design here). I'm perfectly happy to discuss the merits of any design decision I've made, but I've encountered a lot of dishonesty when the guys who cooked up the numbers in the PowerPoint deck are unwilling to acknowledge it, or the tech team won't concede that there are many more solutions available to the problem at hand. I'm not saying people are intentionally lying, but many fields have a veneer of science that convinces people they are actually producing scientifically valid work.

So what happens then is a Science vs. Art debate. Design decisions are perceived to be all a matter of opinion: most people have eyes, and therefore the ability to judge what's in front of them. Paradoxically, the designer's judgement is not respected when the subject matter is so easily manipulated. What can happen next is the descent into data.

In an effort to placate a manager who just doesn't like what he sees, design decisions are subjected to quantitative analysis. In many cases, there are strong arguments for using data to guide design decisions, whether usability, page load times, or even brand perception is at stake. In many cases, incremental design decisions can best be informed by data. I'm a huge proponent of incorporating testing into the product development process, not as a validation at the end but as part of the process. But when people are spending time testing between 41 shades of blue, clearly some time is being wasted.

All of us rely on judgement to make it through our daily lives and our work. Designers rely on judgement (based on years of training and practice), along with data and input from stakeholders, users, competitors, etc., to develop solutions. The solutions are indeterminate, often not apparent until after they have been arrived at, and there are always several options available. All this makes the ability to arrive at a solution extremely valuable.

MSFT Marketing Masquerading as Usability

I've always been a proponent of leveraging usability test artifacts (video, transcripts, quotes) to help communicate to decision makers the impact of design decisions. I always do this with great care, since I don't want to overstate issues I've uncovered, while at the same time making clear the human effects of software usability. Recently, Microsoft partnered with a branding agency to run what appear to be usability tests of its Vista operating system. The real goal was to counter the large amount of negative press around Vista. I'm a little concerned that what Bradley and Montgomery has done may damage the work user experience and usability professionals do, by trivializing research and making users look like fools.

  • Turns out that these were just 10-minute demonstrations by experts, not actual usability tests.
  • Apparently none of the participants were current Vista users, so their initial ratings of Vista were entirely perception-based. Unsurprisingly, their approval ratings after the demo were astronomical.
  • This is represented as scientific by calling it an "experiment" and using candid-looking video clips of users in a lab setting. The video clips are not shown in their entirety, nor is a transcript of the interviews made available. I didn't expect otherwise; even if this were real product research, this material wouldn't be made public.
  • From a methodology standpoint, they out-and-out lied to the users about what the software was and what their intentions were. To make matters worse, "gotcha" clips are shown of users after it is revealed that they are indeed using Vista.
  • I don't have access to the genesis of this project, but it's clear this was not an open-ended inquiry into the usability of Vista; it was a marketing exercise from the outset, designed to prove that Vista is fine and the bad press is unjustified.

Is this going to hurt user researchers? I doubt it, given how few people will view the marketing site, but the danger is real. I wonder how the product folks at Microsoft are viewing this.

*Update: The response at the MS Vista product blog isn't entirely positive either.

Giving Users Feedback and Control Over Energy Usage

This is a classic feedback loop: just as giving people a scale helps them lose weight, this study gave people feedback and the ability to participate in the energy market, and it lowered energy usage.

http://www.nytimes.com/2008/01/10/technology/10energy.html

How the brain discriminates

I'm fascinated with human behavior, and in particular with how people are constantly looking for differences in the world around them. I don't know if this is an evolutionary response (fight or flight) or something else. It is a powerful response that allows us to quickly assess what is going on around us, but it has a downside: because it is so powerful, we have the capacity to amplify distinctions that aren't meaningful. This leads to "the battle of the sexes," India and Pakistan, etc. A recent Harvard study looks at neural patterns when people encounter others who are similar and dissimilar to them. "How does the brain differentiate those who are similar to us from those who are different? Does it analyze differences in skin color, language, religion, height, eye color, foot size? Does it discriminate cat versus dog lovers, Pepsi versus Coke drinkers, Shiite versus Sunni, Crips versus Bloods?"

http://science-community.sciam.com/blog-entry/Mind-Matters/Harvard-Students-Perceive-Rednecks-Neural/300008563

I wonder whether we can overcome this through training (education).

Multitasking makes you dumb

An article in the Atlantic looks at multitasking as a generational and technological phenomenon that may (or should) be peaking. The article is well written and humorous, interspersing personal anecdotes with scientific studies. "Even worse, certain studies find that multitasking boosts the level of stress-related hormones such as cortisol and adrenaline and wears down our systems through biochemical friction, prematurely aging us. In the short term, the confusion, fatigue, and chaos merely hamper our ability to focus and analyze, but in the long term, they may cause it to atrophy."

http://www.theatlantic.com/doc/200711/multitasking

"Google Generation is a myth"

A new study overturns the common assumption that the 'Google Generation' – youngsters born or brought up in the Internet age – is the most web-literate. The first-ever virtual longitudinal study, carried out by the CIBER research team at University College London, claims that although young people demonstrate an apparent ease and familiarity with computers, they rely heavily on search engines, view rather than read, and do not possess the critical and analytical skills to assess the information they find on the web.

Also noteworthy is that many of the behaviors often associated with younger users are becoming the norm for all users.

via Putting People First