Eric Ulken, executive director of digital strategy at the Philadelphia Media Network, says his organization sees product managers as the “voice of the user.”

Everyone inside your organization can speak for themselves — advocating for what editorial, business, marketing or other departments want in a product. But users aren’t in those meetings, and a good product manager makes sure the product discussions and decisions consider what the users need and want.

To do that well, a product manager needs a process for actually learning what users need and want. This is usually accomplished through testing and gathering feedback. The participants at our summit offered important best practices for doing it well.

Before any kind of user testing, decide whether you are looking for “discovery” or “optimization.” Both are valuable, but each leads to very different processes and goals, so you have to be clear about which you’re seeking at any given time, said Matt LeMay, the veteran product manager who spoke about where the field is heading.
A discovery-focused exploration asks people about their lives, their needs and their decisions, in an attempt to surface a new idea or perspective you hadn’t considered. It leads toward new products or new approaches.

An optimization-focused exploration asks people to react to details of your existing product — does it work well, is it confusing, what could be better? It makes your existing product more valuable.

An everyday example of an optimization testing process is the refraction test a patient takes at the eye doctor: the patient looks into a machine and chooses the better of two prescription strengths each time. By the end, the doctor has determined the optimal lens prescription. But here you’re only narrowing options within the set of already available prescription strengths. If you wanted inspiration for designing a new line of eyeglass frames, or for inventing a whole new approach like the bifocal lens, you would need a discovery-oriented process that probes how people feel about the look of their frames and how vision affects their lives.

As a product manager, you need to match user testing processes to the type of insight you’re trying to get — discovery or optimization. Both are important, but distinct.

User feedback isn’t just for design. Although user testing is great for studying how the design of a product affects a user’s experience and interactions, it shouldn’t stop there.

“It’s pretty difficult to isolate ‘product’ user testing, and separate that from content, advertising, other aspects of the business,” said Jeff Anderson of Pilot Media. “Users don’t want to just give feedback on where things are located and size and design. Even though everybody in the organization would champion ‘Let’s build products around our users,’ there was no appetite to change content from a steady stream of vegetables to mixing in some candy, no appetite to loosen a pay meter, no appetite to reduce the number of ads on a page, regardless of what the users had to say.”
This doesn’t mean pandering to every little thing a user says they want. But if feedback sessions repeatedly surface complaints that there are too many ugly ads on the website, or that the news reports don’t feel relevant, that’s a warning flag you should work to understand and fix.

Get bosses to directly observe feedback sessions. Direct feedback from users testing your product is most persuasive when it is heard or observed firsthand, said Kelly Alfieri, executive director of special editorial projects at The New York Times.

“What I’ve found in user feedback is that it’s really valuable for people who are there the entire session and hearing firsthand from the people who are being interviewed, and it loses value as somebody hears it as part of a presentation from the research team or hears it secondhand,” she said. “It has so much value when people really see people and hear what they’re saying — it really gets everyone on the same page.”

If you can get the core project team, and even some top-level managers, to sit in on some of the feedback sessions or view the raw recordings of them, the bosses are more likely to develop empathy for what those users felt and said. It also creates more understanding and respect for the role of user testing in product decisions.

Combine user testing with broader data evidence. The personal feedback from individual user testing can be very powerful and memorable, but it can also be dismissed as anecdotal by those who don’t want to follow the findings. It helps, then, to back up that user testing with quantitative data that suggests similar behaviors or patterns among all users. “It’s very easy for executive stakeholders, in particular those that have a strong point of view, to ignore that user feedback, even when a session has been recorded and is being presented to them,” said William Renderos, senior product manager for audience development at The Seattle Times. “The only approach is to continue to provide additional data, whether it be [Google Analytics], customer or subscriber feedback from our customer service representatives” or other sources.
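For example, a team that keeps hearing in testing sessions that a section feels buried could check whether sitewide behavior tells the same story. Below is a minimal sketch using the Google Analytics 4 Data API’s official Python client; the property ID, metrics chosen and date range are hypothetical stand-ins for illustration, not details from the summit.

```python
# A minimal sketch, assuming a GA4 property and the official Python client
# (pip install google-analytics-data). All IDs below are hypothetical.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange,
    Dimension,
    Metric,
    RunReportRequest,
)

client = BetaAnalyticsDataClient()  # reads GOOGLE_APPLICATION_CREDENTIALS

# Pull 30 days of pageviews and session duration by page path, to see
# whether a complaint heard in user testing shows up in behavior at scale.
request = RunReportRequest(
    property="properties/123456789",  # hypothetical GA4 property ID
    dimensions=[Dimension(name="pagePath")],
    metrics=[
        Metric(name="screenPageViews"),
        Metric(name="averageSessionDuration"),
    ],
    date_ranges=[DateRange(start_date="30daysAgo", end_date="today")],
)
response = client.run_report(request)

for row in response.rows:
    path = row.dimension_values[0].value
    views = row.metric_values[0].value
    avg_duration = row.metric_values[1].value
    print(f"{path}: {views} views, {avg_duration}s avg. session")
```

Pairing a recorded quote from a testing session with a report like this makes it much harder for a skeptical stakeholder to wave the finding away as one person’s opinion.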
Test interest in products before you build them. It’s relatively simple to test what users think of a product you’ve already built. But some at our summit also found value in testing ideas for products, to gauge the level and type of interest people might have in them.

One creative approach from The New York Times was called “provocations.” The essential idea is to run an ad campaign for an imagined product or feature, and use engagement with the ad as a proxy for potential interest in the product concept.

We’ve also seen others do this through small-scale content experiments, like a new blog or Twitter account, watching whether it grows quickly enough to signal that a bigger investment in a full product is warranted.

One thing that is essential in any of these approaches is to set, in advance, the benchmarks that determine success. That means looking at comparable data and setting a precise hypothesis: the test ad campaign should get a 5% click-through rate, or the new blog should gain 5,000 readers in one month. Without doing this in advance, people are free later to distort and debate whether the small-scale experiment was a “success” or not.
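As a rough sketch of what “setting the benchmark in advance” can look like in practice, here is a small Python example. The 5% click-through target comes from the hypothesis above; the campaign numbers are made up for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Benchmark:
    """A success threshold agreed on before the experiment launches."""
    name: str
    target: float  # e.g., 0.05 for a 5% click-through rate


def click_through_rate(clicks: int, impressions: int) -> float:
    """Fraction of ad impressions that resulted in a click."""
    return clicks / impressions


# Pre-registered before launch, using the 5% figure from the example above.
benchmark = Benchmark(name="provocation ad click-through rate", target=0.05)

# Hypothetical campaign results, filled in after the test runs.
observed = click_through_rate(clicks=412, impressions=10_000)  # 4.12%

verdict = "met" if observed >= benchmark.target else "missed"
print(f"{benchmark.name}: observed {observed:.2%}, "
      f"target {benchmark.target:.0%} -> {verdict}")
```

Because the threshold is committed to before the data comes in, the result can’t be argued into a “success” after the fact.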
