Show us the pictures
There’s an increasing gulf between the privacy of individuals and that of corporations and monopolies.
An individual is almost certainly going to be videotaped every time he leaves home. You will be caught on camera in the store, at the airport and on the street. Your calls to various organizations will also be recorded “for quality purposes.”
At the same time, it’s against the law to film animal cruelty on farms in many states. And if you say to a customer service rep, “I’m taping this call,” you’re likely to be met with hostility or even a dead line.
Kudos, then, to police departments for responding to the public and putting cameras in cars and on uniforms. And points to Perdue for building a chicken processing plant where the animals aren’t covered with feces and where they’re able to proudly give a tour to a reporter. They're not doing this because they're nice guys... they're doing it because customers are demanding it. They view a transparent supply chain as a competitive advantage that their competitors will have trouble replicating.
Your online history with a company ought to include a complete history of all the emails and phone calls you've had with them. And when you choose a piece of clothing or a piece of fish, it ought to be easy to see where it was made and who touched it along the way.
If we're willing to see it.
It's not a technical problem. It will happen as soon as enough voices in the supply chain (perhaps us, the end of the chain) demand it.
Posted by Seth Godin on August 02, 2015
Yes!, please and thank you
Don't jerk people around
Here's a simple marketing strategy for a smaller company trying to compete in a big-company world: Choose your customers, trust them, treat them well.
Say yes.
Bend the rules.
Show up on time.
Keep your promises.
Don't exert power merely because you can.
Be human, be kind, pay attention, smile.
Not everyone deserves this sort of treatment, not everyone will do their part to be the kind of customer you can delight and serve. But that's okay, you don't need everyone.
When in doubt, be the anti-airline.
Posted by Seth Godin on August 01, 2015
On adding a zero
Just about everyone can imagine what it would be like to add 10% more to their output, to be 10% better or faster.
Many people can envision what their world would be like if they were twice as good, if the work was twice as insightful or useful or urgent.
But ten times?
It's really difficult to imagine what you would do with ten times as many employees, or ten times the assets or ten times the audience.
And yet imagining it is often the first step to getting there.
Posted by Seth Godin on July 31, 2015
Three things that make CEOs stupid
I sat through an endless presentation by the CEO of a fast-growing company. He was doing fine for half an hour, but then, when his time was up, he chose to spend 45 minutes more on his final slide, haranguing and hurling invective, jumping from topic to topic and basically bringing the entire group to its knees in frustration.
Power, of course, is the first problem. When things are going fairly well, the CEO has a ton of power, and often, that power makes things appear to work, even when they're not the right thing to do for the long-term. As a result, there's no market that is correcting the bad decisions, at least not right now.
Exposure is the second problem. Once a company gets big enough, the CEO spends his time with investors and senior executives, not with people who actually invent or deliver products and services, and not with customers. Another form of not getting the right feedback, because the people being pleased aren't the right ones.
The truth is the final and most endemic problem. Employees incorrectly (in many cases) believe that the boss doesn't want to hear from them, doesn't want constructive feedback. Everyone else has a boss, and built into the nature of boss-ness is the idea that someone is going to tell you what's not working. But we fall into the trap of believing that just because the CEO isn't assigned a boss, he doesn't need or want one.
A stupid CEO can coast for a long time if the systems are good. But a stupid CEO is always wasting opportunities, because being smarter usually leads to doing better. Plus, they're a lot more fun to work for.
Posted by Seth Godin on July 30, 2015
Notes, not received
An expected apology rarely makes things better. But an expected apology that never arrives can make things worse.
An expected thank you note rarely satisfies. But an expected thank you that never arrives can make things worse.
On the other hand, the unexpected praise or apology, the one that comes out of the blue, can change everything.
It's easier than ever to reach out and speak up. Sad, then, how rarely we do it when it's not expected.
Posted by Seth Godin on July 29, 2015
Predicting the future isn't easy
The best plans are based on trends, not specific events.
Here's a hopeless task: There are 18 candidates in the GOP race.
If you can rank them in the order they're going to drop out, I'll give you a signed copy of my new book or $10,000, your choice. The chance of being correct is 1 in 18!, or about one in six quadrillion, so I think the prize is safe.
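For the curious, that figure is easy to verify: there are 18! possible orderings of 18 candidates, and only one of them is right (assuming every ordering is equally hard to guess). A quick sketch of the arithmetic, purely illustrative:

    import math

    # One correct ordering out of 18! possible dropout orders.
    orderings = math.factorial(18)
    print(orderings)                    # 6402373705728000 -- roughly six quadrillion
    print(f"odds: 1 in {orderings:,}")  # the chance of guessing the order correctly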
On the other hand, this blog's twitter account is consistently creeping toward 500,000 followers. If you can guess the date, I'll send you a signed book. Your odds are a lot better on this one.
When in doubt, pick projects where the factors you need to have in place are on the road the audience is already on.
Posted by Seth Godin on July 28, 2015
What is your art?
I define art as having nothing at all to do with painting.
Art is a human act, a generous contribution, something that might not work, and it is intended to change the recipient for the better, often causing a connection to happen.
Five elements that are difficult to find and worth seeking out: human, generous, risky, change and connection.
You can be perfect or you can make art.
You can keep track of what you get in return, or you can make art.
You can enjoy the status quo, or you can make art.
The most difficult part might be in choosing whether you want to make art at all, and committing to what it requires of you.
Posted by Seth Godin on July 28, 2015
Thoughts for the consigliere
The marketer, the sales rep, the CFO. These are the indispensable levers that help creative work get to the world.
When you're part of a project but not the driving creative force, when you work to lever the work of a team of mad scientists and brilliant designers, consider a blend of three roles:
Generous skeptic: When the new idea is on the table, when things are being discussed, hashed out and workshopped, are you able to ask the useful and difficult questions? Someone needs to be the trusted critic, asking not with fear, but with confidence. Your question is useful when it exposes the truth, not when it helps us hide.
Shameless cheerleader: Once the work is done and ready for market, your job is to stand fully behind it, far more than even those that actively created it. This might be hard work, but it's your work. If you can't own it, don't ship it.
Fierce advocate: And now that it's launched, you put yourself on the line for the change we're out to make in the world. The rest of the team doesn't need to know about how much it costs you to put this out there, just as you don't need to know the pain it took to create it. The relentless push to make the change we seek is a key part of why you're here.
These three elements, taken together, define the consigliere who can add extraordinary value to a project, to a leader, to a team. They are the opposite of "tell me what to do," combined with, "stand with me as we take on the market."
Posted by Seth Godin on July 27, 2015
"Can we talk about this?"
That simple question is the litmus test for a productive relationship.
If one professional says it to another, the answer is an emotion-free, "sure." There's no baggage. Talking is the point. Talking is what we do. We communicate to solve problems.
On the other hand, if the question brings with it fear and agitation and, "uh oh, what's wrong," you can bet that important stuff goes undiscussed all the time.
[PS altMBA2 applications are due by tomorrow.]
Posted by Seth Godin on July 26, 2015
In search of your calling
I don't think we have a calling.
I do think it's possible to have a caring.
A calling implies that there's just one thing for you, just one thing you're supposed to do.
What we most need in our lives, though, is something worth doing, worth it because we care.
There are plenty of forces pushing us to not care. Bosses, systems, bureaucracies and the fear of mattering.
None of them are worth sacrificing something as important as caring.
Posted by Seth Godin on July 25, 2015
Opposition
The opposite of creativity is fear.
And fear's enemy is creativity.
The opposite of yes is maybe.
Because maybe is non-definitive, and both yes and no give us closure and the chance to move ahead.
Perfect is the enemy of good.
Us is not the enemy of them. Us is the opposite of alone.
They can become us as soon as we permit it.
Everything is the opposite of okay. Everything can never be okay. Except when we permit it.
The right is not the opposite of the left. Each side has the chance to go up, which is precisely the opposite of down.
Dreams are not the opposite of reality. Dreams inform reality.
Posted by Seth Godin on July 24, 2015
You have no credibility (yet)
You believe you have a great idea, a hit record, a press release worth running, a company worth funding. You know that the customer should use your limited-offer discount code, that the sponsor should run an ad, that the admissions office should let you in. You know that the fast-growing company should hire you, and you're ready to throw your (excellent) resume over the transom.
This is insufficient.
Your belief, even your proof, is insufficient for you to get the attention, the trust and the action you seek.
When everyone has access, no one does. The people you most want to reach are likely to be the very people that are the most difficult to reach.
Attention is not yours to take whenever you need it. And trust is not something you can insist on.
You can earn trust, just as you can earn attention. Not with everyone, but with the people that you need, the people who need you.
This is the essence of permission marketing.
When I began in the book industry thirty years ago, if you had a stamp, you had everything you needed to get a book proposal in front of an editor. You could send as many proposals as you liked, to as many editors as you liked. All you needed to do was mail them.
In my first year, after my first book came out, I was totally unsuccessful. Not one editor invested in one of the thirty books I was busy creating.
It wasn't that the books were lousy. It was me. I was lousy. I had no credibility. I didn't speak the right language, in the right way. Didn't have the credibility to be believed, and hadn't earned the attention of the people I was attempting to work with.
Email and other poking methods have made it easy to spew and spray and cold call large numbers of people, but the very ease of this behavior has also made it even less likely to work. The economics of attention scarcity are obvious, and you might not like it, but it's true.
The bad news is that you are not entitled to attention and trust. It is not allocated on the basis of some sort of clearly defined scale of worthiness.
The good news is that you can earn it. You can invest in the community, you can patiently lead and contribute and demonstrate that the attention you are asking be spent on you is worthwhile.
But, no matter how urgent your emergency is, you're unlikely to be able to merely take the attention you want.
Posted by Seth Godin on July 23, 2015
Are you ready?
You're more powerful than you think. The altMBA is now accepting applicants for its second class. The program is working. We're helping accelerate the impact people are making in the world, and I hope you'll forward this post to someone in search of transformation.
Are you ready to grow, to see, to be transformed?
One way to get to where you're going is to surround yourself with people on a similar journey. That's what I set out to create when I founded the altMBA, and it has dramatically exceeded all of my expectations.
This week, some extraordinary people are graduating from our first month-long intensive session, and the feedback from our inaugural class is even better than I hoped.
"The content is hugely applicable to so many different disciplines. I'm learning and growing at the speed of light, and it's very easy to see the changes within my peers as well. Honestly, this should be a mandatory for marketing graduates. Period."
-Kelli Wood
"Community feedback, peer support, shared beliefs in personal potential, and the right to pursue happiness make the altMBA a perfect place to prepare to leap. My creative confidence is growing immensely. This process confirmed for me that I could map out taking on a big project, stick to the plan, and have a completed product when I'm done with altMBA."
-Ryon Lane
“I literally feel transformed from each project. I have never experienced anything like this. I am surprised by the genuine personal connections. Seth talked about that as part of the MBA experience, but I didn’t believe that would happen in 30 days.”
- Chris Carroll
If you're ready for this sort of change, I hope you will check out this page profiling our graduates (the peer-to-peer interactions among our students are the most important part of the program). Then, check out this quick overview of what we've built and how it can help you get to where you're going. Here's the FAQ.
The altMBA is designed to transform professionals—to assemble a talented cadre of people and give them a platform to push each other to make real change happen.
The biggest insight: it was a group effort. It's about the student-to-student connection, the reciprocal challenges of discovery and growth and quality that created an environment that worked. We are as good as the people we hang out with.
Applications are now open for the next session.
The altMBA is an important step in the evolution of online learning, but way more important than that, it's a huge step in how you develop yourself and your career.
There's a free informational audio webinar about the course, held tomorrow at 12 pm NY time, and archived if you can't watch it live. I think it may help you decide if this is the right opportunity for you.
We're selective in who is admitted, curating the class to improve its impact. Priority is given based on your work history as well as the date of your application. I hope this is something you'll consider, and I apologize if we're not able to admit everyone who applies.
In many ways, the altMBA is the culmination of much of what I've been teaching over the last two decades. I hope you can join in.
If you're ready for this, we're ready for you. Here we go.
Posted by Seth Godin on July 22, 2015
"I'll take care of it"
There are endless opportunities for people and organizations that can reliably and fairly take a problem off our hands.
"I'll take care of it," and I'll do it well, at least as well as you can, for a price that won't make you feel stupid. "I'll take care of it," and I won't come back to you when things go sideways, I won't ask for a bigger budget or more time, either. I won't have excuses ready to go, I won't stumble over the details, I won't point fingers. I'll merely take care of it.
It's not easy, but it's worth a lot.
Posted by Seth Godin on July 21, 2015
Preparing for a shark attack
A shark attack is sudden, visceral and overwhelming.
And it's impossible to be a tough guy in the face of one.
The sheer terror of it overwhelms us and paralyzes us, leaving us helpless to do a thing about it.
And, most important, and easily overlooked:
Shark attacks are astonishingly rare.
It turns out that there's no useful correlation between the enormity of a hazard and its relevance to our lives.
The same thing is true of your project, your upcoming speech, and the meeting you're about to schedule.
Using the phrase, "shark attack" to describe the imaginary but horrible pitfall ahead is a good way to remind ourselves to focus on something else. Better to prepare for a hazard both likely and avoidable instead.
Posted by Seth Godin on July 20, 2015
An alternative to believing in yourself
Of course, self-belief is more than just common advice. It's at the heart of selling, of creating, of shipping, of leadership...
Telling someone, "believe in yourself," is often worthless, though, because it's easier said than done.
Perhaps the alternative is: "Do work you can believe in."
Not trust, verification. Not believing that one day you'll do worthwhile work. Instead, do worthwhile work, look at it, then believe that you can do it again.
Step by step, small to large, easy to difficult.
Do work you can believe in.
Posted by Seth Godin on July 19, 2015
"Because it has always been this way"
That's a pretty bad answer to a series of common questions.
Why is the format of the board meeting like this? Why do we always structure our annual conference like this? Why is this our policy? Why do we let him decide these issues? Why is this the price?
The real answer is, "Because if someone changes it, that someone will be responsible for what happens."
Are you okay with that being the reason things are the way they are?
Posted by Seth Godin on July 18, 2015
Raising money is not the same thing as making a sale
Both add to your bank balance...
But raising money (borrowing it or selling equity) creates an obligation, while selling something delivers value to a customer.
Raising money is hard to repeat. Selling something repeatedly is why you do this work.
If things are going well, it might be time to sell more things to even more customers, so you won't ever need to raise money.
And if things aren't going well, the money you'll be able to raise will come with expectations or a price you probably won't be happy to live with.
When in doubt, make a customer happy.
[My exception: it pays to borrow money to pay for something (an asset) that delivers significantly more value to more customers more profitably over time. In the right situation, it's an essential building block to significance, but it's too often used as a crutch.]
[A different myth, re book publishing.]
Posted by Seth Godin on July 17, 2015
In search of metaphor
The best way to learn a complex idea is to find it living inside something else you already understand.
"This," is like, "that."
An amateur memorizes. A professional looks for metaphors.
It's not a talent, it's a practice. When you see a story, an example, a wonderment, take a moment to look for the metaphor inside.
Lessons are often found where we look for them.
Posted by Seth Godin on July 16, 2015
Shadows and light
There are two ways to get ahead: the race to the bottom and the race to the top.
You can get as close to the danger zone as you dare. Spam people. Seek deniability. Hide in the shadows. Push to close every sale. Network up, aggressively. Always leave yourself an out.
Or, you can do your work out loud, in public, and for others. Be relentlessly generous, without focusing on when it will come back to you.
In each case, the race to the bottom or the race to the top, you might win. Up to you.
Posted by Seth Godin on July 15, 2015
The technology ratchet
Any useful technology that's successfully adopted by a culture won't be abandoned. Ever. (Except by top-down force).
The technology might be replaced by a better alternative, but society doesn't go backwards.
After books were accepted, few went back to scrolls.
After air conditioning is installed, it's never uninstalled.
Vinyl records, straight razors and soon, drivable cars, will all be perceived as hobbies, not mainstream activities.
This one-way ratchet is accelerating and it's having a profound effect on every culture we are part of. As Kevin Kelly has pointed out, technology creates more technology, and this, combined with the ratchet, has a transformative effect.
In a corollary to this, some technologies, once adopted, create their own demand cycles. A little electricity creates a demand for more electricity. A little bandwidth creates a demand for more bandwidth.
And the roll-your-own media that has come along with the connection economy is an example of this demand cycle. Once people realize that they can make their own apps, write their own words, create their own movements, they don't happily go back to the original sources of controlled, centralized production.
The last hundred years have also seen a similar ratchet (amplified, I'd argue, by the technology of media and of the economy) in civil rights. It's unlikely (with the exception of despotic edicts) that women will ever lose the vote, that discrimination on race will return to apartheid-like levels, that marriage will return to being an exclusionary practice... once a social justice is embraced by a culture, it's rarely abandoned.
Fashion ebbs and flows, the tide goes in and it goes out, but some changes tend to flow in one direction.
Posted by Seth Godin on July 14, 2015
Bounce forward
When we hit an obstacle, sometimes the best we can hope for is to bounce back. To recover, to get through this and get back to normal.
But when our project hits a snag, perhaps we can consider using the moment to bounce forward instead. Being on the alert for opportunities, not merely repairs.
If we're spending our time and effort focusing on a return to normal, sometimes we miss the opportunity that's right in front of us.
Bouncing forward means an even better path, not merely the one we were on in the first place.
Posted by Seth Godin on July 13, 2015
Telling, not showing
The brilliant decision in making the new Star Wars Comic-Con reel was this: J.J. Abrams could have chosen to wow the audience with special effects, to show a little more, to try to pique interest by satisfying the tension felt by the true fans who don't know what's coming, and can't stand not knowing.
Instead of following the conventional wisdom and showing, he told. He told a story of care, of excitement, of anticipation.
He created tension instead of relieving it.
This takes resolve and guts. Most of the time, we want to blurt out the answer. But the thing is, people rarely get excited about blurts.
Posted by Seth Godin on July 12, 2015
I'm afraid of that
If you can say this out loud, when you've been holding back, avoiding your confrontation with the truth, you will free yourself to do something important. Saying it takes away the power of the fear.
On the other hand, if you say it 8 times or 11 times or every time, you're using the label to reinforce your fear, creating an easy escape hatch to avoid doing something important. Saying it amplifies the fear.
The brave thing is to find the unspeakable fear and speak it. And to stop rehearsing the easy fears that have become habits.
Posted by Seth Godin on July 11, 2015
Happy birthday
When I was fifteen, I wanted a bike for my birthday. I dropped a few hints, and about a week before the day, I asked my mom for a hint as to what I could expect. "Well," she said, "it has feathers."
I was getting a parrot.
What could be cooler than a parrot? Alas, I got a down blanket. Can't win them all.
Today's my 55th, and it would be great if you wouldn't send me a gift, a card or even an email. Not because I have birthday issues, but because I think we might be able to plant the seed for a very significant culture change, something bigger than a bike.
Is it possible for your birthday to change the world?
Instead of dropping me a note, I'm hoping you'll join 5,000 other blog readers and give your birthday to charity:water. (Note: I'm not asking you to make a donation, at least not at first. Something more difficult but important: I want you to start a change in our culture with just a few clicks. Read on...)
This might sound a bit familiar. Five years ago, I gave away my birthday and asked you, my astonishingly generous readers, to make a donation. We ended up raising nearly $40,000 (and it's gone up since then) and ten villages, families with children, now have water as a result (try to imagine going just two days without clean water...)
The donations made a difference, but let's go further and establish a pattern, a standard where lots and lots of people give away their birthdays. What if it becomes normal for everyone over 22 years old to ask for donations instead of presents or cards?
So far, 65,000 people have given their birthdays. But just three generations of friends telling friends can take that up by a factor of ten. 5,000 people telling ten people telling ten people, and we'd change the world.
5,000 people pledging to give their birthdays to charity:water would mean that when your birthday rolls around, you'd ask the people in your life to give their birthdays to charity:water as well. And then a few months later, they'd ask the people in their lives... In just a few cycles, perhaps we could change the expectation of birthdays from, "I'd like a bike," to, "Can we save someone's life?"
The mechanics are simple: go to this page and sign up to donate your birthday. While you're there, I hope you'll consider donating $10 (I'll match the $10 donation from each reader who pitches in). Done.
One more bonus, in case changing the culture and saving lives isn't enough: if 1,000 people sign up to share their birthdays today, I'll update this post tomorrow and release the audio from a speech about bravery (a recent gig I did for Endeavor) on the bottom of this post...
Change the culture, change the world.
Thanks. And happy birthday. Even better than a parrot.
[UPDATE: This is already the most successful birthday pledge campaign they've ever seen. You guys are amazing. It's not too late to pledge your birthday or make a donation. Thank you all.]
Here's the audio file I promised:
Seth Godin live at Endeavor
Posted by Seth Godin on July 10, 2015
Debt
Greece. Puerto Rico. Student loans. Mortgages.
The forces of debt are reshaping the world, creating dislocations and crises on a regular basis. And yet, few of us really understand how debt works.
Not the debt of, "can I borrow five dollars?" but the debt of corporations, nations and bureaucratic bodies. What's debt, really? What is money, and which came first?
The most fascinating book I've read all year is Debt, by David Graeber. (The audio is highly recommended).
Debt is older than money, and money was probably invented not to help the imaginary harried merchant who is struggling with barter (what? you want to trade your sheep for my muffins? but I don't need sheep!) but instead to enable nation states to feed their armies, and for individuals to trade debts with one another.
[His army insight: The easiest way to feed an army is to invent a coin, then require all your citizens to pay taxes in that coin, a coin they can only get by trading. Then give a bunch of coins to your soldiers. Bingo.]
From this surprising beginning, Graeber takes us on a tour that covers 10,000 years. He talks about the origins of slavery as well as the inequities caused by the World Bank and the IMF. One simple example: If a dictator runs up a huge debt and then absconds with the money, are the citizens of that nation responsible? For how much? For how long? Should they be put into peonage, they and their children and all of their descendants?
If a mortgage is overdue, is it better to kick people out of the house and watch the neighborhood descend into rubble?
If 10 million Americans are overwhelmed with student debt they can't repay, what should we do then?
If the purpose of inter-country loans is to foster growth as well as international relations and trade, how does bankrupting and isolating an entire country when they can't pay accomplish this?
Or consider a much smaller example of how the world's most profitable profession can change even simple elements of user experience and customer satisfaction: Every time I pay for something with Paypal, I'm interrupted by a window insisting that I should pay for this item on credit, instead of using my balance. And every time, I close this window. Paypal knows this. And yet, they continue to interrupt millions of people a day, intentionally breaking their already weak user experience, because the idea of putting more people into more credit card debt is so financially seductive.
A key tenet of our culture is, "you must pay your debts." Debt makes us think about what this simple sentence means. Even if your instinct is to answer with, "of course everyone should pay their debts," the next question is obvious: How should we deal with nations and peoples who can't? How far do we go?
I can't do Graeber's book justice in a blog post, but I want to point it out to anyone who wants to understand the acceptance and future of bitcoin, the changing wealth of nations or why countries still own tons and tons of gold. Mostly, knowing how we got here makes it a lot easier to figure out where we might head next.
Posted by Seth Godin on July 09, 2015
Unreasonable
It's fascinating to note that everyone else is consistently more unreasonable in their demands and their policies and their views than we are.
I know the math is impossible, but we certainly act as though the other person is the unreasonable one, no matter which side of the table he sits on.
Posted by Seth Godin on July 08, 2015
Templates for organic and viral growth
Each of these examples is different, but they all share common traits.
Invent a connection venue or format, but give up some control.
Show it can be done, but don't insist that it be done precisely the same way you did it.
Establish a cultural norm.
Get out of the way...
Crossfit
EDM shows
Do Lectures
The Girl Scouts
Airbnb listings
No kill shelters
Vertical TEDx's
Meetup events
Night basketball
Farmers' markets
Rock climbing gyms
Alcoholics Anonymous
Ultimate frisbee leagues
Independent record stores
Grateful Dead cover bands
True Value hardware stores
Habitat for Humanity chapters
Posted by Seth Godin on July 07, 2015
Comparison, escalation and the golf clap
We've all encountered a tepid group, an audience that won't make noise, a bunch of disaffected students, or perhaps the distracted masses.
Cat taught me this trick, which gives great insight into human nature.
"Can everyone give me a golf clap, a level one clap, a quiet, polite amount of applause?"
Of course, everyone can do this. This is risk-free, enthusiasm-free and easier to do than not.
"Okay, what does level two sound like? Can you take it up a notch?"
And within a minute, she's created a level-ten tsunami of sound.
Comparison and escalation are at the heart of what makes our culture work.
Posted by Seth Godin on July 06, 2015
Interesting
If you think about it, there's generally no correlation between how much something cost to make and how interesting it is.
There are boring movies that bomb... and that cost $100mm to make. And the sound of a crying infant in the next room costs nothing at all, but it certainly gains your attention.
A video made for free can go viral, and we'll happily ignore an ad campaign that cost a million or more to make.
So, if money isn't related to interestingness, why do we worry so much about spending more on the media we create?
Over-the-top production values are sometimes a place to hide. It's tempting to cover up boring with polish, but it rarely works.
Stories and relevance are far more important than budgets.
Posted by Seth Godin on July 05, 2015
Embellishments
What are they for?
Absolutely nothing.
Well, that's not true. The fact that they aren't directly related to what you're trying to deliver is precisely why they exist. The 'nothingness' of their value is why they are valuable. An embellishment, a garnish, a filigree... it exists because it means you took a little extra time, you cared enough to add some beauty or rhythm to the thing you brought me.
As soon as we can afford it, as soon as we care, we pay extra for beauty.
Posted by Seth Godin on July 04, 2015
"All other difficulties are of minor importance"
The Wright Brothers decided to solve the hardest problem of flight first.
It's so tempting to work on the fun, the urgent or even the controversial parts of a problem.
There are really good reasons to do the hard part first, though. In addition to not wasting time in meetings about logos, you'll end up getting the rest of your design right if you do the easy parts last.
Posted by Seth Godin on July 03, 2015
More pious
Tribe members often fall into a trap, a trap created by the fear of standing out and a natural reluctance to question things.
"You're not wearing the proper tie."
"That's not how someone like us gets married."
"My tweets are of the proper format, yours aren't."
"The way you are teaching your kids the rules is wrong."
"That symbol of purity isn't good enough for my family."
"Your version of the way things should be is a compromise."
"What, you're not wearing an official jersey to the game?"
As soon as someone says, "I am more pious than you," they've chosen to push someone down in order to pull themselves up, at least in feeling more secure as a member of the tribe. This might be good for the hegemony of the tribe, but it ultimately degrades the spirit that the tribe set out to create.
Posted by Seth Godin on July 02, 2015
Announcing my candidacy
Today, with just 495 days before the election, I'm announcing my run for President of the United States.
I'm well aware that electoral politics have been transformed by the collision of semi-modern marketing techniques with the money necessary to implement them. The TV-Industrial complex demands ever more partisan politics, more tribal division, more vote-suppressing vitriol. As we've turned raising money into a game similar to box office returns (where quantity appears to equal quality), candidates have almost no choice but to sell themselves to the highest bidder of the moment, again and again and again.
Once you see this, it's hard to miss, even though candidates and the media work to conceal it with big promises and lots of apparently retail politics.
Is it any wonder that voters are cynical? Marketers and marketing made us that way.
My candidacy, on the other hand, will be marked by stunning transparency:
I'm not promising to get anything done, anything at all, so there is no chance you will be disappointed.
I'm selling slots in my campaign to the highest bidder, Google style. Digitally organized bidding makes it easy for any corporation or mogul to determine what something will cost, and real-time auctions will maximize the return.
I'll just keep the money, because TV ads merely coarsen our political discourse, almost never leading to a more informed electorate.
Most of all, once elected I'll stick to talk shows and other feel-good interactions, which is what the public wants most from its President.
Marketing has changed, but someone forgot to tell the inside-the-beltway power brokers. Brands aren't built the way they used to be, but politicians insist on the impatient churn-and-burn mass market awareness that even Procter & Gamble is choosing to leave behind.
Consider this: In the 2016 election, the candidates for President will together spend more money on advertising than any single US brand. That's never been true before--and it's because marketers today know something that impatient, self-centered politicians don't. Money isn't enough.
The brand of the future (the candidate of the future) is patient, consistent, connected, and trusted. The new brand is based on the truth that only comes from experiencing the product, not just yelling about it. Word of mouth is more important (by a factor of 20) than TV advertising, and the remarkability word of mouth demands comes from what we experience, not from spin or taglines or a campaign slogan.
Movements have leaders, but mostly, they have a place to lead to. And their leader can't stop, won't stop, has no choice but to stay connected, keep raising the bar, continue to cycle forward.
So no, of course I won't be running (but I was a candidate for six paragraphs).
If the history of politics catching up with commercial marketing is any guide, I think that we're about to see a fundamental shift in how we talk about our leaders (and they talk to us), and perhaps (we can hope), the media will respond in kind.
And in the meantime, your brand, your campaign, your project, will benefit from what's happening now, which is marketing, not advertising, which is connection, not interruption. We've moved past the long-lost Mad Men era. Don't do marketing the way they do.
Posted by Seth Godin on July 01, 2015
What happens when things go wrong?
Service resilience is too often overlooked. Most organizations don't even have a name for it, don't measure it, don't plan for it.
I totally understand our focus on putting on a perfect show, on delighting people, on shipping an experience that's wonderful.
But how do you and your organization respond/react when something doesn't go right?
Because that's when everyone is paying attention.
Posted by Seth Godin on June 30, 2015
The rejectionists
We can choose to define ourselves (our smarts, our brand, our character) on who rejects us.
Or we can choose to focus on those that care enough to think we matter.
Carrying around a list of everyone who thinks you're not good enough is exhausting.
Posted by Seth Godin on June 29, 2015
Buzzer management
I started the quiz team at my high school. Alas, I didn't do so well at the tryouts, so I ended up as the coach, but we still made it to the finals.
It took me thirty years to figure out the secret of getting in ahead of the others who also knew the answer (because the right answer is no good if someone else gets the buzz):
You need to press the buzzer before you know the answer.
As soon as you realize that you probably will be able to identify the answer by the time you're asked, buzz. Between the time you buzz and the time you're supposed to speak, the answer will come to you. And if it doesn't, the penalty for being wrong is small compared to the opportunity to get it right.
This feels wrong in so many ways. It feels reckless, careless and selfish. Of course we're supposed to wait until we're sure before we buzz. But the waiting leads to a pattern of not buzzing.
No musician is sure her album is going to be a hit. No entrepreneur is certain that every hire is going to be a good one. No parent can know that every decision they make is going to be correct.
What separates this approach from mere recklessness is the experience of discovering (in the right situation) that buzzing makes your work better, that buzzing helps you dig deeper, that buzzing inspires you.
The habit is simple: buzz first, buzz when you're confident that you've got a shot. Buzz, buzz, buzz. If it gets out of hand, we'll let you know.
The act of buzzing leads to leaping, and leaping leads to great work. Not the other way around.
Posted by Seth Godin on June 28, 2015
A corollary to 'Too big to fail'
"Too big to listen."
Great organizations listen to our frustrations, our hopes and our dreams.
Alas, when a company gets big enough, it starts to listen to the requirements of its shareholders and its best-paid executives instead.
Too big to listen is just a nanometer away from "Too big to care."
Posted by Seth Godin on June 27, 2015
Pugilists
Fighters and pugilists are different.
The fighter fights when she has to, when she's cornered, when someone or something she truly believes in is threatened. It's urgent and it's personal.
The pugilist, on the other hand, skirmishes for fun. The pugilist has a hobby, and the hobby is being oppositional.
The pugilist can turn any statement, quote or event into an opportunity to have an urgent argument, one that pins you to the ground and makes you question just about anything.
Instead of playing chess, the pugilist is playing you.
Pugilists make great TV commentators. And they even seem like engaged participants in meetings, for a while. Over time, we realize that they are more interested in seeing what reactions they can get than in actually making positive change happen.
A committed pugilist has a long list of clever ways to bait you into an argument. You'll never win, of course, because the argument itself is what the pugilist seeks. Call it out, give it a name, share this post and then walk away. Back to work actually making things better.
Posted by Seth Godin on June 26, 2015
Pulling a hat out of a rabbit
It's tempting to do what's been done before, certain in the belief that if you do it, it'll be a little better and a little more popular, merely because you're the one doing it.
In fact, though, that's unlikely. You'll care more, but it's unlikely the market will.
Consider the alternative, which is choosing to turn the question upside down, to do it backwards, sideways, or in a significantly more generous or risky way.
Remarkable often starts with the problem you set out to solve and the way you choose to solve it.
Posted by Seth Godin on June 25, 2015
The tragedy of small expectations (and the trap of false dreams)
Ask a hundred students at Harvard Business School if they expect to be up for a good job when they graduate, and all of them will say "yes."
Ask a bright ten-year old girl if she expects to have a chance at a career as a mathematician, and the odds are she's already been brainwashed into saying "no."
Expectations aren't guarantees, but expectations give us the chance to act as if, to trade now for later, to invest in hard work and productive dreaming on our way to making an impact.
Expectations work for two reasons. First, they give us the enthusiasm and confidence to do hard work. Second, like a placebo, they subtly change our attitude, and give us the resilience to make it through the rough spots. "Eventually" gives us the energy to persist.
When our culture (our media, our power structures, our society) says, "people who look like you shouldn't expect to have a life like that," we're stealing. Stealing from people capable of achieving more, and stealing from our community as well. How can our society (that's us) say, "we don't expect you to graduate, we don't expect you to lead, we don't expect you to be trusted to make a difference?"
When people are pushed to exchange their passion and their effort for the false solace of giving up and lowering their expectations, we all lose. And (almost as bad, in the other direction) when they substitute the reality of expectations for the quixotic quest of impossibly large, unrealistic dreams, we lose as well. Disneyesque dreams are a form of hiding, because Prince Charming isn't coming any time soon.
Expectations are not guarantees. Positive thinking doesn't guarantee results, all it offers is something better than negative thinking.
Expectations that don't match what's possible are merely false dreams. And expectations that are too small are a waste. We need teachers and leaders and peers who will help us dig in deeper and discover what's possible, so we can push to make it likely.
Expectations aren't wishes, they're part of a straightforward equation: This work plus that effort plus these bridges lead to a likelihood of that outcome. It's a clear-eyed awareness of what's possible combined with a community that shares your vision.
It's easy to manipulate the language of expectations and turn it into a bootstrapping, you're-on-your-own sort of abandonment. But expectation is contagious. Expectation comes from our culture. And most of all, expectation depends on support—persistent, generous support to create a place where leaping can occur.
There are limits all around us, stereotypes, unlevel playing fields, systemic challenges where there should be support instead. A quiet but intensely corrosive impact these injustices create is in the minds of the disenfranchised, in their perception of what is possible.
The mirror we hold up to the person next to us is one of the most important pictures she will ever see.
If we can help just one person refuse to accept false limits, we've made a contribution. If we can give people the education, the tools and the access they need to reach their goals, we've made a difference. And if we can help erase the systemic stories, traditions and policies that push entire groups of people to insist on less, we've changed the world.
Posted by Seth Godin on June 24, 2015
"Did you win?"
A far better question to ask (the student, the athlete, the salesperson, the programmer...) is, "what did you learn?"
Learning compounds. Usually more reliably than winning does.
Posted by Seth Godin on June 23, 2015
New times call for new decisions
Those critical choices you made then, they were based on what you knew about the world as it was.
But now, you know more and the world is different.
So why spend so much time defending those choices?
We don't re-decide very often, which means that most of our time is spent doing, not choosing. And if the world isn't changing (if you're not changing) that doing makes a lot of sense.
The pain comes from falling in love with your status quo and living in fear of making another choice, a choice that might not work.
You might have been right then, but now isn't then, it's now.
If the world isn't different, no need to make a new decision.
The question is, "is the world different now?"
Posted by Seth Godin on June 22, 2015
The problem with holding a grudge
...is that your hands are then too full to hold onto anything else.
It might be the competition or a technology or the lousy things that someone did a decade ago. None of it is going to get better as a result of revisiting the grudge.
Posted by Seth Godin on June 21, 2015
You will rarely guess/create/cause #1
The breakthrough pop hit is so unpredictable that it's basically random.
You will always do better with a rational portfolio of second and third place reliable staples than you will in chasing whatever you guess that pop culture will want tomorrow.
Of course, it means giving up hoping for a miracle and instead doing the hard work of being there for the people who count on you.
[Update: It turns out the key word here is rarely. Just because I'm incapable of predicting the hits doesn't mean everyone is. I just heard from Scott Borchetta at Big Machine. He's had a #1 hit on the pop music charts every year for the last thirty. At some point, it's not luck, it's your profession.]
Posted by Seth Godin on June 20, 2015
Kneejerks
Just about all the ranting we hear is tribal. "He's not one of us, he's wrong." Or, the flipside, "He's on our team, he's right, you're blowing this out of proportion."
The most powerful thing we can do to earn respect from those around us, though, is to call out one of our own when he crosses the line. "People like us, we don't do things like that." This is when real change starts to happen, and when others start to believe that we really care about something more than scoring points.
Calling out our own jerks is the best kind of kneejerk.
Posted by Seth Godin on June 19, 2015
How, why and the other thing
Almost all the inputs, advice and resources available are about how. How to write better copy, how to code, how to manage, how to get people to do what you want, how to lose weight, how to get ahead...
Far more scarce is help in understanding why. Why bother? Why move forward? Why care?
And rarest of all, yet ironically the most important, is help and insight about getting to the core of the fear that is holding us back.
This is the cause of the unfinished novel, of the self-sabotaging aggressive marketing campaign and the speech that goes on too long. It's at the heart of too much, too little, and too boring as well.
You might need confidence in your 'how' to deal with your fear. You might have found your 'why' overwhelmed by your fear. But all the how and all the why aren't going to help much if we can't acknowledge the essential question: "where is the fear?"
Are we so afraid of it that we can't even discuss it?
Posted by Seth Godin on June 18, 2015
Plenty more
One of the critical decisions of every career:
"Well, there's plenty more to do, I'll do the least I can here and then move on to the next one."
vs.
"I only get to do this one, once. So I'll do it as though it's the last chance I'll ever have to do this work, to please this customer, to ring this bell."
As little as possible. Or as much. The system might push you to become mediocre, but that very same system rewards excellence. The perception that the minimum is viable is built deep into our notion of productivity, but it turns out that the maximum is valuable indeed.
The biggest cause of excellence is the story we tell ourselves about our work.
It's a choice, a commitment and a lifelong practice.
Posted by Seth Godin on June 17, 2015
Abandoning perfection
It's possible you work in an industry built on perfect. That you're a scrub nurse in the OR, or an air traffic controller or even in charge of compliance at a nuclear power plant.
The rest of us, though, are rewarded for breaking things. Our job, the reason we have time to read blogs at work or go to conferences or write memos is that our organization believes that just maybe, we'll find and share a new idea, or maybe (continuing a run on sentence) we'll invent something important, find a resource or connect with a key customer in a way that matters.
So, if that's your job, why are you so focused on perfect?
Perfect is the ideal defense mechanism, the work of Pressfield's Resistance, the lizard brain giving you an out. Perfect lets you stall, ask more questions, do more reviews, dumb it down, safe it up and generally avoid doing anything that might fail (or anything important).
You're not in the perfect business. Stop pretending that's what the world wants from you.
Truly perfect is becoming friendly with your imperfections on the way to doing something remarkable.
Posted by Seth Godin on June 16, 2015
Overpriced
Things that are going up in value almost always appear to be overpriced.
Real estate, fine art and start up investments have something in common: the good ones always seem too expensive when we have a chance to buy them. (And so do the lame ones, actually).
That New York condo that's going for $8 million? You didn't buy it when it was only a tenth that, when it was on a block where no one wanted to live. Of course, if everyone saw what was about to happen, it wouldn't have been for sale at the price being offered.
And you could have bought stock in (name company here) for just a dollar or two, but back then, no one thought they had a chance... which is precisely why the stock was so cheap.
And the lousy investments also seem overpriced, because they are.
Investments don't always take cash. They often require our effort, our focus, or our commitment. And the good ones always seem like they take too much, until later, when we realize what a bargain that effort would have been.
The challenge isn't in finding an overlooked obvious bargain that people didn't notice. The challenge is in learning to tell the difference between the ones that feel overpriced and the ones that actually are.
The insight is that when dealing with the future, there's no right answer, no obvious choice—everything is overpriced. Until it's not.
The blogs every UX pro & enthusiast should keep their eye on
If you’re reading this article right now, you’re aware that there are a whole bunch of user experience blogs out there. If you work in UX you’ve probably heard of most of them already. If you are new to online user research, or are just looking for fresh UX content, check out the list we put together. We handpicked 10 blogs for UX pros and user research enthusiasts and present them to you in no particular order.
Nielsen Norman Group
The Nielsen Norman Group is a user interface and user experience consulting company. Its founders, Jakob Nielsen, Donald Norman, and Bruce Tognazzini, are regarded as pioneers in the field of human-computer interaction. Their posts and articles are about web usability, user testing, mobile devices, e-commerce, user behavior, interaction design and more.
Usability Geek
Owned by UX evangelist Justin Mifsud, Usability Geek provides tips and advice about usability, user experience, usability testing, business, tools and technology. New blog posts are published twice a week. This blog is a must-read for every UX pro and expert-to-be.
Boxes and Arrows
Boxes and Arrows is a blog dedicated to discussing and improving graphic design, interaction design, information architecture, and the design of business. The guys from Boxes and Arrows also intend to promote the work of the information architecture community, writing about current and future issues related to IA.
UX Matters
UX Matters was founded by Pabini Gabriel-Petit in 2005. The blog provides the user experience community with insights and inspiration. Whether you are a beginner or a UX pro, you will find highly valuable information about user research from leading UX experts who share best practices. There is only one question that comes to mind when reading UX Matters: When will the design of the blog be updated?
UX Booth
A blog by and for the UX community. UX Booth is dedicated to sharing best practices about user experience and interaction design. Articles cover subjects from beginner to intermediate level.
A List Apart
A List Apart focuses on web design, development, and content. Founded in 1998, this blog is full of interesting posts, columns, and articles related to the user experience, code and design cosmos.
UX Movement
The authors at UX Movement write about which interface design practices work and which don’t. Articles like “Why Users Aren’t Clicking your Home Page Carousel” or “How Button Placement Standards Reinforce User Habits” show how good and bad practices affect the user experience in a hands-on way.
UX Magazine
UX Mag was founded in 2005 and has developed into an online magazine exploring all facets of experience design. Their articles cover topics from all areas of UX design and are informative, up-to-date, and well researched.
Smashing Magazine
Smashing Magazine is all about web development. It is mostly aimed at web designers and developers. Its portfolio ranges from coding, design, mobile, and graphics to content management systems and, of course, UX design.
MeasuringU
MeasuringU is a quantitative research firm, helping companies answer questions about their software, websites and apps. Their blog contains a lot of valuable information on usability testing methods. The company's founder Jeff Sauro is an experienced statistical analyst and expert in quantifying the user experience.
Extra: The UserZoom Blog
On our blog, our own researchers and usability experts share their experiences with UX projects and usability testing. Their insights are enriched by tips, trends, and advice about remote unmoderated research. Articles range from beginner up to pro level.
Ending the UX Designer Drought
Part 2 - Laying the Foundation
by Fred Beecher
June 23rd, 2015
The first article in this series, “A New Apprenticeship Architecture,” laid out a high-level framework for using the ancient model of apprenticeship to solve the modern problem of the UX talent drought. In this article, I get into details. Specifically, I discuss how to make the business case for apprenticeship and what to look for in potential apprentices. Let’s get started!
Defining the business value of apprenticeship
Apprenticeship is an investment. It requires an outlay of cash upfront for a return at a later date. Apprenticeship requires the support of budget-approving levels of your organization. For you to get that support, you need to clearly show its return by demonstrating how it addresses some of your organization’s pain points. What follows is a discussion of common pain points and how apprenticeship assuages them.
Hit growth targets
If your company is trying to grow but can’t find enough qualified people to do the work that growth requires, that’s the sweet spot for apprenticeship. Apprenticeship allows you to make the designers you’re having trouble finding. This is going to be a temporal argument, so you need to come armed with measurements to make it. And you’ll need help from various leaders in your organization to get them.
UX team growth targets for the past 2-3 years (UX leadership)
Actual UX team growth for the past 2-3 years (UX leadership)
Average time required to identify and hire a UX designer (HR leadership)
Then you need to estimate how apprenticeship will improve these measurements. (Part 3 of this series, which will deal with the instructional design of apprenticeship, will offer details on how to make these estimates.)
How many designers per year can apprenticeship contribute?
How much time will be required from the design team to mentor apprentices?
Growth targets typically do not exist in a vacuum. You’ll likely need to combine this argument with one of the others.
Take advantage of more revenue opportunities
One of the financial implications of missing growth targets is not having enough staff to capitalize on all the revenue opportunities you have. For agencies, you might have to pass up good projects because your design team has a six-week lead time. For product companies, your release schedule might fall behind due to a UX bottleneck and push you behind your competition.
The data you need to make this argument differ depending on whether your company sells time (an agency) or stuff (a product company); a rough worked sketch follows the lists below.
When doing the math about an apprenticeship program, agencies should consider:
How many projects were lost in the past year due to UX lead time? (Sales leadership should have this information.)
What is the estimated value of UX work on lost projects? (Sales leadership)
What is the estimated value of other (development, strategy, management, etc.) work on lost projects? (Sales leadership)
Then, contrast these numbers with some of the benefits of apprenticeship:
What is the estimated number of designers per year apprenticeship could contribute?
What is the estimated amount of work these “extra” designers would be able to contribute in both hours and cash?
What is the estimated profitability of junior designers (more) versus senior designers (less), assuming the same hourly rate?
Product companies should consider:
The ratio of innovative features versus “catch-up” features your competitors released last year. (Sales or marketing leadership should have this information.)
The ratio of innovative features versus “catch-up” features you released in the past year. (Sales or marketing leadership)
Any customer service and/or satisfaction metrics. (Customer service leadership)
Contrast this data with…
The estimated number of designers per year you could add through apprenticeship.
The estimated number of features they could’ve completed for release.
The estimated impact this would have on customer satisfaction.
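To make the agency version of this argument concrete, here is a minimal, hedged sketch in Python. Every figure is a hypothetical placeholder; substitute the numbers you actually gather from sales, UX, and HR leadership.

```python
# Hypothetical back-of-the-envelope model for the agency case.
# Every number below is a placeholder to be replaced with real data
# from sales, UX, and HR leadership.

lost_projects_last_year = 6            # projects turned away due to UX lead time
avg_ux_value_per_project = 40_000      # estimated UX fees per lost project ($)
avg_other_value_per_project = 90_000   # dev/strategy/PM fees that left with them ($)

apprentices_per_year = 3               # designers the program could graduate annually
billable_hours_per_apprentice = 1_200  # conservative first-year billable hours
junior_bill_rate = 110                 # hourly rate for junior designers ($)

lost_revenue = lost_projects_last_year * (avg_ux_value_per_project
                                          + avg_other_value_per_project)
apprentice_capacity_revenue = (apprentices_per_year
                               * billable_hours_per_apprentice
                               * junior_bill_rate)

print(f"Revenue walked away from last year: ${lost_revenue:,}")
print(f"Capacity apprentices could add per year: ${apprentice_capacity_revenue:,}")
```

Even rough numbers like these give budget approvers something concrete to react to, which is far more persuasive than an appeal to principle.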
Avoid high recruiting costs
Recruiting a mid- to senior-level UX designer typically means finding them and poaching them from somewhere else. This requires paying significant headhunting fees on top of the person-hours involved in reviewing resumes and portfolios and interviewing candidates. All the data you need to make this argument can come from UX leadership and HR; a simple cost-comparison sketch follows the lists below.
Average cost per UX designer recruit
Average number of hours spent recruiting a UX designer
Contrast this data with:
Estimated cost per apprentice
To estimate this, factor in:
Overhead per employee
Salary (and benefits, if the apprenticeship is long enough for apprentices to qualify for them)
Software and service licenses
Mentorship time from the current design team
Mentorship/management time from the designer leading the program
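Here is the recruiting-versus-apprenticeship comparison as a minimal sketch, again with invented figures standing in for the data you collect:

```python
# Hypothetical cost comparison: recruiting a senior designer vs. growing an apprentice.
# All figures are placeholders; plug in your own HR and finance data.

# Recruiting an established designer
headhunter_fee = 25_000          # contingency fee ($)
recruiting_hours = 80            # resume review, portfolio review, interviews
loaded_hourly_cost = 75          # blended cost of the staff doing the recruiting ($/hr)
recruiting_cost = headhunter_fee + recruiting_hours * loaded_hourly_cost

# Growing an apprentice
apprentice_salary = 38_000       # salary/stipend for the apprenticeship period ($)
overhead_and_licenses = 6_000    # desk, software, services ($)
mentorship_hours = 200           # design team + program lead time
mentor_hourly_cost = 90          # loaded cost of senior designers' time ($/hr)
apprentice_cost = (apprentice_salary + overhead_and_licenses
                   + mentorship_hours * mentor_hourly_cost)

print(f"Cost to recruit a senior designer: ${recruiting_cost:,}")
print(f"Cost to grow an apprentice:        ${apprentice_cost:,}")
```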
Increase designer engagement
This one is tricky because most places don’t measure engagement directly. Measuring engagement accurately requires professional quantitative research. However, there are some signs that can point to low engagement.
High turnover is the number one sign of low engagement. What kind of people are leaving—junior designers, seniors, or both? If possible, try to get exit interview data (as raw as possible) to develop hypotheses about how apprenticeship could help. Maybe junior designers don’t feel like their growth is supported… allowing them to leverage elements of an apprenticeship program for further professional development could fix that. Maybe senior designers are feeling burnt out. Consistent mentorship, like that required by apprenticeship, can be reinvigorating.
Other signs of low engagement include frequently missing deadlines, using more sick time, missing or being late to meetings, and more. Investigate any signs you see, validate any assumptions you might take on, and hypothesize about how apprenticeship can help address these issues.
Help others
If your organization is motivated by altruism, that is wonderful! At least one organization with an apprenticeship program actually tries very hard not to hire their apprentices. Boston’s Fresh Tilled Soil places their graduated apprentices with their clients, which creates a very strong relationship with those clients. Additionally, this helps them raise the caliber and capacity of the Boston metro area when it comes to UX design.
Hiring great UX apprentices
Hiring apprentices requires a different approach to evaluating candidates than hiring established UX designers. Most candidates will have little to no actual UX design skills, so you have to evaluate them for their potential to acquire and hone those skills. Additionally, not everyone learns effectively through apprenticeship. Identifying the traits of a good apprentice in candidates will help your program run smoothly.
Evaluating for skill potential
Portfolio. Even though you’re evaluating someone who may never have designed a user experience before, you still need them to bring some examples of something they’ve made. Without this, it’s impossible to get a sense of what kind of process they go through to make things. For example, one apprentice candidate brought in a print brochure she designed. Her description of how she designed it included identifying business goals, balancing competing stakeholder needs, working within constraints, and getting feedback along the way, all of which are relevant to the process of UX design.
Mindset. The number one thing you must identify in a candidate is whether they already possess the UX mindset, the point of view that things are designed better when they’re designed with people in mind. This is usually the light bulb that goes off in people’s heads when they discover UX design. If that light hasn’t gone off, UX might not be the right path for that person. Apprenticeship is too much of an investment to risk that. Evaluating for this is fairly simple. It usually comes out in the course of a conversation. If not, asking outright “What does user experience design mean to you?” can be helpful. Pay careful attention to how people talk about how they’ve approached their work. Is it consistent with their stated philosophy? If not, that could be a red flag.
Intrinsic motivation. When people talk about having a “passion” for something, what that means is that they are intrinsically motivated to do that thing. This is pretty easy to evaluate for. What have they done to learn UX? Have they taken a class? That’s a positive sign. Have they identified and worked through a UX problem on their own? Even better! If a candidate hasn’t put in the effort to explore UX on their own, they are likely not motivated enough to do well in the field.
Self-education. While self-education is a sign of intrinsic motivation, it’s also important in its own right. Apprenticeship relies heavily on mentorship, but the responsibility for the direction and nature of that mentorship lies with the apprentice themselves. If someone is a self-educator, that’s a good predictor that they’ll be able to get the most out of mentorship. This is another fairly easy one to evaluate. Ask them to tell you about the most recent UX-related blog post or article they read. It doesn’t matter what it actually is, only whether they can quickly bring something to mind.
Professional skills. UX design is not a back-office field. UX designers talk with clients, customers, stakeholders, developers, and more. To be an effective UX designer a candidate must possess basic professional skills such as dressing appropriately and communicating well. Simple things like sending a “thank you” email are a great indication of good professional skills. (Physically mailed thank you notes get extra bonus points. One-off letterpressed mailed thank you notes get even more!)
Collaboration. UX design is a collaborative discipline. If a candidate struggles with collaboration, they’ll struggle in the field. When discussing their work (especially class project work), be sure to ask what role they played on the project and how they interacted with other people. Complaining about others and taking on too much work themselves are some warning signs that could indicate that a candidate has trouble with collaboration.
Evaluating for apprenticeship fit
Learning pattern. Some people learn best by gradually being exposed to a topic. I call these people toe-dippers, as they prefer to dip their toes into something before diving in. Others prefer to barrel off the dock straight into the deep end and then struggle to the surface. I call these people deep-enders. While apprenticeship can be modified to work better for deep-enders, its gradual exposure can often frustrate them. It is much better suited for toe-dippers. Evaluating for this is tricky, though. If you ask people whether they prefer to dive in or learn gradually, they’ll say “dive in” because they think that’s what you want to hear. Asking them how they’ve approached learning other skills can give some insight, but this is not 100% reliable.
Learning by doing. Apprenticeship helps people acquire skills through experiential learning. If this is not how a person learns, apprenticeship may not be for them. Evaluating for this is very much like evaluating for intrinsic motivation. Has someone gone to the trouble of identifying and solving a design problem themselves? Have they practiced UX methods they have learned about? If so, it’s likely that learning by doing is effective for them.
Receptiveness to critique. Apprenticeship is a period of sustained critique. Someone whose response to criticism is defensiveness or despondency will not be successful as an apprentice. This is easy to identify in an interview within the context of discussing the work examples the candidate has brought. My favorite technique for doing this is to find something insignificant to critique and then hammer on it. This is not how I normally critique, of course; it’s a pressure test. If a candidate responds with openness and a desire to learn from this encounter, that’s a very positive sign. If they launch into a monologue defending their decisions, the interview is pretty much over.
If you’re fired up about UX apprenticeship (and how could you not be?), start making it happen in your organization! Do the research, find the data, and share your vision with your company’s leadership so they can see it too! When you get the go-ahead, you’ll be all ready to start looking for apprentices. If you follow these guidelines, you’ll get great apprentices who will grow into great designers. Stay tuned for Part 3 of this series where I’ll get detailed about the instructional design of apprenticeship, pedagogy, mentorship, and tracking!
Posted in Big Ideas, Business Design, Education, Workplace and Career | 11 Comments »
Building the Business Case for Taxonomy
Taxonomy of Spices and Pantries: Part 1
by Grace G Lau
September 1st, 2015 9 Comments
XKCD comic strip about not being able to name all seven dwarfs from Snow White.
How often have you found yourself on an ill-defined site redesign project? You know, the ones that you end up redesigning and restructuring every few years as you add new content. Or perhaps you spin up a new microsite because the new product/solution doesn’t fit in with the current structure, not because you want to create a new experience around it. Maybe your site has vaguely labelled navigation buckets like “More Magic”—which is essentially your junk drawer, your “everything else.”
Your top concerns on such projects are:
You can’t find anything.
Your users can’t find anything.
The navigation isn’t consistent.
You have too much content.
Your hopeful answer to everything is to rely on an external search engine rather than the one on your own site: Google will find everything for you.
A typical site redesign project might include refreshing the visual design, considering the best interaction practices, and conducting usability testing. But what’s missing? Creating the taxonomy.
“Taxonomy is just tagging, right? SharePoint/AEM has it—we’re covered!”
In the coming months, I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
Are you ready?
ROI of taxonomy
Although the word “taxonomy” is often used interchangeably with tagging, building an enterprise taxonomy means more than tagging content. It’s essentially a knowledge organization system, and its purpose is to enable the user to browse, find, and discover content.
Spending the time on building that taxonomy empowers your site to
better manage your content at scale,
allow for meaningful navigation,
expose long-tail content,
reuse content assets,
bridge across subjects, and
provide more efficient product/brand alignment.
In addition, a sound taxonomy in the long run will improve your content’s findability, support social sharing, and improve your site’s search engine optimization. (Thanks to Mike Atherton’s “Modeling Structured Content” workshop, presented at IA Summit 2013, for outlining the benefits.)
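To show stakeholders that a taxonomy is a structure rather than a pile of tags, a minimal sketch can model preferred terms, synonyms, and broader/narrower relationships. The kitchen terms and relationships below are hypothetical placeholders.

```python
# A tiny, hypothetical knowledge-organization structure: each concept has a
# preferred label, synonyms (for search), and a broader term (for navigation).
# A flat tag list captures none of this.

taxonomy = {
    "seasonings": {"broader": None, "synonyms": []},
    "spices":     {"broader": "seasonings", "synonyms": []},
    "salts":      {"broader": "seasonings", "synonyms": ["salt"]},
    "star anise": {"broader": "spices", "synonyms": ["anise star", "八角"]},
    "sea salt":   {"broader": "salts", "synonyms": ["coarse salt"]},
}

def ancestors(term):
    """Walk up the hierarchy -- this is what powers breadcrumb navigation."""
    path = []
    node = taxonomy[term]["broader"]
    while node:
        path.append(node)
        node = taxonomy[node]["broader"]
    return path

def find(query):
    """Resolve a search word to a preferred term via its synonyms."""
    q = query.lower()
    return [t for t, d in taxonomy.items()
            if q == t or q in (s.lower() for s in d["synonyms"])]

print(ancestors("star anise"))   # ['spices', 'seasonings']
print(find("八角"))               # ['star anise']
```

The synonyms are what drive findability, while the broader/narrower links drive navigation and long-tail discovery—exactly the gaps that flat tagging leaves open.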
How do you explain taxonomy to get stakeholders on board? No worries, we won’t be going back to high school biology.
Explaining taxonomy
Imagine a household kitchen. How would you organize the spices?
Consider the cooks: In-laws from northern China, mom from Hong Kong, and American-born Grace. I’ve moved four times in the past five years. My husband, son, and I live with my in-laws. I have a mother who still comes over to make her Cantonese herbal soups.
We all speak different languages: English, Mandarin Chinese, and Cantonese Chinese.
I have the unique need of organizing my kitchen for multiple users. For my in-laws, they need to be able to find their star anise, peppercorn, tree ear mushrooms, and sesame oil. My mom needs a space to store her dried figs, dried shiitake mushrooms, dried goji berries, and snow fungus. I need to find a space for dried thyme and rosemary for the “American” food I try to make. Oh, and we all need a consistent place for salt and sugar.
People can organize their kitchen by activity zones: baking, canning, preparing, and cooking. Other ways to organize a kitchen successfully could include:
attributes (shelf-life, weight, temperature requirements)
usage (frequency, type of use)
seasonality (organic, what’s in season, local)
occasion (hot pot dinners, BBQ parties)
You can also consider organizing by audience, such as for the five-year-old helper. I keep refining how the kitchen is organized each time we move. I have used sticky notes in Chinese and English with my in-laws and my mom as part of a card sorting exercise; I’ve tested the navigation around the kitchen to validate the results.
A photo of pantry shelves labeled noodles, rice, garlic, and the like.
Early attempts at organizing my pantry.
If this is to be a data-driven taxonomy, I could consider attaching RFID tags to each spice container to track frequency and type of usage for a period of time to obtain some kitchen analytics. On the other hand, I could try guesstimating frequency by looking at the amount of grime or dust collected on the container. How often are we using chicken bouillon and to make what dishes? Does it need to be within easy reach of the stovetop or can it be relegated to a pantry closet three feet away?
Photo of labeled spice jars in a drawer.
From Home Depot.
Understanding the users and their tasks and needs is a foundation for all things UX. Taxonomy building is not any different. How people think about and use their kitchen brings with it a certain closeness that makes taxonomy concepts easier to grasp.
Who are the users? What are they trying to do? How do they currently tackle this problem? What works and what doesn’t? Watch, observe, and listen to their experience.
Helping the business understand the underlying concepts is one of the challenges I’ve faced with developing a solid taxonomy. We’re not just talking about tagging but breaking down the content by its attributes and metadata as well as by its potential usage and relation to other content. The biggest challenge is building the consensus and understanding around that taxonomy—taxonomy governance—and keeping the system you’ve designed well-seasoned!
Now, back to that site redesign project that you were thinking of: How about starting on that taxonomy? My next post will cover taxonomy planning.
How to determine when customer feedback is actionable
Merging statistics with product management
by Naira Musallam, Nis Frome, Michael Williams, and Tim Lawton
October 13th, 2015 1 Comment
One of the riskiest assumptions for any new product or feature is that customers actually want it.
Although product leaders can propose numerous ‘lean’ methodologies to experiment inexpensively with new concepts before fully engineering them, anything short of launching a product or feature and monitoring its performance over time in the market is, by definition, not 100% accurate. That leaves us with a dangerously wide spectrum of user research strategies, and an even wider range of opinions for determining when customer feedback is actionable.
To the dismay of product teams desiring to ‘move fast and break things,’ their counterparts in data science and research advocate a slower, more traditional approach. These proponents of caution often emphasize an evaluation of statistical signals before considering customer insights valid enough to act upon.
This dynamic has meaningful ramifications. For those who care about making data-driven business decisions, the challenge that presents itself is: How do we adhere to rigorous scientific standards in a world that demands adaptability and agility to survive? Having frequently witnessed the back-and-forth between product teams and research groups, it is clear that there is no shortage of misconceptions and miscommunication between the two. Only a thorough analysis of some critical nuances in statistics and product management can help us bridge the gap.
Quantify risk tolerance
You’ve probably been on one end of an argument that cited a “statistically significant” finding to support a course of action. The problem is that statistical significance is often equated to having relevant and substantive results, but neither is necessarily the case.
Simply put, statistical significance exclusively refers to the level of confidence (measured from 0 to 1, or 0% to 100%) you have that the results you obtained from a given experiment are not due to chance. Statistical significance alone tells you nothing about the appropriateness of the confidence level selected nor the importance of the results.
To begin, confidence levels should be context-dependent, and determining the appropriate confidence threshold is an oft-overlooked proposition that can have profound consequences. In statistics, confidence levels are closely linked to two concepts: type I and type II errors.
A type I error, or false-positive, refers to believing that a variable has an effect that it actually doesn’t.
Some industries, like pharmaceuticals and aeronautics, must be exceedingly cautious about false-positives. Medical researchers, for example, cannot afford to mistakenly think a drug has an intended benefit when in reality it does not. Side effects can be lethal, so the FDA’s threshold for proof that a drug’s health benefits outweigh its known risks is intentionally onerous.
A type II error, or false-negative, has to do with the flip side of the coin: concluding that a variable doesn’t have an effect when it actually does.
Historically, though, statistical significance has primarily focused on avoiding false-positives (even if that means missing out on some likely opportunities), with the default confidence level set at 95% for any finding to be considered actionable. The reality that this value was arbitrarily determined by scientists speaks more to their comfort level with being wrong than to its appropriateness in any given context. Unfortunately, this particular confidence level is used today by the vast majority of research teams at large organizations and remains generally unchallenged in contexts far different from the ones for which it was formulated.
Matrix visualising Type I and Type II errors as described in text.
But confidence levels should be representative of the amount of risk that an organization is willing to take to realize a potential opportunity. There are many reasons for product teams in particular to be more concerned with avoiding false-negatives than false-positives. Mistakenly missing an opportunity due to caution can have a more negative impact than building something no one really wants. Digital product teams don’t share many of the concerns of an aerospace engineering team and therefore need to calculate and quantify their own tolerance for risk.
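As a small, hedged illustration of how the threshold choice changes the decision, consider a plain fixed-horizon two-proportion z-test (via statsmodels) on invented A/B counts; the same data clears a 90% bar but not the default 95% one.

```python
# Hypothetical A/B result: same data, different decisions depending on the
# confidence level the organization has agreed to. Fixed-horizon two-proportion
# z-test via statsmodels; the counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [235, 200]   # variant, control
visitors = [2000, 2000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.3f}")   # roughly 0.08 for these made-up counts

for confidence in (0.90, 0.95):
    alpha = 1 - confidence
    decision = "act on it" if p_value < alpha else "keep testing"
    print(f"At {confidence:.0%} confidence: {decision}")
```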
To illustrate the ramifications that confidence levels can have on business decisions, consider this thought exercise. Imagine two companies, one with outrageously profitable 90% margins, and one with painfully narrow 5% margins. Suppose each of these businesses are considering a new line of business.
In the case of the high margin business, the amount of capital they have to risk to pursue the opportunity is dwarfed by the potential reward. If executives get even the weakest indication that the business might work they should pursue the new business line aggressively. In fact, waiting for perfect information before acting might be the difference between capturing a market and allowing a competitor to get there first.
In the case of the narrow margin business, however, the buffer before going into the red is so small that going after the new business line wouldn’t make sense with anything except the most definitive signal.
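One hedged way to put numbers on this thought exercise is to treat each decision as an expected-value problem: the evidence threshold you demand should map to the success probability you need before the bet is worth taking. The figures below are invented.

```python
# Hypothetical expected-value framing of the two businesses. The question each
# one should ask is: how confident do we need to be before the bet pays off?

def breakeven_probability(cost_to_pursue, payoff_if_it_works):
    """Success probability at which expected value (p * payoff - cost) hits zero."""
    return cost_to_pursue / payoff_if_it_works

# High-margin business: cheap to try relative to the upside, so a weak signal
# (anything suggesting more than ~5% odds of success) justifies moving.
print(f"High-margin breakeven: {breakeven_probability(100_000, 2_000_000):.0%}")

# Narrow-margin business: needs near-certainty before the same bet makes sense.
print(f"Narrow-margin breakeven: {breakeven_probability(900_000, 1_000_000):.0%}")
```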
Although these two examples are obviously allegorical, they demonstrate the principle at hand. To work together effectively, research analysts and their commercially-driven counterparts should have a conversation around their organization’s particular level of comfort and to make statistical decisions accordingly.
Focus on impact
Confidence levels only tell half the story. They don’t address the magnitude to which the results of an experiment are meaningful to your business. Product teams need to combine the detection of an effect (i.e., the likelihood that there is an effect) with the size of that effect (i.e., the potential impact to the business), but this is often forgotten on the quest for the proverbial holy grail of statistical significance.
Many teams mistakenly focus energy and resources on acting on statistically significant but inconsequential findings. A meta-analysis of hundreds of consumer behavior experiments sought to assess how seriously effect sizes are taken when research results are evaluated. It found that an astonishing three-quarters of the findings didn’t even report effect sizes “because of their small values” or because of “a general lack of interest in discovering the extent to which an effect is significant…”
This is troubling, because without considering effect size, there’s virtually no way to determine what opportunities are worth pursuing and in what order. Limited development resources prevent product teams from realistically tackling every single opportunity. Consider for example how the answer to this question, posed by a MECLABS data scientist, changes based on your perspective:
In terms of size, what does a 0.2% difference mean? For Amazon.com, that lift might mean an extra 2,000 sales and be worth a $100,000 investment…For a mom-and-pop Yahoo! store, that increase might just equate to an extra two sales and not be worth a $100 investment.
Unless you’re operating at a Google-esque scale for which an incremental lift in a conversion rate could result in literally millions of dollars in additional revenue, product teams should rely on statistics and research teams to help them prioritize the largest opportunities in front of them.
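The point of the quote above is easy to reproduce: the same lift means very different things at different scales. The traffic volumes and order values below are invented.

```python
# The same statistically significant lift, valued at two very different scales.
# Traffic volumes and average order values are hypothetical.

lift = 0.002  # a 0.2 percentage-point increase in conversion rate

def annual_value(visitors_per_year, avg_order_value):
    return visitors_per_year * lift * avg_order_value

print(f"Large retailer:   ${annual_value(50_000_000, 60):,.0f} per year")
print(f"Mom-and-pop shop: ${annual_value(40_000, 45):,.0f} per year")
```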
Sample size constraints
One of the most critical constraints on product teams that want to generate user insights is the ability to source users for experiments. With enough traffic, it’s certainly possible to generate a sample size large enough to pass traditional statistical requirements for a production split test. But it can be difficult to drive enough traffic to new product concepts, and it can also put a brand unnecessarily at risk, especially in heavily regulated industries. For product teams that can’t easily access or run tests in production environments, simulated environments offer a compelling alternative.
That leaves product teams stuck between a rock and a hard place. Simulated environments require standing user panels that can get expensive quickly, especially if research teams seek sample sizes in the hundreds or thousands. Unfortunately, strategies like these again overlook important nuances in statistics and place undue hardship on the user insight generation process.
A larger sample does not necessarily mean a better or more insightful sample. The objective of any sample is for it to be representative of the population of interest, so that conclusions about the sample can be extrapolated to the population. It’s assumed that the larger the sample, the more likely it is going to be representative of the population. But that’s not inherently true, especially if the sampling methodology is biased.
Years ago, a client fired an entire research team in the human resources department for making this assumption. The client sought to gather feedback about employee engagement and tasked this research team with distributing a survey to the entire company of more than 20,000 global employees. From a statistical significance standpoint, only 1,000 employees needed to take the survey for the research team to derive defensible insights.
Within hours of sending out the survey on a Tuesday morning, they had collected enough data and closed the survey. The problem was that only employees within a few time zones had completed the questionnaire, with a solid third of the company asleep, and therefore ignored, during collection.
Clearly, a large sample isn’t inherently representative of the population. To obtain a representative sample, product teams first need to clearly identify a target persona. This may seem obvious, but it’s often not explicitly done, creating quite a bit of miscommunication for researchers and other stakeholders. What one person may mean by a ‘frequent customer’ could mean something different entirely to another person.
After a persona is clearly identified, there are a few sampling techniques that one can follow, including probability sampling and nonprobability sampling techniques. A carefully-selected sample size of 100 may be considerably more representative of a target population than a thrown-together sample of 2,000.
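As a hedged illustration of “representative beats big,” here is a sketch of stratified sampling: rather than taking whoever responds first (the time-zone mistake above), draw from each segment in proportion to its share of the target population. The segment names and sizes are invented.

```python
# Stratified sampling sketch: sample each segment in proportion to its share of
# the population instead of taking the first N respondents.
# Segment definitions and counts are hypothetical.
import random
from collections import Counter

random.seed(42)

# population: one record per person, tagged with a segment
population = (
    [{"id": i, "region": "Americas"} for i in range(8_000)]
    + [{"id": i, "region": "EMEA"} for i in range(8_000, 15_000)]
    + [{"id": i, "region": "APAC"} for i in range(15_000, 20_000)]
)

def stratified_sample(records, key, total_n):
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r)
    sample = []
    for members in groups.values():
        share = len(members) / len(records)
        sample.extend(random.sample(members, round(total_n * share)))
    return sample

sample = stratified_sample(population, "region", total_n=1_000)
print(Counter(r["region"] for r in sample))
# Counter({'Americas': 400, 'EMEA': 350, 'APAC': 250}) -- mirrors the population
```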
Research teams may counter with the need to meet statistical assumptions that are necessary for conducting popular tests such as a t-test or Analysis of Variance (ANOVA). These types of tests assume a normal distribution, which generally occurs as a sample size increases. But statistics has a solution for when this assumption is violated and provides other options, such as non-parametric testing, which work well for small sample sizes.
In fact, the strongest argument left in favor of large sample sizes has already been discounted. Statisticians know that the larger the sample size, the easier it is to detect small effect sizes at a statistically significant level (digital product managers and marketers have become soberly aware that even a test comparing two identical versions can find a statistically significant difference between the two). But a focused product development process should be immune to this distraction because small effect sizes are of little concern. Not only that, but large effect sizes are almost as easily discovered in small samples as in large samples.
For example, suppose you want to test ideas to improve a form on your website that currently gets filled out by 10% of visitors. For simplicity’s sake, let’s use a confidence level of 95% to accept any changes. To identify just a 1% absolute increase to 11%, you’d need more than 12,000 users, according to Optimizely’s stats engine formula! If you were looking for a 5% absolute increase, you’d only need 223 users.
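A fixed-horizon approximation of that comparison can be sketched with statsmodels. The exact counts depend on the power you assume and will not match Optimizely’s sequential stats engine, but the order-of-magnitude gap between chasing a 1-point lift and a 5-point lift is the same.

```python
# Approximate per-variation sample sizes for detecting an absolute lift from a
# 10% baseline at 95% confidence (alpha = 0.05) and 80% power. This is a plain
# fixed-horizon calculation, so the numbers will differ from a sequential
# engine such as Optimizely's; the scale of the difference is the point.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10
analysis = NormalIndPower()

for target in (0.11, 0.15):   # +1 point vs. +5 points, absolute
    effect = proportion_effectsize(target, baseline)
    n = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.8,
                             alternative="two-sided")
    print(f"{baseline:.0%} -> {target:.0%}: ~{n:,.0f} users per variation")
```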
But depending on what you’re looking for, even that many users may not be needed, especially if conducting qualitative research. When identifying usability problems across your site, leading UX researchers have concluded that “elaborate usability tests are a waste of resources” because the overwhelming majority of usability issues are discovered with just five testers.
An emphasis on large sample sizes can be a red herring for product stakeholders. Organizations should not be misled away from the real objective of any sample, which is an accurate representation of the identified, target population. Research teams can help product teams identify necessary sample sizes and appropriate statistical tests to ensure that findings are indeed meaningful and cost-effectively attained.
Expand capacity for learning
It might sound like semantics, but data should not drive decision-making. Insights should. And there can be quite a gap between the two, especially when it comes to user insights.
In a recent talk on the topic of big data, Malcolm Gladwell argued that “data can tell us about the immediate environment of consumer attitudes, but it can’t tell us much about the context in which those attitudes were formed.” Essentially, statistics can be a powerful tool for obtaining and processing data, but it doesn’t have a monopoly on research.
Product teams can become obsessed with their Omniture and Optimizely dashboards, but there’s a lot of rich information that can’t be captured with these tools alone. There is simply no replacement for sitting down and talking with a user or customer. Open-ended feedback in particular can lead to insights that simply cannot be discovered by other means. The focus shouldn’t be on interviewing every single user though, but rather on finding a pattern or theme from the interviews you do conduct.
One of the core principles of the scientific method is the concept of replicability—that the results of any single experiment can be reproduced by another experiment. In product management, the importance of this principle cannot be overstated. You’ll presumably need any data from your research to hold true once you engineer the product or feature and release it to a user base, so reproducibility is an inherent requirement when it comes to collecting and acting on user insights.
We’ve far too often seen a product team wielding a single data point to defend a dubious intuition or pet project. But there are a number of factors that could and almost always do bias the results of a test without any intentional wrongdoing. Mistakenly asking a leading question or sourcing a user panel that doesn’t exactly represent your target customer can skew individual test results.
Similarly, and in digital product management especially, customer perceptions and trends evolve rapidly, further complicating data. Look no further than the handful of mobile operating systems which undergo yearly redesigns and updates, leading to constantly elevated user expectations. It’s perilously easy to imitate Homer Simpson’s lapse in thinking, “This year, I invested in pumpkins. They’ve been going up the whole month of October and I got a feeling they’re going to peak right around January. Then, bang! That’s when I’ll cash in.”
So how can product and research teams safely transition from data to insights? Fortunately, we believe statistics offers insight into the answer.
The central limit theorem is one of the foundational concepts taught in every introductory statistics class. It states that the distribution of averages tends to be Normal even when the distribution of the population from which the samples were taken is decidedly not Normal.
Put as simply as possible, the theorem acknowledges that individual samples will almost invariably be skewed, but offers statisticians a way to combine them to collectively generate valid data. Regardless of how confusing or complex the underlying data may be, by performing relatively simple individual experiments, the culminating result can cut through the noise.
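A quick simulation makes the theorem tangible: individual samples drawn from a heavily skewed distribution look nothing like a bell curve, yet the means of many such samples cluster tightly and symmetrically around the true value. The numbers below are arbitrary.

```python
# Central limit theorem in miniature: the population is heavily skewed
# (exponential), yet the distribution of sample means is approximately Normal.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=10.0, size=100_000)   # skewed "truth"

sample_means = [rng.choice(population, size=50).mean() for _ in range(2_000)]

print(f"Population mean:        {population.mean():.2f}")
print(f"Mean of sample means:   {np.mean(sample_means):.2f}")  # tracks the population mean
print(f"Spread of sample means: {np.std(sample_means):.2f}")   # ~ population std / sqrt(50)
```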
This theorem provides a useful analogy for product management. To derive value from individual experiments and customer data points, product teams need to practice substantiation through iteration. Even if the results of any given experiment are skewed or outdated, they can be offset by a robust user research process that incorporates both quantitative and qualitative techniques across a variety of environments. The safeguard against pursuing insignificant findings, if you will, is to be mindful not to consider data to be an insight until a pattern has been rigorously established.
Divide no more
The moral of the story is that the nuances in statistics actually do matter. Dogmatically adopting textbook statistics can stifle an organization’s ability to innovate and operate competitively, but ignoring the value and perspective provided by statistics altogether can be similarly catastrophic. By understanding and appropriately applying the core tenets of statistics, product and research teams can begin with a framework for productive dialog about the risks they’re willing to take, the research methodologies they can efficiently but rigorously conduct, and the customer insights they’ll act upon.
Planning a Taxonomy Project
Taxonomy of Spices and Pantries: Part 2
by Grace G Lau
October 20th, 2015 No Comments
This is part 2 of “Taxonomy of Spices and Pantries,” in which I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
In part 1, I enumerated the business reasons for a taxonomy focus in a site redesign and gave a fun way to explain taxonomy. The kitchen isn’t going to organize itself, so the analogy continues.
I’ve moved every couple of years and it shows in the kitchen. Half-used containers of ground pepper. Scattered bags of star anise. Multiple bags of ground and whole cumin. After a while, people are quick to stuff things into the nearest crammable crevice (until we move again and the IA is called upon to organize the kitchen).
Planning a taxonomy covers the same questions as planning any UX project. Understanding the users and their tasks and needs is a foundation for all things UX. This article will go through the questions you should consider when planning a kitchen, er, um…, a taxonomy project.
Rumination of stuff in my kitchen and the kinds of users and stakeholders the taxonomy needs to be mindful of. Source: Grace Lau.
As with designing any software, application, or website, you’ll need to meet with the stakeholders and ask questions:
Purpose: Why? What will the taxonomy be used for?
Users: Who’s using this taxonomy? Who will it affect?
Content: What will be covered by this taxonomy?
Scope: What’s the topic area and limits?
Resources: What are the project resources and constraints?
(Thanks to Heather Hedden, “The Accidental Taxonomist,” p.292)
What’s your primary purpose?
Why are you doing this?
Are you moving, or planning to move? Is your kitchen so disorganized that you can’t find the sugar you needed for soy braised chicken? Is your content misplaced and hard to search?
How often have you found just plain old salt in a different spot? How many kinds of salt do you have anyway–Kosher salt, sea salt, iodized salt, Hawaiian pink salt? (Why do you have so many different kinds anyway? One of my favorite recipe books recommended using red Hawaiian sea salt for kalua pig. Of course, I got it.)
You might be using the taxonomy for tagging or, in librarian terms, indexing or cataloging. Maybe it’s for information search and retrieval. Are you building a faceted search results page? Perhaps this taxonomy is being used for organizing the site content and guiding the end users through the site navigation.
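If the purpose turns out to be faceted search and retrieval, the taxonomy becomes the set of facets you can filter on. Here is a minimal sketch, with invented pantry items and facet values.

```python
# Hypothetical faceted retrieval: the taxonomy supplies the facets (use, cuisine,
# storage) and each item is described in those terms. Filtering is then trivial.

pantry = [
    {"name": "star anise", "use": "braising",  "cuisine": "northern Chinese", "storage": "dry"},
    {"name": "dried figs", "use": "soup",      "cuisine": "Cantonese",        "storage": "dry"},
    {"name": "rosemary",   "use": "roasting",  "cuisine": "American",         "storage": "dry"},
    {"name": "sesame oil", "use": "finishing", "cuisine": "northern Chinese", "storage": "cool"},
]

def filter_by(items, **facets):
    """Return items matching every requested facet value."""
    return [i for i in items if all(i.get(k) == v for k, v in facets.items())]

print([i["name"] for i in filter_by(pantry, cuisine="northern Chinese")])
# ['star anise', 'sesame oil']
```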
Establishing a taxonomy as a common language also helps build consensus and creates smarter conversations. On making baozi (steamed buns), I overheard a conversation between fathers:
Father-in-law: We need 酵母 [Jiàomǔ] {noun}.
Dad: Yi-see? (Cantonese transliteration of yeast)
Father-in-law: (confused look)
Dad: Baking pow-daa? (Cantonese transliteration of baking powder)
Meanwhile, I look up the Chinese translation of “yeast” in Google Translate while my mother-in-law opens her go-to Chinese dictionary tool. I discover that the dictionary word for “yeast” is 发酵粉 [fājiàofěn] {noun}.
Father-in-law: Ah, so it rises flour: 发面的 [fāmiànde] {verb}
This discovery prompts more discussion about what yeast does and how it is used. There were at least 15 more minutes of discussing yeast in five different ways before the fathers agreed that they were talking about the same ingredient and its purpose. Eventually, we had this result in our bellies:
Homemade steamed baozi. Apparently, they’re still investigating how much yeast is required for the amount of flour they used. Source: Grace Lau.
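The yeast negotiation above is exactly the problem a controlled vocabulary solves: many surface terms, one preferred concept. Below is a minimal, hypothetical sketch using the terms from that conversation (the mappings are illustrative, not authoritative translations).

```python
# Hypothetical synonym ring: every surface form maps to one preferred term,
# so "yeast" in any language or transliteration resolves to the same concept.

synonym_ring = {
    "yeast": "yeast",
    "酵母": "yeast",                     # jiàomǔ
    "发酵粉": "yeast",                    # fājiàofěn, the dictionary term
    "yi-see": "yeast",                   # Cantonese transliteration
    "baking powder": "baking powder",    # a different concept, not a synonym
}

def preferred_term(surface_form):
    return synonym_ring.get(surface_form.lower())

print(preferred_term("酵母"))            # yeast
print(preferred_term("Yi-see"))          # yeast
print(preferred_term("baking powder"))   # baking powder -- the fathers' confusion
```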
Who are the users?
Are they internal? Content creators or editors, working in the CMS?
Are they external users? What’s their range of experience in the domain? Are we speaking with homemakers and amateur cooks or seasoned cooks with many years at various Chinese restaurants?
Looking at the users of my kitchen, I identified the following stakeholders:
Content creators: the people who do the shopping and have to put away the stuff
People who are always in the kitchen: my in-laws
People who are sometimes in the kitchen: me
Visiting users: my parents and friends who often come over for a BBQ/grill party
The cleanup crew: my husband who can’t stand the mess we all make
How do I create a taxonomy for them? First, I attempt to understand their mental models by watching them work in their natural environment and observing their everyday hacks as they complete their tasks. Having empathy for users’ end game—making food for the people they care for—makes a difference in developing the style, consistency, and breadth and depth of the taxonomy.
What content will be covered by the taxonomy?
In my kitchen, we’ll be covering sugars, salts, spices, and staples used for cooking, baking, braising, grilling, smoking, steaming, simmering, and frying.
The Freelance Studio
Denver, Co. User Experience Agency
Ending the UX Designer Drought
Part 2 - Laying the Foundation
by Fred Beecher
June 23rd, 2015 11 Comments
The first article in this series, “A New Apprenticeship Architecture,” laid out a high-level framework for using the ancient model of apprenticeship to solve the modern problem of the UX talent drought. In this article, I get into details. Specifically, I discuss how to make the business case for apprenticeship and what to look for in potential apprentices. Let’s get started!
Defining the business value of apprenticeship
Apprenticeship is an investment. It requires an outlay of cash upfront for a return at a later date. Apprenticeship requires the support of budget-approving levels of your organization. For you to get that support, you need to clearly show its return by demonstrating how it addresses some of your organization’s pain points. What follows is a discussion of common pain points and how apprenticeship assuages them.
Hit growth targets
If your company is trying to grow but can’t find enough qualified people to do the work that growth requires, that’s the sweet spot for apprenticeship. Apprenticeship allows you to make the designers you’re having trouble finding. This is going to be a temporal argument, so you need to come armed with measurements to make it. And you’ll need help from various leaders in your organization to get them.
UX team growth targets for the past 2-3 years (UX leadership)
Actual UX team growth for the past 2-3 years (UX leadership)
Average time required to identify and hire a UX designer (HR leadership)
Then you need to estimate how apprenticeship will improve these measurements. (Part 3 of this series, which will deal with the instructional design of apprenticeship, will offer details on how to make these estimates.)
How many designers per year can apprenticeship contribute?
How much time will be required from the design team to mentor apprentices?
Growth targets typically do not exist in a vacuum. You’ll likely need to combine this argument with one of the others.
Take advantage of more revenue opportunities
One of the financial implications of missing growth targets is not having enough staff to capitalize on all the revenue opportunities you have. For agencies, you might have to pass up good projects because your design team has a six-week lead time. For product companies, your release schedule might fall behind due to a UX bottleneck and push you behind your competition.
The data you need to make this argument differ depending on whether your company sells time (agency) or stuff (product company).
When doing the math about an apprenticeship program, agencies should consider:
What number of projects have been lost in the past year due to UX lead time? (Sales leadership should have this information.)
What is the estimated value of UX work on lost projects? (Sales leadership)
What is the estimated value of other (development, strategy, management, etc.) work on lost projects? (Sales leadership)
Then, contrast these numbers with some of the benefits of apprenticeship:
What is the estimated number of designers per year apprenticeship could contribute?
What is the estimated amount of work these “extra” designers would be able to contribute in both hours and cash?
What is the estimated profitability of junior designers (more) versus senior designers (less), assuming the same hourly rate?
Product companies should consider:
The ratio of innovative features versus “catch-up” features your competitors released last year. (Sales or marketing leadership should have this information.)
The ratio of innovative features versus “catch-up” features you released in the past year. (Sales or marketing leadership)
Any customer service and/or satisfaction metrics. (Customer service leadership)
Contrast this data with…
The estimated number of designers per year you could add through apprenticeship.
The estimated number of features they could’ve completed for release.
The estimated impact this would have on customer satisfaction.
Avoid high recruiting costs
Recruiting a mid- to senior-level UX designer typically means finding them and poaching them from somewhere else. This requires paying significant headhunting fees on top of the person-hours involved in reviewing resumes and portfolios and interviewing candidates. All the data you need to make this argument can come from UX leadership and HR.
Average cost per UX designer recruit
Average number of hours spent recruiting a UX designer
Contrast this data with:
Estimated cost per apprentice
To estimate this, factor in:
Overhead per employee
Salary (and benefits if the apprenticeship is long enough to qualify while still an apprentice)
Software and service licenses
Mentorship time from the current design team
Mentorship/management time from the designer leading the program
Increase designer engagement
This one is tricky because most places don’t measure engagement directly. Measuring engagement accurately requires professional quantitative research. However, there are some signs that can point to low engagement.
High turnover is the number one sign of low engagement. What kind of people are leaving—junior designers, seniors, or both? If possible, try to get exit interview data (as raw as possible) to develop hypotheses about how apprenticeship could help. Maybe junior designers don’t feel like their growth is supported… allowing them to leverage elements of an apprenticeship program for further professional development could fix that. Maybe senior designers are feeling burnt out. Consistent mentorship, like that required by apprenticeship, can be reinvigorating.
Other signs of low engagement include frequently missing deadlines, using more sick time, missing or being late to meetings, and more. Investigate any signs you see, validate any assumptions you might take on, and hypothesize about how apprenticeship can help address these issues.
Help others
If your organization is motivated by altruism, that is wonderful! At least one organization with an apprenticeship program actually tries very hard not to hire their apprentices. Boston’s Fresh Tilled Soil places their graduated apprentices with their clients, which creates a very strong relationship with those clients. Additionally, this helps them raise the caliber and capacity of the Boston metro area when it comes to UX design.
Hiring great UX apprentices
Hiring apprentices requires a different approach to evaluating candidates than hiring established UX designers. Most candidates will have little to no actual UX design skills, so you have to evaluate them for their potential to acquire and hone those skills. Additionally, not everyone learns effectively through apprenticeship. Identifying the traits of a good apprentice in candidates will help your program run smoothly.
Evaluating for skill potential
Portfolio. Even though you’re evaluating someone who may never have designed a user experience before, you still need them to bring some examples of something they’ve made. Without this, it’s impossible to get a sense of what kind of process they go through to make things. For example, one apprentice candidate brought in a print brochure she designed. Her description of how she designed it included identifying business goals, balancing competing stakeholder needs, working within constraints, and getting feedback along the way, all of which are relevant to the process of UX design.
Mindset. The number one thing you must identify in a candidate is whether they already possess the UX mindset, the point of view that things are designed better when they’re designed with people in mind. This is usually the light bulb that goes off in people’s heads when they discover UX design. If that light hasn’t gone off, UX might not be the right path for that person. Apprenticeship is too much of an investment to risk that. Evaluating for this is fairly simple. It usually comes out in the course of a conversation. If not, asking outright “What does user experience design mean to you” can be helpful. Pay careful attention to how people talk about how they’ve approached their work. Is it consistent with their stated philosophy? If not, that could be a red flag.
Intrinsic motivation. When people talk about having a “passion” for something, what that means is that they are intrinsically motivated to do that thing. This is pretty easy to evaluate for. What have they done to learn UX? Have they taken a class? That’s a positive sign. Have they identified and worked through a UX problem on their own? Even better! If a candidate hasn’t put in the effort to explore UX on their own, they are likely not motivated enough to do well in the field.
Self-education. While self-education is a sign of intrinsic motivation, it’s also important in its own right. Apprenticeship relies heavily on mentorship, but the responsibility for the direction and nature of that mentorship lies with the apprentice themselves. If someone is a self-educator, that’s a good predictor that they’ll be able to get the most out of mentorship. This is another fairly easy one to evaluate. Ask them to tell you about the most recent UX-related blog post or article they read. It doesn’t matter what it actually is, only whether they can quickly bring something to mind.
Professional skills. UX design is not a back-office field. UX designers talk with clients, customers, stakeholders, developers, and more. To be an effective UX designer a candidate must possess basic professional skills such as dressing appropriately and communicating well. Simple things like sending a “thank you” email are a great indication of good professional skills. (Physically mailed thank you notes get extra bonus points. One-off letterpressed mailed thank you notes get even more!)
Collaboration. UX design is a collaborative discipline. If a candidate struggles with collaboration, they’ll struggle in the field. When discussing their work (especially class project work), be sure to ask what role they played on the project and how they interacted with other people. Complaining about others and taking on too much work themselves are some warning signs that could indicate that a candidate has trouble with collaboration.
Evaluating for apprenticeship fit
Learning pattern. Some people learn best by gradually being exposed to a topic. I call these people toe-dippers, as they prefer to dip their toes into something before diving in. Others prefer to barrel off the dock straight into the deep end and then struggle to the surface. I call these people deep-enders. While apprenticeship can be modified to work better for deep-enders, its gradual exposure can often frustrate them. It is much better suited for toe-dippers. Evaluating for this is tricky, though. Asking people whether they prefer to dive in or learn gradually, they’ll say “dive in” because they think that’s what you want to hear. Asking them how they’ve approached learning other skills can give some insight, but this is not 100% reliable.
Learning by doing. Apprenticeship helps people acquire skills through experiential learning. If this is not how a person learns, apprenticeship may not be for them. Evaluating for this is very much like evaluating for intrinsic motivation. Has someone gone to the trouble of identifying and solving a design problem themselves? Have they practiced UX methods they have learned about? If so, it’s likely that learning by doing is effective for them.
Receptiveness to critique. Apprenticeship is a period of sustained critique. Someone whose response to criticism is defensiveness or despondency will not be successful as an apprentice. This is easy to identify in an interview within the context of discussing the work examples the candidate has brought. My favorite technique for doing this is to find something insignificant to critique and then hammer on it. This is not how I normally critique, of course; it’s a pressure test. If a candidate responds with openness and a desire to learn from this encounter, that’s a very positive sign. If they launch into a monologue defending their decisions, the interview is pretty much over.
If you’re fired up about UX apprenticeship (and how could you not be?), start making it happen in your organization! Do the research, find the data, and share your vision with your company’s leadership so they can see it too! When you get the go-ahead, you’ll be all ready to start looking for apprentices. If you follow these guidelines, you’ll get great apprentices who will grow into great designers. Stay tuned for Part 3 of this series where I’ll get detailed about the instructional design of apprenticeship, pedagogy, mentorship, and tracking!
Share this:
EmailTwitter206RedditLinkedIn229Facebook20Google
Posted in Big Ideas, Business Design, Education, Workplace and Career | 11 Comments »
11 Comments
Building the Business Case for Taxonomy
Taxonomy of Spices and Pantries: Part 1
by Grace G Lau
September 1st, 2015 9 Comments
XKCD comic strip about not being able to name all seven dwarfs from Snow White.
How often have you found yourself on an ill-defined site redesign project? You know, the ones that you end up redesigning and restructuring every few years as you add new content. Or perhaps you spin up a new microsite because the new product/solution doesn’t fit in with the current structure, not because you want to create a new experience around it. Maybe your site has vaguely labelled navigation buckets like “More Magic”—which is essentially your junk drawer, your “everything else.”
Your top concerns on such projects are:
You can’t find anything.
Your users can’t find anything.
The navigation isn’t consistent.
You have too much content.
Your hopeful answer to everything is to rely on an external search engine, not the one that’s on your site. Google will find everything for you.
A typical site redesign project might include refreshing the visual design, considering the best interaction practices, and conducting usability testing. But what’s missing? Creating the taxonomy.
“Taxonomy is just tagging, right? Sharepoint/AEM has it—we’re covered!”
In the coming months, I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
Are you ready?
ROI of taxonomy
Although the word “taxonomy” is often used interchangeably with tagging, building an enterprise taxonomy means more than tagging content. It’s essentially a knowledge organization system, and its purpose is to enable the user to browse, find, and discover content.
Spending the time on building that taxonomy empowers your site to
better manage your content at scale,
allow for meaningful navigation,
expose long-tail content,
reuse content assets,
bridge across subjects, and
provide more efficient product/brand alignment.
In addition, a sound taxonomy in the long run will improve your content’s findability, support social sharing, and improve your site’s search engine optimization. (Thanks to Mike Atherton’s “Modeling Structured Content” workshop, presented at IA Summit 2013, for outlining the benefits.)
How do you explain taxonomy to get stakeholders on board? No worries, we won’t be going back to high school biology.
Explaining taxonomy
Imagine a household kitchen. How would you organize the spices?
Consider the cooks: In-laws from northern China, mom from Hong Kong, and American-born Grace. I’ve moved four times in the past five years. My husband, son, and I live with my in-laws. I have a mother who still comes over to make her Cantonese herbal soups.
We all speak different languages: English, Mandarin Chinese, and Cantonese Chinese.
I have the unique need of organizing my kitchen for multiple users. For my in-laws, they need to be able to find their star anise, peppercorn, tree ear mushrooms, and sesame oil. My mom needs a space to store her dried figs, dried shiitake mushrooms, dried goji berries, and snow fungus. I need to find a space for dried thyme and rosemary for the “American” food I try to make. Oh, and we all need a consistent place for salt and sugar.
People can organize their kitchen by activity zones: baking, canning, preparing, and cooking. Other ways to organize a kitchen successfully could include:
attributes (shelf-life, weight, temperature requirements)
usage (frequency, type of use)
seasonality (organic, what’s in season, local)
occasion (hot pot dinners, BBQ parties)
You can also consider organizing by audience, such as for the five-year-old helper. I keep refining how the kitchen is organized each time we move. I have used sticky notes in Chinese and English with my in-laws and my mom as part of a card sorting exercise; I’ve tested the navigation around the kitchen to validate the results.
A photo of pantry shelves labeled noodles, rice, garlic, and the like.
Early attempts at organizing my pantry.
If this is to be a data-driven taxonomy, I could consider attaching RFID tags to each spice container to track frequency and type of usage for a period of time to obtain some kitchen analytics. On the other hand, I could try guesstimating frequency by looking at the amount of grime or dust collected on the container. How often are we using chicken bouillon and to make what dishes? Does it need to be within easy reach of the stovetop or can it be relegated to a pantry closet three feet away?
Photo of labeled spice jars in a drawer.
From Home Depot.
Understanding the users and their tasks and needs is a foundation for all things UX. Taxonomy building is no different. Because people know intimately how they think about and use their kitchen, the analogy makes taxonomy concepts easier to grasp.
Who are the users? What are they trying to do? How do they currently tackle this problem? What works and what doesn’t? Watch, observe, and listen to their experience.
Helping the business understand the underlying concepts is one of the challenges I’ve faced with developing a solid taxonomy. We’re not just talking about tagging but breaking down the content by its attributes and metadata as well as by its potential usage and relation to other content. The biggest challenge is building the consensus and understanding around that taxonomy—taxonomy governance—and keeping the system you’ve designed well-seasoned!
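For stakeholders who want to see the difference rather than just hear about it, a toy sketch can help. The terms and content below are invented and any scripting language would do; the point is that, unlike flat tags, taxonomy terms know about each other, so content tagged with a narrow term still turns up under a broader one.
```python
# A toy sketch (invented terms, not from any real CMS) of taxonomy vs. flat tags:
# terms carry synonyms and broader/narrower relationships, so content tagged with
# a narrow term still surfaces under a broader browse path.

TAXONOMY = {
    "spices":       {"broader": None,     "synonyms": ["seasonings"]},
    "star anise":   {"broader": "spices", "synonyms": ["八角"]},
    "dried chiles": {"broader": "spices", "synonyms": ["dried peppers"]},
}

CONTENT = [
    {"title": "Red-braised pork", "tags": ["star anise"]},
    {"title": "Mapo tofu",        "tags": ["dried chiles"]},
]

def ancestors(term):
    """Yield the term and every broader term above it."""
    while term is not None:
        yield term
        term = TAXONOMY[term]["broader"]

def browse(term):
    """Return titles tagged with `term` or with any narrower term under it."""
    return [c["title"] for c in CONTENT
            if any(term in ancestors(t) for t in c["tags"])]

print(browse("spices"))      # ['Red-braised pork', 'Mapo tofu']
print(browse("star anise"))  # ['Red-braised pork']
```
Flat tags would only ever return exact matches; the relationships are what make navigation, long-tail discovery, and reuse possible.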
Now, back to that site redesign project that you were thinking of: How about starting on that taxonomy? My next post will cover taxonomy planning.
How to determine when customer feedback is actionable
Merging statistics with product management
by Naira Musallam, Nis Frome, Michael Williams, and Tim Lawton
October 13th, 2015
One of the riskiest assumptions for any new product or feature is that customers actually want it.
Although product leaders can propose numerous ‘lean’ methodologies to experiment inexpensively with new concepts before fully engineering them, anything short of launching a product or feature and monitoring its performance over time in the market is, by definition, not 100% accurate. That leaves us with a dangerously wide spectrum of user research strategies, and an even wider range of opinions for determining when customer feedback is actionable.
To the dismay of product teams desiring to ‘move fast and break things,’ their counterparts in data science and research advocate a slower, more traditional approach. These proponents of caution often emphasize an evaluation of statistical signals before considering customer insights valid enough to act upon.
This dynamic has meaningful ramifications. For those who care about making data-driven business decisions, the challenge that presents itself is: How do we adhere to rigorous scientific standards in a world that demands adaptability and agility to survive? Having frequently witnessed the back-and-forth between product teams and research groups, we can say there is no shortage of misconceptions and miscommunication between the two. Only a thorough analysis of some critical nuances in statistics and product management can help us bridge the gap.
Quantify risk tolerance
You’ve probably been on one end of an argument that cited a “statistically significant” finding to support a course of action. The problem is that statistical significance is often equated to having relevant and substantive results, but neither is necessarily the case.
Simply put, statistical significance exclusively refers to the level of confidence (measured from 0 to 1, or 0% to 100%) you have that the results you obtained from a given experiment are not due to chance. Statistical significance alone tells you nothing about the appropriateness of the confidence level selected or the importance of the results.
To begin, confidence levels should be context-dependent, and determining the appropriate confidence threshold is an oft-overlooked proposition that can have profound consequences. In statistics, confidence levels are closely linked to two concepts: type I and type II errors.
A type I error, or false-positive, refers to believing that a variable has an effect that it actually doesn’t.
Some industries, like pharmaceuticals and aeronautics, must be exceedingly cautious against false-positives. Medical researchers, for example, cannot afford to mistakenly think a drug has an intended benefit when in reality it does not. Side effects can be lethal, so the FDA’s threshold for proof that a drug’s health benefits outweigh its known risks is intentionally onerous.
A type II error, or false-negative, has to do with the flip side of the coin: concluding that a variable doesn’t have an effect when it actually does.
Historically though, statistical significance has been primarily focused on avoiding false-positives (even if it means missing out on some likely opportunities), with the default confidence level at 95% for any finding to be considered actionable. The reality that this value was arbitrarily determined by scientists speaks more to their comfort level with being wrong than it does to its appropriateness in any given context. Unfortunately, this particular confidence level is used today by the vast majority of research teams at large organizations and remains generally unchallenged in contexts far different from the ones for which it was formulated.
Matrix visualising Type I and Type II errors as described in text.
But confidence levels should be representative of the amount of risk that an organization is willing to take to realize a potential opportunity. There are many reasons for product teams in particular to be more concerned with avoiding false-negatives than false-positives. Mistakenly missing an opportunity due to caution can have a more negative impact than building something no one really wants. Digital product teams don’t share many of the concerns of an aerospace engineering team and therefore need to calculate and quantify their own tolerance for risk.
To illustrate the ramifications that confidence levels can have on business decisions, consider this thought exercise. Imagine two companies, one with outrageously profitable 90% margins, and one with painfully narrow 5% margins. Suppose each of these businesses is considering a new line of business.
In the case of the high margin business, the amount of capital they have to risk to pursue the opportunity is dwarfed by the potential reward. If executives get even the weakest indication that the business might work they should pursue the new business line aggressively. In fact, waiting for perfect information before acting might be the difference between capturing a market and allowing a competitor to get there first.
In the case of the narrow margin business, however, the buffer before going into the red is so small that going after the new business line wouldn’t make sense with anything except the most definitive signal.
Although these two examples are obviously allegorical, they demonstrate the principle at hand. To work together effectively, research analysts and their commercially-driven counterparts should have a conversation around their organization’s particular level of comfort and make statistical decisions accordingly.
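As a rough illustration of what that conversation can translate into, here is a minimal sketch with invented conversion numbers: the same test result is read against three different risk tolerances instead of a blanket 95% rule.
```python
# Minimal sketch: one experiment, three risk tolerances. Conversion counts are
# invented; proportions_ztest comes from statsmodels.
from statsmodels.stats.proportion import proportions_ztest

conversions = [130, 154]   # control, variant (hypothetical)
visitors    = [1000, 1000]

_, p_value = proportions_ztest(conversions, visitors)

for confidence in (0.80, 0.95, 0.99):
    decision = "act on it" if p_value < (1 - confidence) else "keep testing"
    print(f"at {confidence:.0%} confidence: {decision} (p = {p_value:.3f})")
```
The high-margin business in the thought exercise might reasonably act at 80%, while the thin-margin one holds out for 99%; the statistics are identical, only the risk appetite changes.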
Focus on impact
Confidence levels only tell half the story. They don’t address how meaningful the results of an experiment are to your business. Product teams need to combine the detection of an effect (i.e., the likelihood that there is an effect) with the size of that effect (i.e., the potential impact to the business), but this is often forgotten on the quest for the proverbial holy grail of statistical significance.
Many teams mistakenly focus energy and resources acting on statistically significant but inconsequential findings. A meta-analysis of hundreds of consumer behavior experiments sought to qualify how seriously effect sizes are considered when evaluating research results. They found that an astonishing three-quarters of the studies didn’t even bother reporting effect sizes “because of their small values” or because of “a general lack of interest in discovering the extent to which an effect is significant…”
This is troubling, because without considering effect size, there’s virtually no way to determine what opportunities are worth pursuing and in what order. Limited development resources prevent product teams from realistically tackling every single opportunity. Consider for example how the answer to this question, posed by a MECLABS data scientist, changes based on your perspective:
In terms of size, what does a 0.2% difference mean? For Amazon.com, that lift might mean an extra 2,000 sales and be worth a $100,000 investment…For a mom-and-pop Yahoo! store, that increase might just equate to an extra two sales and not be worth a $100 investment.
Unless you’re operating at a Google-esque scale for which an incremental lift in a conversion rate could result in literally millions of dollars in additional revenue, product teams should rely on statistics and research teams to help them prioritize the largest opportunities in front of them.
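To see why a p-value alone can mislead, consider this small sketch with invented traffic figures: at a large enough scale, the 0.2% absolute lift from the quote above is comfortably “significant,” yet whether it justifies the engineering investment is a separate question about effect size and economics.
```python
# Pairing significance with effect size; all figures are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

control_conv, control_n = 20_000, 1_000_000
variant_conv, variant_n = 22_000, 1_000_000

_, p_value = proportions_ztest([control_conv, variant_conv], [control_n, variant_n])

baseline = control_conv / control_n
lift_abs = variant_conv / variant_n - baseline
lift_rel = lift_abs / baseline

print(f"p-value: {p_value:.2e}")   # easily "significant" at this traffic volume
print(f"lift: {lift_abs:.2%} absolute, {lift_rel:.0%} relative")
```
Reporting the lift alongside the p-value forces the prioritization conversation that significance by itself never raises.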
Sample size constraints
One of the most critical constraints on product teams that want to generate user insights is the ability to source users for experiments. With enough traffic, it’s certainly possible to generate a sample size large enough to pass traditional statistical requirements for a production split test. But it can be difficult to drive enough traffic to new product concepts, and it can also put a brand unnecessarily at risk, especially in heavily regulated industries. For product teams that can’t easily access or run tests in production environments, simulated environments offer a compelling alternative.
That leaves product teams stuck between a rock and a hard place. Simulated environments require standing user panels that can get expensive quickly, especially if research teams seek sample sizes in the hundreds or thousands. Unfortunately, strategies like these again overlook important nuances in statistics and place undue hardship on the user insight generation process.
A larger sample does not necessarily mean a better or more insightful sample. The objective of any sample is for it to be representative of the population of interest, so that conclusions about the sample can be extrapolated to the population. It’s assumed that the larger the sample, the more likely it is going to be representative of the population. But that’s not inherently true, especially if the sampling methodology is biased.
Years ago, a client fired an entire research team in the human resources department for making this assumption. The client sought to gather feedback about employee engagement and tasked this research team with distributing a survey to the entire company of more than 20,000 global employees. From a statistical significance standpoint, only 1,000 employees needed to take the survey for the research team to derive defensible insights.
Within hours after sending out the survey on a Tuesday morning, they had collected enough data and closed the survey. The problem was that only employees within a few time zones had completed the questionnaire, with a solid third of the company being asleep, and therefore ignored, during collection.
Clearly, a large sample isn’t inherently representative of the population. To obtain a representative sample, product teams first need to clearly identify a target persona. This may seem obvious, but it’s often not explicitly done, creating quite a bit of miscommunication for researchers and other stakeholders. What one person may mean by a ‘frequent customer’ could mean something different entirely to another person.
After a persona is clearly identified, there are a few sampling techniques that one can follow, including probability sampling and nonprobability sampling techniques. A carefully-selected sample size of 100 may be considerably more representative of a target population than a thrown-together sample of 2,000.
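A toy simulation makes the point from the survey story above concrete (the regions, sizes, and scores are all invented): a large sample drawn from only part of the population can land further from the truth than a much smaller sample drawn across every group.
```python
# Toy simulation of sampling bias; every number here is made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical engagement scores by region; the region that was asleep scores lower.
regions = {
    "americas": rng.normal(7.5, 1.0, 7_000),
    "emea":     rng.normal(7.0, 1.0, 7_000),
    "apac":     rng.normal(5.5, 1.0, 6_000),
}
population = np.concatenate(list(regions.values()))

# 2,000 responses collected before a third of the company woke up...
biased = rng.choice(np.concatenate([regions["americas"], regions["emea"]]), 2_000, replace=False)
# ...versus 99 responses drawn evenly from every region.
stratified = np.concatenate([rng.choice(scores, 33, replace=False) for scores in regions.values()])

print(f"true mean:                {population.mean():.2f}")
print(f"biased sample (n=2000):   {biased.mean():.2f}")
print(f"stratified sample (n=99): {stratified.mean():.2f}")
```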
Research teams may counter with the need to meet statistical assumptions that are necessary for conducting popular tests such as a t-test or Analysis of Variance (ANOVA). These types of tests assume a normal distribution, which generally occurs as a sample size increases. But statistics has a solution for when this assumption is violated and provides other options, such as non-parametric testing, which work well for small sample sizes.
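For instance, with small, skewed samples a non-parametric test can stand in for the t-test. The sketch below, using made-up task times, runs a Mann-Whitney U test, which compares two independent groups without assuming normality.
```python
# Non-parametric comparison of two small samples; task times are invented.
from scipy.stats import mannwhitneyu

# Seconds to complete a checkout task under two design variants, 8 users each.
variant_a = [42, 38, 51, 47, 120, 39, 44, 46]   # one participant got badly lost
variant_b = [31, 29, 35, 33, 30, 36, 28, 34]

stat, p_value = mannwhitneyu(variant_a, variant_b, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p_value:.4f}")
```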
In fact, the strongest argument left in favor of large sample sizes has already been discounted. Statisticians know that the larger the sample size, the easier it is to detect small effect sizes at a statistically significant level (digital product managers and marketers have become soberly aware that even a test comparing two identical versions can find a statistically significant difference between the two). But a focused product development process should be immune to this distraction because small effect sizes are of little concern. Not only that, but large effect sizes are almost as easily discovered in small samples as in large samples.
For example, suppose you want to test ideas to improve a form on your website that currently gets filled out by 10% of visitors. For simplicity’s sake, let’s use a confidence level of 95% to accept any changes. To identify just a 1% absolute increase to 11%, you’d need more than 12,000 users, according to Optimizely’s stats engine formula! If you were looking for a 5% absolute increase, you’d only need 223 users.
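Those figures come from Optimizely’s stats engine formula; a classical fixed-horizon power calculation, sketched below with 80% power and a two-sided test, won’t reproduce the exact counts, but it shows the same steep growth in required sample size as the effect you are hunting for shrinks.
```python
# Rough fixed-horizon sample size estimate; not Optimizely's sequential stats
# engine, so the absolute numbers differ from those quoted above.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.10
for target in (0.15, 0.11):   # +5% vs. +1% absolute lift
    effect = proportion_effectsize(baseline, target)
    n_per_variation = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(f"{baseline:.0%} -> {target:.0%}: ~{n_per_variation:,.0f} users per variation")
```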
But depending on what you’re looking for, even that many users may not be needed, especially if conducting qualitative research. When identifying usability problems across your site, leading UX researchers have concluded that “elaborate usability tests are a waste of resources” because the overwhelming majority of usability issues are discovered with just five testers.
An emphasis on large sample sizes can be a red herring for product stakeholders. Organizations should not be misled away from the real objective of any sample, which is an accurate representation of the identified, target population. Research teams can help product teams identify necessary sample sizes and appropriate statistical tests to ensure that findings are indeed meaningful and cost-effectively attained.
Expand capacity for learning
It might sound like semantics, but data should not drive decision-making. Insights should. And there can be quite a gap between the two, especially when it comes to user insights.
In a recent talk on the topic of big data, Malcolm Gladwell argued that “data can tell us about the immediate environment of consumer attitudes, but it can’t tell us much about the context in which those attitudes were formed.” Essentially, statistics can be a powerful tool for obtaining and processing data, but it doesn’t have a monopoly on research.
Product teams can become obsessed with their Omniture and Optimizely dashboards, but there’s a lot of rich information that can’t be captured with these tools alone. There is simply no replacement for sitting down and talking with a user or customer. Open-ended feedback in particular can lead to insights that simply cannot be discovered by other means. The focus shouldn’t be on interviewing every single user though, but rather on finding a pattern or theme from the interviews you do conduct.
One of the core principles of the scientific method is the concept of replicability—that the results of any single experiment can be reproduced by another experiment. In product management, the importance of this principle cannot be overstated. You’ll presumably need any data from your research to hold true once you engineer the product or feature and release it to a user base, so reproducibility is an inherent requirement when it comes to collecting and acting on user insights.
We’ve far too often seen a product team wielding a single data point to defend a dubious intuition or pet project. But there are a number of factors that could and almost always do bias the results of a test without any intentional wrongdoing. Mistakenly asking a leading question or sourcing a user panel that doesn’t exactly represent your target customer can skew individual test results.
Similarly, and in digital product management especially, customer perceptions and trends evolve rapidly, further complicating data. Look no further than the handful of mobile operating systems which undergo yearly redesigns and updates, leading to constantly elevated user expectations. It’s perilously easy to imitate Homer Simpson’s lapse in thinking, “This year, I invested in pumpkins. They’ve been going up the whole month of October and I got a feeling they’re going to peak right around January. Then, bang! That’s when I’ll cash in.”
So how can product and research teams safely transition from data to insights? Fortunately, we believe statistics offers insight into the answer.
The central limit theorem is one of the foundational concepts taught in every introductory statistics class. It states that the distribution of averages tends to be Normal even when the distribution of the population from which the samples were taken is decidedly not Normal.
Put as simply as possible, the theorem acknowledges that individual samples will almost invariably be skewed, but offers statisticians a way to combine them to collectively generate valid data. Regardless of how confusing or complex the underlying data may be, by performing relatively simple individual experiments, the culminating result can cut through the noise.
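A quick simulation, purely illustrative, shows the theorem at work: individual small samples from a heavily skewed population bounce around, but their means cluster tightly around the truth.
```python
# Illustrative CLT simulation; the population and sample sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=10.0, size=100_000)   # decidedly not normal

sample_means = [rng.choice(population, size=50).mean() for _ in range(1_000)]

print(f"population mean:            {population.mean():.2f}")
print(f"mean of 1,000 sample means: {np.mean(sample_means):.2f}")
print(f"std of sample means:        {np.std(sample_means):.2f}")   # roughly 10 / sqrt(50)
```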
This theorem provides a useful analogy for product management. To derive value from individual experiments and customer data points, product teams need to practice substantiation through iteration. Even if the results of any given experiment are skewed or outdated, they can be offset by a robust user research process that incorporates both quantitative and qualitative techniques across a variety of environments. The safeguard against pursuing insignificant findings, if you will, is to be mindful not to consider data to be an insight until a pattern has been rigorously established.
Divide no more
The moral of the story is that the nuances in statistics actually do matter. Dogmatically adopting textbook statistics can stifle an organization’s ability to innovate and operate competitively, but ignoring the value and perspective provided by statistics altogether can be similarly catastrophic. By understanding and appropriately applying the core tenets of statistics, product and research teams can begin with a framework for productive dialog about the risks they’re willing to take, the research methodologies they can efficiently but rigorously conduct, and the customer insights they’ll act upon.
Planning a Taxonomy Project
Taxonomy of Spices and Pantries: Part 2
by Grace G Lau
October 20th, 2015
This is part 2 of “Taxonomy of Spices and Pantries,” in which I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
In part 1, I enumerated the business reasons for a taxonomy focus in a site redesign and gave a fun way to explain taxonomy. The kitchen isn’t going to organize itself, so the analogy continues.
I’ve moved every couple of years and it shows in the kitchen. Half-used containers of ground pepper. Scattered bags of star anise. Multiple bags of ground and whole cumin. After a while, people are quick to stuff things into the nearest crammable crevice (until we move again and the IA is called upon to organize the kitchen).
Planning a taxonomy covers the same questions as planning any UX project. Understanding the users and their tasks and needs is a foundation for all things UX. This article will go through the questions you should consider when planning a kitchen, er, um…, a taxonomy project.
Rumination of stuff in my kitchen and the kinds of users and stakeholders the taxonomy needs to be mindful of. Source: Grace Lau.
Just as when designing any software, application, or website, you’ll need to meet with the stakeholders and ask questions:
Purpose: Why? What will the taxonomy be used for?
Users: Who’s using this taxonomy? Who will it affect?
Content: What will be covered by this taxonomy?
Scope: What’s the topic area and limits?
Resources: What are the project resources and constraints?
(Thanks to Heather Hedden, “The Accidental Taxonomist,” p.292)
What’s your primary purpose?
Why are you doing this?
Are you moving, or planning to move? Is your kitchen so disorganized that you can’t find the sugar you needed for soy braised chicken? Is your content misplaced and hard to search?
How often have you found just plain old salt in a different spot? How many kinds of salt do you have anyway–Kosher salt, sea salt, iodized salt, Hawaiian pink salt? (Why do you have so many different kinds anyway? One of my favorite recipe books recommended using red Hawaiian sea salt for kalua pig. Of course, I got it.)
You might be using the taxonomy for tagging or, in librarian terms, indexing or cataloging. Maybe it’s for information search and retrieval. Are you building a faceted search results page? Perhaps this taxonomy is being used for organizing the site content and guiding the end users through the site navigation.
Establishing a taxonomy as a common language also helps build consensus and creates smarter conversations. While making baozi (steamed buns), I overheard a conversation between the fathers:
Father-in-law: We need 酵母 [Jiàomǔ] {noun}.
Dad: Yi-see? (Cantonese transliteration of yeast)
Father-in-law: (confused look)
Dad: Baking pow-daa? (Cantonese transliteration of baking powder)
Meanwhile, I look up the Chinese translation of “yeast” in Google Translate while mother-in-law opens her go-to Chinese dictionary tool. I discover that the dictionary word for “yeast” is 发酵粉 [fājiàofěn] {noun}.
Father-in-law: Ah, so it rises flour: 发面的 [fāmiànde] {verb}
This discovery prompted more discussion about what yeast does and how it is used. There were at least 15 more minutes of discussing yeast in five different ways before the fathers agreed that they were talking about the same ingredient and its purpose. Eventually, we had this result in our bellies.
Homemade steamed baozi. Apparently, they’re still investigating how much yeast is required for the amount of flour they used. Source: Grace Lau.
Who are the users?
Are they internal? Content creators or editors, working in the CMS?
Are they external users? What’s their range of experience in the domain? Are we speaking with homemakers and amateur cooks or seasoned cooks with many years at various Chinese restaurants?
Looking at the users of my kitchen, I identified the following stakeholders:
Content creators: the people who do the shopping and have to put away the stuff
People who are always in the kitchen: my in-laws
People who are sometimes in the kitchen: me
Visiting users: my parents and friends who often come over for a BBQ/grill party
The cleanup crew: my husband who can’t stand the mess we all make
How do I create a taxonomy for them? First, I attempt to understand their mental models by watching them work in their natural environment and observing their everyday hacks as they complete their tasks. Having empathy for users’ end game—making food for the people they care for—makes a difference in developing the style, consistency, and breadth and depth of the taxonomy.
What content will be covered by the taxonomy?
In my kitchen, we’ll be covering sugars, salts, spices, and staples used for cooking, baking, braising, grilling, smoking, steaming, simmering, and frying.
How did I determine that?
Terminology from existing content. I opened up every cabinet and door in my kitchen and made an inventory.
Search logs. How were users accessing my kitchen? Why? How were users referring to things? What were they looking for?
Storytelling with users. How did you make this? People like to share recipes and I like to watch friends cook. Doing user interviews has never been more fun!
What’s the scope?
Scope can easily get out of hand. Notice that I have not included in my discussion any cookbooks, kitchen hardware and appliances, pots and pans, or anything that’s in the refrigerator or freezer.
You may need a scope document early on to plan releases (if you need them). Perhaps for the first release, I’ll just deal with the frequent use items. Then I’ll move on to occasional use items (soups and desserts).
If the taxonomy you’re developing is faceted—for example, allowing your users to browse your cupboards by particular attributes such as taste, canned vs dried, or weight—your scope should include only those attributes relevant to the search process. For instance, no one really searches for canned goods in my kitchen, so that’s out of scope.
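If you want to show the facet idea rather than just describe it, a few lines are enough. The pantry items below are invented and any scripting language would do: each item carries values for a handful of attributes, and browsing is simply filtering on any combination of them.
```python
# Tiny faceted-browsing sketch over invented pantry items.
PANTRY = [
    {"name": "star anise",  "form": "dried",  "use": "braising", "cuisine": "chinese"},
    {"name": "dried thyme", "form": "dried",  "use": "roasting", "cuisine": "american"},
    {"name": "soy sauce",   "form": "liquid", "use": "braising", "cuisine": "chinese"},
]

def browse(items, **facets):
    """Keep items that match every requested facet value."""
    return [item["name"] for item in items
            if all(item.get(facet) == value for facet, value in facets.items())]

print(browse(PANTRY, form="dried", use="braising"))   # ['star anise']
```
Attributes nobody filters on, like canned goods in my kitchen, simply never need to become facets.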
What resources do you have available?
My kitchen taxonomy will be limited. Stakeholders are multilingual, so items will need labelling in English, Simplified Chinese, and pinyin romanization. I had considered building a Drupal site to manage an inventory, but I have neither the funding nor the time to implement such a complex site.
At the same time, what are users’ expectations for the taxonomy? Considering the context of the taxonomy’s usage is important. How will (or should) a taxonomy empower its users? A good taxonomy is invisible: it shouldn’t disrupt users’ current workflow, only make it more efficient. Both fathers and my mom are unlikely to stop and use any digital technology to look things up.
Most importantly, the completed taxonomy and actual content migration should not conflict with the preparation of the next meal. My baby needs a packed lunch for school, and it’s 6 a.m. when I’m preparing it. There’s no time to rush around looking for things. Time is limited and a complete displacement of spices and condiments would disrupt the high-traffic flow in any household. Meanwhile, we’re out of soy sauce again and I’d rather it not be stashed in yet a new home and forgotten. That’s why we ended up with three open bottles of soy sauce from different brands.
What else should you consider for the taxonomy?
Understanding the scope of the taxonomy you’re building can help prevent scope creep in a taxonomy project. In time, you’ll realize that 80% of your time and effort goes to research while only 20% goes to actually developing the taxonomy. So, making time for iterations and validation through card sorting and other testing is important in your planning.
In my next article, I will explore the many uses of taxonomy outside of tagging.
Ending the UX Designer Drought
Part 2 - Laying the Foundation
by Fred Beecher
June 23rd, 2015
The first article in this series, “A New Apprenticeship Architecture,” laid out a high-level framework for using the ancient model of apprenticeship to solve the modern problem of the UX talent drought. In this article, I get into details. Specifically, I discuss how to make the business case for apprenticeship and what to look for in potential apprentices. Let’s get started!
Defining the business value of apprenticeship
Apprenticeship is an investment. It requires an outlay of cash upfront for a return at a later date. Apprenticeship requires the support of budget-approving levels of your organization. For you to get that support, you need to clearly show its return by demonstrating how it addresses some of your organization’s pain points. What follows is a discussion of common pain points and how apprenticeship assuages them.
Hit growth targets
If your company is trying to grow but can’t find enough qualified people to do the work that growth requires, that’s the sweet spot for apprenticeship. Apprenticeship allows you to make the designers you’re having trouble finding. This is going to be a temporal argument, so you need to come armed with measurements to make it. And you’ll need help from various leaders in your organization to get them.
UX team growth targets for the past 2-3 years (UX leadership)
Actual UX team growth for the past 2-3 years (UX leadership)
Average time required to identify and hire a UX designer (HR leadership)
Then you need to estimate how apprenticeship will improve these measurements. (Part 3 of this series, which will deal with the instructional design of apprenticeship, will offer details on how to make these estimates.)
How many designers per year can apprenticeship contribute?
How much time will be required from the design team to mentor apprentices?
Growth targets typically do not exist in a vacuum. You’ll likely need to combine this argument with one of the others.
Take advantage of more revenue opportunities
One of the financial implications of missing growth targets is not having enough staff to capitalize on all the revenue opportunities you have. For agencies, you might have to pass up good projects because your design team has a six-week lead time. For product companies, your release schedule might fall behind due to a UX bottleneck and push you behind your competition.
The data you need to make this argument differ depending on whether your company sells time (agency) or stuff (product company).
When doing the math about an apprenticeship program, agencies should consider:
What number of projects have been lost in the past year due to UX lead time? (Sales leadership should have this information.)
What is the estimated value of UX work on lost projects? (Sales leadership)
What is the estimated value of other (development, strategy, management, etc.) work on lost projects? (Sales leadership)
Then, contrast these numbers with some of the benefits of apprenticeship:
What is the estimated number of designers per year apprenticeship could contribute?
What is the estimated amount of work these “extra” designers would be able to contribute in both hours and cash?
What is the estimated profitability of junior designers (more) versus senior designers (less), assuming the same hourly rate?
Product companies should consider:
The ratio of innovative features versus “catch-up” features your competitors released last year. (Sales or marketing leadership should have this information.)
The ratio of innovative features versus “catch-up” features you released in the past year. (Sales or marketing leadership)
Any customer service and/or satisfaction metrics. (Customer service leadership)
Contrast this data with…
The estimated number of designers per year you could add through apprenticeship.
The estimated number of features they could’ve completed for release.
The estimated impact this would have on customer satisfaction.
Avoid high recruiting costs
Recruiting a mid- to senior-level UX designer typically means finding them and poaching them from somewhere else. This requires paying significant headhunting fees on top of the person-hours involved in reviewing resumes and portfolios and interviewing candidates. All the data you need to make this argument can come from UX leadership and HR.
Average cost per UX designer recruit
Average number of hours spent recruiting a UX designer
Contrast this data with:
Estimated cost per apprentice
To estimate this, factor in:
Overhead per employee
Salary (and benefits if the apprenticeship is long enough to qualify while still an apprentice)
Software and service licenses
Mentorship time from the current design team
Mentorship/management time from the designer leading the program
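Pulling those factors together, a back-of-the-envelope comparison might look like the sketch below. Every number is hypothetical; the point is the template, so substitute your own recruiting fees, salaries, and mentorship estimates before presenting it.
```python
# Back-of-the-envelope comparison; every figure here is a placeholder.
LOADED_HOURLY_RATE = 100        # hypothetical blended cost of an hour of staff time

recruiting_cost = (
    25_000                      # hypothetical headhunter fee
    + 80 * LOADED_HOURLY_RATE   # hours spent screening, reviewing portfolios, interviewing
)

apprentice_cost = (
    30_000                      # hypothetical six-month apprentice salary
    + 5_000                     # overhead, software, and service licenses
    + 200 * LOADED_HOURLY_RATE  # mentorship time from the design team and program lead
)

print(f"cost per recruited designer:   ${recruiting_cost:,}")
print(f"cost per graduated apprentice: ${apprentice_cost:,}")
```
Whichever way the totals fall in your organization, remember that the apprentice figure buys a designer you otherwise could not hire at all, which is the growth argument made earlier.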
Increase designer engagement
This one is tricky because most places don’t measure engagement directly. Measuring engagement accurately requires professional quantitative research. However, there are some signs that can point to low engagement.
High turnover is the number one sign of low engagement. What kind of people are leaving—junior designers, seniors, or both? If possible, try to get exit interview data (as raw as possible) to develop hypotheses about how apprenticeship could help. Maybe junior designers don’t feel like their growth is supported… allowing them to leverage elements of an apprenticeship program for further professional development could fix that. Maybe senior designers are feeling burnt out. Consistent mentorship, like that required by apprenticeship, can be reinvigorating.
Other signs of low engagement include frequently missing deadlines, using more sick time, missing or being late to meetings, and more. Investigate any signs you see, validate any assumptions you might take on, and hypothesize about how apprenticeship can help address these issues.
Help others
If your organization is motivated by altruism, that is wonderful! At least one organization with an apprenticeship program actually tries very hard not to hire their apprentices. Boston’s Fresh Tilled Soil places their graduated apprentices with their clients, which creates a very strong relationship with those clients. Additionally, this helps them raise the caliber and capacity of the Boston metro area when it comes to UX design.
Hiring great UX apprentices
Hiring apprentices requires a different approach to evaluating candidates than hiring established UX designers. Most candidates will have little to no actual UX design skills, so you have to evaluate them for their potential to acquire and hone those skills. Additionally, not everyone learns effectively through apprenticeship. Identifying the traits of a good apprentice in candidates will help your program run smoothly.
Evaluating for skill potential
Portfolio. Even though you’re evaluating someone who may never have designed a user experience before, you still need them to bring some examples of something they’ve made. Without this, it’s impossible to get a sense of what kind of process they go through to make things. For example, one apprentice candidate brought in a print brochure she designed. Her description of how she designed it included identifying business goals, balancing competing stakeholder needs, working within constraints, and getting feedback along the way, all of which are relevant to the process of UX design.
Mindset. The number one thing you must identify in a candidate is whether they already possess the UX mindset, the point of view that things are designed better when they’re designed with people in mind. This is usually the light bulb that goes off in people’s heads when they discover UX design. If that light hasn’t gone off, UX might not be the right path for that person. Apprenticeship is too much of an investment to risk that. Evaluating for this is fairly simple. It usually comes out in the course of a conversation. If not, asking outright, “What does user experience design mean to you?” can be helpful. Pay careful attention to how people talk about how they’ve approached their work. Is it consistent with their stated philosophy? If not, that could be a red flag.
Intrinsic motivation. When people talk about having a “passion” for something, what that means is that they are intrinsically motivated to do that thing. This is pretty easy to evaluate for. What have they done to learn UX? Have they taken a class? That’s a positive sign. Have they identified and worked through a UX problem on their own? Even better! If a candidate hasn’t put in the effort to explore UX on their own, they are likely not motivated enough to do well in the field.
Self-education. While self-education is a sign of intrinsic motivation, it’s also important in its own right. Apprenticeship relies heavily on mentorship, but the responsibility for the direction and nature of that mentorship lies with the apprentice themselves. If someone is a self-educator, that’s a good predictor that they’ll be able to get the most out of mentorship. This is another fairly easy one to evaluate. Ask them to tell you about the most recent UX-related blog post or article they read. It doesn’t matter what it actually is, only whether they can quickly bring something to mind.
Professional skills. UX design is not a back-office field. UX designers talk with clients, customers, stakeholders, developers, and more. To be an effective UX designer a candidate must possess basic professional skills such as dressing appropriately and communicating well. Simple things like sending a “thank you” email are a great indication of good professional skills. (Physically mailed thank you notes get extra bonus points. One-off letterpressed mailed thank you notes get even more!)
Collaboration. UX design is a collaborative discipline. If a candidate struggles with collaboration, they’ll struggle in the field. When discussing their work (especially class project work), be sure to ask what role they played on the project and how they interacted with other people. Complaining about others and taking on too much work themselves are some warning signs that could indicate that a candidate has trouble with collaboration.
Evaluating for apprenticeship fit
Learning pattern. Some people learn best by gradually being exposed to a topic. I call these people toe-dippers, as they prefer to dip their toes into something before diving in. Others prefer to barrel off the dock straight into the deep end and then struggle to the surface. I call these people deep-enders. While apprenticeship can be modified to work better for deep-enders, its gradual exposure can often frustrate them. It is much better suited for toe-dippers. Evaluating for this is tricky, though. If you ask people whether they prefer to dive in or learn gradually, they’ll say “dive in” because they think that’s what you want to hear. Asking them how they’ve approached learning other skills can give some insight, but this is not 100% reliable.
Learning by doing. Apprenticeship helps people acquire skills through experiential learning. If this is not how a person learns, apprenticeship may not be for them. Evaluating for this is very much like evaluating for intrinsic motivation. Has someone gone to the trouble of identifying and solving a design problem themselves? Have they practiced UX methods they have learned about? If so, it’s likely that learning by doing is effective for them.
Receptiveness to critique. Apprenticeship is a period of sustained critique. Someone whose response to criticism is defensiveness or despondency will not be successful as an apprentice. This is easy to identify in an interview within the context of discussing the work examples the candidate has brought. My favorite technique for doing this is to find something insignificant to critique and then hammer on it. This is not how I normally critique, of course; it’s a pressure test. If a candidate responds with openness and a desire to learn from this encounter, that’s a very positive sign. If they launch into a monologue defending their decisions, the interview is pretty much over.
If you’re fired up about UX apprenticeship (and how could you not be?), start making it happen in your organization! Do the research, find the data, and share your vision with your company’s leadership so they can see it too! When you get the go-ahead, you’ll be all ready to start looking for apprentices. If you follow these guidelines, you’ll get great apprentices who will grow into great designers. Stay tuned for Part 3 of this series where I’ll get detailed about the instructional design of apprenticeship, pedagogy, mentorship, and tracking!
Share this:
EmailTwitter206RedditLinkedIn229Facebook20Google
Posted in Big Ideas, Business Design, Education, Workplace and Career | 11 Comments »
11 Comments
Building the Business Case for Taxonomy
Taxonomy of Spices and Pantries: Part 1
by Grace G Lau
September 1st, 2015 9 Comments
XKCD comic strip about not being able to name all seven dwarfs from Snow White.
How often have you found yourself on an ill-defined site redesign project? You know, the ones that you end up redesigning and restructuring every few years as you add new content. Or perhaps you spin up a new microsite because the new product/solution doesn’t fit in with the current structure, not because you want to create a new experience around it. Maybe your site has vaguely labelled navigation buckets like “More Magic”—which is essentially your junk drawer, your “everything else.”
Your top concerns on such projects are:
You can’t find anything.
Your users can’t find anything.
The navigation isn’t consistent.
You have too much content.
Your hopeful answer to everything is to rely on an external search engine, not the one that’s on your site. Google will find everything for you.
A typical site redesign project might include refreshing the visual design, considering the best interaction practices, and conducting usability testing. But what’s missing? Creating the taxonomy.
“Taxonomy is just tagging, right? Sharepoint/AEM has it—we’re covered!”
In the coming months, I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
Are you ready?
ROI of taxonomy
Although the word “taxonomy” is often used interchangeably with tagging, building an enterprise taxonomy means more than tagging content. It’s essentially a knowledge organization system, and its purpose is to enable the user to browse, find, and discover content.
Spending the time on building that taxonomy empowers your site to
better manage your content at scale,
allow for meaningful navigation,
expose long-tail content,
reuse content assets,
bridge across subjects, and
provide more efficient product/brand alignment.
In addition, a sound taxonomy in the long run will improve your content’s findability, support social sharing, and improve your site’s search engine optimization. (Thanks to Mike Atherton’s “Modeling Structured Content” workshop, presented at IA Summit 2013, for outlining the benefits.)
How do you explain taxonomy to get stakeholders on board? No worries, we won’t be going back to high school biology.
Explaining taxonomy
Imagine a household kitchen. How would you organize the spices?
Consider the cooks: In-laws from northern China, mom from Hong Kong, and American-born Grace. I’ve moved four times in the past five years. My husband, son, and I live with my in-laws. I have a mother who still comes over to make her Cantonese herbal soups.
We all speak different languages: English, Mandarin Chinese, and Cantonese Chinese.
I have the unique need of organizing my kitchen for multiple users. For my in-laws, they need to be able to find their star anise, peppercorn, tree ear mushrooms, and sesame oil. My mom needs a space to store her dried figs, dried shiitake mushrooms, dried goji berries, and snow fungus. I need to find a space for dried thyme and rosemary for the “American” food I try to make. Oh, and we all need a consistent place for salt and sugar.
People can organize their kitchen by activity zones: baking, canning, preparing, and cooking. Other ways to organize a kitchen successfully could include:
attributes (shelf-life, weight, temperature requirements)
usage (frequency, type of use)
seasonality (organic, what’s in season, local)
occasion (hot pot dinners, BBQ parties)
You can also consider organizing by audience such as for the five year old helper. I keep refining how the kitchen is organized each time we move. I have used sticky notes in Chinese and English with my in-laws and my mom as part of a card sorting exercise; I’ve tested the navigation around the kitchen to validate the results.
A photo of pantry shelves labeled noodles, rice, garlic, and the like.
Early attempts at organizing my pantry.
If this is to be a data-driven taxonomy, I could consider attaching RFID tags to each spice container to track frequency and type of usage for a period of time to obtain some kitchen analytics. On the other hand, I could try guesstimating frequency by looking at the amount of grime or dust collected on the container. How often are we using chicken bouillon and to make what dishes? Does it need to be within easy reach of the stovetop or can it be relegated to a pantry closet three feet away?
Photo of labeled spice jars in a drawer.
From Home Depot.
Understanding the users and their tasks and needs is a foundation for all things UX. Taxonomy building is not any different. How people think about and use their kitchen brings with it a certain closeness that makes taxonomy concepts easier to grasp.
Who are the users? What are they trying to do? How do they currently tackle this problem? What works and what doesn’t? Watch, observe, and listen to their experience.
Helping the business understand the underlying concepts is one of the challenges I’ve faced with developing a solid taxonomy. We’re not just talking about tagging but breaking down the content by its attributes and metadata as well as by its potential usage and relation to other content. The biggest challenge is building the consensus and understanding around that taxonomy—taxonomy governance—and keeping the system you’ve designed well-seasoned!
Now, back to that site redesign project that you were thinking of: How about starting on that taxonomy? My next post will cover taxonomy planning.
How to determine when customer feedback is actionable
Merging statistics with product management
by Naira Musallam, Nis Frome, Michael Williams, and Tim Lawton
October 13th, 2015 1 Comments
One of the riskiest assumptions for any new product or feature is that customers actually want it.
Although product leaders can propose numerous ‘lean’ methodologies to experiment inexpensively with new concepts before fully engineering them, anything short of launching a product or feature and monitoring its performance over time in the market is, by definition, not 100% accurate. That leaves us with a dangerously wide spectrum of user research strategies, and an even wider range of opinions for determining when customer feedback is actionable.
To the dismay of product teams desiring to ‘move fast and break things,’ their counterparts in data science and research advocate a slower, more traditional approach. These proponents of caution often emphasize an evaluation of statistical signals before considering customer insights valid enough to act upon.
This dynamic has meaningful ramifications. For those who care about making data-driven business decisions, the challenge that presents itself is: How do we adhere to rigorous scientific standards in a world that demands adaptability and agility to survive? Having frequently witnessed the back-and-forth between product teams and research groups, it is clear that there is no shortage of misconceptions and miscommunication between the two. Only a thorough analysis of some critical nuances in statistics and product management can help us bridge the gap.
Quantify risk tolerance
You’ve probably been on one end of an argument that cited a “statistically significant” finding to support a course of action. The problem is that statistical significance is often equated to having relevant and substantive results, but neither is necessarily the case.
Simply put, statistical significance exclusively refers to the level of confidence (measured from 0 to 1, or 0% to 100%) you have that the results you obtained from a given experiment are not due to chance. Statistical significance alone tells you nothing about the appropriateness of the confidence level selected nor the importance of the results.
To begin, confidence levels should be context-dependent, and determining the appropriate confidence threshold is an oft-overlooked proposition that can have profound consequences. In statistics, confidence levels are closely linked to two concepts: type I and type II errors.
A type I error, or false-positive, refers to believing that a variable has an effect that it actually doesn’t.
Some industries, like pharmaceuticals and aeronautics, must be exceedingly cautious against false-positives. Medical researchers for example cannot afford to mistakenly think a drug has an intended benefit when in reality it does not. Side effects can be lethal so the FDA’s threshold for proof that a drug’s health benefits outweigh their known risks is intentionally onerous.
A type II error, or false-negative, has to do with the flip side of the coin: concluding that a variable doesn’t have an effect when it actually does.
Historically though, statistical significance has been primarily focused on avoiding false-positives (even if it means missing out on some likely opportunities) with the default confidence level at 95% for any finding to be considered actionable. The reality that this value was arbitrarily determined by scientists speaks more to their comfort level of being wrong than it does to its appropriateness in any given context. Unfortunately, this particular confidence level is used today by the vast majority of research teams at large organizations and remains generally unchallenged in contexts far different than the ones for which it was formulated.
Matrix visualising Type I and Type II errors as described in text.
But confidence levels should be representative of the amount of risk that an organization is willing to take to realize a potential opportunity. There are many reasons for product teams in particular to be more concerned with avoiding false-negatives than false-positives. Mistakenly missing an opportunity due to caution can have a more negative impact than building something no one really wants. Digital product teams don’t share many of the concerns of an aerospace engineering team and therefore need to calculate and quantify their own tolerance for risk.
To illustrate the ramifications that confidence levels can have on business decisions, consider this thought exercise. Imagine two companies, one with outrageously profitable 90% margins, and one with painfully narrow 5% margins. Suppose each of these businesses are considering a new line of business.
In the case of the high margin business, the amount of capital they have to risk to pursue the opportunity is dwarfed by the potential reward. If executives get even the weakest indication that the business might work they should pursue the new business line aggressively. In fact, waiting for perfect information before acting might be the difference between capturing a market and allowing a competitor to get there first.
In the case of the narrow margin business, however, the buffer before going into the red is so small that going after the new business line wouldn’t make sense with anything except the most definitive signal.
Although these two examples are obviously allegorical, they demonstrate the principle at hand. To work together effectively, research analysts and their commercially-driven counterparts should have a conversation around their organization’s particular level of comfort and to make statistical decisions accordingly.
Focus on impact
Confidence levels only tell half the story. They don’t address the magnitude to which the results of an experiment are meaningful to your business. Product teams need to combine the detection of an effect (i.e., the likelihood that there is an effect) with the size of that effect (i.e., the potential impact to the business), but this is often forgotten on the quest for the proverbial holy grail of statistical significance.
Many teams mistakenly focus energy and resources acting on statistically significant but inconsequential findings. A meta-analysis of hundreds of consumer behavior experiments sought to qualify how seriously effect sizes are considered when evaluating research results. They found that an astonishing three-quarters of the findings didn’t even bother reporting effect sizes “because of their small values” or because of “a general lack of interest in discovering the extent to which an effect is significant…”
This is troubling, because without considering effect size, there’s virtually no way to determine what opportunities are worth pursuing and in what order. Limited development resources prevent product teams from realistically tackling every single opportunity. Consider for example how the answer to this question, posed by a MECLABS data scientist, changes based on your perspective:
In terms of size, what does a 0.2% difference mean? For Amazon.com, that lift might mean an extra 2,000 sales and be worth a $100,000 investment…For a mom-and-pop Yahoo! store, that increase might just equate to an extra two sales and not be worth a $100 investment.
Unless you’re operating at a Google-esque scale for which an incremental lift in a conversion rate could result in literally millions of dollars in additional revenue, product teams should rely on statistics and research teams to help them prioritize the largest opportunities in front of them.
Sample size constraints
One of the most critical constraints on product teams that want to generate user insights is the ability to source users for experiments. With enough traffic, it’s certainly possible to generate a sample size large enough to pass traditional statistical requirements for a production split test. But it can be difficult to drive enough traffic to new product concepts, and it can also put a brand unnecessarily at risk, especially in heavily regulated industries. For product teams that can’t easily access or run tests in production environments, simulated environments offer a compelling alternative.
That leaves product teams stuck between a rock and a hard place. Simulated environments require standing user panels that can get expensive quickly, especially if research teams seek sample sizes in the hundreds or thousands. Unfortunately, strategies like these again overlook important nuances in statistics and place undue hardship on the user insight generation process.
A larger sample does not necessarily mean a better or more insightful sample. The objective of any sample is for it to be representative of the population of interest, so that conclusions about the sample can be extrapolated to the population. It’s assumed that the larger the sample, the more likely it is going to be representative of the population. But that’s not inherently true, especially if the sampling methodology is biased.
Years ago, a client fired an entire research team in the human resources department for making this assumption. The client sought to gather feedback about employee engagement and tasked this research team with distributing a survey to the entire company of more than 20,000 global employees. From a statistical significance standpoint, only 1,000 employees needed to take the survey for the research team to derive defensible insights.
Within hours after sending out the survey on a Tuesday morning, they had collected enough data and closed the survey. The problem was that only employees within a few timezones had completed the questionnaire with a solid third of the company being asleep, and therefore ignored, during collection.
Clearly, a large sample isn’t inherently representative of the population. To obtain a representative sample, product teams first need to clearly identify a target persona. This may seem obvious, but it’s often not explicitly done, creating quite a bit of miscommunication for researchers and other stakeholders. What one person may mean by a ‘frequent customer’ could mean something different entirely to another person.
After a persona is clearly identified, there are a few sampling techniques that one can follow, including probability sampling and nonprobability sampling techniques. A carefully-selected sample size of 100 may be considerably more representative of a target population than a thrown-together sample of 2,000.
Research teams may counter with the need to meet statistical assumptions that are necessary for conducting popular tests such as a t-test or Analysis of Variance (ANOVA). These types of tests assume a normal distribution, which generally occurs as a sample size increases. But statistics has a solution for when this assumption is violated and provides other options, such as non-parametric testing, which work well for small sample sizes.
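As a rough illustration of that option, the sketch below contrasts a t-test with the Mann-Whitney U test (one common non-parametric alternative) on two small, skewed samples. The data are simulated task-completion times invented purely for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two small groups of simulated task-completion times (seconds); skewed, not Normal.
control = rng.lognormal(mean=3.0, sigma=0.6, size=12)
variant = rng.lognormal(mean=2.7, sigma=0.6, size=12)

# The t-test assumes roughly Normal data, which is questionable here.
t_stat, t_p = stats.ttest_ind(control, variant)

# Mann-Whitney U makes no Normality assumption and still works at n=12.
u_stat, u_p = stats.mannwhitneyu(control, variant, alternative="two-sided")

print(f"t-test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}")
```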
In fact, the strongest argument left in favor of large sample sizes has already been discounted. Statisticians know that the larger the sample size, the easier it is to detect small effect sizes at a statistically significant level (digital product managers and marketers have become soberly aware that even a test comparing two identical versions can find a statistically significant difference between the two). But a focused product development process should be immune to this distraction because small effect sizes are of little concern. Not only that, but large effect sizes are almost as easily discovered in small samples as in large samples.
For example, suppose you want to test ideas to improve a form on your website that currently gets filled out by 10% of visitors. For simplicity’s sake, let’s use a confidence level of 95% to accept any changes. To identify just a 1% absolute increase to 11%, you’d need more than 12,000 users, according to Optimizely’s stats engine formula! If you were looking for a 5% absolute increase, you’d only need 223 users.
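Optimizely’s sequential stats engine uses its own math, so the exact figures above won’t reproduce, but a classical fixed-horizon two-proportion calculation (assuming 80% power, which the example above doesn’t specify) shows the same pattern: the required sample shrinks roughly with the square of the effect you’re trying to detect.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variation(p_baseline, p_target, alpha=0.05, power=0.8):
    """Classical fixed-horizon sample size for comparing two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p_baseline + p_target) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_target * (1 - p_target))) ** 2
    return ceil(numerator / (p_target - p_baseline) ** 2)

print(sample_size_per_variation(0.10, 0.11))  # roughly 14,750 per variation for a 1% lift
print(sample_size_per_variation(0.10, 0.15))  # roughly 690 per variation for a 5% lift
```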
But depending on what you’re looking for, even that many users may not be needed, especially if conducting qualitative research. When identifying usability problems across your site, leading UX researchers have concluded that “elaborate usability tests are a waste of resources” because the overwhelming majority of usability issues are discovered with just five testers.
An emphasis on large sample sizes can be a red herring for product stakeholders. Organizations should not be misled away from the real objective of any sample, which is an accurate representation of the identified, target population. Research teams can help product teams identify necessary sample sizes and appropriate statistical tests to ensure that findings are indeed meaningful and cost-effectively attained.
Expand capacity for learning
It might sound like semantics, but data should not drive decision-making. Insights should. And there can be quite a gap between the two, especially when it comes to user insights.
In a recent talk on the topic of big data, Malcolm Gladwell argued that “data can tell us about the immediate environment of consumer attitudes, but it can’t tell us much about the context in which those attitudes were formed.” Essentially, statistics can be a powerful tool for obtaining and processing data, but it doesn’t have a monopoly on research.
Product teams can become obsessed with their Omniture and Optimizely dashboards, but there’s a lot of rich information that can’t be captured with these tools alone. There is simply no replacement for sitting down and talking with a user or customer. Open-ended feedback in particular can lead to insights that simply cannot be discovered by other means. The focus shouldn’t be on interviewing every single user though, but rather on finding a pattern or theme from the interviews you do conduct.
One of the core principles of the scientific method is the concept of replicability—that the results of any single experiment can be reproduced by another experiment. In product management, the importance of this principle cannot be overstated. You’ll presumably need any data from your research to hold true once you engineer the product or feature and release it to a user base, so reproducibility is an inherent requirement when it comes to collecting and acting on user insights.
We’ve far too often seen a product team wielding a single data point to defend a dubious intuition or pet project. But there are a number of factors that could and almost always do bias the results of a test without any intentional wrongdoing. Mistakenly asking a leading question or sourcing a user panel that doesn’t exactly represent your target customer can skew individual test results.
Similarly, and in digital product management especially, customer perceptions and trends evolve rapidly, further complicating data. Look no further than the handful of mobile operating systems which undergo yearly redesigns and updates, leading to constantly elevated user expectations. It’s perilously easy to imitate Homer Simpson’s lapse in thinking, “This year, I invested in pumpkins. They’ve been going up the whole month of October and I got a feeling they’re going to peak right around January. Then, bang! That’s when I’ll cash in.”
So how can product and research teams safely transition from data to insights? Fortunately, we believe statistics offers insight into the answer.
The central limit theorem is one of the foundational concepts taught in every introductory statistics class. It states that the distribution of averages tends to be Normal even when the distribution of the population from which the samples were taken is decidedly not Normal.
Put as simply as possible, the theorem acknowledges that individual samples will almost invariably be skewed, but offers statisticians a way to combine them to collectively generate valid data. Regardless of how confusing or complex the underlying data may be, by performing relatively simple individual experiments, the culminating result can cut through the noise.
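A quick simulation makes the idea tangible. This sketch draws repeated small samples from a deliberately skewed population and shows that the averages of those samples cluster symmetrically around the true mean; the exponential distribution here is just a stand-in for any messy real-world metric.

```python
import numpy as np

rng = np.random.default_rng(7)

# A decidedly non-Normal population: heavily skewed (exponential) values.
population = rng.exponential(scale=5.0, size=100_000)

# Run many modest "experiments" (n=30 each) and record each experiment's mean.
sample_means = np.array([rng.choice(population, size=30).mean()
                         for _ in range(2_000)])

# Individual samples are skewed, but the distribution of their means is
# roughly symmetric and centered on the true population mean.
print(round(population.mean(), 2), round(sample_means.mean(), 2))
```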
This theorem provides a useful analogy for product management. To derive value from individual experiments and customer data points, product teams need to practice substantiation through iteration. Even if the results of any given experiment are skewed or outdated, they can be offset by a robust user research process that incorporates both quantitative and qualitative techniques across a variety of environments. The safeguard against pursuing insignificant findings, if you will, is to be mindful not to consider data to be an insight until a pattern has been rigorously established.
Divide no more
The moral of the story is that the nuances in statistics actually do matter. Dogmatically adopting textbook statistics can stifle an organization’s ability to innovate and operate competitively, but ignoring the value and perspective provided by statistics altogether can be similarly catastrophic. By understanding and appropriately applying the core tenets of statistics, product and research teams can begin with a framework for productive dialog about the risks they’re willing to take, the research methodologies they can efficiently but rigorously conduct, and the customer insights they’ll act upon.
Planning a Taxonomy Project
Taxonomy of Spices and Pantries: Part 2
by Grace G Lau
October 20th, 2015 No Comments
This is part 2 of “Taxonomy of Spices and Pantries,” in which I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
In part 1, I enumerated the business reasons for a taxonomy focus in a site redesign and gave a fun way to explain taxonomy. The kitchen isn’t going to organize itself, so the analogy continues.
I’ve moved every couple of years and it shows in the kitchen. Half-used containers of ground pepper. Scattered bags of star anise. Multiple bags of ground and whole cumin. After a while, people are quick to stuff things into the nearest crammable crevice (until we move again and the IA is called upon to organize the kitchen).
Planning a taxonomy covers the same questions as planning any UX project. Understanding the users and their tasks and needs is a foundation for all things UX. This article will go through the questions you should consider when planning a kitchen, er, um…, a taxonomy project.
Rumination of stuff in my kitchen and the kinds of users and stakeholders the taxonomy needs to be mindful of. Source: Grace Lau.
Just as with designing any software, application, or website, you’ll need to meet with the stakeholders and ask questions:
Purpose: Why? What will the taxonomy be used for?
Users: Who’s using this taxonomy? Who will it affect?
Content: What will be covered by this taxonomy?
Scope: What’s the topic area and limits?
Resources: What are the project resources and constraints?
(Thanks to Heather Hedden, “The Accidental Taxonomist,” p.292)
What’s your primary purpose?
Why are you doing this?
Are you moving, or planning to move? Is your kitchen so disorganized that you can’t find the sugar you needed for soy braised chicken? Is your content misplaced and hard to search?
How often have you found just plain old salt in a different spot? How many kinds of salt do you have anyway–Kosher salt, sea salt, iodized salt, Hawaiian pink salt? (Why do you have so many different kinds anyway? One of my favorite recipe books recommended using red Hawaiian sea salt for kalua pig. Of course, I got it.)
You might be using the taxonomy for tagging or, in librarian terms, indexing or cataloging. Maybe it’s for information search and retrieval. Are you building a faceted search results page? Perhaps this taxonomy is being used for organizing the site content and guiding the end users through the site navigation.
Establishing a taxonomy as a common language also helps build consensus and creates smarter conversations. On making baozi (steamed buns), I overheard a conversation between fathers:
Father-in-law: We need 酵母 [Jiàomǔ] {noun}.
Dad: Yi-see? (Cantonese transliteration of yeast)
Father-in-law: (confused look)
Dad: Baking pow-daa? (Cantonese transliteration of baking powder)
Meanwhile, I look up the Chinese translation of “yeast” in Google Translate while my mother-in-law opens her go-to Chinese dictionary tool. I discover that the dictionary word for “yeast” is 发酵粉 [fājiàofěn] {noun}.
Father-in-law: Ah, so it rises flour: 发面的 [fāmiànde] {verb}
This discovery prompts more discussion about what it does and how it is used. At least 15 more minutes of discussing yeast in five different ways pass before the fathers agree that they’re talking about the same ingredient and its purpose. Eventually, we have this result in our bellies.
Homemade steamed baozi. Apparently, they’re still investigating how much yeast is required for the amount of flour they used. Source: Grace Lau.
Who are the users?
Are they internal? Content creators or editors, working in the CMS?
Are they external users? What’s their range of experience in the domain? Are we speaking with homemakers and amateur cooks or seasoned cooks with many years at various Chinese restaurants?
Looking at the users of my kitchen, I identified the following stakeholders:
Content creators: the people who do the shopping and have to put away the stuff
People who are always in the kitchen: my in-laws
People who are sometimes in the kitchen: me
Visiting users: my parents and friends who often come over for a BBQ/grill party
The cleanup crew: my husband who can’t stand the mess we all make
How do I create a taxonomy for them? First, I attempt to understand their mental models by watching them work in their natural environment and observing their everyday hacks as they complete their tasks. Having empathy for users’ end game—making food for the people they care for—makes a difference in developing the style, consistency, and breadth and depth of the taxonomy.
What content will be covered by the taxonomy?
In my kitchen, we’ll be covering sugars, salts, spices, and staples used for cooking, baking, braising, grilling, smoking, steaming, simmering, and frying.
How did I determine that?
Terminology from existing content. I opened up every cabinet and door in my kitchen and made an inventory.
Search logs. How were users accessing my kitchen? Why? How were users referring to things? What were they looking for?
Storytelling with users. How did you make this? People like to share recipes and I like to watch friends cook. Doing user interviews has never been more fun!
What’s the scope?
Scope can easily get out of hand. Notice that I have not included in my discussion any cookbooks, kitchen hardware and appliances, pots and pans, or anything that’s in the refrigerator or freezer.
You may need a scope document early on to plan releases (if you need them). Perhaps for the first release, I’ll just deal with the frequent use items. Then I’ll move on to occasional use items (soups and desserts).
If the taxonomy you’re developing is faceted—for example, allowing your users to browse your cupboards by particular attributes such as taste, canned vs dried, or weight—your scope should include only those attributes relevant to the search process. For instance, no one really searches for canned goods in my kitchen, so that’s out of scope.
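For readers who like to see the idea in data terms, here is a tiny, hypothetical sketch of what a faceted pantry taxonomy might look like and how browsing by an attribute would work; the facet names and values are invented for illustration.

```python
# Hypothetical facets for a pantry taxonomy; attribute names are illustrative.
pantry = [
    {"item": "star anise",        "form": "whole",  "usage": "braising"},
    {"item": "ground cumin",      "form": "ground", "usage": "grilling"},
    {"item": "Hawaiian sea salt", "form": "coarse", "usage": "roasting"},
]

def browse(items, **facets):
    """Return the items matching every requested facet value."""
    return [i for i in items if all(i.get(k) == v for k, v in facets.items())]

braising_spices = browse(pantry, usage="braising")  # -> the star anise entry
```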
What resources do you have available?
My kitchen taxonomy will be limited. Stakeholders are multilingual, so items will need labelling in English, Simplified Chinese, and pinyin romanization. I had considered building a Drupal site to manage an inventory, but I have neither the funding nor the time to implement such a complex site.
At the same time, what are users’ expectations for the taxonomy? Considering the context of the taxonomy’s usage is important. How will (or should) a taxonomy empower its users? A good taxonomy is invisible: it shouldn’t disrupt users’ current workflow, just make it more efficient. Both fathers and my mom are unlikely to stop and use any digital technology to find and look things up.
Most importantly, the completed taxonomy and actual content migration should not conflict with the preparation of the next meal. My baby needs a packed lunch for school, and it’s 6 a.m. when I’m preparing it. There’s no time to rush around looking for things. Time is limited, and a complete displacement of spices and condiments would disrupt the high-traffic flow in any household. Meanwhile, we’re out of soy sauce again and I’d rather it not be stashed in yet another new home and forgotten. That’s why we ended up with three open bottles of soy sauce from different brands.
What else should you consider for the taxonomy?
Understanding the scope of the taxonomy you’re building can help prevent scope creep in a taxonomy project. In time, you’ll realize that 80% of your time and effort is devoted to research while only 20% goes to actually developing the taxonomy. So, making time for iterations and validation through card sorting and other testing is important in your planning.
In my next article, I will explore the many uses of taxonomy outside of tagging.
Ending the UX Designer Drought
Part 2 - Laying the Foundation
by Fred Beecher
June 23rd, 2015 11 Comments
The first article in this series, “A New Apprenticeship Architecture,” laid out a high-level framework for using the ancient model of apprenticeship to solve the modern problem of the UX talent drought. In this article, I get into details. Specifically, I discuss how to make the business case for apprenticeship and what to look for in potential apprentices. Let’s get started!
Defining the business value of apprenticeship
Apprenticeship is an investment. It requires an outlay of cash upfront for a return at a later date. Apprenticeship requires the support of budget-approving levels of your organization. For you to get that support, you need to clearly show its return by demonstrating how it addresses some of your organization’s pain points. What follows is a discussion of common pain points and how apprenticeship assuages them.
Hit growth targets
If your company is trying to grow but can’t find enough qualified people to do the work that growth requires, that’s the sweet spot for apprenticeship. Apprenticeship allows you to make the designers you’re having trouble finding. This is going to be a temporal argument, so you need to come armed with measurements to make it. And you’ll need help from various leaders in your organization to get them.
UX team growth targets for the past 2-3 years (UX leadership)
Actual UX team growth for the past 2-3 years (UX leadership)
Average time required to identify and hire a UX designer (HR leadership)
Then you need to estimate how apprenticeship will improve these measurements. (Part 3 of this series, which will deal with the instructional design of apprenticeship, will offer details on how to make these estimates.)
How many designers per year can apprenticeship contribute?
How much time will be required from the design team to mentor apprentices?
Growth targets typically do not exist in a vacuum. You’ll likely need to combine this argument with one of the others.
Take advantage of more revenue opportunities
One of the financial implications of missing growth targets is not having enough staff to capitalize on all the revenue opportunities you have. For agencies, you might have to pass up good projects because your design team has a six-week lead time. For product companies, your release schedule might fall behind due to a UX bottleneck and push you behind your competition.
The data you need to make this argument differ depending on whether your company sells time (agency) or stuff (product company).
When doing the math about an apprenticeship program, agencies should consider:
What number of projects have been lost in the past year due to UX lead time? (Sales leadership should have this information.)
What is the estimated value of UX work on lost projects? (Sales leadership)
What is the estimated value of other (development, strategy, management, etc.) work on lost projects? (Sales leadership)
Then, contrast these numbers with some of the benefits of apprenticeship:
What is the estimated number of designers per year apprenticeship could contribute?
What is the estimated amount of work these “extra” designers would be able to contribute in both hours and cash?
What is the estimated profitability of junior designers (more) versus senior designers (less), assuming the same hourly rate?
Product companies should consider:
The ratio of innovative features versus “catch-up” features your competitors released last year. (Sales or marketing leadership should have this information.)
The ratio of innovative features versus “catch-up” features you released in the past year. (Sales or marketing leadership)
Any customer service and/or satisfaction metrics. (Customer service leadership)
Contrast this data with…
The estimated number of designers per year you could add through apprenticeship.
The estimated number of features they could’ve completed for release.
The estimated impact this would have on customer satisfaction.
Avoid high recruiting costs
Recruiting a mid- to senior-level UX designer typically means finding them and poaching them from somewhere else. This requires paying significant headhunting fees on top of the person-hours involved in reviewing resumes and portfolios and interviewing candidates. All the data you need to make this argument can come from UX leadership and HR.
Average cost per UX designer recruit
Average number of hours spent recruiting a UX designer
Contrast this data with:
Estimated cost per apprentice
To estimate this, factor in the following (a rough cost sketch follows this list):
Overhead per employee
Salary (and benefits if the apprenticeship is long enough to qualify while still an apprentice)
Software and service licenses
Mentorship time from the current design team
Mentorship/management time from the designer leading the program
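Here is one way that arithmetic might be set up; every number below is a placeholder to be replaced with your organization’s own figures from HR and design leadership.

```python
def estimated_apprentice_cost(salary, overhead, licenses,
                              mentor_hours, mentor_rate,
                              lead_hours, lead_rate):
    """Rough cost of one apprentice over the program, including mentorship time."""
    mentorship = mentor_hours * mentor_rate + lead_hours * lead_rate
    return salary + overhead + licenses + mentorship

# Placeholder inputs purely for illustration; substitute real figures.
cost_per_apprentice = estimated_apprentice_cost(
    salary=40_000, overhead=10_000, licenses=2_000,
    mentor_hours=150, mentor_rate=75,   # design team mentorship
    lead_hours=80, lead_rate=100,       # program lead's time
)
# Compare against HR's average cost and hours per recruited designer.
```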
Increase designer engagement
This one is tricky because most places don’t measure engagement directly. Measuring engagement accurately requires professional quantitative research. However, there are some signs that can point to low engagement.
High turnover is the number one sign of low engagement. What kind of people are leaving—junior designers, seniors, or both? If possible, try to get exit interview data (as raw as possible) to develop hypotheses about how apprenticeship could help. Maybe junior designers don’t feel like their growth is supported… allowing them to leverage elements of an apprenticeship program for further professional development could fix that. Maybe senior designers are feeling burnt out. Consistent mentorship, like that required by apprenticeship, can be reinvigorating.
Other signs of low engagement include frequently missing deadlines, using more sick time, missing or being late to meetings, and more. Investigate any signs you see, validate any assumptions you make, and hypothesize about how apprenticeship can help address these issues.
Help others
If your organization is motivated by altruism, that is wonderful! At least one organization with an apprenticeship program actually tries very hard not to hire their apprentices. Boston’s Fresh Tilled Soil places their graduated apprentices with their clients, which creates a very strong relationship with those clients. Additionally, this helps them raise the caliber and capacity of the Boston metro area when it comes to UX design.
Hiring great UX apprentices
Hiring apprentices requires a different approach to evaluating candidates than hiring established UX designers. Most candidates will have little to no actual UX design skills, so you have to evaluate them for their potential to acquire and hone those skills. Additionally, not everyone learns effectively through apprenticeship. Identifying the traits of a good apprentice in candidates will help your program run smoothly.
Evaluating for skill potential
Portfolio. Even though you’re evaluating someone who may never have designed a user experience before, you still need them to bring some examples of something they’ve made. Without this, it’s impossible to get a sense of what kind of process they go through to make things. For example, one apprentice candidate brought in a print brochure she designed. Her description of how she designed it included identifying business goals, balancing competing stakeholder needs, working within constraints, and getting feedback along the way, all of which are relevant to the process of UX design.
Mindset. The number one thing you must identify in a candidate is whether they already possess the UX mindset, the point of view that things are designed better when they’re designed with people in mind. This is usually the light bulb that goes off in people’s heads when they discover UX design. If that light hasn’t gone off, UX might not be the right path for that person. Apprenticeship is too much of an investment to risk that. Evaluating for this is fairly simple. It usually comes out in the course of a conversation. If not, asking outright, “What does user experience design mean to you?” can be helpful. Pay careful attention to how people talk about how they’ve approached their work. Is it consistent with their stated philosophy? If not, that could be a red flag.
Intrinsic motivation. When people talk about having a “passion” for something, what that means is that they are intrinsically motivated to do that thing. This is pretty easy to evaluate for. What have they done to learn UX? Have they taken a class? That’s a positive sign. Have they identified and worked through a UX problem on their own? Even better! If a candidate hasn’t put in the effort to explore UX on their own, they are likely not motivated enough to do well in the field.
Self-education. While self-education is a sign of intrinsic motivation, it’s also important in its own right. Apprenticeship relies heavily on mentorship, but the responsibility for the direction and nature of that mentorship lies with the apprentice themselves. If someone is a self-educator, that’s a good predictor that they’ll be able to get the most out of mentorship. This is another fairly easy one to evaluate. Ask them to tell you about the most recent UX-related blog post or article they read. It doesn’t matter what it actually is, only whether they can quickly bring something to mind.
Professional skills. UX design is not a back-office field. UX designers talk with clients, customers, stakeholders, developers, and more. To be an effective UX designer a candidate must possess basic professional skills such as dressing appropriately and communicating well. Simple things like sending a “thank you” email are a great indication of good professional skills. (Physically mailed thank you notes get extra bonus points. One-off letterpressed mailed thank you notes get even more!)
Collaboration. UX design is a collaborative discipline. If a candidate struggles with collaboration, they’ll struggle in the field. When discussing their work (especially class project work), be sure to ask what role they played on the project and how they interacted with other people. Complaining about others and taking on too much work themselves are some warning signs that could indicate that a candidate has trouble with collaboration.
Evaluating for apprenticeship fit
Learning pattern. Some people learn best by gradually being exposed to a topic. I call these people toe-dippers, as they prefer to dip their toes into something before diving in. Others prefer to barrel off the dock straight into the deep end and then struggle to the surface. I call these people deep-enders. While apprenticeship can be modified to work better for deep-enders, its gradual exposure can often frustrate them. It is much better suited for toe-dippers. Evaluating for this is tricky, though. If you ask people whether they prefer to dive in or learn gradually, they’ll say “dive in” because they think that’s what you want to hear. Asking them how they’ve approached learning other skills can give some insight, but this is not 100% reliable.
Learning by doing. Apprenticeship helps people acquire skills through experiential learning. If this is not how a person learns, apprenticeship may not be for them. Evaluating for this is very much like evaluating for intrinsic motivation. Has someone gone to the trouble of identifying and solving a design problem themselves? Have they practiced UX methods they have learned about? If so, it’s likely that learning by doing is effective for them.
Receptiveness to critique. Apprenticeship is a period of sustained critique. Someone whose response to criticism is defensiveness or despondency will not be successful as an apprentice. This is easy to identify in an interview within the context of discussing the work examples the candidate has brought. My favorite technique for doing this is to find something insignificant to critique and then hammer on it. This is not how I normally critique, of course; it’s a pressure test. If a candidate responds with openness and a desire to learn from this encounter, that’s a very positive sign. If they launch into a monologue defending their decisions, the interview is pretty much over.
If you’re fired up about UX apprenticeship (and how could you not be?), start making it happen in your organization! Do the research, find the data, and share your vision with your company’s leadership so they can see it too! When you get the go-ahead, you’ll be all ready to start looking for apprentices. If you follow these guidelines, you’ll get great apprentices who will grow into great designers. Stay tuned for Part 3 of this series where I’ll get detailed about the instructional design of apprenticeship, pedagogy, mentorship, and tracking!
Posted in Big Ideas, Business Design, Education, Workplace and Career | 11 Comments »
Building the Business Case for Taxonomy
Taxonomy of Spices and Pantries: Part 1
by Grace G Lau
September 1st, 2015 9 Comments
XKCD comic strip about not being able to name all seven dwarfs from Snow White.
How often have you found yourself on an ill-defined site redesign project? You know, the ones that you end up redesigning and restructuring every few years as you add new content. Or perhaps you spin up a new microsite because the new product/solution doesn’t fit in with the current structure, not because you want to create a new experience around it. Maybe your site has vaguely labelled navigation buckets like “More Magic”—which is essentially your junk drawer, your “everything else.”
Your top concerns on such projects are:
You can’t find anything.
Your users can’t find anything.
The navigation isn’t consistent.
You have too much content.
Your hopeful answer to everything is to rely on an external search engine, not the one that’s on your site. Google will find everything for you.
A typical site redesign project might include refreshing the visual design, considering the best interaction practices, and conducting usability testing. But what’s missing? Creating the taxonomy.
“Taxonomy is just tagging, right? Sharepoint/AEM has it—we’re covered!”
In the coming months, I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
Are you ready?
ROI of taxonomy
Although the word “taxonomy” is often used interchangeably with tagging, building an enterprise taxonomy means more than tagging content. It’s essentially a knowledge organization system, and its purpose is to enable the user to browse, find, and discover content.
Spending the time on building that taxonomy empowers your site to
better manage your content at scale,
allow for meaningful navigation,
expose long-tail content,
reuse content assets,
bridge across subjects, and
provide more efficient product/brand alignment.
In addition, a sound taxonomy in the long run will improve your content’s findability, support social sharing, and improve your site’s search engine optimization. (Thanks to Mike Atherton’s “Modeling Structured Content” workshop, presented at IA Summit 2013, for outlining the benefits.)
How do you explain taxonomy to get stakeholders on board? No worries, we won’t be going back to high school biology.
Explaining taxonomy
Imagine a household kitchen. How would you organize the spices?
Consider the cooks: In-laws from northern China, mom from Hong Kong, and American-born Grace. I’ve moved four times in the past five years. My husband, son, and I live with my in-laws. I have a mother who still comes over to make her Cantonese herbal soups.
We all speak different languages: English, Mandarin Chinese, and Cantonese Chinese.
I have the unique need of organizing my kitchen for multiple users. For my in-laws, they need to be able to find their star anise, peppercorn, tree ear mushrooms, and sesame oil. My mom needs a space to store her dried figs, dried shiitake mushrooms, dried goji berries, and snow fungus. I need to find a space for dried thyme and rosemary for the “American” food I try to make. Oh, and we all need a consistent place for salt and sugar.
People can organize their kitchen by activity zones: baking, canning, preparing, and cooking. Other ways to organize a kitchen successfully could include:
attributes (shelf-life, weight, temperature requirements)
usage (frequency, type of use)
seasonality (organic, what’s in season, local)
occasion (hot pot dinners, BBQ parties)
You can also consider organizing by audience, such as for the five-year-old helper. I keep refining how the kitchen is organized each time we move. I have used sticky notes in Chinese and English with my in-laws and my mom as part of a card sorting exercise; I’ve tested the navigation around the kitchen to validate the results.
A photo of pantry shelves labeled noodles, rice, garlic, and the like.
Early attempts at organizing my pantry.
If this is to be a data-driven taxonomy, I could consider attaching RFID tags to each spice container to track frequency and type of usage for a period of time to obtain some kitchen analytics. On the other hand, I could try guesstimating frequency by looking at the amount of grime or dust collected on the container. How often are we using chicken bouillon and to make what dishes? Does it need to be within easy reach of the stovetop or can it be relegated to a pantry closet three feet away?
Photo of labeled spice jars in a drawer.
From Home Depot.
Understanding the users and their tasks and needs is a foundation for all things UX. Taxonomy building is not any different. How people think about and use their kitchen brings with it a certain closeness that makes taxonomy concepts easier to grasp.
Who are the users? What are they trying to do? How do they currently tackle this problem? What works and what doesn’t? Watch, observe, and listen to their experience.
Helping the business understand the underlying concepts is one of the challenges I’ve faced with developing a solid taxonomy. We’re not just talking about tagging but breaking down the content by its attributes and metadata as well as by its potential usage and relation to other content. The biggest challenge is building the consensus and understanding around that taxonomy—taxonomy governance—and keeping the system you’ve designed well-seasoned!
Now, back to that site redesign project that you were thinking of: How about starting on that taxonomy? My next post will cover taxonomy planning.
Ending the UX Designer Drought
Part 2 - Laying the Foundation
by Fred Beecher
June 23rd, 2015
The first article in this series, “A New Apprenticeship Architecture,” laid out a high-level framework for using the ancient model of apprenticeship to solve the modern problem of the UX talent drought. In this article, I get into details. Specifically, I discuss how to make the business case for apprenticeship and what to look for in potential apprentices. Let’s get started!
Defining the business value of apprenticeship
Apprenticeship is an investment. It requires an outlay of cash upfront for a return at a later date. Apprenticeship requires the support of budget-approving levels of your organization. For you to get that support, you need to clearly show its return by demonstrating how it addresses some of your organization’s pain points. What follows is a discussion of common pain points and how apprenticeship assuages them.
Hit growth targets
If your company is trying to grow but can’t find enough qualified people to do the work that growth requires, that’s the sweet spot for apprenticeship. Apprenticeship allows you to make the designers you’re having trouble finding. This is going to be a temporal argument, so you need to come armed with measurements to make it. And you’ll need help from various leaders in your organization to get them.
UX team growth targets for the past 2-3 years (UX leadership)
Actual UX team growth for the past 2-3 years (UX leadership)
Average time required to identify and hire a UX designer (HR leadership)
Then you need to estimate how apprenticeship will improve these measurements. (Part 3 of this series, which will deal with the instructional design of apprenticeship, will offer details on how to make these estimates.)
How many designers per year can apprenticeship contribute?
How much time will be required from the design team to mentor apprentices?
Growth targets typically do not exist in a vacuum. You’ll likely need to combine this argument with one of the others.
Take advantage of more revenue opportunities
One of the financial implications of missing growth targets is not having enough staff to capitalize on all the revenue opportunities you have. For agencies, you might have to pass up good projects because your design team has a six-week lead time. For product companies, your release schedule might fall behind due to a UX bottleneck and push you behind your competition.
The data you need to make this argument differ depending on whether your company sells time (agency) or stuff (product company).
When doing the math about an apprenticeship program, agencies should consider:
What number of projects have been lost in the past year due to UX lead time? (Sales leadership should have this information.)
What is the estimated value of UX work on lost projects? (Sales leadership)
What is the estimated value of other (development, strategy, management, etc.) work on lost projects? (Sales leadership)
Then, contrast these numbers with some of the benefits of apprenticeship (a rough worked example follows this list):
What is the estimated number of designers per year apprenticeship could contribute?
What is the estimated amount of work these “extra” designers would be able to contribute in both hours and cash?
What is the estimated profitability of junior designers (more) versus senior designers (less), assuming the same hourly rate?
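To make the contrast concrete, here is a rough back-of-envelope sketch of the agency math. Every figure below is a hypothetical placeholder, not data from this article; swap in the numbers you collect from sales and UX leadership.

```python
# Back-of-envelope sketch: revenue lost to UX lead time vs. the billable
# capacity apprentice-trained designers could add. All figures are
# hypothetical placeholders.

lost_projects_per_year = 8             # from sales leadership (hypothetical)
avg_ux_value_per_project = 40_000      # estimated UX fees on a lost project
avg_other_value_per_project = 90_000   # dev/strategy/management fees pulled along

lost_revenue = lost_projects_per_year * (avg_ux_value_per_project
                                         + avg_other_value_per_project)

apprentices_per_year = 3               # estimated apprenticeship output
billable_hours_per_designer = 1_400    # first-year junior designer
hourly_rate = 100                      # same rate as seniors

added_capacity = apprentices_per_year * billable_hours_per_designer * hourly_rate

print(f"Revenue lost to UX lead time:  ${lost_revenue:,}")
print(f"Capacity added by apprentices: ${added_capacity:,}")
```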
Product companies should consider:
The ratio of innovative features versus “catch-up” features your competitors released last year. (Sales or marketing leadership should have this information.)
The ratio of innovative features versus “catch-up” features you released in the past year. (Sales or marketing leadership)
Any customer service and/or satisfaction metrics. (Customer service leadership)
Contrast this data with…
The estimated number of designers per year you could add through apprenticeship.
The estimated number of features they could’ve completed for release.
The estimated impact this would have on customer satisfaction.
Avoid high recruiting costs
Recruiting a mid- to senior-level UX designer typically means finding them and poaching them from somewhere else. This requires paying significant headhunting fees on top of the person-hours involved in reviewing resumes and portfolios and interviewing candidates. All the data you need to make this argument can come from UX leadership and HR.
Average cost per UX designer recruit
Average number of hours spent recruiting a UX designer
Contrast this data with:
Estimated cost per apprentice
To estimate this, factor in the following (a back-of-envelope comparison follows this list):
Overhead per employee
Salary (and benefits if the apprenticeship is long enough to qualify while still an apprentice)
Software and service licenses
Mentorship time from the current design team
Mentorship/management time from the designer leading the program
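A minimal sketch can make the comparison tangible; every figure is a hypothetical placeholder to be replaced with numbers from HR and UX leadership.

```python
# Hypothetical comparison: recruiting one mid/senior designer vs. growing one
# apprentice. Replace every figure with your own HR and UX leadership numbers.

headhunter_fee = 25_000
recruiting_hours = 80                 # resume review, portfolios, interviews
staff_hourly_cost = 75

cost_to_recruit = headhunter_fee + recruiting_hours * staff_hourly_cost

apprentice_salary_and_benefits = 45_000   # for the apprenticeship period
overhead_and_licenses = 6_000             # desk, software, services
mentorship_hours = 200                    # design team + program lead

cost_per_apprentice = (apprentice_salary_and_benefits
                       + overhead_and_licenses
                       + mentorship_hours * staff_hourly_cost)

print(f"Cost to recruit one designer: ${cost_to_recruit:,}")
print(f"Cost to grow one apprentice:  ${cost_per_apprentice:,}")
```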
Increase designer engagement
This one is tricky because most places don’t measure engagement directly. Measuring engagement accurately requires professional quantitative research. However, there are some signs that can point to low engagement.
High turnover is the number one sign of low engagement. What kind of people are leaving—junior designers, seniors, or both? If possible, try to get exit interview data (as raw as possible) to develop hypotheses about how apprenticeship could help. Maybe junior designers don’t feel like their growth is supported… allowing them to leverage elements of an apprenticeship program for further professional development could fix that. Maybe senior designers are feeling burnt out. Consistent mentorship, like that required by apprenticeship, can be reinvigorating.
Other signs of low engagement include frequently missing deadlines, using more sick time, missing or being late to meetings, and more. Investigate any signs you see, validate any assumptions you might take on, and hypothesize about how apprenticeship can help address these issues.
Help others
If your organization is motivated by altruism, that is wonderful! At least one organization with an apprenticeship program actually tries very hard not to hire their apprentices. Boston’s Fresh Tilled Soil places their graduated apprentices with their clients, which creates a very strong relationship with those clients. Additionally, this helps them raise the caliber and capacity of the Boston metro area when it comes to UX design.
Hiring great UX apprentices
Hiring apprentices requires a different approach to evaluating candidates than hiring established UX designers. Most candidates will have little to no actual UX design skills, so you have to evaluate them for their potential to acquire and hone those skills. Additionally, not everyone learns effectively through apprenticeship. Identifying the traits of a good apprentice in candidates will help your program run smoothly.
Evaluating for skill potential
Portfolio. Even though you’re evaluating someone who may never have designed a user experience before, you still need them to bring some examples of something they’ve made. Without this, it’s impossible to get a sense of what kind of process they go through to make things. For example, one apprentice candidate brought in a print brochure she designed. Her description of how she designed it included identifying business goals, balancing competing stakeholder needs, working within constraints, and getting feedback along the way, all of which are relevant to the process of UX design.
Mindset. The number one thing you must identify in a candidate is whether they already possess the UX mindset, the point of view that things are designed better when they're designed with people in mind. This is usually the light bulb that goes off in people's heads when they discover UX design. If that light hasn't gone off, UX might not be the right path for that person. Apprenticeship is too much of an investment to risk that. Evaluating for this is fairly simple. It usually comes out in the course of a conversation. If not, asking outright "What does user experience design mean to you?" can be helpful. Pay careful attention to how people talk about how they've approached their work. Is it consistent with their stated philosophy? If not, that could be a red flag.
Intrinsic motivation. When people talk about having a “passion” for something, what that means is that they are intrinsically motivated to do that thing. This is pretty easy to evaluate for. What have they done to learn UX? Have they taken a class? That’s a positive sign. Have they identified and worked through a UX problem on their own? Even better! If a candidate hasn’t put in the effort to explore UX on their own, they are likely not motivated enough to do well in the field.
Self-education. While self-education is a sign of intrinsic motivation, it’s also important in its own right. Apprenticeship relies heavily on mentorship, but the responsibility for the direction and nature of that mentorship lies with the apprentice themselves. If someone is a self-educator, that’s a good predictor that they’ll be able to get the most out of mentorship. This is another fairly easy one to evaluate. Ask them to tell you about the most recent UX-related blog post or article they read. It doesn’t matter what it actually is, only whether they can quickly bring something to mind.
Professional skills. UX design is not a back-office field. UX designers talk with clients, customers, stakeholders, developers, and more. To be an effective UX designer a candidate must possess basic professional skills such as dressing appropriately and communicating well. Simple things like sending a “thank you” email are a great indication of good professional skills. (Physically mailed thank you notes get extra bonus points. One-off letterpressed mailed thank you notes get even more!)
Collaboration. UX design is a collaborative discipline. If a candidate struggles with collaboration, they’ll struggle in the field. When discussing their work (especially class project work), be sure to ask what role they played on the project and how they interacted with other people. Complaining about others and taking on too much work themselves are some warning signs that could indicate that a candidate has trouble with collaboration.
Evaluating for apprenticeship fit
Learning pattern. Some people learn best by gradually being exposed to a topic. I call these people toe-dippers, as they prefer to dip their toes into something before diving in. Others prefer to barrel off the dock straight into the deep end and then struggle to the surface. I call these people deep-enders. While apprenticeship can be modified to work better for deep-enders, its gradual exposure can often frustrate them. It is much better suited for toe-dippers. Evaluating for this is tricky, though. If you ask people whether they prefer to dive in or learn gradually, they'll say "dive in" because they think that's what you want to hear. Asking them how they've approached learning other skills can give some insight, but this is not 100% reliable.
Learning by doing. Apprenticeship helps people acquire skills through experiential learning. If this is not how a person learns, apprenticeship may not be for them. Evaluating for this is very much like evaluating for intrinsic motivation. Has someone gone to the trouble of identifying and solving a design problem themselves? Have they practiced UX methods they have learned about? If so, it’s likely that learning by doing is effective for them.
Receptiveness to critique. Apprenticeship is a period of sustained critique. Someone whose response to criticism is defensiveness or despondency will not be successful as an apprentice. This is easy to identify in an interview within the context of discussing the work examples the candidate has brought. My favorite technique for doing this is to find something insignificant to critique and then hammer on it. This is not how I normally critique, of course; it’s a pressure test. If a candidate responds with openness and a desire to learn from this encounter, that’s a very positive sign. If they launch into a monologue defending their decisions, the interview is pretty much over.
If you’re fired up about UX apprenticeship (and how could you not be?), start making it happen in your organization! Do the research, find the data, and share your vision with your company’s leadership so they can see it too! When you get the go-ahead, you’ll be all ready to start looking for apprentices. If you follow these guidelines, you’ll get great apprentices who will grow into great designers. Stay tuned for Part 3 of this series where I’ll get detailed about the instructional design of apprenticeship, pedagogy, mentorship, and tracking!
Posted in Big Ideas, Business Design, Education, Workplace and Career
Building the Business Case for Taxonomy
Taxonomy of Spices and Pantries: Part 1
by Grace G Lau
September 1st, 2015
XKCD comic strip about not being able to name all seven dwarfs from Snow White.
How often have you found yourself on an ill-defined site redesign project? You know, the ones that you end up redesigning and restructuring every few years as you add new content. Or perhaps you spin up a new microsite because the new product/solution doesn’t fit in with the current structure, not because you want to create a new experience around it. Maybe your site has vaguely labelled navigation buckets like “More Magic”—which is essentially your junk drawer, your “everything else.”
Your top concerns on such projects are:
You can’t find anything.
Your users can’t find anything.
The navigation isn’t consistent.
You have too much content.
Your hopeful answer to everything is to rely on an external search engine, not the one that’s on your site. Google will find everything for you.
A typical site redesign project might include refreshing the visual design, considering the best interaction practices, and conducting usability testing. But what’s missing? Creating the taxonomy.
“Taxonomy is just tagging, right? Sharepoint/AEM has it—we’re covered!”
In the coming months, I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
Are you ready?
ROI of taxonomy
Although the word “taxonomy” is often used interchangeably with tagging, building an enterprise taxonomy means more than tagging content. It’s essentially a knowledge organization system, and its purpose is to enable the user to browse, find, and discover content.
Spending the time on building that taxonomy empowers your site to
better manage your content at scale,
allow for meaningful navigation,
expose long-tail content,
reuse content assets,
bridge across subjects, and
provide more efficient product/brand alignment.
In addition, a sound taxonomy in the long run will improve your content’s findability, support social sharing, and improve your site’s search engine optimization. (Thanks to Mike Atherton’s “Modeling Structured Content” workshop, presented at IA Summit 2013, for outlining the benefits.)
How do you explain taxonomy to get stakeholders on board? No worries, we won’t be going back to high school biology.
Explaining taxonomy
Imagine a household kitchen. How would you organize the spices?
Consider the cooks: In-laws from northern China, mom from Hong Kong, and American-born Grace. I’ve moved four times in the past five years. My husband, son, and I live with my in-laws. I have a mother who still comes over to make her Cantonese herbal soups.
We all speak different languages: English, Mandarin Chinese, and Cantonese Chinese.
I have the unique need of organizing my kitchen for multiple users. My in-laws need to be able to find their star anise, peppercorn, tree ear mushrooms, and sesame oil. My mom needs a space to store her dried figs, dried shiitake mushrooms, dried goji berries, and snow fungus. I need to find a space for dried thyme and rosemary for the "American" food I try to make. Oh, and we all need a consistent place for salt and sugar.
People can organize their kitchen by activity zones: baking, canning, preparing, and cooking. Other ways to organize a kitchen successfully could include:
attributes (shelf-life, weight, temperature requirements)
usage (frequency, type of use)
seasonality (organic, what’s in season, local)
occasion (hot pot dinners, BBQ parties)
You can also consider organizing by audience, such as for the five-year-old helper. I keep refining how the kitchen is organized each time we move. I have used sticky notes in Chinese and English with my in-laws and my mom as part of a card sorting exercise; I've tested the navigation around the kitchen to validate the results.
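If it helps to picture the end result, here is a minimal, purely illustrative sketch of such a taxonomy as a small hierarchy with multilingual labels. The categories, terms, and translations are my own examples, not a prescribed structure.

```python
# A minimal sketch of a hierarchical taxonomy with multilingual labels.
# Categories, terms, and translations are illustrative only.

kitchen_taxonomy = {
    "spices": {
        "label": {"en": "Spices", "zh": "香料"},
        "children": {
            "star_anise": {"label": {"en": "star anise", "zh": "八角"}},
            "peppercorn": {"label": {"en": "peppercorn", "zh": "花椒"}},
        },
    },
    "dried_goods": {
        "label": {"en": "Dried goods", "zh": "干货"},
        "children": {
            "goji_berries": {"label": {"en": "dried goji berries", "zh": "枸杞"}},
            "shiitake": {"label": {"en": "dried shiitake mushrooms", "zh": "香菇"}},
        },
    },
}

def find_term(tree, query, lang="en"):
    """Walk the hierarchy and return the path to the first matching label."""
    for key, node in tree.items():
        if query.lower() in node["label"].get(lang, "").lower():
            return [key]
        child_path = find_term(node.get("children", {}), query, lang)
        if child_path:
            return [key] + child_path
    return []

print(find_term(kitchen_taxonomy, "goji"))   # ['dried_goods', 'goji_berries']
```

A fuller taxonomy would also record synonyms and variant spellings as alternative labels that point to the same preferred term.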
A photo of pantry shelves labeled noodles, rice, garlic, and the like.
Early attempts at organizing my pantry.
If this is to be a data-driven taxonomy, I could consider attaching RFID tags to each spice container to track frequency and type of usage for a period of time to obtain some kitchen analytics. On the other hand, I could try guesstimating frequency by looking at the amount of grime or dust collected on the container. How often are we using chicken bouillon and to make what dishes? Does it need to be within easy reach of the stovetop or can it be relegated to a pantry closet three feet away?
Photo of labeled spice jars in a drawer.
From Home Depot.
Understanding the users and their tasks and needs is a foundation for all things UX. Taxonomy building is no different. The closeness people have to how they think about and use their kitchen makes taxonomy concepts easier to grasp.
Who are the users? What are they trying to do? How do they currently tackle this problem? What works and what doesn’t? Watch, observe, and listen to their experience.
Helping the business understand the underlying concepts is one of the challenges I’ve faced with developing a solid taxonomy. We’re not just talking about tagging but breaking down the content by its attributes and metadata as well as by its potential usage and relation to other content. The biggest challenge is building the consensus and understanding around that taxonomy—taxonomy governance—and keeping the system you’ve designed well-seasoned!
Now, back to that site redesign project that you were thinking of: How about starting on that taxonomy? My next post will cover taxonomy planning.
How to determine when customer feedback is actionable
Merging statistics with product management
by Naira Musallam, Nis Frome, Michael Williams, and Tim Lawton
October 13th, 2015
One of the riskiest assumptions for any new product or feature is that customers actually want it.
Although product leaders can propose numerous ‘lean’ methodologies to experiment inexpensively with new concepts before fully engineering them, anything short of launching a product or feature and monitoring its performance over time in the market is, by definition, not 100% accurate. That leaves us with a dangerously wide spectrum of user research strategies, and an even wider range of opinions for determining when customer feedback is actionable.
To the dismay of product teams desiring to ‘move fast and break things,’ their counterparts in data science and research advocate a slower, more traditional approach. These proponents of caution often emphasize an evaluation of statistical signals before considering customer insights valid enough to act upon.
This dynamic has meaningful ramifications. For those who care about making data-driven business decisions, the challenge that presents itself is: How do we adhere to rigorous scientific standards in a world that demands adaptability and agility to survive? Having frequently witnessed the back-and-forth between product teams and research groups, it is clear that there is no shortage of misconceptions and miscommunication between the two. Only a thorough analysis of some critical nuances in statistics and product management can help us bridge the gap.
Quantify risk tolerance
You’ve probably been on one end of an argument that cited a “statistically significant” finding to support a course of action. The problem is that statistical significance is often equated to having relevant and substantive results, but neither is necessarily the case.
Simply put, statistical significance exclusively refers to the level of confidence (measured from 0 to 1, or 0% to 100%) you have that the results you obtained from a given experiment are not due to chance. Statistical significance alone tells you nothing about the appropriateness of the confidence level selected nor the importance of the results.
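As a minimal illustration with invented numbers, a classical two-proportion z-test shows how that confidence figure is typically derived for a simple conversion-rate comparison (the article does not prescribe this particular test):

```python
# Minimal sketch: how a p-value maps to "confidence that the result is not
# due to chance" for an A/B test on conversion rates. Numbers are hypothetical.
from math import sqrt
from scipy.stats import norm

conversions_a, visitors_a = 120, 1_000   # control
conversions_b, visitors_b = 150, 1_000   # variant

p_a, p_b = conversions_a / visitors_a, conversions_b / visitors_b
p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))     # two-sided

print(f"Observed lift: {p_b - p_a:.1%}")
print(f"p-value: {p_value:.3f}  ->  confidence: {1 - p_value:.1%}")
```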
To begin, confidence levels should be context-dependent, and determining the appropriate confidence threshold is an oft-overlooked proposition that can have profound consequences. In statistics, confidence levels are closely linked to two concepts: type I and type II errors.
A type I error, or false-positive, refers to believing that a variable has an effect that it actually doesn’t.
Some industries, like pharmaceuticals and aeronautics, must be exceedingly cautious against false-positives. Medical researchers, for example, cannot afford to mistakenly think a drug has an intended benefit when in reality it does not. Side effects can be lethal, so the FDA's threshold for proof that a drug's health benefits outweigh its known risks is intentionally onerous.
A type II error, or false-negative, has to do with the flip side of the coin: concluding that a variable doesn’t have an effect when it actually does.
Historically, though, statistical significance has been primarily focused on avoiding false-positives (even if it means missing out on some likely opportunities), with the default confidence level at 95% for any finding to be considered actionable. The reality that this value was arbitrarily determined by scientists speaks more to their comfort level with being wrong than it does to its appropriateness in any given context. Unfortunately, this particular confidence level is used today by the vast majority of research teams at large organizations and remains generally unchallenged in contexts far different than the ones for which it was formulated.
Matrix visualising Type I and Type II errors as described in text.
But confidence levels should be representative of the amount of risk that an organization is willing to take to realize a potential opportunity. There are many reasons for product teams in particular to be more concerned with avoiding false-negatives than false-positives. Mistakenly missing an opportunity due to caution can have a more negative impact than building something no one really wants. Digital product teams don’t share many of the concerns of an aerospace engineering team and therefore need to calculate and quantify their own tolerance for risk.
To illustrate the ramifications that confidence levels can have on business decisions, consider this thought exercise. Imagine two companies, one with outrageously profitable 90% margins, and one with painfully narrow 5% margins. Suppose each of these businesses is considering a new line of business.
In the case of the high margin business, the amount of capital they have to risk to pursue the opportunity is dwarfed by the potential reward. If executives get even the weakest indication that the business might work, they should pursue the new business line aggressively. In fact, waiting for perfect information before acting might be the difference between capturing a market and allowing a competitor to get there first.
In the case of the narrow margin business, however, the buffer before going into the red is so small that going after the new business line wouldn’t make sense with anything except the most definitive signal.
Although these two examples are obviously allegorical, they demonstrate the principle at hand. To work together effectively, research analysts and their commercially-driven counterparts should have a conversation about their organization's particular level of comfort with risk and make statistical decisions accordingly.
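One way to ground that conversation is a toy break-even calculation like the sketch below. The margins, capital at risk, and payoff multiple are all invented; the point is only that the evidence threshold worth waiting for depends on the economics.

```python
# Toy break-even sketch for the two allegorical companies. The question:
# how strong does the evidence need to be before acting makes sense?
# All figures are invented.

capital_at_risk = 1_000_000

for name, margin in [("high-margin business", 0.90), ("narrow-margin business", 0.05)]:
    upside = capital_at_risk * margin * 10     # crude stand-in for the payoff
    downside = capital_at_risk                 # what is lost if the bet fails
    # Expected value is zero when p * upside == (1 - p) * downside.
    break_even_p = downside / (upside + downside)
    print(f"{name}: worth acting once estimated P(success) exceeds {break_even_p:.0%}")
```

Under these made-up numbers, the high-margin business should act on even weak signals, while the narrow-margin business needs something close to proof.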
Focus on impact
Confidence levels only tell half the story. They don’t address the magnitude to which the results of an experiment are meaningful to your business. Product teams need to combine the detection of an effect (i.e., the likelihood that there is an effect) with the size of that effect (i.e., the potential impact to the business), but this is often forgotten on the quest for the proverbial holy grail of statistical significance.
Many teams mistakenly focus energy and resources acting on statistically significant but inconsequential findings. A meta-analysis of hundreds of consumer behavior experiments sought to gauge how seriously effect sizes are considered when evaluating research results. It found that an astonishing three-quarters of the studies didn't even report effect sizes "because of their small values" or because of "a general lack of interest in discovering the extent to which an effect is significant…"
This is troubling, because without considering effect size, there’s virtually no way to determine what opportunities are worth pursuing and in what order. Limited development resources prevent product teams from realistically tackling every single opportunity. Consider for example how the answer to this question, posed by a MECLABS data scientist, changes based on your perspective:
In terms of size, what does a 0.2% difference mean? For Amazon.com, that lift might mean an extra 2,000 sales and be worth a $100,000 investment…For a mom-and-pop Yahoo! store, that increase might just equate to an extra two sales and not be worth a $100 investment.
Unless you’re operating at a Google-esque scale for which an incremental lift in a conversion rate could result in literally millions of dollars in additional revenue, product teams should rely on statistics and research teams to help them prioritize the largest opportunities in front of them.
Sample size constraints
One of the most critical constraints on product teams that want to generate user insights is the ability to source users for experiments. With enough traffic, it’s certainly possible to generate a sample size large enough to pass traditional statistical requirements for a production split test. But it can be difficult to drive enough traffic to new product concepts, and it can also put a brand unnecessarily at risk, especially in heavily regulated industries. For product teams that can’t easily access or run tests in production environments, simulated environments offer a compelling alternative.
That leaves product teams stuck between a rock and a hard place. Simulated environments require standing user panels that can get expensive quickly, especially if research teams seek sample sizes in the hundreds or thousands. Unfortunately, strategies like these again overlook important nuances in statistics and place undue hardship on the user insight generation process.
A larger sample does not necessarily mean a better or more insightful sample. The objective of any sample is for it to be representative of the population of interest, so that conclusions about the sample can be extrapolated to the population. It’s assumed that the larger the sample, the more likely it is going to be representative of the population. But that’s not inherently true, especially if the sampling methodology is biased.
Years ago, a client fired an entire research team in the human resources department for making this assumption. The client sought to gather feedback about employee engagement and tasked this research team with distributing a survey to the entire company of more than 20,000 global employees. From a statistical significance standpoint, only 1,000 employees needed to take the survey for the research team to derive defensible insights.
Within hours of sending out the survey on a Tuesday morning, they had collected enough data and closed the survey. The problem was that only employees within a few time zones had completed the questionnaire; a solid third of the company was asleep, and therefore ignored, during collection.
Clearly, a large sample isn’t inherently representative of the population. To obtain a representative sample, product teams first need to clearly identify a target persona. This may seem obvious, but it’s often not explicitly done, creating quite a bit of miscommunication for researchers and other stakeholders. What one person may mean by a ‘frequent customer’ could mean something different entirely to another person.
After a persona is clearly identified, there are a few sampling techniques that one can follow, including probability sampling and nonprobability sampling techniques. A carefully-selected sample size of 100 may be considerably more representative of a target population than a thrown-together sample of 2,000.
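As one illustration of a probability-sampling technique, the sketch below stratifies the hypothetical employee survey by region so each time zone is represented in proportion to headcount, rather than by whoever happens to respond first. Region names and headcounts are invented.

```python
# Minimal stratified-sampling sketch: draw from each region in proportion to
# its share of the workforce instead of taking the first responses to arrive.
# Region names and headcounts are hypothetical.
import random

random.seed(42)
workforce = {"Americas": 9_000, "EMEA": 7_000, "APAC": 4_000}
employees = {region: [f"{region}-{i}" for i in range(n)]
             for region, n in workforce.items()}

target_sample = 1_000
total = sum(workforce.values())

sample = []
for region, pool in employees.items():
    quota = round(target_sample * workforce[region] / total)
    sample.extend(random.sample(pool, quota))

print({region: sum(s.startswith(region) for s in sample) for region in workforce})
# -> {'Americas': 450, 'EMEA': 350, 'APAC': 200}
```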
Research teams may counter with the need to meet statistical assumptions that are necessary for conducting popular tests such as a t-test or Analysis of Variance (ANOVA). These types of tests assume a normal distribution, which generally occurs as a sample size increases. But statistics has a solution for when this assumption is violated and provides other options, such as non-parametric testing, which work well for small sample sizes.
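As a sketch of that point, the snippet below runs both a t-test and its non-parametric counterpart, the Mann-Whitney U test, on small, skewed, simulated samples; the data and scale parameters are arbitrary.

```python
# Small, skewed samples: compare a t-test (assumes normality) with the
# non-parametric Mann-Whitney U test. Data are simulated for illustration.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
time_on_task_a = rng.exponential(scale=40, size=15)   # seconds, skewed
time_on_task_b = rng.exponential(scale=60, size=15)

t_stat, t_p = ttest_ind(time_on_task_a, time_on_task_b)
u_stat, u_p = mannwhitneyu(time_on_task_a, time_on_task_b)

print(f"t-test p-value:        {t_p:.3f}")
print(f"Mann-Whitney p-value:  {u_p:.3f}")
```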
In fact, the strongest argument left in favor of large sample sizes has already been discounted. Statisticians know that the larger the sample size, the easier it is to detect small effect sizes at a statistically significant level (digital product managers and marketers have become soberly aware that even a test comparing two identical versions can find a statistically significant difference between the two). But a focused product development process should be immune to this distraction because small effect sizes are of little concern. Not only that, but large effect sizes are almost as easily discovered in small samples as in large samples.
For example, suppose you want to test ideas to improve a form on your website that currently gets filled out by 10% of visitors. For simplicity’s sake, let’s use a confidence level of 95% to accept any changes. To identify just a 1% absolute increase to 11%, you’d need more than 12,000 users, according to Optimizely’s stats engine formula! If you were looking for a 5% absolute increase, you’d only need 223 users.
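For readers who want to see roughly where such numbers come from, below is a classical fixed-horizon approximation for comparing two proportions. It will not reproduce Optimizely's sequential stats-engine figures exactly, and the 80% power and two-sided 5% alpha are my own assumptions; the point is how sharply the required sample shrinks as the effect size grows.

```python
# Classical fixed-horizon sample-size approximation for comparing two
# proportions. This will differ from Optimizely's sequential stats engine;
# it only illustrates how required sample size falls as the effect grows.
from math import ceil
from scipy.stats import norm

def sample_size_per_group(p_baseline, p_variant, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_baseline - p_variant) ** 2)

for lift in (0.01, 0.05):
    n = sample_size_per_group(0.10, 0.10 + lift)
    print(f"Detect a {lift:.0%} absolute lift: ~{n:,} users per group")
```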
But depending on what you’re looking for, even that many users may not be needed, especially if conducting qualitative research. When identifying usability problems across your site, leading UX researchers have concluded that “elaborate usability tests are a waste of resources” because the overwhelming majority of usability issues are discovered with just five testers.
An emphasis on large sample sizes can be a red herring for product stakeholders. Organizations should not be misled away from the real objective of any sample, which is an accurate representation of the identified, target population. Research teams can help product teams identify necessary sample sizes and appropriate statistical tests to ensure that findings are indeed meaningful and cost-effectively attained.
Expand capacity for learning
It might sound like semantics, but data should not drive decision-making. Insights should. And there can be quite a gap between the two, especially when it comes to user insights.
In a recent talk on the topic of big data, Malcolm Gladwell argued that “data can tell us about the immediate environment of consumer attitudes, but it can’t tell us much about the context in which those attitudes were formed.” Essentially, statistics can be a powerful tool for obtaining and processing data, but it doesn’t have a monopoly on research.
Product teams can become obsessed with their Omniture and Optimizely dashboards, but there’s a lot of rich information that can’t be captured with these tools alone. There is simply no replacement for sitting down and talking with a user or customer. Open-ended feedback in particular can lead to insights that simply cannot be discovered by other means. The focus shouldn’t be on interviewing every single user though, but rather on finding a pattern or theme from the interviews you do conduct.
One of the core principles of the scientific method is the concept of replicability—that the results of any single experiment can be reproduced by another experiment. In product management, the importance of this principle cannot be overstated. You’ll presumably need any data from your research to hold true once you engineer the product or feature and release it to a user base, so reproducibility is an inherent requirement when it comes to collecting and acting on user insights.
We’ve far too often seen a product team wielding a single data point to defend a dubious intuition or pet project. But there are a number of factors that could and almost always do bias the results of a test without any intentional wrongdoing. Mistakenly asking a leading question or sourcing a user panel that doesn’t exactly represent your target customer can skew individual test results.
Similarly, and in digital product management especially, customer perceptions and trends evolve rapidly, further complicating data. Look no further than the handful of mobile operating systems which undergo yearly redesigns and updates, leading to constantly elevated user expectations. It’s perilously easy to imitate Homer Simpson’s lapse in thinking, “This year, I invested in pumpkins. They’ve been going up the whole month of October and I got a feeling they’re going to peak right around January. Then, bang! That’s when I’ll cash in.”
So how can product and research teams safely transition from data to insights? Fortunately, we believe statistics offers insight into the answer.
The central limit theorem is one of the foundational concepts taught in every introductory statistics class. It states that the distribution of sample averages tends to be Normal, even when the distribution of the population from which the samples were taken is decidedly not Normal.
Put as simply as possible, the theorem acknowledges that individual samples will almost invariably be skewed, but offers statisticians a way to combine them to collectively generate valid data. Regardless of how confusing or complex the underlying data may be, by performing relatively simple individual experiments, the culminating result can cut through the noise.
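A quick simulation makes the point; the exponential population, sample size, and number of samples below are arbitrary choices.

```python
# Central limit theorem, simulated: a heavily skewed population still yields
# near-Normal sample means. Population shape and sample sizes are arbitrary.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(7)
population = rng.exponential(scale=10, size=100_000)   # decidedly not Normal

sample_means = np.array([rng.choice(population, size=50).mean()
                         for _ in range(2_000)])

print(f"Skew of population:    {skew(population):.2f}")    # strongly skewed
print(f"Skew of sample means:  {skew(sample_means):.2f}")   # close to 0, near-Normal
```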
This theorem provides a useful analogy for product management. To derive value from individual experiments and customer data points, product teams need to practice substantiation through iteration. Even if the results of any given experiment are skewed or outdated, they can be offset by a robust user research process that incorporates both quantitative and qualitative techniques across a variety of environments. The safeguard against pursuing insignificant findings, if you will, is to be mindful not to consider data to be an insight until a pattern has been rigorously established.
Divide no more
The moral of the story is that the nuances in statistics actually do matter. Dogmatically adopting textbook statistics can stifle an organization’s ability to innovate and operate competitively, but ignoring the value and perspective provided by statistics altogether can be similarly catastrophic. By understanding and appropriately applying the core tenets of statistics, product and research teams can begin with a framework for productive dialog about the risks they’re willing to take, the research methodologies they can efficiently but rigorously conduct, and the customer insights they’ll act upon.
Planning a Taxonomy Project
Taxonomy of Spices and Pantries: Part 2
by Grace G Lau
October 20th, 2015
This is part 2 of “Taxonomy of Spices and Pantries,” in which I will be exploring the what, why, and how of taxonomy planning, design, and implementation:
Building the business case for taxonomy
Planning a taxonomy
The many uses of taxonomy
Card sorting to validate a taxonomy
Tree testing a taxonomy
Taxonomy governance
Best practices of enterprise taxonomies
In part 1, I enumerated the business reasons for a taxonomy focus in a site redesign and gave a fun way to explain taxonomy. The kitchen isn’t going to organize itself, so the analogy continues.
I’ve moved every couple of years and it shows in the kitchen. Half-used containers of ground pepper. Scattered bags of star anise. Multiple bags of ground and whole cumin. After a while, people are quick to stuff things into the nearest crammable crevice (until we move again and the IA is called upon to organize the kitchen).
Planning a taxonomy covers the same questions as planning any UX project. Understanding the users and their tasks and needs is a foundation for all things UX. This article will go through the questions you should consider when planning a kitchen, er, um…, a taxonomy project.
Rumination of stuff in my kitchen and the kinds of users and stakeholders the taxonomy needs to be mindful of. Source: Grace Lau.
As with designing any software, application, or website, you'll need to meet with the stakeholders and ask questions:
Purpose: Why? What will the taxonomy be used for?
Users: Who’s using this taxonomy? Who will it affect?
Content: What will be covered by this taxonomy?
Scope: What’s the topic area and limits?
Resources: What are the project resources and constraints?
(Thanks to Heather Hedden, “The Accidental Taxonomist,” p.292)
What’s your primary purpose?
Why are you doing this?
Are you moving, or planning to move? Is your kitchen so disorganized that you can’t find the sugar you needed for soy braised chicken? Is your content misplaced and hard to search?
How often have you found just plain old salt in a different spot? How many kinds of salt do you have anyway–Kosher salt, sea salt, iodized salt, Hawaiian pink salt? (Why do you have so many different kinds anyway? One of my favorite recipe books recommended using red Hawaiian sea salt for kalua pig. Of course, I got it.)
You might be using the taxonomy for tagging or, in librarian terms, indexing or cataloging. Maybe it’s for information search and retrieval. Are you building a faceted search results page? Perhaps this taxonomy is being used for organizing the site content and guiding the end users through the site navigation.
Establishing a taxonomy as a common language also helps build consensus and creates smarter conversations. On making baozi (steamed buns), I overheard a conversation between fathers:
Father-in-law: We need 酵母 [Jiàomǔ] {noun}.
Dad: Yi-see? (Cantonese transliteration of yeast)
Father-in-law: (confused look)
Dad: Baking pow-daa? (Cantonese transliteration of baking powder)
Meanwhile, I look up the Chinese translation of "yeast" in Google Translate while my mother-in-law opens her go-to Chinese dictionary tool. I discover that the dictionary word for "yeast" is 发酵粉 [fājiàofěn] {noun}.
Father-in-law: Ah, so it rises flour: 发面的 [fāmiànde] {verb}
This discovery prompts more discussion about what yeast does and how it is used. At least 15 more minutes pass, with yeast discussed in five different ways, before the fathers agree that they are talking about the same ingredient and its purpose. Eventually, we have this result in our bellies.
Homemade steamed baozi. Apparently, they’re still investigating how much yeast is required for the amount of flour they used. Source: Grace Lau.
Who are the users?
Are they internal? Content creators or editors, working in the CMS?
Are they external users? What’s their range of experience in the domain? Are we speaking with homemakers and amateur cooks or seasoned cooks with many years at various Chinese restaurants?
Looking at the users of my kitchen, I identified the following stakeholders:
Content creators: the people who do the shopping and have to put away the stuff
People who are always in the kitchen: my in-laws
People who are sometimes in the kitchen: me
Visiting users: my parents and friends who often come over for a BBQ/grill party
The cleanup crew: my husband who can’t stand the mess we all make
How do I create a taxonomy for them? First, I attempt to understand their mental models by watching them work in their natural environment and observing their everyday hacks as they complete their tasks. Having empathy for users’ end game—making food for the people they care for—makes a difference in developing the style, consistency, and breadth and depth of the taxonomy.
What content will be covered by the taxonomy?
In my kitchen, we’ll be covering sugars, salts, spices, and staples used for cooking, baking, braising, grilling, smoking, steaming, simmering, and frying.
How did I determine that?
Terminology from existing content. I opened up every cabinet and door in my kitchen and made an inventory.
Search logs. How were users accessing my kitchen? Why? How were users referring to things? What were they looking for?
Storytelling with users. How did you make this? People like to share recipes and I like to watch friends cook. Doing user interviews has never been more fun!
What’s the scope?
Scope can easily get out of hand. Notice that I have not included in my discussion any cookbooks, kitchen hardware and appliances, pots and pans, or anything that’s in the refrigerator or freezer.
You may need a scope document early on to plan releases (if you need them). Perhaps for the first release, I’ll just deal with the frequent use items. Then I’ll move on to occasional use items (soups and desserts).
If the taxonomy you’re developing is faceted—for example, allowing your users to browse your cupboards by particular attributes such as taste, canned vs dried, or weight—your scope should include only those attributes relevant to the search process. For instance, no one really searches for canned goods in my kitchen, so that’s out of scope.
What resources do you have available?
My kitchen taxonomy will be limited. Stakeholders are multilingual, so items will need labelling in English, Simplified Chinese, and pinyin romanization. I had considered building a Drupal site to manage an inventory, but I have neither the funding nor the time to implement such a complex site.
At the same time, what are users' expectations for the taxonomy? Considering the context of the taxonomy's usage is important. How will (or should) a taxonomy empower its users? A good taxonomy is invisible: it shouldn't disrupt users' current workflow, only make it more efficient. Both fathers and my mom are unlikely to stop and use any digital technology to find and look things up.
Most importantly, the completed taxonomy and actual content migration should not conflict with the preparation of the next meal. My baby needs a packed lunch for school, and it's 6 a.m. when I'm preparing it. There's no time to rush around looking for things. Time is limited, and a complete displacement of spices and condiments would disrupt the high-traffic flow in any household. Meanwhile, we're out of soy sauce again, and I'd rather it not be stashed in yet another new home and forgotten. That's why we ended up with three open bottles of soy sauce from different brands.
What else should you consider for the taxonomy?
Understanding the scope of the taxonomy you're building can help prevent scope creep in a taxonomy project. In time, you'll realize that 80% of your time and effort is devoted to research, while 20% goes to actually developing the taxonomy. So, making time for iterations and validation through card sorting and other testing is important in your planning.
In my next article, I will explore the many uses of taxonomy outside of tagging.