
Enhancing the Mind-Meld: A Case of UX Knowledge Transfer
by Mark Richman, January 20th, 2015

Which version of the 'suspended account' dashboard page do you prefer? Version A highlights the service address with black text on a soft yellow background; version B does not highlight the address at all.

Perhaps you don't really care; each one gets the job done in a clear and obvious way. However, as the UX architect of the 'overview' page for a huge telecom leader, it was my job to tell the team which treatment we'd be using.

I was a freelancer with only four months' tenure on this job, and in a company as large, diverse, and complex as this one, four months isn't very long. There are a ton of things to learn: how their teams work, the latest visual standards, the expected fidelity of wireframes, and, most of all, how to pick the 'current' interaction standards out of a site with thousands of pages, many of which were culled from different companies following acquisitions or created at different points in time.

Since I worked off-site, I had limited access to subject matter experts. Time with the Telecom Giant's UX leads was scarce, but Nick, my lead on this project, was a great guy with five years at the company, much of it spent on the Overview page and similar efforts. He and I had spent a lot of phone time going over this effort's various challenges.

Version A, the yellow note treatment, had been created to highlight the suspended location if the "Home Phone" account covered more than one address. After much team discussion, we realized that this scenario could not occur, but since the new design placed what seemed like the proper emphasis on the 'Account Suspended' situation, I was confident that we'd be moving forward with version A.

So why was I surprised when Nick said we'd "obviously" go with version B?

Whenever I start with a new company, I try to do a mind meld with co-workers: understand their approach, learn why they made certain decisions, and absorb their priorities. Unless I'm certain there is a better way, I don't want to go in with my UX guns blazing; I want to know whether they'd already considered other solutions and, if so, why those were rejected. This is especially true in a company like Telecom Giant, which takes user experience seriously.

I'd worked so closely with Nick on this project that I thought I knew his reasoning inside out. When he came to a different conclusion, I wondered whether I'd ever be able to understand the company's driving forces. If I wasn't on the same page as someone who had the same job and a similar perspective, with whom I'd spent hours discussing the project, what chance did I have of seeing eye to eye with a business owner on the other side of the country or a developer halfway across the world?

Historical perspective

Version A (the yellow note treatment) was created by Ken, a visual designer with intimate knowledge of the telco's design standards. It followed other instances where the yellow note was used to highlight an important situation. Version B was the existing model, which had worked well in a section of the site that had been redesigned a year earlier following significant user testing. Because of its success, this section, "Home Usage," was earmarked as the model for future redesigns.

Once I had a bit of distance from the situation, I realized what the problem was.
Although I had worked very closely with Nick, I didn't have the same understanding of the company's priorities. My priorities were:

- Consistency across the site
- Accessibility
- Using the most up-to-date and compelling interaction and design patterns
- Modeling redesign efforts on "Home Usage" where possible

Because Nick had a background in visual design, I assumed he would want to use Ken's design pattern, which seemed both more visually distinct and a better match for the situation. But Nick preferred the Home Usage pattern, and he may have had good reasons. First, Home Usage had been thoroughly tested, and since this was an ecommerce site with many hard-to-disentangle components, that testing could have provided insight into its success factors, especially if individual components had been tested separately. Second, by following the existing pattern, we wouldn't wind up with two different treatments for the same situation. Even if the yellow note treatment was more prominent, was that enough to shoulder the cost of changing the pattern in the existing Home Usage flow?

Now that I knew at least one piece of the puzzle, I wondered how I might have achieved a more complete 'mind meld' with Nick, so that we were more closely in sync.

Know your priorities, and check them out

Simply being aware of the priorities I was following would have given me the chance to discuss them directly with Nick. With so much information to take in, I hadn't thought to clarify my priorities and compare them with my co-workers', but doing so would have made it much easier to sync up.

Other barriers to knowledge transfer

Gabriel Szulanski [1] identified three major barriers to internal knowledge transfer within a business. Although these describe firm-wide knowledge, they seem relevant to individuals as well.

Recipient's lack of absorptive capacity

Absorptive capacity is defined as a firm's "ability to recognize the value of new information, assimilate it, and apply it to commercial ends." [2] To encourage it, companies are urged to embrace the value of R&D and continually evaluate new information. Szulanski notes that such capacity is "largely a function of [the recipient's] preexisting stock of knowledge." [3]

If existing knowledge can either help or hinder the gathering of new information, how might this apply to an individual? As information load increases, your ability to understand it and place it within a mental framework decreases. While the new company may have hired you for your experience and knowledge, you might need to reevaluate some of that knowledge. For instance, it may be difficult to shed and reframe your priorities to be in sync with the new firm's.

Causal ambiguity

Causal ambiguity refers to an inability to precisely articulate the reasons behind a process or capability. According to Szulanski, it exists "when the precise reasons for success or failure in replicating a capability in a new setting cannot be determined."

How did causal ambiguity affect this transfer? While the site's Home Usage section was promoted because of its successful testing and rollout, the reasons behind its success were never made clear. The success of an ecommerce site depends on many factors, among them navigation, the length and content of copy and labels, information density, and the site's interaction design. Since Home Usage's advantages had never been broken down into their components, and I hadn't been there when the usability tests were conducted, I could only see it as a black box.
To truly assimilate new knowledge, you need context. If none is provided, you need to know how to go out and get it. Ask about the reasons behind a model site. If possible, read any test reports. Keep asking until you understand, and validate your conclusions.

An arduous relationship between the source and the recipient

Finally, knowledge transfer depends on the ease of communication and the 'intimacy' between source and recipient. Although my relationship with Nick was close, I worked off-site, which eliminated many informal opportunities for knowledge sharing. I couldn't ask questions during a chance meeting or 'ambush' a manager by waiting for her to emerge from a meeting. Since I didn't have access to Telecom Giant's internal messaging system, I was limited to more formal methods such as email and phone calls.

A model for knowledge transfer

Thomas Jones offered this approach to knowledge transfer in a Quora post: "As they say in the Army: 'an explanation, a demonstration, and a practical application.' Storytelling, modeling, and task assignment … share your stories, model the behaviors you want to see and assign the tasks required to build competency." [4]

Keeping "Home Usage" in mind, the story could be "how we came to follow this model," the demonstration could be the research paper, and the practical application could be your own work, evaluated by your lead.

In conclusion

Your ability to retain new information is essential to your success at a new company. However, your ability to understand the reasons behind that information and place it within a framework is even more important. Some techniques to help you do so:

- Be aware of your own design priorities and how they match the firm's. Treat the company's priorities like any user research problem and check them out with your leads and co-workers.
- To increase your absorptive capacity, evaluate your preconceptions and be prepared to change them.
- Ask for the reasons behind a 'model' design. Read research reports if they are available.
- Maximize your contact points. Follow-up emails can target ambiguous responses. If time with the UX leads is scarce, ask your co-workers about their view of priorities and patterns, and the reasons behind them.

Further reading

[1] Szulanski, G. 1996. "Exploring Internal Stickiness: Impediments to the Transfer of Best Practice Within the Firm." Strategic Management Journal, vol. 17, pp. 27-43.
[2] Absorptive capacity. Wikipedia entry.
[3] Dierickx, Ingemar, and Karel Cool. 1989. "Asset Stock Accumulation and Sustainability of Competitive Advantage." Management Science 35 (December): 1504-1511.
[4] "What patterns of behavior have proven to be most helpful in knowledge transfer?" Quora post.

Comments

Remmert Braat (January 29, 2015): Some interesting points here that ring (painfully) true, although working off-site like that will always be a challenge. Indeed, checking your priorities and validating them with the client is obvious, but so easy to forget in a high-pressure environment.
SVT (January 30, 2015): Great article, Mr. Richman. Thomas Jones' Army quote is something I plan on carrying with me in my work now.

A Beginner's Guide to Web Site Optimization—Part 2: The Optimization Process
by Charles Shimooka, February 3rd, 2015

In the previous article we talked about why site optimization is important and presented a few goals and philosophies to impart to your team. I'd like to switch gears now and talk about something more tactical: process.

Optimization process

Establishing a well-formed, formal optimization process is beneficial for several reasons:

- It organizes the workflow and sets clear expectations for completion.
- It establishes quality control standards that reduce bugs and errors.
- It adds legitimacy to the whole operation, so that if stakeholders question the program, you can explain the logic behind the process.

At a high level, I suggest a weekly or bi-weekly optimization planning session that covers the following activities:

- Review ongoing tests to determine whether they can be stopped or considered "complete" (see the boxed section below). For tests that have reached completion, the possibilities are:
  1. There is a decisive new winner. In this case, plan how to communicate the change and launch it permanently to production.
  2. There is no decisive winner, or the current version (the control group) wins. In this case, determine whether more study is required or whether you should simply move on and drop the experiment.
- Review data sources and brainstorm new test ideas.
- Discuss and prioritize any externally submitted ideas.

How do I know when a test has reached completion?

Completion criteria are a somewhat tricky topic and seemingly guarded industry secrets. They define the minimum requirements that must be true for a test to be declared "completed." My sense from reading and conferences is that there are no widely accepted standards, and that completion criteria really depend on how comfortable your team feels with the uncertainty that is inherent in experimentation.

We created the following minimum completion criteria for my past team at DIRECTV Latin America. Keep in mind that these were bare-bones minimums, and most of our tests actually ran much longer.

- Temporal: Tests must run for a minimum of two weeks to account for variation between days of the week.
- Statistical confidence: We used a 90-95% confidence level for most tests.
- Stability over time: Variations must maintain their positions relative to each other for at least one week.
- Total conversions: A minimum of 200 total conversions.

For further discussion of the rationale behind these completion criteria, see Best Practices When Designing and Running Experiments later in this article.
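As a rough illustration, these minimums can be encoded as a pre-analysis gate that the team runs before even looking at which variation won. The sketch below is a hypothetical Python helper, not part of any testing tool; the thresholds mirror the DIRECTV minimums above, and the daily-leader check is just one possible reading of "stability over time."

```python
from dataclasses import dataclass

@dataclass
class TestSnapshot:
    """Hypothetical summary of an ongoing test, pulled from your analytics tool."""
    days_running: int         # calendar days the test has been live
    confidence: float         # reported confidence that the leader beats the control (0-1)
    total_conversions: int    # conversions summed across all variations
    daily_leaders: list[str]  # winning variation name for each of the most recent days

def meets_minimum_completion_criteria(snapshot: TestSnapshot) -> bool:
    """Check the bare-bones minimums described above; most tests should run longer."""
    ran_two_weeks = snapshot.days_running >= 14
    confident_enough = snapshot.confidence >= 0.90          # the 90-95% band used above
    enough_conversions = snapshot.total_conversions >= 200
    # "Stability over time": the same leader every day for at least the last week.
    last_week = snapshot.daily_leaders[-7:]
    stable_for_a_week = len(last_week) == 7 and len(set(last_week)) == 1
    return ran_two_weeks and confident_enough and enough_conversions and stable_for_a_week

# Example: 18 days in, 93% confidence, 260 conversions, variation B led all week.
snapshot = TestSnapshot(18, 0.93, 260, ["B"] * 7)
print(meets_minimum_completion_criteria(snapshot))  # True
```

A gate like this only tells you when a test may be worth analyzing; it says nothing about which variation to ship.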
The creation of a new optimization test can follow a process similar to your overall product development lifecycle. I suggest the basic structure below.

(Diagram: the detailed optimization process the author has used in the past.)

Step 1: Data analysis and deciding what to test

The first step in the optimization process is figuring out where to focus your efforts. We used the following list as a loose prioritization guideline:

- Recent product releases, or pages that have not yet undergone optimization.
- High-"value" pages:
  1. High revenue (e.g., shopping cart checkout pages, detail pages for your most expensive products).
  2. High traffic (e.g., the homepage, login/logout).
  3. Highly "strategic" (pages that are highly visible internally or that management considers important).
- Poorly performing pages:
  1. Low conversion rate.
  2. High bounce rate (for an excellent discussion of bounce rate, see Avinash Kaushik's article).

Step 2: Brainstorm ideas for improvement

How to improve page performance is a topic as large as the field of user experience itself, and well beyond the scope of this article. One might consider improvements in copywriting, form design, media display, page rendering, visual design, accessibility, browser targeting... the list goes on. My only suggestion for this step is to make it collaborative: harness the power of your team to come up with new ideas for improvement, including not only designers in the brainstorming sessions but also developers, copywriters, business analysts, marketers, QA, and so on. Good ideas can (and often do) come from anywhere. Adaptive Path has a great collaborative-ideation technique called sketchboarding, which uses iterative rounds of group sketching.

Step 3: Write the testing plan

An optimization testing plan acts as the backbone of every test. At a high level, it is used to plan, communicate, and document the history of the experiment; more importantly, it fosters learning by forcing the team to clearly formulate goals and analyze results. A good testing plan should include:

- Test name
- Description
- Goals
- Opportunities (the gains expected if the test goes well)
- Methodology:
  1. The expected dates that the test will be running in production.
  2. Resources (who will be working on the test).
  3. The key metrics to be tracked for the duration of the experiment.
  4. Completion criteria.
  5. Variations (screenshots of the different designs that you will be showing your site visitors).

Here's a sample optimization testing plan to get you started.
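If you keep testing plans under version control, the same sections can live in a small structured record that is easy to diff and review. The sketch below is one possible shape for such a record; the field names are my own, not a standard format, and the sample values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OptimizationTestPlan:
    """One record per experiment, mirroring the plan sections listed above."""
    name: str
    description: str
    goals: list[str]
    opportunities: str                   # expected gains if the test goes well
    start_date: date                     # expected production dates
    end_date: date
    resources: list[str]                 # who is working on the test
    key_metrics: list[str]               # metrics tracked for the duration of the test
    completion_criteria: list[str]
    variations: dict[str, str] = field(default_factory=dict)  # variation name -> screenshot path

# Hypothetical example, loosely based on the trailer test described later in this article.
plan = OptimizationTestPlan(
    name="Trailer CTA prominence",
    description="Larger 'Ver adelanto' call to action on movie detail pages.",
    goals=["Increase pay-per-view purchases"],
    opportunities="Higher PPV revenue from detail-page visitors",
    start_date=date(2015, 2, 1),
    end_date=date(2015, 2, 28),
    resources=["designer", "front-end developer", "analyst"],
    key_metrics=["PPV purchases", "premium package purchases"],
    completion_criteria=["2 weeks minimum", ">= 90% confidence", ">= 200 conversions"],
    variations={"control": "control.png", "large-cta": "large_cta.png"},
)
```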
Step 4: Design and develop the test

Design and development will generally follow an abbreviated version of your organization's product development lifecycle. Since test variations are usually simpler than full-blown product development projects, I try to use a lighter, more agile process. If you do cut corners, skimp only on things like process artifacts and documentation, not on design quality. For example, be sure to perform some basic usability testing and user research on your variations. This small investment will produce better candidates that are more likely to boost conversions.

Step 5: Quality assurance

When performing QA on your variations, be as thorough as you would be with any other code release to production. I recommend at least functional, visual, and analytics QA. Even though many tools let you manipulate your website's UI on the fly, with interfaces that immediately display the results of your changes, the tools are not perfect, and the changes you make might not render correctly across all browsers.

Keep in mind that optimization tools provide one additional luxury that is not usually possible with general website releases: targeting. You can decide to show your variations only to the browsers, platforms, and audiences for which you have performed QA. For example, imagine that your team has only been able to QA a certain A/B test on desktop (but not mobile) browsers. When you configure the test in your optimization tool, you can choose to display it only to visitors using those specific desktop browsers. If one of your variations has a visual bug when viewed on mobile phones, that problem will not affect the accuracy of your test results.

Step 6: Run the test

After QA has completed and you've decided how to allocate traffic to the different designs, it's time to actually run your test. The following are a few best practices to keep in mind before pressing the "Go" button.

1. Variations must be run concurrently

This first principle is almost so obvious that it goes without saying, but I've often heard the following story from teams that do not perform optimization: "After we launched our new design, we saw our [sales, conversions, etc.] increase by X%. So the new design must be better." The problem with this logic is that you don't know what other factors were at play before and after the change launched. Perhaps traffic to that page increased in quantity or quality after the new design was released. Perhaps the conversion rate was rising anyway, due to better brand recognition, seasonal variation, or just random chance. For these and many other reasons, variations must be run concurrently, not sequentially. This is the only way to hold all other factors constant and level the playing field between your different designs.
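Running variations concurrently usually comes down to splitting live traffic at request time, which your optimization tool normally handles for you. The idea itself is simple enough to sketch: hash a stable visitor identifier so that each visitor is deterministically, and permanently, assigned to one variation. This illustrates the principle only; it is not any particular tool's assignment logic, and the visitor ID and weights are assumptions.

```python
import hashlib

def assign_variation(visitor_id: str, variations: list[str], weights: list[float]) -> str:
    """Deterministically map a visitor to a variation according to traffic weights."""
    # Hash the visitor ID to a stable number in [0, 1); the same visitor
    # always lands in the same bucket for this experiment.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    cumulative = 0.0
    for variation, weight in zip(variations, weights):
        cumulative += weight
        if bucket < cumulative:
            return variation
    return variations[-1]  # guard against floating-point rounding

# 50/50 split between the control and the new design, decided at request time.
print(assign_variation("visitor-123", ["control", "variation-b"], [0.5, 0.5]))
```

Because the assignment depends only on the visitor ID, both designs collect data over exactly the same period, which is what levels the playing field.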
2. Always track multiple conversion metrics

In one A/B test on the movie detail pages of the DIRECTV Latin America sites, we increased the size and prominence of the "Ver adelanto" (view trailer) call to action, guessing that if people watched a movie's trailer, it might entice them to buy more pay-per-view movies from the site. Our initial hunch was right: after a few weeks we saw that pay-per-view purchases were 4.8% higher with this variation than with the control. That increase would have amounted to a revenue boost of about $18,000 per year in pay-per-view purchases. Not bad for one simple test.

Fortunately, though, since we were also tracking other site goals, we noticed that this variation also decreased purchases of our premium channel packages (i.e., HBO and Showtime packages) by a whopping 25%. That would have reduced total revenue by far more than the uptick in pay-per-views, and because of this we did not launch the variation to production. Keep in mind that changes may affect your site in ways you never would have expected. Always track multiple conversion metrics with every test.

3. Tests should reach a comfortable level of statistical significance

I recently saw a presentation in which a consultant suggested that preliminary tests on email segmentation had yielded some very promising results. (Chart: conversion rates per 1,000 emails sent.) The last segment of users in the chart (those who had logged in more than four times in the past year) had a conversion rate of .00139% (.139 upgrades per 1,000 emails sent). Even though that conversion rate is dismally low by any standard, according to the consultant it represented an increase of 142% over the base segment of users, and thus a very promising result.

Aside from the obvious lack of actionable utility (does this study suggest that emails only be sent to users who have logged in more than four times?), the test contained another glaring problem. The "Upgrades" column at the top of the spreadsheet shows that the results were based on only five individuals purchasing an upgrade. Five individuals out of almost eighty-four thousand emails sent! If, by pure chance, just one more person had purchased an upgrade in any of the segments, it could have completely changed the study's implications. While this example is not actually an optimization test but rather an email segmentation study, it conveys an important lesson: don't declare a winner until your test has reached a "comfortable" level of significance.

So what does "comfortable" mean? Scientific publishing requires strict definitions to use the terms "significant" (95% confidence level) and "highly significant" (99% confidence level). Even with these definitions, there is still a 5% and 1% chance, respectively, of your conclusions being wrong. Also keep in mind that higher confidence levels require more data (i.e., more website traffic), which translates into longer test durations. Because of these factors, I recommend less stringent standards for most optimization tests: somewhere around 90-95% confidence, depending on the gravity of the situation (higher confidence for tests with more serious consequences or implications). Ultimately, your team must decide on confidence levels that balance test duration against certainty in the results, but I would argue that if you perform a lot of testing, the larger number of true winners will make up for the fewer (but inevitable) false positives.
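For a plain A/B comparison of conversion rates, the confidence behind a "winner" can be estimated with a two-proportion z-test. The sketch below uses only Python's standard library and a normal approximation; real optimization tools do considerably more (sequential corrections, Bayesian models), so treat it as a back-of-the-envelope check rather than a substitute. The sample numbers are a hypothetical split of the five upgrades from the email example above.

```python
from math import sqrt, erf

def conversion_confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided confidence that A and B have different conversion rates,
    estimated with a two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    z = (p_b - p_a) / se
    # Confidence = 1 - two-sided p-value, using the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return 1 - p_value

# A hypothetical 2-vs-3 split of five upgrades across ~84,000 emails:
print(round(conversion_confidence(conv_a=2, n_a=42_000, conv_b=3, n_b=42_000), 2))
# Prints a confidence well below 0.90 - far too little data to call a winner.
```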
4. Test duration should account for natural variations (such as weekday/weekend differences), and results should be stable over time

In a 2012 article on AnalyticsInspector.com, Jan Petrovic highlights an important pitfall of ending tests too early. He describes an A/B test he ran for a high-traffic site in which, after only a day, the testing tool reported that a winning variation had increased the primary conversion rate by an impressive 87%, with 100% confidence.

Jan writes, "If we stopped the test then and pat each other on the shoulder about how great we were, then we would probably make a very big mistake. The reason for that is simple: we didn't test our variation on Friday or Monday traffic, or on weekend traffic. But, because we didn't stop the test (because we knew it was too early), our actual result looked very different."

(Chart: the new design's results over time.)

After continuing the test for four weeks, Jan saw that the new design, although still better than the control, had leveled out to a more reasonable 10.49% improvement once natural daily variation was taken into account. He writes, "Let's say you were running this test in checkout, and on the following day you say to your boss something like 'hey boss, we just increased our site revenue by 87.25%'. If I was your boss, you would make me extremely happy and probably would increase your salary too. So we start celebrating…" Jan's fable continues with the boss checking the bank account at the end of the month and, upon seeing that sales had not actually increased by the 87% you initially reported, reconsidering your salary increase.

The moral of the story: consider temporal variations in the behavior of your site visitors, including differences between weekday and weekend traffic and even seasonal traffic.

Step 7: Analyze and report on the results

After your test has run its course and your team has decided to press the "stop" button, it's time to compile the results into an optimization test report. The report can be a continuation of the testing plan from Step 3, with the following additional sections:

- Results
- Discussion
- Next steps

It is helpful to include graphs and details in the Results section so that readers can see trends and analyze the data themselves. This adds credibility to your studies and, with luck, gets people invested in the optimization program. The Discussion section is useful for explaining details and speculating on the reasons for the observed results. It forces the team to think more deeply about user behavior and is an invaluable step toward designing future improvements.

Conclusion

In this article I've presented a detailed, practical process that your team can customize for its own use. In the next and final article of this series, I'll wrap things up with suggestions for communication planning, team composition, and tool selection.
Unless I’m certain there is a better way, I don’t want to go in with my UX guns blazing—I want to know whether they’d already considered other solutions, and if so, why they were rejected. This is especially true in a company like Telecom Giant, which takes user experience seriously. I’d worked so closely with Nick on this project that I thought I knew his reasoning inside out. And when he came to a different conclusion, I wondered whether I’d ever be able to understand the company’s driving forces. If I wasn’t on the same page with someone who had the same job and a similar perspective, with whom I’d spent hours discussing the project, what chance did I have of seeing eye-to-eye with a business owner on the other side of the country or a developer halfway across the world? Historical perspective Version A (the yellow note treatment) was created by Ken, a visual designer who had an intimate knowledge of the telco’s design standards. This adhered to other instances where the yellow note was used to highlight an important situation. Version B was the existing model, which had worked well in a section of the site that had been redesigned a year ago following significant user testing. Because of its success, this section–“Home Usage”–was earmarked as the model for future redesigns. Once I had a bit of distance from the situation, I realized what the problem was. Although I had worked very closely with Nick, I didn’t have the same understanding of the company’s priorities. My priorities were: Consistency across the site Accessibility Using the most up to date and compelling interaction and design patterns Modeling redesign efforts on “Home Usage” where possible Because Nick had a background in visual design, I thought that he would want to use Ken’s design pattern, which seemed both more visually distinct and a better match for the situation. But Nick preferred the Home Usage pattern and may have had good reasons to think so. First, Home Usage had been thoroughly tested, and since this was an ecommerce site with many hard-to-disentangle components, testing could have provided insight into its success factors, especially if individual components had been tested separately. Second, by following the existing pattern, we wouldn’t wind up with two different treatments for the same situation. Even though the yellow note treatment might be more prominent, was it significant enough to shoulder the cost of changing the pattern in the existing Home Usage flow? Now that I knew at least one piece of the puzzle, I wondered how I might have achieved a more complete ‘mind meld’ with Nick, so that we were more closely in sync. Know your priorities—and check them out Just being aware of the priorities I was following would have offered me the chance to discuss them directly with Nick. With so much information to take in, I hadn’t thought to clarify my priorities and compare them with my co-workers, but this would have made it easier to sync up. Other barriers to knowledge transfer Gabriel Szulanski1 identified three major barriers to internal knowledge transfer within a business. Although these are aimed at firm-wide knowledge, they seem relevant here for individuals as well: Recipient’s lack of absorptive capacity Absorptive capacity is defined as a firm’s “ability to recognize the value of new information, assimilate it, and apply it to commercial ends.”2 To encourage this, companies are urged to embrace the value of R&D and continually evaluate new information. 
Szulanski notes that such capacity is “largely a function of (the recipient’s) preexisting stock of knowledge.”3 If existing knowledge might help or hinder gathering new information, how might we apply this to an individual? As information load increases, it lessens your ability to understand it and properly place it within a mental framework. While the new company may have hired you for your experience and knowledge, you might need to reevaluate some of that knowledge. For instance, it may be difficult to shed and reframe your priorities to be in sync with the new firm. Causal ambiguity Causal ambiguity refers to an inability to precisely articulate the reasons behind a process or capability. According to Szulanski, this exists “when the precise reasons for success or failure in replicating a capability in a new setting cannot be determined.” How did causal ambiguity affect this transfer? While the site’s Home Usage section was promoted because of its successful testing and rollout, the reasons behind its success were never clear. Success of an ecommerce site depends on many factors, among them navigation, length and content of copy and labels, information density, and the site’s interaction design. Since Home Usage’s advantages had never been broken down into its components, and I hadn’t been there when usability tests were conducted, I could only see it as a black box. To truly assimilate new knowledge, you need context. If none is provided, you need to know how to go out and get it. Ask about the reasons behind a model site. If possible, read any test reports. Keep asking until you understand and validate your conclusions. An arduous relationship between the source and the recipient Finally, knowledge transfer depends on the ease of communication and ‘intimacy’ between the source and recipient. Although my relationship with Nick was close, I worked off-site, which eliminated many informal opportunities for knowledge sharing. I couldn’t ask questions during a chance meeting or ‘ambush’ a manager by waiting for her to emerge from a meeting. Since I didn’t have access to Telecom Giant’s internal messaging system, I was limited to more formal methods such as email or phone calls. A model for knowledge transfer Thomas Jones offered this approach to knowledge transfer in a Quora post: “As they say in the Army: ‘an explanation, a demonstration, and a practical application.’ Storytelling, modeling, and task assignment … share your stories, model the behaviors you want to see and assign the tasks required to build competency.”4 Keeping “Home Usage” in mind, the story could be “how we came to follow this model,” the demonstration could be the research paper, and a practical application could be your work, evaluated by your lead. In conclusion Your ability to retain new information is essential to your success at a new company. However, your ability to understand the reasons behind the information and place these within a framework are even more important. Some techniques to help you do so are: Be aware of your own design priorities and how they match with the firm’s. Treat the company’s priorities like any user research problem and check them out with your leads and co-workers. To increase your absorptive capacity, evaluate your preconceptions and be prepared to change them. Ask for the reasons behind a ‘model’ design. Read research reports if available. Maximize your contact points. Follow-up emails can target ambiguous responses. 
If time with the UX leads is scarce, ask your co-workers about their view of priorities, patterns and the reasons behind them. Further reading 1 Szulanski, G 1996, ‘Exploring Internal Stickiness: Impediments to the Transfer of Best Practice within the Firm’, Strategic Management Journal, vol. 17, pp. 27-43. 2 Absorptive capacity. Wikipedia entry. 3 Dierickx, Ingemar and Karel Cool. 1989. “Asset stock accumulation and sustainability of competitive advantage.” Management Science. 35 (December): 1504-1511. 4 “What patterns of behavior have proven to be most helpful in knowledge transfer?” Quora post. Share this: EmailTwitter65RedditLinkedIn33Facebook21Google Posted in Learning From Others, Methods | 3 Comments » 3 Comments Auto New Cars January 26, 2015 at 12:58 pm Nice Info, Thank’s,,,, Remmert Braat January 29, 2015 at 10:55 am Some interesting points here that ring (painfully) true – although working offsite like that will allways be a challenge. Indeed checking your priorities and validating them with the client is obvious but so easy to forget in a high pressured environment. SVT January 30, 2015 at 1:43 am great article Mr. Richman. Thomas Jones’ Army quote is something I plan on carrying with me in my work now. A Beginner’s Guide to Web Site Optimization—Part 2 The optimization process by Charles Shimooka February 3rd, 2015 2 Comments In the previous article we talked about why site optimization is important and presented a few important goals and philosophies to impart on your team. I’d like to switch gears now and talk about more tactical stuff, namely, process. Optimization process Establishing a well-formed, formal optimization process is beneficial for the following reasons. It organizes the workflow and sets clear expectations for completion. Establishes quality control standards to reduce bugs/errors. Adds legitimacy to the whole operation so that if questioned by stakeholders, you can explain the logic behind the process. At a high level, I suggest a weekly or bi-weekly optimization planning session to perform the following activities: Review ongoing tests to determine if they can be stopped or considered “complete” (see the boxed section below). For tests that have reached completion, the possibilities are: There is a decisive new winner. In this case, plan how to communicate and launch the change permanently to production. There is no decisive winner or the current version (control group) wins. In this case, determine if more study is required or if you should simply move on and drop the experiment. Review data sources and brainstorm new test ideas. Discuss and prioritize any externally submitted ideas. How do I know when a test has reached completion? Completion criteria are a somewhat tricky topic and seemingly guarded industry secrets. These define the minimum requirements that must be true in order for a test to be declared “completed.” My personal sense from reading/conferences is that there are no widely-accepted standards and that completion criteria really depend on how comfortable your team feels with the uncertainty that is inherent in experimentation. We created the following minimum completion criteria for my past team at DIRECTV Latin America. Keep in mind that these were bare-bones minimums, and that most of our tests actually ran much longer. Temporal: Tests must run for a minimum of two weeks to account for variation between days of the week. Statistical confidence: We used a 90-95% confidence interval for most tests. 
Stability over time: Variations must maintain their positions relative to each other for at least one week. Total conversions: Minimum of 200 total conversions. For further discussion of the rationale behind these completion criteria, please see Best Practices When Designing and Running Experiments later in this article. The creation of a new optimization test may follow a process that is similar to your overall product development lifecycle. I suggest the following basic structure: Process-diagram-abbreviated The following diagram shows a detailed process that I’ve used in the past. A detailed process that the author has used in the past. Step 1: Data analysis and deciding what to test Step one in the optimization process is figuring out where to first focus your efforts. We used the following list as a loose prioritization guideline: Recent product releases, or pages that have not yet undergone optimization. High “value” pages 1. High revenue (ie. shopping cart checkout pages, detail pages of your most expensive products, etc…). 2. High traffic (ie. homepage, login/logout). 3. Highly “strategic” (this might include pages that are highly visible internally or that management considers important). Poorly performing pages 1. Low conversion rate 2. High bounce rate (for an excellent discussion of bounce rate, see Avinash Kaushik’s article). Step 2: Brainstorm ideas for improvement Ideas for how to improve page performance is a topic that is as large as the field of user experience itself, and definitely greater than the scope of this article. One might consider improvements in copywriting, form design, media display, page rendering, visual design, accessibility, browser targeting… the list goes on. My only suggestion for this process is to make it collaborative – harness the power of your team to come up with new ideas for improvement, not only including designers in the brainstorming sessions, but also developers, copywriters, business analysts, marketers, QA, etc… Good ideas can (and often do) come from anywhere. Adaptive Path has a great technique of collaborative ideation that they call sketchboarding, which uses iterative rounds of group sketching. Step 3: Write the testing plan An Optimization Testing Plan acts as the backbone of every test. At a high level, it is used to plan, communicate, and document the history of the experiment, but more importantly, it fosters learning by forcing the team to clearly formulate goals and analyze results. A good testing plan should include: Test name Description Goals Opportunities (what gains will come about if the test goes well) Methodology 1. Expected dates that the test will be running in production. 2. Resources (who will be working on the test). 3. Key metrics to be tracked through the duration of the experiment. 4. Completion criteria. 5. Variations (screenshots of the different designs that you will be showing your site visitors). Here’s a sample optimization testing plan to get you started. Step 4: Design and develop the test Design and development will generally follow an abbreviated version of your organization’s product development lifecycle. Since test variations are generally simpler than full-blown product development projects, I try to use a lighter, more agile process. Be sure that if you do cut corners, only skimp on things like process artifacts or documentation, and not on design quality. For example, be sure to perform some basic usability testing and user research on your variations. 
This small investment will create better candidates that will be more likely to boost conversions. Step 5: Quality assurance When performing QA on your variations, be as thorough as you would with any other code release to production. I recommend at least functional, visual, and analytics QA. Even though many tools allow you to manipulate your website’s UI on the fly using interfaces that immediately display the results of your changes, the tools are not perfect and any changes that you make might not render perfectly across all browsers. Keep in mind that optimization tools provide you one additional luxury that is not usually possible with general website releases – that of targeting. You can decide to show your variations to only the target browsers, platforms, audiences, etc… for which you have performed QA. For example, let’s imagine that your team has only been able to QA a certain A/B test on desktop (but not mobile) browsers. When you actually configure this test in your optimization tool, you can decide to only display the test to visitors with those specific desktop browsers. If one of your variations has a visual bug when viewed on mobile phones, for example, that problem should not affect the accuracy of your test results. Step 6: Run the Test After QA has completed and you’ve decided how to allocate traffic to the different designs, it’s time to actually run your test. The following are a few best practices to keep in mind before pressing the “Go” button. 1. Variations must be run concurrently This first principle is almost so obvious that it goes without saying, but I’ve often heard the following story from teams that do not perform optimization: “After we launched our new design, we saw our [sales, conversions, etc…] increase by X%. So the new design must be better.” The problem with this logic is that you don’t know what other factors might have been at play before and after the new change launched. Perhaps traffic to that page increased in either quantity or quality after the new design released. Perhaps the conversion rate was on the increase anyway, due to better brand recognition, seasonal variation, or just random chance. Due to these and many other reasons, variations must be run concurrently and not sequentially. This is the only way to hold all other factors consistent and level the playing field between your different designs. 2. Always track multiple conversion metrics One A/B test that we ran on the movie detail pages of the DIRECTV Latin American sites was the following: we increased the size and prominence of the “Ver adelanto” (View trailer) call to action, guessing that if people watched the movie trailer, it might excite them to buy more pay-per-view movies from the web site. We increased the size and prominence of the “Ver adelanto” (View trailer) call to action, guessing that if people watched the movie trailer, it might excite them to buy more pay-per-view movies from the web site. Our initial hunch was right, and after a few weeks we saw that pay-per-views purchases were 4.8% higher with this variation over the control. This increase would have resulted in a revenue boost of about $18,000/year in pay-per-view purchases. Not bad for one simple test. Fortunately though, since we were also tracking other site goals, we noticed that this variation also decreased purchases of our premium channel packages (ie. HBO and Showtime packages) by a whopping 25%! 
This would have decreased total revenue by a much greater amount than the uptick in pay-per-views, and because of this, we did not launch this variation to production. It’s important to keep in mind that changes may affect your site in ways that you never would have expected. Always track multiple conversion metrics with every test. 3. Tests should reach a comfortable level of statistical significance I recently saw a presentation in which a consultant suggested that preliminary tests on email segmentation had yielded some very promising results. Chart showing conversion rates per 1000 emails sent. In the chart above, the last segment of users (those who had logged in more than four times in the past year) had a conversion rate of .00139% (.139 upgrades per 1000 emails sent). Even though a conversion rate of .00139% is dismally low by any standards, according to the consultant it represented an increase of 142% compared to the base segment of users, and thus, a very promising result. Aside from the obvious lack of actionable utility (does this study suggest that emails only be sent to users who have logged in more than four times?) the test contained another glaring problem. If you look at the “Upgrades” column at the top of the spreadsheet, you will see that the results were based on only five individuals purchasing an upgrade. Five total individuals out of almost eighty four thousand emails sent! So if, by pure chance, only one other person had purchased an upgrade in any of the segments, it could have completely changed the study’s implications. While this example is not actually an optimization test but rather just an email segmentation study, it does convey an important lesson: don’t declare a winner for your tests until it has reached a “comfortable” level of significance. So what does “comfortable” mean? The field of science requires strict definitions to use the terms “significant” (95% confidence level) and “highly significant” (99% confidence level) when publishing results. Even with these definitions, it still means that there is a 5% and 1% chance, respectively, of your conclusions being wrong. Also keep in mind that higher confidence intervals require more data (ie. more website traffic) which translates into longer test durations. Because of these factors, I would recommend less stringent standards for most optimization tests – somewhere around 90-95% confidence depending on the gravity of the situation (higher confidence intervals for tests with more serious consequences or implications). Ultimately, your team must decide on confidence intervals that reflect a compromise between test duration and results certainty, but I would propose that if you perform a lot of testing, the larger number of true winners will make up for the fewer (but inevitable) false positives. 4. The duration of your tests should account for any natural variations (such as between weekdays/weekends) and be stable over time In a 2012 article on AnalyticsInspector.com, Jan Petrovic brings to light an important pitfall of ending your tests too early. He discusses an A/B test that he ran for a high-traffic site in which, after only a day, the testing tool reported that a winning variation had increased the primary conversion rate by an impressive 87%, with a 100% confidence interval. The duration of your tests should account for any natural variations (such as between weekdays/weekends) and be stable over time. 
Jan writes, “If we stopped the test then and pat each other on the shoulder about how great we were, then we would probably make a very big mistake. The reason for that is simple: we didn’t test our variation on Friday or Monday traffic, or on weekend traffic. But, because we didn’t stop the test (because we knew it was too early), our actual result looked very different.” Chart showing new design results over time. After continuing the test for four weeks, Jan saw that the new design, although still better than the control, had leveled out to a more reasonable 10.49% improvement since it had now taken into account natural daily variation. He writes, “Let’s say you were running this test in checkout, and on the following day you say to your boss something like ‘hey boss, we just increased our site revenue by 87.25%’. If I was your boss, you would make me extremely happy and probably would increase your salary too. So we start celebrating…” Jan’s fable continues with the boss checking the bank account at the end of the month, and upon seeing that sales had actually not increased by the 87% that you had initially reported, reconsiders your salary increase. The moral of the story: Consider temporal variations in the behavior of your site visitors, including differences between weekday and weekend or even seasonal traffic. Step 7: Analyze and Report on the Results After your test has run its course and your team has decided to press the “stop” button, it’s time to compile the results into an Optimization Test Report. The Optimization Test Report can be a continuation of the Test Plan from Step 2, but with the following additional sections: Results Discussion Next steps It is helpful to include graphs and details in the Results section so that readers can visually see trends and analyze data themselves. This will add credibility to your studies and hopefully get people invested in the optimization program. The discussion section is useful for explaining details and postulating on the reasons for the observed results. This will force the team to think more deeply about user behavior and is an invaluable step towards designing future improvements. Conclusion In this article, I’ve presented a detailed and practical process that your team can customize to its own use. In the next and final article of this series, I’ll wrap things up with suggestions for communication planning, team composition, and tool selection. Share this: EmailTwitter70RedditLinkedIn17Facebook35Google Posted in Discovery, Research, and Testing, Process and Methods | 2 Comments » 2 CommentsThe Freelance Studio Denver, Co. User Experience Agency Enhancing the Mind-Meld A Case of UX Knowledge Transfer by Mark Richman January 20th, 2015 3 Comments Which version of the ‘suspended account’ dashboard page do you prefer? Version A Version A highlights the address with black text on a soft yellow background. Version B Version B does not highlight the service address. Perhaps you don’t really care. Each one gets the job done in a clear and obvious way. However, as the UX architect of the ‘overview’ page for a huge telecom leader, it was my job to tell the team which treatment we’d be using. I was a freelancer with only four months tenure on this job, and in a company as large, diverse, and complex as this one, four months isn’t a very long time. 
There are a ton of things to learn—how their teams work, the latest visual standards, expected fidelity of wireframes, and most of all, selecting the ‘current’ interaction standards from a site with thousands of pages, many of which were culled from different companies following acquisitions or created at different points in time. Since I worked off-site, I had limited access to subject matter experts. Time with the Telecom Giant’s UX leads is scarce, but Nick, my lead on this project , was a great guy with five years at the company, much of it on the Overview page and similar efforts. He and I had spent a lot of phone time going over this effort’s various challenges. Version A, the yellow note treatment, had been created to highlight the suspended location if the “Home Phone” account covered more than one address. After much team discussion, we realized that this scenario could not occur, but since the new design placed what seemed like the proper emphasis on the ‘Account Suspended’ situation, I was confident that we’d be moving forward with version A.The Freelance Studio Denver, Co. User Experience AgencyThe Freelance Studio Denver, Co. User Experience AgencyThe Freelance Studio Denver, Co. User Experience AgencyThe Freelance Studio Denver, Co. User Experience AgencyThe Freelance Studio Denver, Co. User Experience AgencyThe Freelance Studio Denver, Co. User Experience AgencyThe Freelance Studio Denver, Co. User Experience Agency So, why was I surprised when Nick said we’d “obviously” go with version B? Whenever I start with a new company, I try to do a mind meld with co-workers to understand their approach, why they made certain decisions, and learn their priorities. Unless I’m certain there is a better way, I don’t want to go in with my UX guns blazing—I want to know whether they’d already considered other solutions, and if so, why they were rejected. This is especially true in a company like Telecom Giant, which takes user experience seriously. I’d worked so closely with Nick on this project that I thought I knew his reasoning inside out. And when he came to a different conclusion, I wondered whether I’d ever be able to understand the company’s driving forces. If I wasn’t on the same page with someone who had the same job and a similar perspective, with whom I’d spent hours discussing the project, what chance did I have of seeing eye-to-eye with a business owner on the other side of the country or a developer halfway across the world? Historical perspective Version A (the yellow note treatment) was created by Ken, a visual designer who had an intimate knowledge of the telco’s design standards. This adhered to other instances where the yellow note was used to highlight an important situation. Version B was the existing model, which had worked well in a section of the site that had been redesigned a year ago following significant user testing. Because of its success, this section–“Home Usage”–was earmarked as the model for future redesigns. Once I had a bit of distance from the situation, I realized what the problem was. Although I had worked very closely with Nick, I didn’t have the same understanding of the company’s priorities. 
My priorities were: Consistency across the site Accessibility Using the most up to date and compelling interaction and design patterns Modeling redesign efforts on “Home Usage” where possible Because Nick had a background in visual design, I thought that he would want to use Ken’s design pattern, which seemed both more visually distinct and a better match for the situation. But Nick preferred the Home Usage pattern and may have had good reasons to think so. First, Home Usage had been thoroughly tested, and since this was an ecommerce site with many hard-to-disentangle components, testing could have provided insight into its success factors, especially if individual components had been tested separately. Second, by following the existing pattern, we wouldn’t wind up with two different treatments for the same situation. Even though the yellow note treatment might be more prominent, was it significant enough to shoulder the cost of changing the pattern in the existing Home Usage flow? Now that I knew at least one piece of the puzzle, I wondered how I might have achieved a more complete ‘mind meld’ with Nick, so that we were more closely in sync. Know your priorities—and check them out Just being aware of the priorities I was following would have offered me the chance to discuss them directly with Nick. With so much information to take in, I hadn’t thought to clarify my priorities and compare them with my co-workers, but this would have made it easier to sync up. Other barriers to knowledge transfer Gabriel Szulanski1 identified three major barriers to internal knowledge transfer within a business. Although these are aimed at firm-wide knowledge, they seem relevant here for individuals as well: Recipient’s lack of absorptive capacity Absorptive capacity is defined as a firm’s “ability to recognize the value of new information, assimilate it, and apply it to commercial ends.”2 To encourage this, companies are urged to embrace the value of R&D and continually evaluate new information. Szulanski notes that such capacity is “largely a function of (the recipient’s) preexisting stock of knowledge.”3 If existing knowledge might help or hinder gathering new information, how might we apply this to an individual? As information load increases, it lessens your ability to understand it and properly place it within a mental framework. While the new company may have hired you for your experience and knowledge, you might need to reevaluate some of that knowledge. For instance, it may be difficult to shed and reframe your priorities to be in sync with the new firm. Causal ambiguity Causal ambiguity refers to an inability to precisely articulate the reasons behind a process or capability. According to Szulanski, this exists “when the precise reasons for success or failure in replicating a capability in a new setting cannot be determined.” How did causal ambiguity affect this transfer? While the site’s Home Usage section was promoted because of its successful testing and rollout, the reasons behind its success were never clear. Success of an ecommerce site depends on many factors, among them navigation, length and content of copy and labels, information density, and the site’s interaction design. Since Home Usage’s advantages had never been broken down into its components, and I hadn’t been there when usability tests were conducted, I could only see it as a black box. To truly assimilate new knowledge, you need context. If none is provided, you need to know how to go out and get it. 
- Ask about the reasons behind a model site.
- If possible, read any test reports.
- Keep asking until you understand and can validate your conclusions.

An arduous relationship between the source and the recipient

Finally, knowledge transfer depends on the ease of communication and ‘intimacy’ between the source and recipient. Although my relationship with Nick was close, I worked off-site, which eliminated many informal opportunities for knowledge sharing. I couldn’t ask questions during a chance meeting or ‘ambush’ a manager by waiting for her to emerge from a meeting. Since I didn’t have access to Telecom Giant’s internal messaging system, I was limited to more formal methods such as email or phone calls.

A model for knowledge transfer

Thomas Jones offered this approach to knowledge transfer in a Quora post: “As they say in the Army: ‘an explanation, a demonstration, and a practical application.’ Storytelling, modeling, and task assignment … share your stories, model the behaviors you want to see and assign the tasks required to build competency.” [4] Keeping “Home Usage” in mind, the story could be “how we came to follow this model,” the demonstration could be the research paper, and a practical application could be your work, evaluated by your lead.

In conclusion

Your ability to retain new information is essential to your success at a new company. However, your ability to understand the reasons behind that information and to place it within a framework is even more important. Some techniques to help you do so are:

- Be aware of your own design priorities and how they match the firm’s. Treat the company’s priorities like any user research problem and check them out with your leads and co-workers.
- To increase your absorptive capacity, evaluate your preconceptions and be prepared to change them.
- Ask for the reasons behind a ‘model’ design. Read research reports if available.
- Maximize your contact points. Follow-up emails can target ambiguous responses. If time with the UX leads is scarce, ask your co-workers about their view of the priorities and patterns, and the reasons behind them.

Further reading

1. Szulanski, G. (1996). “Exploring Internal Stickiness: Impediments to the Transfer of Best Practice Within the Firm.” Strategic Management Journal, vol. 17, pp. 27-43.
2. “Absorptive capacity.” Wikipedia entry.
3. Dierickx, I., and Cool, K. (1989). “Asset Stock Accumulation and Sustainability of Competitive Advantage.” Management Science, vol. 35 (December), pp. 1504-1511.
4. “What patterns of behavior have proven to be most helpful in knowledge transfer?” Quora post.

Posted in Learning From Others, Methods

3 Comments

Auto New Cars (January 26, 2015): Nice info, thanks.
Remmert Braat (January 29, 2015): Some interesting points here that ring (painfully) true – although working off-site like that will always be a challenge. Indeed, checking your priorities and validating them with the client is obvious, but so easy to forget in a high-pressure environment.
SVT (January 30, 2015): Great article, Mr. Richman. Thomas Jones’ Army quote is something I plan on carrying with me in my work now.

A Beginner’s Guide to Web Site Optimization—Part 2: The Optimization Process
by Charles Shimooka, February 3rd, 2015

In the previous article we talked about why site optimization is important and presented a few important goals and philosophies to impart to your team.
I’d like to switch gears now and talk about more tactical stuff, namely, process.

Optimization process

Establishing a well-formed, formal optimization process is beneficial for the following reasons:

- It organizes the workflow and sets clear expectations for completion.
- It establishes quality control standards that reduce bugs and errors.
- It adds legitimacy to the whole operation, so that if stakeholders question it, you can explain the logic behind the process.

At a high level, I suggest a weekly or bi-weekly optimization planning session to perform the following activities:

- Review ongoing tests to determine whether they can be stopped or considered “complete” (see the boxed section below). For tests that have reached completion, the possibilities are:
  1. There is a decisive new winner. In this case, plan how to communicate the change and launch it permanently to production.
  2. There is no decisive winner, or the current version (the control group) wins. In this case, determine whether more study is required or whether you should simply move on and drop the experiment.
- Review data sources and brainstorm new test ideas.
- Discuss and prioritize any externally submitted ideas.

How do I know when a test has reached completion?

Completion criteria are a somewhat tricky topic and seemingly guarded industry secrets. They define the minimum requirements that must be true for a test to be declared “completed.” My personal sense from reading and conferences is that there are no widely accepted standards, and that completion criteria really depend on how comfortable your team feels with the uncertainty that is inherent in experimentation. We created the following minimum completion criteria for my past team at DIRECTV Latin America. Keep in mind that these were bare-bones minimums, and that most of our tests actually ran much longer.

- Temporal: Tests must run for a minimum of two weeks to account for variation between days of the week.
- Statistical confidence: We used a 90-95% confidence level for most tests.
- Stability over time: Variations must maintain their positions relative to each other for at least one week.
- Total conversions: A minimum of 200 total conversions.

For further discussion of the rationale behind these completion criteria, see Best Practices When Designing and Running Experiments later in this article. A minimal sketch of how these checks might be automated appears after Step 1, below.

The creation of a new optimization test may follow a process that is similar to your overall product development lifecycle. I suggest a basic structure (shown in the abbreviated process diagram), along with a more detailed process that I’ve used in the past (shown in the detailed process diagram).

Step 1: Data analysis and deciding what to test

Step one in the optimization process is figuring out where to focus your efforts first. We used the following list as a loose prioritization guideline:

- Recent product releases, or pages that have not yet undergone optimization.
- High-“value” pages:
  1. High revenue (e.g., shopping cart checkout pages, detail pages of your most expensive products).
  2. High traffic (e.g., homepage, login/logout).
  3. Highly “strategic” (this might include pages that are highly visible internally or that management considers important).
- Poorly performing pages:
  1. Low conversion rate.
  2. High bounce rate (for an excellent discussion of bounce rate, see Avinash Kaushik’s article).
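As mentioned above, here is a minimal sketch, in Python, of how the four minimum completion criteria might be checked automatically. The function name, arguments, and data shapes are my own illustration rather than any particular tool’s API; the stability check is simplified to “the same variation has led every day for the past week,” and the default thresholds mirror the DIRECTV Latin America minimums described earlier.

```python
from datetime import date

def test_is_complete(start, today, daily_leader, total_conversions, confidence,
                     min_days=14, min_stable_days=7,
                     min_conversions=200, min_confidence=0.90):
    """Return True only when all four minimum completion criteria are met.

    start, today       -- datetime.date objects bounding the test so far
    daily_leader       -- name of the best-performing variation for each day,
                          oldest first (used for the simplified stability check)
    total_conversions  -- conversions summed across all variations
    confidence         -- statistical confidence that the leader beats the
                          control, as a fraction (e.g. 0.93), taken from your
                          testing tool or your own significance calculation
    """
    ran_long_enough = (today - start).days >= min_days             # temporal
    confident_enough = confidence >= min_confidence                # statistical confidence
    stable = (len(daily_leader) >= min_stable_days                 # stability over time
              and len(set(daily_leader[-min_stable_days:])) == 1)
    enough_conversions = total_conversions >= min_conversions      # total conversions
    return all([ran_long_enough, confident_enough, stable, enough_conversions])

# Example: a 16-day-old test led by variation "B" for the last seven days
print(test_is_complete(date(2015, 1, 1), date(2015, 1, 17),
                       daily_leader=["A"] * 9 + ["B"] * 7,
                       total_conversions=245, confidence=0.94))  # True
```

Treat a check like this as a floor, not a stopping rule: as noted above, most tests should run well past these minimums.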
Step 2: Brainstorm ideas for improvement

How to improve page performance is a topic as large as the field of user experience itself, and certainly larger than the scope of this article. One might consider improvements in copywriting, form design, media display, page rendering, visual design, accessibility, browser targeting… the list goes on. My only suggestion for this step is to make it collaborative: harness the power of your team to come up with new ideas for improvement by including not only designers in the brainstorming sessions, but also developers, copywriters, business analysts, marketers, QA, and others. Good ideas can (and often do) come from anywhere. Adaptive Path has a great collaborative ideation technique they call sketchboarding, which uses iterative rounds of group sketching.

Step 3: Write the testing plan

An Optimization Testing Plan acts as the backbone of every test. At a high level, it is used to plan, communicate, and document the history of the experiment; more importantly, it fosters learning by forcing the team to clearly formulate goals and analyze results. A good testing plan should include:

- Test name
- Description
- Goals
- Opportunities (what gains will come about if the test goes well)
- Methodology:
  1. Expected dates that the test will be running in production.
  2. Resources (who will be working on the test).
  3. Key metrics to be tracked through the duration of the experiment.
  4. Completion criteria.
  5. Variations (screenshots of the different designs that you will be showing your site visitors).

Here’s a sample optimization testing plan to get you started.

Step 4: Design and develop the test

Design and development will generally follow an abbreviated version of your organization’s product development lifecycle. Since test variations are generally simpler than full-blown product development projects, I try to use a lighter, more agile process. If you do cut corners, skimp only on things like process artifacts or documentation, not on design quality. For example, be sure to perform some basic usability testing and user research on your variations. This small investment will create better candidates that are more likely to boost conversions.

Step 5: Quality assurance

When performing QA on your variations, be as thorough as you would be with any other code release to production. I recommend at least functional, visual, and analytics QA. Even though many tools let you manipulate your website’s UI on the fly, using interfaces that immediately display the results of your changes, the tools are not perfect, and any changes you make might not render correctly across all browsers.

Keep in mind that optimization tools provide one additional luxury that is not usually possible with general website releases: targeting. You can decide to show your variations only to the target browsers, platforms, and audiences for which you have performed QA. For example, imagine that your team has only been able to QA a certain A/B test on desktop (but not mobile) browsers. When you configure this test in your optimization tool, you can decide to display it only to visitors with those specific desktop browsers. If one of your variations has a visual bug when viewed on mobile phones, that problem should not affect the accuracy of your test results.

Step 6: Run the test

After QA has completed and you’ve decided how to allocate traffic to the different designs, it’s time to actually run your test.
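The actual splitting of traffic is usually handled by your testing tool, but the underlying idea is simple enough to sketch. Below is a minimal, illustrative example, assuming deterministic hash-based bucketing so that a returning visitor always sees the same design; the test name, variation names, and traffic weights are invented for the example and are not taken from any particular tool.

```python
import hashlib

def assign_variation(visitor_id, test_name, weights):
    """Deterministically bucket a visitor into one of the test's variations.

    weights maps variation name -> share of traffic (the shares sum to 1.0).
    Hashing the visitor id together with the test name keeps the assignment
    stable for returning visitors and independent across different tests.
    """
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    point = int(digest, 16) / 16 ** len(digest)      # uniform value in [0, 1)
    cumulative = 0.0
    for variation, share in weights.items():
        cumulative += share
        if point < cumulative:
            return variation
    return variation  # guard against floating-point rounding at the boundary

# Example: a 50/25/25 split between the control and two candidate designs
weights = {"control": 0.50, "bigger_trailer_cta": 0.25, "simpler_checkout": 0.25}
print(assign_variation("visitor-8675309", "movie_detail_test", weights))
```

Whatever mechanism you use, the key property is the one discussed in the next section: all variations receive traffic at the same time.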
The following are a few best practices to keep in mind before pressing the “Go” button.

1. Variations must be run concurrently

This first principle is almost so obvious that it goes without saying, but I’ve often heard the following story from teams that do not perform optimization: “After we launched our new design, we saw our [sales, conversions, etc.] increase by X%. So the new design must be better.” The problem with this logic is that you don’t know what other factors might have been at play before and after the change launched. Perhaps traffic to that page increased in either quantity or quality after the new design released. Perhaps the conversion rate was on the rise anyway, due to better brand recognition, seasonal variation, or just random chance. For these and many other reasons, variations must be run concurrently, not sequentially. This is the only way to hold all other factors constant and level the playing field between your different designs.

2. Always track multiple conversion metrics

In one A/B test that we ran on the movie detail pages of the DIRECTV Latin America sites, we increased the size and prominence of the “Ver adelanto” (View trailer) call to action, guessing that if people watched the movie trailer, it might excite them to buy more pay-per-view movies from the web site. Our initial hunch was right, and after a few weeks we saw that pay-per-view purchases were 4.8% higher with this variation than with the control. This increase would have resulted in a revenue boost of about $18,000/year in pay-per-view purchases. Not bad for one simple test. Fortunately, though, since we were also tracking other site goals, we noticed that this variation also decreased purchases of our premium channel packages (the HBO and Showtime packages) by a whopping 25%! This would have decreased total revenue by a much greater amount than the uptick in pay-per-views, and because of this, we did not launch this variation to production.

It’s important to keep in mind that changes may affect your site in ways that you never would have expected. Always track multiple conversion metrics with every test.

3. Tests should reach a comfortable level of statistical significance

I recently saw a presentation in which a consultant suggested that preliminary tests on email segmentation had yielded some very promising results (the chart showed conversion rates per 1,000 emails sent). The last segment of users (those who had logged in more than four times in the past year) had a conversion rate of .00139% (.139 upgrades per 1000 emails sent). Even though a conversion rate of .00139% is dismally low by any standard, according to the consultant it represented an increase of 142% compared to the base segment of users, and thus a very promising result. Aside from the obvious lack of actionable utility (does this study suggest that emails be sent only to users who have logged in more than four times?), the test contained another glaring problem. If you look at the “Upgrades” column at the top of the spreadsheet, you will see that the results were based on only five individuals purchasing an upgrade. Five individuals out of almost eighty-four thousand emails sent!
So if, by pure chance, just one more person had purchased an upgrade in any of the segments, it could have completely changed the study’s implications. While this example is not actually an optimization test but rather an email segmentation study, it does convey an important lesson: don’t declare a winner for a test until it has reached a “comfortable” level of significance.

So what does “comfortable” mean? Science requires strict definitions to use the terms “significant” (95% confidence level) and “highly significant” (99% confidence level) when publishing results. Even with these definitions, there is still a 5% and 1% chance, respectively, of your conclusions being wrong. Also keep in mind that higher confidence levels require more data (i.e., more website traffic), which translates into longer test durations. Because of these factors, I would recommend less stringent standards for most optimization tests: somewhere around 90-95% confidence, depending on the gravity of the situation (higher confidence levels for tests with more serious consequences or implications). Ultimately, your team must decide on confidence levels that reflect a compromise between test duration and certainty of results, but I would propose that if you perform a lot of testing, the larger number of true winners will make up for the fewer (but inevitable) false positives.

4. The duration of your tests should account for natural variations (such as between weekdays and weekends), and results should be stable over time

In a 2012 article on AnalyticsInspector.com, Jan Petrovic brings to light an important pitfall of ending your tests too early. He discusses an A/B test that he ran for a high-traffic site in which, after only a day, the testing tool reported that a winning variation had increased the primary conversion rate by an impressive 87%, with 100% confidence. Jan writes, “If we stopped the test then and pat each other on the shoulder about how great we were, then we would probably make a very big mistake. The reason for that is simple: we didn’t test our variation on Friday or Monday traffic, or on weekend traffic. But, because we didn’t stop the test (because we knew it was too early), our actual result looked very different.”

After continuing the test for four weeks and charting the new design’s results over time, Jan saw that the new design, although still better than the control, had leveled out to a more reasonable 10.49% improvement, since the results now accounted for natural daily variation. He writes, “Let’s say you were running this test in checkout, and on the following day you say to your boss something like ‘hey boss, we just increased our site revenue by 87.25%’. If I was your boss, you would make me extremely happy and probably would increase your salary too. So we start celebrating…” Jan’s fable continues with the boss checking the bank account at the end of the month and, upon seeing that sales had not actually increased by the 87% initially reported, reconsidering that salary increase.

The moral of the story: consider temporal variations in the behavior of your site visitors, including differences between weekday and weekend traffic, or even seasonal traffic.
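Most testing tools report the statistical confidence discussed in point 3 for you, but it can be useful to sanity-check the numbers yourself. Below is a simplified, illustrative sketch of a one-sided two-proportion z-test using only the Python standard library; the visitor and conversion counts are invented for the example, and a real decision would also respect the duration and stability criteria described above rather than relying on confidence alone.

```python
from math import erf, sqrt

def confidence_b_beats_a(visitors_a, conversions_a, visitors_b, conversions_b):
    """One-sided two-proportion z-test: confidence that B's true rate exceeds A's."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF evaluated at z

# Example with invented numbers: 20,000 visitors in each arm
conf = confidence_b_beats_a(20000, 380, 20000, 440)
print(f"Confidence that B beats A: {conf:.1%}")  # compare against your 90-95% bar
```

If the reported confidence hovers near your threshold, let the test keep running rather than stopping on a lucky day, for exactly the reasons Jan’s example illustrates.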
Step 7: Analyze and report on the results

After your test has run its course and your team has decided to press the “stop” button, it’s time to compile the results into an Optimization Test Report. The Optimization Test Report can be a continuation of the Testing Plan from Step 3, but with the following additional sections:

- Results
- Discussion
- Next steps

It is helpful to include graphs and details in the Results section so that readers can see trends visually and analyze the data themselves. This will add credibility to your studies and, hopefully, get people invested in the optimization program. The Discussion section is useful for explaining details and speculating on the reasons for the observed results. This will force the team to think more deeply about user behavior and is an invaluable step toward designing future improvements.

Conclusion

In this article, I’ve presented a detailed and practical process that your team can customize for its own use. In the next and final article of this series, I’ll wrap things up with suggestions for communication planning, team composition, and tool selection.

Posted in Discovery, Research, and Testing, Process and Methods