
Mystical guidelines for creating great user experiences
by Tal Bloom
March 3rd, 2015

The Jewish Torah teaches that the Creator created our world through ten utterances–for example, “let there be light.” The Jewish mystical tradition explains that these utterances correspond with ten stages in the process of creation. Every creative process in the world ultimately follows this progression, because it is really a part of the continual unfolding of the world itself, in which we are co-creators.

This article aims to present an overview of the mystical process of creation and the principle of co-creation, and to illustrate how it can guide bringing digital product ideas into reality–although it’s easy enough to see how this could translate to other products and services–in a way that ensures a great user experience, makes our creative process more natural, and makes our outcomes more fruitful.

And a note as you read: In Jewish mysticism, the pronoun “He” is used when referring to the transcendent aspect of the Creator that is the source of creation, and “She” is used when referring to the immanent aspect that pervades creation, because they are characterized by giving and receiving, respectively. Because this article discusses the transcendent aspect’s relationship with creation, the masculine pronoun is used.

The process of creation

[Figure: Ten stages, four realms]

The order of creation

The ten stages in the process of creation progressively create four realms. Three triads create three spiritual realms, and the tenth stage creates our tangible reality, which is the culmination of creation. It is understood that creation becomes increasingly defined and tangible as the creative power flows from one realm to the next. When we participate in creation, our efforts naturally follow the same progression.

The four realms are traditionally referred to by Hebrew terms, so to make things easier I’ll refer to them using a designer’s day-to-day terms–ideation, design, implementation, and operation. Before we dive in, though, one more thing to note is that within each realm there is a three-stage pattern whereby the creation first becomes revealed, then delineated, and finally consolidated in a state of equilibrium. Hang in there–you’ll shortly see what this means.

The realm of ideation

In the beginning there was only the Creator, alone. In the first three stages of creation, He simply created the possibility for a creation. This corresponds with the generation of business ideas. Just as the thought to create the world had to arise in the Creator’s mind before there was anything else, so too, the starting point of all products and services is the emergence of an idea–a simple and common example of which is “a digital channel will help our customers connect with us.”

Next, the seed sprouts a series of details to define it. In creation, the details included the fact that creation will be limited and that there is an order to its unfolding. In business, the idea undergoes an extrapolation to define its reach and scope. For example, “the digital channel will need product information, a shopping cart, a customer database, and a social function for customers’ reviews.”

The third stage in the process of creation is the preparation for bridging the gap between the abstract realm of potential, where the Creator is still effectively alone, and a new reality of seemingly separate creations.
Correspondingly, in business the third step requires bringing the idea from a place of theory to a point where it can be shared with others, such as presenting to decision makers and stakeholders, or briefing agents and consultants.

The realm of design

Now that it’s possible to distinguish between the Creator and His creation, the next three stages serve to coalesce the homogeneous creation into spiritual templates. This corresponds with the conceptual design of how the business idea may be realized.

The first stage in this realm is an expression of the Creator’s kindness, as He indiscriminately bestows life to all of creation. Correspondingly, the design process begins with telling the end-to-end story of the idea, from the user discovering the new product or service through to their consummate pleasure in using it, without our being too concerned with practical considerations. This could be captured in business process diagrams, but human-centred user journey maps or storyboards have proven more natural.

Next, the Creator expressed His attribute of judgement to establish the boundaries of His evolving creations. In business, we begin addressing practical considerations, such as time, budget, and technical constraints, to define the boundaries of the concept. This generally involves analyzing the desired story to establish the finite set of practical requirements for realizing it. For digital products, the requirements are often closely followed by a business case, an information architecture, and a system architecture.

As mentioned, the third stage is where a consolidated state of equilibrium is reached to form the output of the realm. In creation, mystics describe the culmination of this realm as sublime angels who are identified only by their function–for example, to heal or to enact justice–and consider them to be the templates for these attributes as they become manifest in the lower beings. Similarly, we consolidate the business idea by sketching or prototyping how we envision it will become manifest. Typically we deliver low-fidelity interaction, product, or service designs, which are often accompanied by a business plan and functional and technical specifications.

The realm of implementation

Using the spiritual templates, the next three stages serve to create individualized spiritual beings. This corresponds with implementing our conceptual designs as an actual digital product.

In creation, the life-force is now apportioned according to the ability of the created being to receive, similar to pouring hot liquid material into a statue mould. Correspondingly, we apply branding, colors, and shapes to bring the blueprint to life–the result being high-fidelity visual designs of what the digital product will actually look and feel like.

Next, the life-force solidifies to form the individual spiritual being, similar to when the hot liquid cools and the mould can be removed. This corresponds with slicing the visual designs to develop the front end, developing the database, and integrating the back-end functionality.

The culmination of this realm is often depicted in artwork and poetry as angels that have human form, wings, and individual names. They are, however, still spiritual beings, not physical beings like us. Correspondingly, at the final stage of implementation, there exists a fully functional digital product…in a staging environment.
The realm of operation

The culmination of the process of creation is our tangible reality, which is composed of physical matter and its infused life-force (part of which is our physical bodies infused with our souls). Bridging the infinitely large gap between the spiritual and physical realms is often considered the most profound step in the process of creation, yet paradoxically it is also the smallest conceptual distance: the step from a spiritual being that looks and functions like a physical being to an actual physical being. Correspondingly, launching a digital product into the live public domain can be the most daunting and exciting moment, yet it can be as easy as pressing a button to redirect the domain to point to the new web server or to release the app on the app store.

At this point the Creator is said to have rested, observing His creation with pleasure. Similarly, it can be very satisfying to step back at this point and soak in how our initial seed of an idea has finally evolved into an actual operational reality–which will hopefully fulfill our business goals!

The principle of co-creation

User feedback

By now we can appreciate why there seems to be a natural and logical sequence for the activities typically involved in creating a new product or service. Jewish mysticism, however, unequivocally adds that we are co-creators with the Creator. That is: we, created beings, are able to influence what the end product of creation will be, just as users can influence our products and services when we engage with them during the creation process.

Jewish mysticism relates that the Creator consults with His retinue of angels to make decisions regarding His creation. This corresponds with our soliciting user input to validate the direction of our creative efforts, such as:

- during ideation, conducting research to ensure the ideas indeed meet users’ needs and desires;
- during design, conducting user validation to ensure the sensibility and completeness of the story, the correlation of the framework with users’ mental models, and the usability of the blueprints; and
- during implementation, conducting user testing to help smooth out any remaining difficulties or doubts in the user experience.

We are also taught that the Creator monitors human activity and makes adjustments accordingly. Similarly, at the stage of operation, it’s good practice to steer the finished product to better achieve business goals by monitoring the usage analytics. Finally, we’re taught that the Creator desires our prayers beseeching Him to change our reality, similar to how we’ve come to understand that the most potent consideration is user feedback on the fully operational product.

Continual improvement

On the surface it still seems as though the process of creation is a cascading “waterfall,” but we see that our world is constantly evolving–for example, more efficient transport, more sophisticated communication, more effective health maintenance–seemingly through our learning from experience to improve our efforts. In a simple sense, this can be likened to the “agile” feedback loop, where learnings from one round of production are used to influence and improve our approach to the next round. Jewish mysticism teaches, however, that under the surface our genuine efforts below arouse a magnanimous bestowal of ever more refined life-force into the creation. This can be understood as similar to a pleased business owner allocating increasingly more budget to continue work on an evidently improving product or service.
These days, it is becoming more common for businesses to implement a continuous improvement program, whereby an ongoing budget is allocated for this purpose. The paradigm of continually looking for ways to more effectively meet user needs and achieve business goals–such that these learnings can be fed back into the process of fleshing out the idea, designing, and then implementing–perfectly parallels the reality that we are co-creating an ever more refined world using ever-deepening resources.

But how can a compounding improvement continue indefinitely? Jewish mysticism explains that as the unlimited creative power becomes exponentially more revealed within our limited reality, there will eventually come a grand crescendo with the revelation of the Creator’s essential being, which is neither unlimited nor limited, but both simultaneously. This will be experienced as the messianic era–“In that era, there will be neither famine nor war, envy nor competition, for good will flow in abundance and all the delights will be freely available as dust. The occupation of the entire world will be solely to know their Creator.”[1]

Users front of mind at every stage

Before we get there, however, it can be seen from the above how every stage of the creative process has a unique effect on the user experience of the end product or service, so it would bode well for us to strive to ensure that:

- The initial business idea meets an actual need or fulfills an actual desire of our users
- The concept is designed to function according to users’ understanding and expectations
- The product or service is implemented in a way that is appealing and easy to use
- The operating product or service is continually improved to meet users’ evolving needs

By knowing each stage and each skill set’s proper place in the sequence, and how to incorporate our learnings and user sentiment, we can achieve a more natural creative process for ourselves, our peers, and our clients, and ensure the end product or service offers the best possible user experience, indefinitely. The stages, activities, and outputs map out as follows:

Ideation
- Creative activity: Innovation brainstorms, idea prioritization
- Co-creation activity: User research
- Output: User pain points, idea pitch/brief

Design
- Creative activity: Business analysis, requirements analysis, card sorting, interaction design
- Co-creation activity: User focus groups, user interviews, tree testing, user walkthroughs
- Output: User journeys/storyboards, product requirements, information architecture, wireframes/prototype

Implementation
- Creative activity: Visual design, front-end development, back-end development, content preparation
- Co-creation activity: User testing
- Output: Staging product

Operation
- Creative activity: Product launch, product maintenance
- Co-creation activity: Analytics, user feedback/surveys
- Output: Live product, ideas for improvement

References and further reading

- Sefer Likutei Amarim (“The Tanya”), by the Alter Rebbe, Rabbi Schneur Zalman of Liadi
- Sefer HaMaamorim Melukatim, by the Lubavitcher Rebbe, Rabbi Menachem Mendel Schneerson
- Basi L’Gani, by the Rebbe Rayatz, Rabbi Yosef Yitzchak Schneersohn
- Beshaah Shehikdimu 5672 (“Ayin Beis”), by the Rebbe Rashab, Rabbi Shalom DovBer Schneersohn
- [1] Mishneh Torah, Sefer Shoftim, Melachim uMilchamot, Chapter 12, Halacha 5, by the Rambam, Rabbi Moses ben Maimon

Comments

Jonathan (March 3, 2015 at 11:39 am): Endearingly bonkers.
A Beginner’s Guide to Web Site Optimization—Part 3
Communication and team and tool selection
by Charles Shimooka
March 10th, 2015

Web site optimization has become an essential capability in today’s conversion-driven web teams. In Part 1 of this series, we introduced the topic as well as discussed key goals and philosophies. In Part 2, I presented a detailed and customizable process. In this final article, we’ll cover communication planning and how to select the appropriate team and tools to do the job.

Communication

For many organizations, communicating the status of your optimization tests is an essential practice. Imagine if your team has just launched an A/B test on your company’s homepage, only to learn that another team had released new code the previous day that changed the homepage design entirely. Or imagine a customer support agent trying to help a customer through the website’s forgot-password flow, unaware that the customer was seeing a different version due to an A/B test that your team was running. To avoid these types of problems, I recommend a three-step communication program.

Pre-test notification

This is an email announcing that your team has selected a certain page or section of the site to target for its next optimization test, and that anyone with concerns should voice them immediately, before your team starts working on it. Give folks a day or two to respond. The email should include:

- Name/brief description of the test
- Goals
- Affected pages
- Expected launch date
- Link to the task or project plan where others can track the status of the test

Here’s a sample pre-test notification.

Pre-launch notification

This email is sent out a day or two before a new experiment launches. It includes all of the information from the pre-test notification email, plus:

- Expected test duration
- A link to the results dashboard. Some optimization tools create a unique dashboard page where interested parties can monitor the results of the test in real time; if your tool does this, you can include the link here.
- Any other details that you care to mention, such as variations, traffic allocation, etc.

Here’s a sample pre-launch email.

Test results

After the test has run its course and you’ve compiled the results into the Optimization Test Results document, send out a final email to communicate this. If you have a new winner, be sure to brag about it a little in the email. Other details may include:

- A brief discussion
- A few specifics, such as conversion rates, traffic, and confidence intervals
- Next steps

Here’s a sample test results email.

Team size and selection

As is true with many things, good people are the most important aspect of a successful optimization program. Find competent people with curious minds who take pride in their work–this will be far more valuable than investment in any optimization tool or adherence to specific processes. The following are recommendations for organizations of varying team sizes.

One person

It is difficult for one person to perform optimization well unless they are dedicated full-time to the job. If your organization can only cough up one resource, I would select either a web analytics resource with an eye for design, or a data-centric UX designer. For the latter profile, I don’t mean the type of designer who studied fine art and is only comfortable using Photoshop, but rather the type who likes wireframes, has poked around an analytics tool on their own, and is good with numbers.
This person will also have to be resourceful and persuasive, since they will almost certainly have to borrow time and collaborate with others to complete the necessary work.

Two to three people

With a team size of three people, you are starting to get into the comfort zone. To the UX designer and web/data analytics roles, I would add either a visual designer or a front-end developer. Ideally, some of the team members would have multiple or overlapping competencies. The team will probably still have to borrow time from other resources, such as back-end developers and QA.

Five people

A team that is lucky enough to have five dedicated optimization resources has the potential to be completely autonomous. If your organization places such a high value on optimization, it may have also invested accordingly in sophisticated products or strategies for the job, such as complex testing software, data warehouses, etc. If so, then you’ll need folks who are specifically adept at these tools, broadening your potential team to roles such as data engineers, back-end developers, content managers, project managers, or dedicated QA resources. A team of five would ideally have some overlap among the skill sets.

Tool selection

The optimization market is hot, and tool selection may seem complicated at first. The good news is that broader interest and increased competition are fueling an all-out arms race toward simpler, more user-friendly interfaces designed for non-technical folks. Data analysis and segmentation features also seem to be evolving rapidly.

My main advice if you’re new to optimization is to start small. Spend a year honing your optimization program, and after you’ve proven your value, you can easily graduate to the more sophisticated (and expensive) tools. Possibly by the time you’re ready, your existing tool will have advanced to keep up with your needs. Also realize that many of the cheaper tools can do the job perfectly well for most organizations, and that some organizations with the high-powered tools are not using them to their fullest capabilities.

A somewhat dated Forrester Research report from February 2013 assesses some of the big hitters, but notably absent are Visual Website Optimizer (VWO) and, for the very low end, Google’s free Content Experiments tool. Conversion Rate Experts keeps an up-to-date comparison table listing virtually all of today’s popular testing tools, but it only rates them along a few specific attributes. I performed my own assessment earlier this year, and here is a short list of my favorites:

Entry-level
- Visual Website Optimizer (VWO)
- Optimizely
- Google Content Experiments

Advanced
- Maxymiser
- Monetate
- Adobe Test & Target

Here are a few factors to consider when deciding on products:

Basic features

Intuitive user interface

Luckily, most tools now have simple, WYSIWYG-type interfaces that allow you to directly manipulate your site content when creating test variations. You can edit text, change styles, move elements around, and save these changes into a new test variation. Some products have better implementations than others, so be sure to try out a few to find the best match for your team.

Targeting

Targeting allows you to specify which site visitors are allowed to see your tests. Almost all tools allow you to target site visitors based on basic attributes that can be inferred from their browser, IP address, or session.
These attributes may include operating system, browser type/version, geographical location, day of week, time of day, traffic source (direct vs. organic vs. referral), and first-time vs. returning visitor. More advanced tools also allow you to target individuals based on attributes (variables) that you define and programmatically place in your users’ browser sessions, cookies, or URLs. This allows you to start targeting traffic based on your organization’s own customer data. The most advanced tools allow you to import custom data directly into the tool’s database, giving you direct access to these attributes through their user interface, not only for targeting but also for segmented analysis.

Analysis and reporting

Tools vary widely in their analysis and reporting capabilities, with the more powerful tools generally offering richer segmentation functionality. The simplest tools only allow you to view test results compared against a single dimension–for example, you can see how your test performed for visitors on mobile vs. desktop systems. The majority of tools now allow you to perform more complicated analyses along multiple dimensions and customized user segments. For example, you might be interested in seeing how your test performed with visitors on mobile platforms, segmented by organic vs. paid vs. direct traffic. Keep in mind that as your user segments become more specific, your optimization tool must rely on fewer and fewer data points to generate the results for each segment, thereby decreasing your confidence levels.

Server response time

Optimization tools work by adding a small snippet of code to your pages. When a user visits that page, the code snippet calls a server somewhere that returns instructions on which test variation to display to the user. Long server response times can delay page loading and the display of your variations, thereby affecting your conversions and reporting. When shopping around, be sure to inquire about how the tool will affect your site’s performance. The more advanced tools are deployed on multiple, load-balanced CDNs and may include contractual service level agreements that guarantee specific server response times.
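You don’t have to take a vendor’s word for it, either. During a trial, the browser’s standard Resource Timing API can tell you how long a candidate tool’s snippet actually takes to load on your pages. Below is a minimal sketch of that kind of check; the script URL is a hypothetical placeholder, and a real snippet may pull in further resources that you’d want to measure as well.

```typescript
// Minimal sketch: measure how long an optimization vendor's snippet takes to
// load, using the Resource Timing API. The URL below is a placeholder; swap
// in the script host of whichever tool you are evaluating.
const VENDOR_SCRIPT = "https://cdn.example-abtool.com/snippet.js";

function reportVendorTiming(): void {
  const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
  const entry = entries.find((e) => e.name === VENDOR_SCRIPT);
  if (!entry) {
    console.log("Vendor snippet not found among resource timings.");
    return;
  }
  // duration covers queueing + DNS + TCP + request + response for the script.
  console.log(`Vendor snippet took ${entry.duration.toFixed(1)} ms to load.`);
}

// Report once the page and all of its resources have finished loading.
window.addEventListener("load", reportVendorTiming);
```

Collect these numbers at different times of day and from different locations during a trial period, and you’ll have concrete evidence to weigh against the vendor’s claims and SLAs.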
Customer support

Most optimization vendors provide a combination of online and telephone support, with some of the expensive solutions offering in-person set-up, onboarding, and training. Be sure to inquire about customer support when determining costs. A trick I’ve used in the past to test a vendor’s level of service is to call the customer support lines at different times of the day and see how fast they pick up the phone.

Price and cost structure

Your budget may largely determine your optimization tool options, as prices vary tremendously–from free (for some entry tools with limited features) to six-figure annual contracts that are negotiated based on website traffic and customer support levels (Maxymiser, Monetate, and Test & Target fall into this latter category). Tools also vary in their pricing model, with some basing costs on the amount of website traffic and others charging more for increased features. My preference is for the latter model, since the former is sometimes difficult to predict and provides a disincentive to perform more testing.

Advanced features

Integration with CMS/analytics/marketing platforms

If you are married to a single Content Management System, analytics tool, or marketing platform, be sure to ask your vendor how their tool will integrate. Some vendors advertise multi-channel solutions–the ability to leverage your customer profile data to optimize across websites, email, and possibly other channels, such as social media or SMS. Enterprise-level tools seem to be trending toward all-in-one solutions that include components such as CMS, marketing, ecommerce, analytics, and optimization (e.g., Adobe’s Marketing Cloud or Oracle’s Commerce Experience Manager). But for smaller organizations, integration may simply mean the ability to manage the optimization tool’s JavaScript tags (used for implementation) across your site’s different pages. In these situations, basic tools such as Google Tag Manager or WordPress plugins may suffice.

Automated segmentation and targeting

Some of the advanced tools offer automated functionality that tries to analyze your site’s conversions and notify you of high-performing segments. These segments may be defined by any combination of recognizable attributes and can thus be far more complicated than your team could define on their own. For example, the tool might define one segment as female users on the Windows platform, living in California, who visited your site within the past 30 days. It might define a dozen or more of these complex micro-segments and, even more impressively, allow you to automatically redirect all future traffic to the winning variations specific to each of these segments. If implemented well, this intelligent segmentation has tremendous potential for your overall site conversions. The largest downside is that it usually requires a lot of traffic to make accurate predictions. Automated segmentation is often an added cost on top of the base price of the optimization tool. If so, consider asking for a free trial period to evaluate the utility and practicality of this functionality before making the additional investment.

Synchronous vs. asynchronous page loading

Most tools recommend that you implement their services in an asynchronous fashion–in other words, that you allow the rest of your page’s HTML to load first before pinging their services and potentially loading one of the test variations that you created. The benefit of this approach is that your users won’t have to wait additional time before your control page starts to render in the browser. The drawback is that once the call to the optimization tool’s services returns, your users may see a page flicker as the control page is replaced by one of your test variations. This flickering effect, along with the additional time it takes to display the test variations, could potentially skew test results or cause surprise or confusion for your users.

In contrast, synchronous page loading, which is recommended by some of the more advanced tools, makes the call to the optimization tool before the rest of the page loads. This ensures that your control group and variations are all displayed in the same relative amount of time, which should allow for more accurate test results. It also eliminates the page flicker effect inherent in asynchronous deployments.
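A common middle ground, offered in some form by several tools (the details vary by vendor), is to load the snippet asynchronously but briefly hide the page until the variation has been applied, with a timeout so the page always appears even if the tool is slow or unreachable. The sketch below shows the general pattern; the vendor URL is a placeholder, and the assumption that the snippet applies its variation synchronously on load is exactly the kind of detail to confirm with your vendor.

```typescript
// Minimal "anti-flicker" sketch: hide the page until the A/B snippet has
// applied its variation, but never for longer than TIMEOUT_MS. Must run as
// early as possible, i.e., in the document <head>.
const TIMEOUT_MS = 1000;

document.documentElement.style.visibility = "hidden";

const reveal = (): void => {
  document.documentElement.style.visibility = "";
};

// Fail-safe: reveal the page after the timeout even if the tool never loads.
const failSafe = window.setTimeout(reveal, TIMEOUT_MS);

// Load the vendor snippet asynchronously. The URL is a placeholder.
const script = document.createElement("script");
script.src = "https://cdn.example-abtool.com/snippet.js";
script.async = true;
script.onload = () => {
  // Assumption: the snippet rewrites the DOM synchronously when it runs.
  window.clearTimeout(failSafe);
  reveal();
};
script.onerror = () => {
  // CDN unreachable: show the control page immediately.
  window.clearTimeout(failSafe);
  reveal();
};
document.head.appendChild(script);
```

The timeout is the key design choice: it caps the worst-case delay a slow vendor server can impose on your users, at the cost of occasionally showing the control page to a visitor who had been assigned a variation.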
Conclusion

By far, the most difficult step in any web site optimization program is the first one–the simple act of starting. With this in mind, I’ve tried to present a complete and practical guide to get you from this first step through to a mature program. Please feel free to send me your comments as well as your own experiences. Happy optimizing.

Creativity Must Guide the Data-Driven Design Process
by Rameet Chawla
March 17th, 2015

Collecting data about design is easy in the digital world. We no longer have to conduct in-person experiments to track pedestrians’ behavior in an airport terminal or the movement of eyeballs across a page. New digital technologies allow us to easily measure almost anything, and apps, social media platforms, websites, and email programs come with built-in tools to track data. And, as of late, data-driven design has become increasingly popular.

As a designer, you no longer need to convince your clients of your design’s “elegance,” “simplicity,” or “beauty.” Instead of those subjective measures, you can give them data: click-through and abandonment rates, statistics on the number of installs, retention and referral counts, user paths, cohort analyses, A/B comparisons, and countless other analytical riches.

After you’ve mesmerized your clients with numbers, you can draw a few graphs on a whiteboard and begin claiming causalities. Those bad numbers? They’re showing up because of what you told the client was wrong with the old design. And the good numbers? They’re showing up because of the new and improved design. But what if it’s not because of the design? What if it’s just a coincidence? There are two problems with the present trend toward data-driven design: using the wrong data, and using data at the wrong time.

The problem with untested hypotheses

Let’s say you go through a major digital redesign. Shortly after you launch the new look, the number of users hitting the “share” button increases significantly. That’s great news, and you’re ready to celebrate the fact that your new design was such a success. But what if the new design had nothing to do with it? You’re seeing a clear correlation—two seemingly related events that happened around the same time—but that does not prove that one caused the other.

Steven D. Levitt and Stephen J. Dubner, the authors of “Freakonomics,” have built a media empire on exposing the difference between correlation and causation. My favorite example is their analysis of the “broken windows” campaign carried out by New York City Mayor Rudy Giuliani and Police Commissioner William Bratton. The campaign coincided with a drop in the city’s crime rate. The officials naturally took credit for making the city safer, but Levitt and Dubner make a very strong case that the crime rate declined for reasons other than their campaign.

Raw data doesn’t offer up easy conclusions. Instead, look at your data as a generator of promising hypotheses that must be tested. Is your newly implemented user flow the cause of a spike in conversion rates? It might be, but the only way you’ll know is by conducting an A/B test that isolates that single variable. Otherwise, you’re really just guessing, and all that data you have showing the spike doesn’t change that.
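Even a properly isolated A/B test only supports a causal claim once you check that the observed difference is larger than chance alone would plausibly produce. As a rough illustration, here is a sketch of the two-proportion z-test that underlies much significance reporting, run on made-up conversion counts; in practice you would lean on your testing tool or an analyst rather than hand-rolled statistics.

```typescript
// Sketch: two-sided, two-proportion z-test for an A/B conversion test.
// All visitor and conversion counts below are invented for illustration.
function twoProportionZTest(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  const z = (pB - pA) / se;
  return 2 * (1 - standardNormalCdf(Math.abs(z))); // two-sided p-value
}

// Standard normal CDF via the Abramowitz & Stegun polynomial approximation.
function standardNormalCdf(x: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = 0.3989423 * Math.exp((-x * x) / 2);
  const tail = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return x > 0 ? 1 - tail : tail;
}

// Control: 200 of 10,000 visitors convert (2.0%).
// Variation: 240 of 10,000 visitors convert (2.4%).
const pValue = twoProportionZTest(200, 10_000, 240, 10_000);
console.log(`p-value ≈ ${pValue.toFixed(3)}`); // ≈ 0.054
```

Note that a 20% relative lift across 20,000 visitors still lands just above the conventional 0.05 threshold here, and slicing the same traffic into micro-segments shrinks each test further. A spike on a dashboard is a hypothesis, not a verdict.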
Data can’t direct innovation

Unfortunately, many designers are relying on data instead of creativity. The problem with using numbers to guide innovation is that users typically don’t know what they want, and no amount of data will tell you what they want. Instead of relying on data from the outset, you have to create something and give it to users before they can discover that they want it. Steve Jobs was a big advocate of this method. He didn’t design devices and operating systems by polling users or hosting focus groups. He innovated and created, and once users saw what he and his team had produced, they fell in love with a product they hadn’t even known they wanted.

Data won’t tell you what to do during the design process. Innovation and creativity have to happen before data collection, not after. Data is best used for testing and validation. Product development and design is a cyclical process. During the innovation phase, creativity is often based on user experience and artistry—characteristics that aren’t meant to be quantified on a spreadsheet. Once a product is released, it’s time to start collecting data. Perhaps the data will reveal a broken step in the user flow. That’s good information, because it directs your attention to the problem. But the data won’t tell you how to fix the problem. You have to innovate again, then test to see if you’ve finally fixed what was broken.

Ultimately, data and analysis should be part of the design process. We can’t afford to rely on our instincts alone. And with the wealth of data available in the digital domain, we don’t have to. The unquantifiable riches of the creative process still have to lead design, but applying the right data at the right time is just as important to the future of design.

Comments

Rick (March 17, 2015 at 2:54 pm): Kind of playing both sides there–do user testing before creating a product, but users really don’t know what they want. Steve Jobs went off his gut, so be creative and let the analytics after release show where the issues are. I get that it is a tough, non-black-and-white kind of thing. Maybe building out the rationale more, of testing as an influence on decision making but not a mandate for what needs to be done, would relate more to both UX and non-UX professionals.

Jonathan (March 17, 2015 at 6:01 pm): I’m not sure that any serious designer would say that just because the stats got better after a redesign, the improvement was due to the design change alone. But I’m willing to believe some might. They’ll be ex-designers pretty soon, so it hardly matters much. Be that as it may, I’m not sure what your point really is here. People who don’t understand probability and statistics shouldn’t be using them in the design process. That’s obvious, isn’t it? You may as well write an article about how blind people shouldn’t drive cars. I think in general, though, all designers need to understand is that unless they personally know how to calculate a p-value from test data, they should just ask an analyst what they think before they make any claims about it. Then after that, you need to bear in mind that data can only tell you what happened, not why.

Chris (March 19, 2015 at 6:48 am): I agree with Jonathan in that I’m not really sure of the point, either. And I especially don’t agree with the premise that we have at our fingertips all the data sources we need through web analytics and numbers, and that this is all about improper understanding of causation vs. correlation. Of course such methods will not show causation; we have no idea WHY users are doing what they are doing. It’s common sense that these are unreliable, but not because of how people are interpreting them–because of the type of data people are expecting from such methods.
So, isn’t that a bit like “throwing the baby out with the bathwater” when it comes to getting user data? What about reexamining the tried and true methods of UCD (user-centered design)? If we do that, it doesn’t really jibe with this notion to just get all creative and wait until the product is released to test–really?? Getting user data does not mean you are not using your innovation chops. It is a matter of knowing when to get creative and when to get data to validate your ideas–it’s a juggling act and an important one if you want success. And user “research” doesn’t have to be a bunch of complicated methods–think “lean.” As Jakob Nielsen has always said, test simple prototypes on just 3-5 users, and if done properly (using think-aloud, keeping questioning open, not leading, all best done with a skilled moderator) you will get a wealth of information–and, I would add, much faster than creating a bunch of magical designs in the dark which then require rework later in development if not accepted. And it’s pretty risky to go without that prior to release in today’s customer-driven, usability-focused competitive landscape.

Steve Jobs is touted as the “hero” of innovation–but let’s be clear about what that means, and not get “correlation” wrong here. :) Innovation in that context was inventing brand new things for mass consumption. So, in a way, it was valid that the whole market was his test bed. But let’s not confuse that notion of innovation with the life cycle of a typical product. 99% of the time these are not “invented” products–they are offshoots of some other familiar functionality. Testing only after release is just counterintuitive to user-centered design and is one of the biggest impediments to success in a world demanding simplicity. We just can’t keep singing the praises of user-centered design methods and then completely ignore them. And finally–I cannot let the focus group reference slide. “Focus group” is a term bandied about with user research–but let’s be clear–a focus group might be OK for coming up with a brand, but it has nothing to do with getting interface and interaction data.

Becki (March 30, 2015 at 2:41 pm): I think this article is very valuable—for non-experts. I’ve seen plenty of people claim that “the analytics says this design wins, so it must be the right one” in situations that simply are not that black-and-white. We can’t get teams to support user-centered design if they believe that A/B testing on whole-site redesigns is the best determinant for success.

Jeff (March 30, 2015 at 3:11 pm): How about calling it “Data-Driven Refinement” instead of “Data-Driven Design”?

jrosell (April 9, 2015 at 9:54 am): Some days before this post was published, I posted on my blog about the need for creative effort in optimization processes (in Catalan): http://elnostreraco.com/blog/cro/necessitat-desforc-creatiu-en-processos-doptimitzacio/

CL Bridges (April 16, 2015 at 8:10 pm): There’s an ethical discussion here. Critiques of mHealth apps note various challenges and research gaps within industry user requirements; of particular concern are ethical issues where technology is favored over interpersonal supports, potentially reducing access to quality health supports for people at risk–you know, like old people and kids. That’s good cause for considering industry policy within data design to be informed about your users’ needs. Innovations that actually solve this problem will be focused on how real people use tech, not on how tech can inform us about some people.
Intent to Solve
by Laura Klein
April 14th, 2015

When we’re building products for people, designers often do something called “needs finding,” which translates roughly into “looking for problems in users’ lives that we can solve.” But there’s a problem with this. It’s a widely held belief that, if a company can find a problem that is bad enough, people will buy a product that solves it. That’s often true. But sometimes it isn’t. And when it isn’t true, that’s when really well-designed, well-intentioned products can fail to find a market.

When isn’t it true?

When I tell product managers and entrepreneurs that their dream customers might not buy this product—even if the product solves a problem—sometimes they get angry. “No!” the managers and entrepreneurs yell. “This is a serious problem for my users! They struggle with this thing every day! They told us this. We saw them struggling with it. We did our research!”

But think about all of the problems that you encounter in a day. Some of them are almost entirely in your control, like deciding how to feed yourself. Some of them are largely out of your control, like sitting in traffic on the way to work. Some of them are almost entirely out of your control, like certain types of health problems.

So, there you are: sitting in traffic, with a migraine, trying to figure out what you want for lunch. Which problem do you solve? Do you solve the worst one? The one that happens every day? The easiest one? Do you give up entirely and just turn around and go back to bed? The only thing that most people won’t do is solve all of them at once.

In other words, every day, humans use their limited emotional resources to solve specific problems while they choose to live with other problems or put off solving them until another day. This tendency of humans to not always solve their worst problems is incredibly important to recognize when you’re doing early user research, because it has implications for your product. Just because you’ve identified a serious problem doesn’t mean that anybody will pay you to solve it for them.

And remember, when we’re talking about “payment,” we’re not necessarily just talking about money. Free products are often only free if your time has no value. Sure, some products cost money, but people also pay with their time, attention, and effort. If you’re asking somebody to spend hours learning how to use your product, you’ve just charged them a fairly high hourly rate for your free product. You’d better make it worth their while.

You can do something about this

So, how can you separate out the problems that people will pay you to solve from the problems they won’t? Sure, intensity, frequency, and difficulty of solving the problem can influence whether a user will try to solve it. But there’s an even more important thing to look for: intent to solve.

For example, if you’re a gym owner, and you talk to three women, all of whom say they want to get into better shape, which of the following sounds like the person most likely to join your gym?

a) I’m in terrible shape, and it’s really affecting my health. I’ve never joined a gym, but I’m definitely going to do it this year.
b) I really love running and swimming at my neighborhood pool, and I consider myself to be in pretty good shape. But I’m not a fan of gyms.
c) I’m in OK shape. I’ve belonged to several gyms in the past, but I don’t currently belong to one.

Did you say C? You should have.
Sure, A specifically states that she is going to join a gym, and her perceived problem is larger than the other two’s, but we’ve all declared that we’re absolutely going to do something this year and then not done it. That’s what New Year’s resolutions are. B seems perfectly happy with her routine and doesn’t really have the problem that we’re solving. C, on the other hand, shows both motivation and a past intent to solve the problem in the way that you, as a gym owner, would like. In other words, she has previously sought ways to get into better shape and has even spent money on gyms. She has shown an intent to solve in the past, which is an excellent predictor of her behavior in the future.

There is one notable exception

But hang on. I know what you’re thinking. You’re thinking, “But what about Twitter?” Or maybe Snapchat, or WhatsApp, or a dozen other products that solve problems that people didn’t know they had. It’s true, there are products that don’t solve an obvious problem. Things like Twitter create new behaviors (sort of) and don’t seem to solve anything that anybody ever intended to solve before Twitter came along.

Now, we could argue all day about whether or not Twitter solves a specific problem or perhaps many problems—or even creates problems. The important thing to point out here is that, when you’re creating a product that is truly going to create a new behavior, it is just much, much harder to validate before you build. That’s doubly true if the product relies on network effects, like Twitter does. Honestly, there may simply be no way to tell if something like Twitter is going to take off before you build anything at all. That’s why, although we do have things like Twitter, we also have tens of thousands of social networking sites and apps that nobody’s ever heard of.

What to look for

If your product does solve a problem that people likely know exists, though, there’s a very useful technique for figuring out if it’s a good one to solve. We’ll assume for the moment that you’re already doing user research and customer development. You’re building something, so obviously you’re talking to people who you think might be in the market for such a product—or at least people who have the problem that your product solves. Just talking to people, though, isn’t enough; you have to ask them the right questions. Instead of just asking them questions designed to confirm whether or not they have a specific problem, you need to ask questions designed to find out if they have already shown an “intent to solve” that problem.

What you’re looking for is not just a problem—in the case of the gym owner, a potential user wanting to get into better shape—you’re also looking for a previous behavior of trying to solve the problem. Bonus points if they have spent money trying to solve it. When you find a serious problem that people have tried and failed to solve, you can generally count on their trying to solve it again in the future. Ideally, you want something that they’re actively searching for a solution to right now. If you want to convince somebody to join your gym, it’s much easier to start with somebody who already wants to join a gym. At that point, you’re being compared to all other gyms. You’re not being compared to literally everything else that the user could do with her money and time.

Humans encounter all sorts of problems every day. Most, we just ignore or deal with. Only a few reach a level that we will spend our precious resources to solve.
If you find a problem serious enough that people have already shown an intent to solve it, it will be far easier to convince people to try your solution. If you think you have a brilliant idea for a product that creates a brand new form of user behavior and may or may not solve a particular problem, more power to you. It’s not impossible to make it work, but it’s significantly harder to get it adopted than the millions of things that solve real problems that people encounter every day. For the rest of you, who want to make sure a problem really exists before you try to solve it, try evaluating your users’ intent to solve before you build anything. It’ll give you tremendous insight into whether or not your product will be adopted.

Comments

Kelly Moran (April 15, 2015 at 4:39 pm): Excellent point that not all problems require solutions (surely there must be a deeper meaning here). This is where observation comes in. Potential users/customers may state they have a problem, but has anyone seen them deal with it/attempt to solve it?

Online Surveys On a Shoestring: Tips and Tricks
by Gabriel Biller and Lada Gorlenko
April 28th, 2015

Design research has always been about qualitative techniques. Increasingly, our clients ask us to add a “quant part” to projects, often without much or any additional budget. Luckily for us, there are plenty of tools available to conduct online surveys, from simple ones like Google Forms and SurveyMonkey to more elaborate ones like Qualtrics and Key Survey. Whichever tool you choose, there are certain pitfalls in conducting quantitative research on a shoestring budget. Based on our own experience, we’ve compiled a set of tips and tricks to help avoid some common ones, as well as make your online survey more effective. We’ve organized our thoughts around three survey phases: writing questions, finding respondents, and cleaning up data.

Writing questions

Writing a good questionnaire is both art and science, and we strongly encourage you to learn how to do it. Most of our tips here are relevant to all surveys, but they are particularly important for low-budget ones. Having respondents who are compensated only a little, if at all, makes the need for good survey writing practices even more important.

Ask (dis)qualifying questions first

A sacred rule of surveys is to not waste people’s time. If there are terminating criteria, gather those up front and disqualify respondents as quickly as you can if they do not meet the profile. It is also more sensitive to terminate them with a message like “Thank you for your time, but we already have enough respondents like you” rather than “Sorry, but you do not qualify for this survey.”

Keep it short

Little compensation means that respondents will drop out at higher rates. Focus only on what is truly important to your research questions. Ask yourself how exactly the information you collect will contribute to your research. If the answer is “not sure,” don’t ask. For example, it’s common to ask about level of education or income, but if comparing data across different levels of education or income is not essential to your analysis, don’t waste everyone’s time asking the questions. If your client insists on having “nice to know” answers, insist on allocating more budget to pay the respondents for the extra work.

Keep it simple

Keep your target audience in mind and be a normal human being in framing your questions.
Your client may insist on slipping in industry jargon and argue that “everyone knows what it is.” It is your job to make the survey speak the language of the respondents, not the client. For example, in a survey about cameras, we changed the industry term “lifelogging” to a longer but simpler phrase: “capturing daily routines, such as commute, meals, household activities, and social interactions.”

Keep it engaging

People in real life don’t casually say, “I am somewhat satisfied” or “the idea is appealing to me.” To make your survey not only simple but also engaging, consider using more natural language for response choices. For example, instead of using the standard Likert-scale “strongly disagree” to “strongly agree” responses to the statement “This idea appeals to me” in a concept-testing survey, we offered the scale “No, thanks” – “Meh” – “It’s okay” – “It’s pretty cool” – “It’s amazing.” We don’t know for sure if our respondents found this approach more engaging (we certainly hope so), but our client showed a deeper emotional response to the results.

Finding respondents

Online survey tools differ in how much help they provide with recruiting respondents, but most common tools will assist in finding the sample you need if the profile is relatively generic or simple. For true “next to nothing” surveys, we’ve used Amazon Mechanical Turk (mTurk), SurveyMonkey Audience, and our own social networks for recruiting.

Be aware of quality

Cheap recruiting may easily result in low-quality data. While low-budget surveys will always be vulnerable to quality concerns, there are mechanisms to ensure that you keep your quality bar high. First of all, know what motivates your respondents. Amazon mTurk commonly pays $1 for a so-called “Human Intelligence Task,” which may include taking an entire survey. In other words, someone is earning as little as $4 an hour if they complete four 15-minute surveys. As such, some mTurk Workers may try to cheat the system and complete multiple surveys for which they may not be qualified. SurveyMonkey, on the other hand, claims that its Audience service delivers better quality, since the respondents are not motivated by money. Instead of compensating respondents, SurveyMonkey makes a small donation to the charity of their choice, thus lowering the risk of people being motivated to cheat for money.

Use social media

If you don’t need thousands of respondents and your sample is pretty generic, the best resource can be your social network. For surveys with fewer than 300 respondents, we’ve had great success with tapping into the collective social network of Artefact’s members, friends, and family. Write a request and ask your colleagues to post it on their networks. Of course, volunteers still need to match the profile. When we send an announcement, we include a very brief description of who we are looking for and send volunteers to a qualifying survey. This approach costs little but yields high-quality results.

We don’t pay our social connections for surveys, but many will be motivated to help a friend and will be very excited to hear about the outcomes. Share with them what you can as a “thank you” token. For example, we used social network recruiting in the early stages of Purple’s development. When we revealed the product months later, we posted a “thank you” link to the article on our social networks. Surprisingly even to us, many remembered the survey they took and were grateful to see the outcomes of their contribution.
Over-recruit

If you are trying to hit a certain sample size of “good” data, you need to over-recruit so that you can remove the “bad” data and still end up with enough. No survey is perfect and all can benefit from over-recruiting, but it’s almost a must for low-budget surveys. There are no hard rules, but we suggest over-recruiting by at least 20% to hit the sample size you need at the end. Since the whole survey costs you little, over-recruiting will equally cost little.

Cleaning up data

Cleaning up your data is another essential step of any survey, and it is particularly important for one on a tight budget. A few simple tricks can increase the quality of responses, particularly if you use public recruiting resources. When choosing a survey tool, check what mechanisms are available for you to clean up your data.

Throw out duplicates

As mentioned earlier, some people may be motivated to complete the same survey multiple times, even under multiple profiles. We’ve spotted this when working with mTurk respondents by checking their Worker IDs. We had multiple cases where the same IDs were used to complete a survey several times. We ended up throwing away all responses associated with the “faulty” IDs and gained more confidence in our data as a result.

Check response time

With SurveyMonkey, you can calculate the time spent on the survey using the StartTime and EndTime data. We benchmarked the average time of the survey by piloting it in the office. This can be used as a pretty robust fool-proofing mechanism: if the benchmark time is eight minutes and you have surveys completed in three, you may question how carefully those respondents were reading the questions. We flag such outliers as suspect and don’t include them in our analysis.

Add a dummy question

Dummy questions help filter out respondents who are answering survey questions at random. A dummy question requires the respondent to read carefully and then respond–for example, an instruction like “select the third option below, regardless of your opinion.” People who click and type at random might answer it correctly, but it is unlikely. If the answer is incorrect, this is another flag we use to mark a respondent’s data as suspect.
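Once you’ve exported the raw responses, all three of these checks are easy to automate. The sketch below is a minimal, hypothetical pass in that spirit: the field names (workerId, startTime, endTime, dummyAnswer) are placeholders for whatever your survey tool exports, and the thresholds echo the examples above.

```typescript
// Minimal sketch: drop suspect survey responses after export.
// All field names and thresholds are placeholders; adapt to your tool's data.
interface SurveyResponse {
  workerId: string;
  startTime: Date;
  endTime: Date;
  dummyAnswer: string; // the respondent's answer to the dummy question
}

const EXPECTED_DUMMY = "third option"; // the one correct dummy answer
const MIN_MINUTES = 4; // e.g., half the eight-minute piloted benchmark

function cleanResponses(raw: SurveyResponse[]): SurveyResponse[] {
  // Count how often each Worker ID appears; any ID used more than once is
  // "faulty," and all of its responses get thrown out, as described above.
  const counts = new Map<string, number>();
  for (const r of raw) counts.set(r.workerId, (counts.get(r.workerId) ?? 0) + 1);

  return raw.filter((r) => {
    const isDuplicate = (counts.get(r.workerId) ?? 0) > 1;
    // Flag completions far below the piloted benchmark time.
    const minutes = (r.endTime.getTime() - r.startTime.getTime()) / 60_000;
    const tooFast = minutes < MIN_MINUTES;
    // Flag respondents who failed the dummy question.
    const failedDummy = r.dummyAnswer.trim().toLowerCase() !== EXPECTED_DUMMY;
    return !isDuplicate && !tooFast && !failedDummy;
  });
}
```

Dropping rows like this is exactly why the 20% over-recruiting margin matters: you want to arrive at your target sample size after the suspect responses are gone.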
Low-budget surveys are challenging, but not necessarily bad, and with a few tricks you can make them much more robust. If they are used as an indicative, rather than definitive, mechanism to supplement other design research activities, they can bring “good enough” insights to a project. Educate your clients about the pros and cons of low-budget surveys and help them decide whether or not they want to invest more to get greater confidence in the quantitative results. Setting these expectations up front is critical for the client, but you never know: it could also be a good tool for negotiating a higher survey budget to begin with!

Comments

Weekly Roundup of Web Design and Development Resources (May 1, 2015 at 4:38 pm): […] Online Surveys On a Shoestring: Tips and Tricks: While Amazon Mechanical Turk (mTurk) and SurveyMonkey Audience are great for low-budget surveys, be aware of the drawbacks and plan accordingly, say Gabriel Biller and Lada Gorlenko. Check response time, add a dummy question, and throw out duplicates. […]

Mark (May 6, 2015 at 10:35 am): Don’t you think it is rude to say “Thank you for your time, but we already have enough respondents like you” to a participant? It sounds like “You are worth nothing to us. Get lost.” Instead I would say “Thank you for your time. If selected, we will get in touch with you.” That is more polite, I think.

Product Design Digest, April 2015 | Yuri Vetrov on Interfaces (May 12, 2015 at 11:55 am, translated from Russian): […] A note from Gabriel Biller and Lada Gorlenko on conducting low-budget… What to do when your respondent base is most likely low quality. […]

Surveys and the quantitative edge | masters digital (May 12, 2015 at 6:57 pm): […] There are tools—some elaborate, some quite simple—that can help you conduct quantitative research on a shoestring budget. See these tips and tricks for getting the most out of surveys. […]

Nancy (June 11, 2015 at 8:39 pm): Great tip about making the labels on L…

Mentoring as an Investment
by Chris Poteet
May 5th, 2015

Have you ever asked for an update on a project you’d invested a great deal of time and energy in, only to hear “they have completely redesigned it since then”? I did, and it left me with a very empty feeling. After some wallowing, I realized I needed to discover a new way to think about the way I work and what really matters in my consulting career. My answer: the mark of a truly good consultant is investing in people. Focusing on investing in people will ensure that your work will still continue to see results long after the application is redesigned, and that is change that matters in the long run. In the following article, I will give three areas in which we can focus our efforts: mentoring, client education, and our own team members. I hope that the reflection will help us all be better consultants and make better investments.

Client mentoring as an investment

There are often opportunities for us to invest in “client side” people, but they might not be readily apparent. I will give two examples of this.

On a recent project, I was the designer paired with a recently hired UX director, who was still a little bewildered by the new gig. When we talked, it became apparent that what he needed was someone to mentor him in an intentional way, because he was overwhelmed and feeling lost. I spent lunches with this gentleman talking about UX strategy and how my company had handled process definition, and I eventually worked on a project where I invited him to come do user research with me. Now mind you, this mentoring was not part of any statement of work. This was something I did because it was the right thing to do. It was an opportunity to make an investment much bigger than the project at hand—and to see someone blossom right before your eyes makes the time investment very much worthwhile. By the end of the client engagement, he was extremely thankful to have had someone invest time in him and point him in the right direction—which allowed him to lead the UX capability much better than before. It turned out to be the most satisfying work I had done in ages. Fortunately, both my company and the client were extremely appreciative of the time spent with their people.

A second example is on the implementation side. I was the interface developer for an intranet project, and the client had a talented UI person who had questions about the CMS and the approach we were using. To complicate the situation, we came to the project after they had fired another firm for an inability to deliver. This woman had been given poor advice by the previous vendor, and she naturally had lots of questions about how to do the implementation the right way. It is easy to become exhausted with external consultants, and I wanted to ensure that she and their team quickly came to trust us to deliver.
I set up biweekly meetings with her throughout the four-month implementation. Before we even started development, she and I mapped out the scope of the work and talked through all kinds of details, down to minutiae like CSS class names. The regular meetings gave her a chance to see and give input throughout the entire process. Another advantage of this approach, beyond those that accrue through collaboration, was that there was no big knowledge handoff at the end; it was built into the project from the beginning. As companies become leaner, we get a double benefit from increased collaboration and knowledge sharing: first, we spend far less time writing copious documentation because we have been sharing all along, and second, the solution has a much greater chance of long-term success because of the time invested in the individuals who take over after we leave.

Client education as an investment
We can also educate clients even if they are not themselves in the UX world. A big intranet project I worked on was scoped to be responsive, but it became apparent early on that the design handed to us was not done in the best way for my company to implement: it was not designed mobile-first. I had two options: I could let it go, do my work, and move on; or I could take the time to reach out to the client and educate them. I knew the project was already moving forward, but I could lay a foundation for this client’s future success. One thing to gauge is whether the client is even interested in such a relationship. Sometimes, despite your best intentions, clients are only interested in timelines and not in spending lots of billable time learning or re-learning. And I had to ask: what did I value? Was I only in it for the money, or could I help enact lasting change and provide real value? This client was not himself a UX practitioner, and he was looking for an expert he could trust. Working with non-UX people is a challenge, because you have to sell them a bit harder on why doing things the right way is important, even when they do not understand the implications or appreciate the time it takes. I pulled him aside in a couple of private meetings and talked through everything with him, from defining responsive design correctly to understanding mobile-first design, and even things like home page carousel use and abuse. In the end, it not only furthered our relationship, it earned me a high level of trust and rapport with the client. This particular client was open to the discussion and was even excited about extending the relationship, but if you have a hesitant client, don’t give up on them. Show them the quantifiable benefits of increased collaboration by pointing to your past experience, or explain that the time they spend learning with you will pay dividends in the future. Remember that even if things aren’t changeable in the short term, you can make investments in people for the next project and the longer term.

Teams as an investment
There is one last, important group we can’t forget: our co-workers. These are the people who become like family in ways our clients never will. Project after project, these are the people we are tasked to work with, and in some ways they are the most strategic people to invest in, yet sometimes the most difficult, because we can so easily overlook them.
During my firm’s adoption of the CSS preprocessor Sass, my team was mostly junior people who were looking for leadership in all kinds of areas. This time, I was given the opportunity to help others use this powerful tool. I took the lead in understanding its implications and how to use it in our teams, and then I spent concentrated time with each member of the UX team helping them understand the technology from both a programmatic and a process perspective. Taking advantage of opportunities like these furthers your relationship with your team members and demonstrates that you care deeply about their professional development. To this day, those team members reach out to me with questions and best practices because of the trust gained through leading in this way. It is amazing how much doing this, even on a detail like a CSS preprocessor, can help your team members. We all have different motivations for doing the work that we do, and I imagine that for most of us money, as good as money is, is not the primary factor. Instead, very talented people tend to thrive on being an expert, enacting change, and leading others. True leaders are not given opportunities to lead; they find those opportunities. Leading inside your organization will make you as close to irreplaceable as you can get.

Posted in Learning From Others, Workplace and Career | 2 Comments

2 Comments

Recent Publications
May 22, 2015 at 8:10 pm
[…] Read: Mentoring as an investment […]

William Singh
September 6, 2015 at 3:44 pm
This article boils down to saying that a lead consultant should mentor clients and team members. I don’t really see anything new or insightful about this at all. The only difference is that the term “investment” is applied as a veneer to describe basic consulting skills 101. Worse still is that the author then claims that these fundamental skills “will ENSURE your work will continue to see results.” This is pure hyperbole. Conveniently, this article only describes successful examples of collaboration and knowledge-sharing. In the real world of consulting, like in finance, there is a big difference between SPECULATION and INVESTING. A more expository discussion would have included *speculating* on people – taking chances, sharing information and best practices – *without* necessarily receiving anything in return. This happens every day, with all kinds of people and projects. You share ideas with a client and they take them to another agency. You train an employee and they leave for another company. So I am really surprised that any experienced consultant would claim that mentoring and education are “investments” that will “ensure” positive results. Really a one-sided, happy-path article in my opinion, sorry.

Your Guide to Online Research and Testing Tools
by Bartosz Mozyrko
May 12th, 2015
20 Comments

The success of every business depends on how well it meets its customers’ needs. To do that, it is important to optimize your offer, your website, and your selling methods so that your customers are satisfied. The fields of online marketing, conversion rate optimization, and user experience design have a wide range of online tools that can guide you through this process smoothly. Many companies use only the one or two tools they are familiar with, but that might not be enough to gather the data necessary for improvement.
To help you better understand when each tool is valuable, I created a framework that can help in your assessment. Once you broaden your horizons, it will be easier to choose a set of tools aligned with your business’s needs. The tools can be roughly divided into three basic categories:

User testing: evaluate a product by testing it with users who take the study simultaneously, in their natural context, and on their own devices.
Customer feedback: capture customers’ expectations, preferences, and aversions directly from a website.
Web analytics: provide detailed statistics about a website’s traffic, traffic sources, and measurement of conversions and sales.

To better understand when to use which tool, it is helpful to use the following criteria:

What people say versus what people do… and what they say they do
Why versus how much
Existing classifications of online tools

Example services are included in the latter part of the article to help you get started.

What people say versus what people do… and what they say they do
What people say, what people do, and what they say they do are three entirely different things. People often lack the awareness or knowledge needed to provide accurate information. Anyone with experience in user research or conversion rate optimization who has spent time trying to understand users has seen firsthand that, more often than not, user statements do not match the behavioral data. People are not always able to fully articulate why they did the thing they just did. That is why it is often good to compare information about opinions with information about behavior; the mix provides better insights. You can learn what people do by studying your website from your users’ perspective and drawing conclusions from observations of their behavior, such as click tracking or user session recording. However, that approach assumes you are testing certain theories about people’s behavior. There is a degree of uncertainty, and to validate the data you’ve gathered, you will sometimes have to go one step further and simply ask your users, which lets you see the whole picture. You learn what people say by reaching out to your target group directly and asking them questions about your business.

Why versus how much
Some tools are better suited to answering questions about why something happens or how to fix a problem, whereas tools like web analytics do a much better job of answering how many and how much questions. Google Analytics tells you what percentage of people clicked through from one page to another, but it doesn’t tell you why they did or did not do so. Knowing these differences helps you prioritize certain sets of tools and use them to fix the issues with the biggest impact on your business.

[Chart: how different dimensions affect the types of questions each method can answer. Source: http://www.nngroup.com/articles/which-ux-research-methods/]

Choosing the right tool: infographics
There are a lot of tools out there these days that do everything from testing information architecture to remote observation. With more coming out every day, it can be hard to pick the one that will give you the best results for your specific purpose. To alleviate some of the confusion, many experts have tried to classify them according to different criteria.
I decided to include some examples below for your convenience.

Which remote tool should I use? by Stuff On My Wall
[A flow chart for evaluating remote tool choices. Source: http://remoteresear.ch/moderated-research/]

Choosing a remote user experience research tool by Nate Bolt
[A chart showing evaluation criteria for remote research tools. Source: http://remoteresear.ch/categories/]

The five categories of remote UX tools by Nate Bolt
[Five categories of user research tools. Source: http://remoteresear.ch/categories/]

Four quadrants of the usability testing tools matrix by Craig Tomlin
[Usability testing tools arranged in a quadrant chart. Source: http://www.usefulusability.com/14-usability-testing-tools-matrix-and-comprehensive-reviews/]

Tool examples
The tools I list below are best suited to web services. The world of mobile applications and services is too vast to be skimmed over and has enough material for an entirely separate article. The selection has been narrowed down so as not to overwhelm you with choice, so worry not.

User testing
User testing and research are vital to creating successful websites, products, and services. Using one of the many existing tools and services for user testing is a lot easier nowadays than it used to be. The important thing is to find a tool or service that works for your website and then use it to gather real-world data on what works and what does not.

Survey: the most basic form of what people say. Online surveys are often used by companies to gain a better understanding of their customers’ motives and opinions. You can ask respondents to answer in any way they choose or to select from a limited number of predetermined responses. Feedback straight from your customers is perhaps best used for determining their pain points or uncovering their possible needs (or future trends). However, remember that people do not always communicate exactly what issue they are facing. Be like Henry Ford: do not give people faster horses when they want quicker transportation; invent a car.
Examples: Typeform, SurveyGizmo

Card sorting: asks your users to categorize and sort provided items in the way that is most logical to them, or to create their own categories for the items; these two methods are called closed and open card sorting, respectively. The knowledge it gives you about your users’ mental models will help you rework the information architecture of your site. If you aim to gather information that sits between “what they do” and “what they say,” sorting is your best bet. Be sure to conduct this study with a larger group: an individual’s mental model might make sense to them but not be intuitive for others. Focus on the groupings that participants agree on, as those are likely the most representative categories; a small sketch of finding that agreement follows.
Examples: ConceptCodify, usabiliTEST
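To see what that “agreement” looks like in practice, here is a minimal sketch that counts how often each pair of cards lands in the same group across the participants of an open sort. The input format, one dictionary per participant mapping cards to the group labels they invented, is a hypothetical simplification of what card-sorting tools actually export.

```python
# Count pairwise card co-occurrence across open card-sort participants.
from itertools import combinations
from collections import defaultdict

# participant -> {card: group label the participant created} (hypothetical data)
results = {
    "p1": {"shipping": "orders", "returns": "orders", "invoices": "billing"},
    "p2": {"shipping": "delivery", "returns": "delivery", "invoices": "billing"},
    "p3": {"shipping": "orders", "returns": "billing", "invoices": "billing"},
}

pair_counts = defaultdict(int)
for sort in results.values():
    for a, b in combinations(sorted(sort), 2):
        if sort[a] == sort[b]:           # the two cards landed in one group
            pair_counts[(a, b)] += 1

n = len(results)
for (a, b), c in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: grouped together by {c}/{n} participants")
```

Pairs grouped together by most participants are the strongest candidates for a shared category in your information architecture.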
Click testing/automated static/design surveys: lets you test screenshots of your design, so you can obtain detailed information about your users’ expectations of and reactions to a website at various stages of development. This enters the territory of gathering data about your users’ actions, so you learn about what they do. The study is usually conducted by asking a direct question, such as “Click on the button that will lead to sign-up.” Remember, however, that click testing alone is not sufficient; you need other tools that cover the “why” in order to fully understand.
Examples: Usaura, Verify App

5-second testing/first-impression test: because your testers have only five seconds to view a presented image, they are under time pressure and must answer questions relying on almost subconscious information. This lets you improve your landing pages and calls to action, as users mostly focus on the most eye-catching elements.
Examples: UsabilityHub, Optimal Workshop Chalkmark

Diary studies: an extensive record of the thoughts, feelings, and actions of users who belong to a studied target market, with all events recorded by the participants at the moment they occur. This provides firsthand insight into your customers’ needs by asking them directly about their experiences. It operates in a similar fashion to surveys, though, so remember that participants do not always clearly convey what they mean.
Examples: FocusVision Revelation, Blogger

Moderated user studies/remote usability testing: the participants are located in their natural environment, so their experiences are more genuine, and the tools and software remove the need for participants and facilitators to be in the same physical location. Putting the study in the context of a natural or neutral environment (for whatever group you are studying) gives you insight into unmodified behaviors. The study is also a lot cheaper than other versions.
Examples: GoToMeeting, Skype

Self-moderated testing: the participants complete the tasks independently. Afterwards you receive videos of their test sessions, along with a report on what problems your users faced and what to do to fix them. The services offering this type of testing usually return responses quickly, so if you are in dire need of feedback, this is one possibility.
Examples: Uxeria, UserTesting

Automated live testing/remote scenario testing: very similar to remote testing, but the amount of information provided is much more extensive and organized. You get effectiveness ratios (success, error, abandonment, and timeout), efficiency ratios (time on task and number of clicks), and survey comments as results; a sketch of how these ratios are computed follows.
Examples: UX Suite, Loop11
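For illustration, here is a minimal sketch of the arithmetic behind those ratios, computed from per-attempt task records. The tools above report these figures for you; the record format shown is an assumption for the example, not any vendor’s actual export.

```python
# Effectiveness and efficiency ratios from hypothetical task-attempt records.
from statistics import mean

attempts = [
    {"outcome": "success", "seconds": 42,  "clicks": 6},
    {"outcome": "error",   "seconds": 73,  "clicks": 11},
    {"outcome": "success", "seconds": 38,  "clicks": 5},
    {"outcome": "abandon", "seconds": 15,  "clicks": 2},
    {"outcome": "timeout", "seconds": 120, "clicks": 9},
]

n = len(attempts)
for outcome in ("success", "error", "abandon", "timeout"):
    rate = sum(a["outcome"] == outcome for a in attempts) / n
    print(f"{outcome:8s} {rate:.0%}")                 # effectiveness ratios

done = [a for a in attempts if a["outcome"] == "success"]
print("time on task:", mean(a["seconds"] for a in done), "s")  # efficiency
print("clicks per success:", mean(a["clicks"] for a in done))
```

A real study would compute these per task and per participant segment, which is exactly the breakdown the automated services provide.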
Tree testing/card-based classification: a technique that removes every distracting element of the website (ads, themes, etc.) and focuses only on a simplified text version of its structure. Through this you can evaluate the clarity of your navigation scheme and pinpoint the choke points that cause problems for users. It is a good method for testing prototypes, or for when you suspect the basic framework of your website is at fault.
Examples: UserZoom, Optimal Workshop Treejack

Remote eye tracking/online eye tracking/visual attention tracking: shows you where people focus their attention on your landing pages, layouts, and branding materials. It can tell you whether users are focused on the page, whether they are reading or just scanning, how engaged they are, and what their movement patterns look like. However, it cannot tell you for certain whether users consciously see something, or why they look at a given part. This can be remedied, for example, with voiceovers, where participants tell you right away what they are experiencing.

a) Simulated: creates measurement reports that predict what a real person would most likely look at.
Examples: VAS, EyeQuant

b) Behavioral: finds out whether people actually notice conversion-oriented elements of the page and how much attention they pay to them.
Examples: Attensee, EyeTrackShop

These are the standout features of the listed services. Nowadays, however, there is a trend toward combining various tools so they can be offered by a single vendor. If you find more than one tool valuable for your business, you can use services such as UsabilityTools or UserZoom.

Customer feedback
Successful business owners know that it’s crucial to take the time to obtain customer feedback. Understanding what your customers think about your products and services will not only help you improve quality, but will also give you insight into what new products and services your customers want. Knowing what your customers think you’re doing right or wrong also lets you make smart decisions about where to focus your energy.

Live chats: an easy-to-understand way of communicating through the website interface in real time. Live chat enables you to provide all the answers your customers could want, and by analyzing their questions and frequently raised issues you can decide what needs improvement. Live chats usually focus on solving an immediate, smaller problem, and the plus is that your client feels acknowledged right away.
Examples: LiveChat, LivePerson

Insight surveys: targeted website surveys that help you understand your customers. You can create targeted surveys and prompts by keying them to variables such as time on page, number of visits, the referring search term, or your own internal data; you can even target custom variables such as the number of items in a shopping cart (a sketch of this targeting logic follows). They are very specific, but they operate on the same principle as general surveys, so remember the risk that participants won’t always be able to give you satisfactory answers.
Examples: Survicate, Qualaroo
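As a rough illustration of how such targeting works, here is a minimal sketch that decides whether to show a survey prompt based on visitor variables. The field names and thresholds are hypothetical; real tools evaluate rules like these in the visitor’s browser before displaying the prompt.

```python
# Hypothetical targeting rules for showing an insight-survey prompt.
def should_show_survey(visitor: dict) -> bool:
    rules = [
        visitor.get("seconds_on_page", 0) >= 30,   # dwell time
        visitor.get("visit_count", 0) >= 3,        # returning visitor
        visitor.get("cart_items", 0) >= 1,         # custom variable
    ]
    return all(rules)

print(should_show_survey({"seconds_on_page": 45, "visit_count": 4, "cart_items": 2}))  # True
print(should_show_survey({"seconds_on_page": 10, "visit_count": 4, "cart_items": 2}))  # False
```

Requiring all rules to pass keeps the prompt rare and well targeted; switching all() to any() would trade precision for reach.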
Feedback forms: a simple website application for receiving feedback from your visitors. You create a customized form, copy and paste the code into your site’s HTML, and start getting feedback. This is a basic tool for collecting, receiving, and organizing feedback from your customers. If you want to know the general opinion about your website and the experiences of your visitors, and you want participation to be completely voluntary, forms are a great option.
Examples: Feedbackify, Kampyle

Feedback forums: users enter a forum where they can propose and vote on items that need to change or be discussed. That information allows you to prioritize issues and decide what needs to be fixed first. The forums can also be used for communicating with users, for example to announce that you have introduced improvements to the product. Remember, however, that even the most popular issues might actually matter least for improving your service or website; it is up to you to judge.
Examples: UserVoice, Get Satisfaction

Online customer communities: you engage your customers directly, peer to peer, and offer problem solving and feedback. These web-based gathering places for customers, experts, and partners let people discuss problems, post reviews, brainstorm new product ideas, and engage with one another.
Examples: Socious, Lithium

There are also platforms that merge several of these functions, such as UserEcho or Freshdesk, which are extremely popular with clients who prefer to focus on a single service with many features.

Website analytics
Just because analytics provide you with additional data about your site doesn’t mean that data is automatically valuable to your business. You want to find the errors and holes in your website and fill them with functionality for your users and customers, using the information gathered to guide future decisions and improve your service.

Web analytics: all movement of users through the site is recorded and stored. Their privacy is protected, as the data is used only for optimization and is not personally identifiable. The data can later be used to evaluate and improve your service and website in pursuit of goals such as increasing visitors or sales.
Examples: Mint, Mixpanel, KISSmetrics, Woopra, Google Analytics

In-page web analytics: these differ from traditional web analytics in that they focus on users’ movement within a page rather than between pages. They are generally used to understand behavior for the purposes of optimizing a website’s usability and conversion.

a) Click tracking: determines and records what users click while browsing the website, drawing a map of their movements that lets you follow your user’s journey step by step. If there is a problem with the website, this is one way to find out where that problem could have occurred.
Examples: Gemius Heatmap, CrazyEgg

b) Visitor recording/user session replays: every action and event is recorded as a video.
Examples: Inspectlet, FullStory

c) Form testing: lets you evaluate a web form and identify areas that need improvement, for example which fields make your visitors leave the website before completing the form.
Examples: Formisimo, UsabilityTools Conversion Suite

As with the previous groups, there is also a considerable number of analytics Swiss army knives offering various tools in one place; examples include ClickTale, UsabilityTools, and MouseStats.

Conclusion
This is it: the finish line of this guide to online research tools. These tools are extremely valuable assets that can provide important and surprising data. The number of tools available is indeed overwhelming, which is why you need to weigh the factors discussed above: what you want to learn, why, and how much. That way you will reach a conclusion about what exactly you need to test in order to improve your service or obtain the information you require. Knowing what you want to do will help you narrow your choices and, as a result, choose the right tool. Hopefully, what you’ve read will help you choose the best usability tools for your testing, and you will end up an expert in your research sessions.

Posted in Discovery, Research, and Testing, Process and Methods, Software and Tools | 20 Comments

20 Comments

John Weidner
May 12, 2015 at 2:28 pm
Nice collection of tools. Here’s another one to consider adding – https://userbob.com It’s an un-moderated remote user testing tool.

Owain