Preventing User Errors: Avoiding Conscious Mistakes
by PAGE LAUBHEIMER on September 7, 2015
Topics: Heuristic Evaluation, Interaction Design

Summary: Thoughtful design is transparent and easy to understand, provides a preview, and helps users to easily correct their errors.

To err is human, and errors happen when people engage with user interfaces. According to Don Norman, there are two categories of user errors: slips and mistakes. Slips occur when a user is on autopilot and takes the wrong actions in service of a reasonable goal. We discuss slips and slip prevention in detail in the first article in this series. In this article we focus on mistakes.

Mistakes occur when a user has developed a mental model of the interface that isn't correct, and forms a goal that doesn't suit the situation well. For example, many online children's games start with a short video tutorial or with a video advertisement for another game; in our user testing with children, we noticed that, when the video looks too much like a real game, kids are tempted to interact with it, thinking that they can already start playing. In this situation, the users form and execute an inappropriate goal, largely because they incorrectly interpret what they see on the website (namely, they think that the video is the real game). Good design should help prevent such mismatches between the user's expectations and the interface.

The 2 Gulfs

When working with a system like a website or app, users start with a goal of some kind and, based on their mental model of the system, they form an action plan to accomplish that goal. Then they take action and look to verify that their action produced the desired results. In his book The Design of Everyday Things, Don Norman refers to this process as bridging the Gulf of Execution ("How do I work with this tool to accomplish my goal?") and the Gulf of Evaluation ("Did this work the way I wanted it to?").

A lot of user mistakes (but not slips) happen when users do not get enough help in bridging these two gulfs, and the designers' mental model of how the system should work does not match the users'. In those situations, users either form an action plan that is incorrect, or they do not understand how the state of the system changed as a result of their actions. While preventing slips is often a simple matter of validation and enforcing constraints, preventing mistakes involves understanding users' mental models and expectations, and making your design match them. Do not make the error of thinking that users will eventually learn your design's model; while that may be the case in rare situations where users are forced to use the system regularly, with most consumer-facing apps and websites users just navigate to a different site instead of bothering to learn a tricky one.

Gather User Data

Discovering the specific gaps between users' mental models and designers' mental models is critical for avoiding mistakes, and requires gathering user data. There is a wealth of user research methods to suit a variety of circumstances, so it is important to select a methodology that shows you why users make mistakes and what their expectations are. Methods such as contextual inquiry, field studies, and ethnographic studies are well suited for figuring out users' mental models and expectations at an early stage, when you're starting a new design.
Once you do have a system (or at least a prototype), you can use qualitative usability testing to detect gaps between designers' mental models and users' expectations.

Follow Design Conventions

Using standard design conventions helps users bridge both the Gulf of Evaluation and the Gulf of Execution, and understand what actions they can take. This is reinforced by Jakob's Law, which states that "users spend most of their time on other websites." Each and every user who interacts with your website or application has been trained by thousands of other designs to expect common interactive elements to look and work a certain way, and error-prone conditions can arise when your website deviates from those conventions.

Southwest's mobile site uses the convention of greying out dates in the past, letting you know you can't select them when booking a flight. Unfortunately, it also uses the same design for the next month's dates, implying unavailability.

Communicate Affordances

Besides using conventions that users recognize from previous experience, another way of making controls understandable (and thus helping users bridge the Gulf of Execution) is having the design communicate how it can be used. For example, users are accustomed to clickable buttons looking like they have a subtle amount of shadow on the outside; this effect makes a button look like it is rising up out of the page and can be pushed. Conversely, form fields are also rectangular, but have a small amount of shadow on the inside of the shape, to indicate that they're empty and can be filled. The attribute of a design that indicates how it can be used is often referred to as the object's signifier. The affordance itself is the way in which the object can be interacted with (buttons can be pushed, form fields can have typed input added), and the visual cues that communicate this to the user are the signifier of the affordance. If there isn't a clear signifier that communicates the affordance, users may not understand how to use a control, and may make mistakes.

Polarr is a popular photo editor on iOS. The right-hand editing controls (Temp, Tint, etc.) require that you tap on the box and then swipe left or right to change the numeric value of that parameter. However, the controls don't indicate how users should interact with them, so novices are likely to accidentally set those controls to the wrong values several times before learning how to interact with them properly. An additional interaction difficulty is present: since the controls are on the far right side of the display, you can easily decrease the parameters by swiping left, but you quickly run out of horizontal space to swipe right and increase the value.

Preview Results

Sometimes, users don't realize they're about to trigger an action whose results are wide in scope and difficult to verify. Users may well wish to revise their goals once they have had a chance to compare the effect of the action with their goal; preview features provide an opportunity to bridge the Gulf of Evaluation without making a mistake. A good example is rendering special effects in video-editing software, where the processing that the system does in the background may take 5 or 10 minutes, and the computer will be mostly unresponsive while it's working.
In this case, even if users haven't permanently lost any work, if the result isn't what they're looking for, they have lost quite a bit of time, and possibly also patience. Wherever possible, offer a preview state that users can examine to make sure that they will get what they want. This helps avoid time-consuming mistakes before they can be made.

In iOS 8, there is an accessibility option that allows users with low vision to zoom the display so that icons and text are larger. Applying zoom requires restarting the phone, which is a heavyweight action that will take a while, so iOS shows a preview of what things will look like before you commit to applying this change. This helpful preview allows you to evaluate whether or not your goal really is to zoom the display.

Preventing Both Mistakes and Slips

Some error-prevention strategies work for both slips and mistakes. The following are good general guidelines for reducing the likelihood (and the severity) of all types of errors.

Remove Memory Burdens

Whenever users need to keep a lot of information in their short-term memory while working on a task, they are vulnerable to slips in which they repeat steps or fail to complete the task. Memory lapses can also result in mistakes, where users forget earlier decisions they've already made and repeat the process with different outcomes. A good strategy for preventing both of these types of errors is to remove burdens on users' memory. Whenever possible, remove conditions that require users to keep information in their own memory as they move from one step to another in complex, multistep procedures. Instead, strive to display the contextual information that users need to complete a task. Remember, users are often distracted, multitasking, or otherwise not fully focused on the website or app that they're using. A good approach is to imagine that your users could be interrupted by a phone call after every step in a multistep process. You want to show all of the information users need to readily resume their tasks after having been interrupted for several minutes.

Hipmunk provides a quick glance at the contextual information needed to resume the process of choosing a flight, even after an interruption. At this second step in the booking process, it clearly shows the dates of travel, the airports in question, the fact that the lowest-priced fare for departure was chosen, and that the user is required to pick a return flight. Even users who had been distracted for quite some time could easily resume this process without accidentally deviating from their original requirements and plan for this flight, or attempting to repeat the steps already completed.

Confirm Before Destructive Actions

Designers typically focus on user tasks related to creation, but deleting also has to be straightforward. Remember, when users delete an item, they destroy something that took work to create. Before you finish removing the object of that hard work, make absolutely sure that the user really meant to delete it, by showing a confirmation dialog. This can be an effective, simple, and familiar way to give the user a final chance to stop and double-check whether they really want to delete all those vacation photos, for example.
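As a rough sketch of this pattern (not any product's actual implementation), the browser TypeScript below builds a dialog whose message states the exact scope of the deletion, and whose confirming button is labeled with the specific verb rather than a generic "Confirm". The function and handler names are invented for the example.

```typescript
// Minimal sketch of a destructive-action confirmation dialog.
// Key points from the article: state the scope of the action, and label
// the confirming button with the verb ("Delete"), not a generic "Confirm".

function confirmDestructive(message: string, confirmLabel: string): Promise<boolean> {
  return new Promise((resolve) => {
    const dialog = document.createElement("dialog");
    const text = document.createElement("p");
    text.textContent = message;

    const cancel = document.createElement("button");
    cancel.textContent = "Cancel";
    cancel.onclick = () => { dialog.close(); dialog.remove(); resolve(false); };

    const confirmButton = document.createElement("button");
    confirmButton.textContent = confirmLabel; // e.g. "Delete", never "Confirm"
    confirmButton.onclick = () => { dialog.close(); dialog.remove(); resolve(true); };

    dialog.append(text, cancel, confirmButton);
    document.body.append(dialog);
    dialog.showModal();
  });
}

// Usage: the scope ("24 photos") is spelled out, so even a rushed click is informed.
async function onDeleteClicked(selected: string[]): Promise<void> {
  const ok = await confirmDestructive(
    `Delete ${selected.length} photos? This cannot be undone.`,
    "Delete",
  );
  if (ok) {
    // ...perform the actual deletion here
  }
}
```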
Apple's new Photos app in OS X Yosemite uses a conventional dialog box to make sure that the user really intended to delete these photos of a recent hiking trip, and indicates that the scope of the action is the highlighted 24 photos. Even better, the button to confirm the delete action is clearly labeled Delete, rather than the generic Confirm.

It's important to use confirmation dialogs carefully, however, since they interrupt users' workflow and inherently slow them down. If a confirmation dialog asks "Do you really want to do that?" after every decision, many users won't spend the time to double-check whether they made an error before instinctively clicking the highlighted Confirm button. Counterintuitively, a design intended to prevent errors can actually increase them, as users start rushing to counteract the inefficiency of constantly confirming. Like in Aesop's famous fable "The Boy Who Cried Wolf," a UI can become "The Computer That Cried Confirm" a few times too many; in both cases, people will have stopped paying attention to the false alarms by the time there's something important to cry about. Don't use confirmation dialogs as the sole error-prevention method; apply them sparingly, together with the other techniques discussed in this article, to maximize their usefulness and limit their inefficiency.

Support Undo

Another primary principle of preventing user errors is to acknowledge that people will make mistakes and slips from time to time, and to provide a safety net that makes these errors less costly. Nearly everyone has experienced the utterly horrifying moment of realizing that you just accidentally deleted an entire folder or directory of important documents, when you meant to delete only one file. Providing the ability to undo the most recent action helps users feel more secure and more confident about experimenting with unfamiliar features, since they know that a mistake is low cost and can be easily fixed.

Gmail offers a nice, contextual Undo feature for destructive actions, like accidentally deleting 92 emails. This feature proved so useful that Gmail has now also made it available when sending emails, allowing you to stop an email from being delivered for up to 30 seconds after hitting Send.

Warn Before Errors Are Made

Presenting subtle, contextual error warnings while a user is actively making an error can help them quickly correct it. For example, when users are typing a review into an input box on an ecommerce store, don't wait until after they have hit Submit to show an error message that the review is 35 characters too long; let them know while they're typing those extra 35 characters (or better yet, warn them as they get close to the limit).

Twitter famously has a strict character limit for Tweets, and warns users before they exceed that limit with a remaining-character count. Once a Tweet is longer than the limit, it shows a negative counter, highlights the excess characters, and deactivates the Tweet button, which lets users know exactly what they need to do to fix their mistake.
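The mechanics behind this kind of live warning are simple. Below is a minimal browser-TypeScript sketch; the element IDs, the 280-character limit, and the 20-character warning threshold are all invented for illustration, and the markup (a textarea, a counter element, and a submit button) is assumed to exist.

```typescript
// Minimal sketch: warn while the user types, instead of after submission.
const LIMIT = 280; // illustrative limit, not any product's actual value
const input = document.querySelector<HTMLTextAreaElement>("#review")!;
const counter = document.querySelector<HTMLSpanElement>("#counter")!;
const submit = document.querySelector<HTMLButtonElement>("#submit")!;

input.addEventListener("input", () => {
  const remaining = LIMIT - input.value.length;
  counter.textContent = String(remaining);              // goes negative when over
  counter.classList.toggle("warning", remaining < 20);  // highlight as the limit nears
  submit.disabled = remaining < 0;                      // block over-long submissions
});
```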
Summary

Even though users will always make some mistakes when using software, it's possible to reduce overall errors by designing with the user's experience in mind. Prevent mistakes by helping the user build a good mental model of your interface. Use design patterns that communicate how they work, encourage users to double-check their work (especially before deleting), and warn before mistakes are made. These simple guidelines can lower the rate of user errors and ultimately improve usability.

Reference: Don Norman, The Design of Everyday Things, Basic Books, 2013.

Test Paper Prototypes to Save Time and Money: The Mozilla Case Study
by SUSAN FARRELL on August 30, 2015
Topics: Prototyping, User Testing

Summary: Low-fidelity user testing of several iterations of Mozilla's Support website quickly showed which user-interface elements worked best for Firefox users.

We often advocate using a parallel and iterative design process to explore design diversity, because it produces much better user experiences. A cheap way of implementing this type of design process is rapid paper prototyping. Paper prototyping is not only efficient for progressing designs quickly and allowing designers to find their best-of-breed with minimal investment, but it is also fast, fun, and easy to learn. In this article we show how Mozilla used paper prototyping, as well as user research and data mining, to quickly advance the UX-focused redesign of a major part of its website. (A previous article documented how this redesign had a high return on investment.) If anybody says that design iterations will break your ship schedule or that user testing is too much bother, point them to this article: some of the prototypes progressed through 7 versions during 2 weeks. Testing with users before even breaking out the HTML editor was cheap, and it showed which alternative designs worked best.

The Iterations

One of the central goals of Mozilla's redesign effort was to improve information discoverability and findability, so that users could find the information they needed quickly. A key measure of success was reducing the number of questions in its support forums. Product-help landing pages were the top entry points for the support website, because they were accessed via the Help menu in the products. As the most-viewed page on the support website, the Firefox Help landing page pointed to a lot of useful information, but, with this design, too many people ended up in the forum asking questions. More than 30 articles were linked from the homepage, but when someone wanted information on a problem not listed, searching was the only way to find it. Instead, users needed to be able to choose a path related to their issues and find the few articles applicable to them.

The Firefox Help landing page before the redesign.

Although there is no single answer to the question of flat vs. deep hierarchies, our many years of usability research indicate that, with too many choices, people easily get overwhelmed. When people click the right thing first, they are almost 3 times as likely to succeed at their task. With that knowledge in mind, in the first iteration of the design we focused on limiting the number of choices. The new design allowed people to start with either their task or their product or service, and it offered the 5 most-wanted links in a box in the bottom left corner of the page. These distinct task categories (getting started, installing, protecting, customizing, and repairing) allowed people to quickly find what they needed, or to determine that the information didn't exist.
The first version of the paper prototype for the Mozilla Support homepage: users could start with a task (1), a product or service (2), or choose a hot topic (3).

The designer made the prototypes using OmniGraffle, and we printed them onto tabloid paper and cut them to size. Because there was no code to change, this prototyping method allowed us to make design changes quickly during testing. Firefox users helped progress the designs rapidly toward better usability by allowing us to watch them try to find answers to top questions. Where they got stuck or confused, we redesigned. In this early design stage, the intent was not to focus on graphical or layout issues, but rather to find out which choices we needed to present on each page and to test comprehension of labels. Any of these elements might have been ultimately expressed using other interaction widgets, such as menus or accordions.

In a later iteration of the design, shown below, task groupings for the help material were moved under the products, on the next layer down in the IA, in part because not all tasks were available for all products and services, and that order of layering helped manage that necessary difference gracefully. To avoid overwhelming and distracting users, we also used progressive disclosure: the different Mozilla products were now hidden under a collapsed accordion.

A later-iteration design for the Support homepage: people could choose a task (2) or expand the software row (1) to choose a product or service first. When someone clicked the question in the middle, we showed them another piece of paper like this, but with the middle section expanded (compare with the previous screenshot). In this iteration, users loved the big icons (2), but they were confused by the wording of some of the choices (3): "Participate: How you can help others" (too general) and "Feedback: Give us your suggestions via Firefox input" (too specific). We tested several other phrases until we found the wording that worked best.

A note about writing for the web: throughout the research, we wanted to discover how much (actually, how little) information needed to be shown for people to find the most important things easily, because one of the fundamental principles of writing for the screen is that less is more. Reading onscreen is more difficult than reading on paper, and low literacy is a challenge as well. Because people tend to scan at first rather than read right away, fewer words often come across as more informative online, and concise pages are more usable. Also, because Mozilla website visitors speak all the languages of the world, we wanted to make word translation and concept localization as straightforward as we could.

After the paper-prototype testing sessions ended, the designer made the next version in HTML. This design used strong grouping and background colors to differentiate among activity and information types. It was still a bit too complex to scale gracefully, however, so it was simplified before implementation as products, services, and tasks were added.

Screen Readers on Touchscreen Devices
by KATIE SHERWIN on August 30, 2015
Topics: Accessibility, Mobile & Tablet

Summary: People who are blind or have low vision must rely on their memory and on a rich vocabulary of gestures to interact with touchscreen phones and tablets. Designers should strive to minimize the cognitive load for users of screen readers.
A first reaction might be: if users can't see the screen, how can they know where to touch? It might seem impossible to design touch-driven interfaces for vision-impaired users. And if it's impossible, then you don't need to try. Wrong: it is not impossible, and it definitely is worth the effort to make touchscreen designs accessible, particularly since touch is the interaction modality for all modern mobile devices.

Few designers, developers, and UX professionals have had the occasion to dive in and learn how people who are blind or have low vision use touchscreens. To be honest, I hadn't until I attended a conference on accessible technologies. That first day, in a hotel conference room, I was struck by the number of people attentively tapping, typing, and swiping on their touchscreen phones and tablets, with the screens turned off. They wore headphones as they listened to screen readers speak the text on their screens. Before the conference, I had tried out my phone's built-in screen reader a few times, mostly as a way to have long articles read aloud to me. But I'd never taken the plunge to use the web or applications with the screen off.

The iPhone's built-in screen reader, VoiceOver, is available from the Accessibility menu in the phone's General settings.

As a sighted person unaccustomed to using a screen reader, I quickly found it exhausting to rely only on spoken words to interact with my phone. I was impatient that I couldn't quickly glance down to see what else was on the screen. I had to wait for the reader to announce something interesting, or slide my finger across the screen and hope to hear the keywords I wanted. At one point, I accidentally activated my browser's settings menu and couldn't, for the life of me, figure out what had happened.

Having the screen off also tested my recall ability, especially when typing. Normally, I type fast and can even type without constantly looking at the keys. But, because there is no haptic feedback on onscreen keyboards, typing is much more difficult when you can't see the keys at all. For example, imagine you can't see the screen and you want to search for an extension cord online. You're in the search box and you're aiming for the letter X, but the screen reader tells you that your finger has landed on C. Do you move your finger left or right? Trick question: the correct answer is that you give up and use the dictation tool, because it will take you forever to type the phrase "extension cord".

Mobile Devices Are Convenient for Everyone

People who are blind or have low vision rely on touchscreen mobile devices for the same reasons sighted people do: portability and on-the-go convenience. Texting, sending quick emails, making phone calls, looking up tomorrow's weather, catching up on the latest news headlines, setting reminders and alerts for ourselves: we all appreciate quick access to these when we're on the go. People with low vision are no exception. Moreover, some applications designed specifically for people who have visual impairments help them identify colors, currency, labels, and even objects.

TapTapSee is an application that helps recognize objects in the real world. Users take a picture of an object with their device's camera, and the app identifies what the object is, speaking the description aloud.
In the screenshots above, it correctly identified a US $5 bill (left), and even specifically identified my "MacBook Pro and gray wireless Apple keyboard on brown wood table" (right). Pretty impressive.

Screen Readers and Gestures on Touchscreen Devices

For people who have low vision, using a touchscreen typically consists of listening to text read aloud by a screen reader and interacting with the elements on screen via a lexicon of multi-finger gestures. Screen readers are software programs that identify the elements displayed on a screen and repeat that information back to the user, via text-to-speech or braille output devices. While sighted people visually scan a page, people who have visual impairments use screen readers to identify text, links, images, headings, navigation elements, page regions, and so on.

Listening to a screen reader requires a significant amount of focus and significantly increases the user's cognitive load. You have to pay attention as the text is spoken, so that you can figure out what's on the page, what is interesting to you, and whether or not an element is actionable. Unlike a visual web page, a screen reader also presents information in strict sequential order: users must patiently listen to the description of the page until they come across something that is interesting to them; they cannot directly select the most promising element without first attending to the elements that precede it.

However, some amount of direct access is available. If users expect news headlines to be in the middle of the page, they can place a finger in that general area of the screen to have the voice reader skip the page elements preceding that position, thus saving the time of listening to the entire page. If users expect the shopping cart to be in the upper right corner, they can touch that part of the screen directly. If you miss something, you can't glance back; instead, you flick one finger across the screen until you hear what you missed.

Listening to a page being read aloud requires that users hold a lot of information in their short-term memory. Consider the task of listening to a waiter recite a long list of specials: you have to pay attention and remember all of them as you're deciding on your choice of entrée for the evening. The same happens when you're listening to a screen reader, only on a larger scale.

Modern mobile devices come with built-in screen readers: on Android devices, the text-to-speech program is called TalkBack, and on Apple iOS devices, it is VoiceOver. [Video: a 3:08 demo of VoiceOver on an iPhone running iOS 8.3.]

(You can try it yourself. Turn on your device's screen reader in the Accessibility section. For iPhones, go to Settings > General > Accessibility > VoiceOver. For Android, go to Settings > Accessibility. Give it a try for a few hours. Good luck. It takes a while to get the hang of it, but it will come.)

Browsing on touchscreen devices involves a range of gestures, many of which offer far more functionality than the tap and swipe gestures of the sighted world. To give you a better idea, here is a sample of some of the most common gestures for VoiceOver:

- Drag one finger over the screen to explore the interface and hear the screen reader speak what's under your finger.
- Flick two fingers down the screen to hear it read the page from the top down.
- Single-tap to bring a button or link into focus (so you know what it is); double-tap to activate the control.
- A 3-finger horizontal flick is the equivalent of a regular swipe.
- A 3-finger vertical flick scrolls the screen up or down.

As you can see, the vocabulary of gestures that users with low vision have to learn is quite wide. We know that gestures have low discoverability and learnability, yet for power users they represent the only way to navigate efficiently through a system largely based on sequential access.

Implications for Design

A lack of visual information is taxing on the user experience, because people cannot quickly glance around a page, scan a list of menu choices, or aim with certainty at a target. They cannot use visual cues to detect page hierarchy, groupings, relationships between content, or even the tone of an image. They must discern this information from the text that is spoken by the screen reader. Add to that all the gestures they have to learn in order to interact with a website or application. It's a lot of information to keep track of, and we haven't even mentioned the demands of understanding the content itself.

Designers and developers need to keep users with low vision in mind when creating interfaces and applications. (Of course, they should strive for inclusive design, which considers people with all types of impairments, including cognitive and motor impairments. But for this article, we're focusing on vision impairments.) Below are a few suggestions.

- Remember that usability issues for sighted users are amplified for users who are nonsighted. If the user experience is even remotely challenging for sighted users, it is going to be much more challenging for people who are blind. Very often, improving the interface for sighted people goes a long way toward improving the site for people with visual impairments. Focus on cutting out extraneous copy and functionality that add little benefit to users. Less text on a page means fewer words for sighted users to scan, and less text for screen-reader users to listen to. Reduce interaction cost where possible by reworking workflows so that people can accomplish tasks more efficiently.
- Invest in writing cleaner, more accessible code. Remember, users with vision impairments may not be able to perceive visual cues such as grouping and color schemes to determine relationships between elements or to assess hierarchy. Instead, they rely on cues from the code being read: heading levels (H1, H2, etc.) convey the main content sections, and text that ends with the identifier "link" or "button" tells users that an item is actionable. (A small sketch of what this means in practice follows this list.)
- Incorporate alternate text for images, describing anything that isn't apparent from the text on the page.
- Make pages easy to navigate with the keyboard only, and you'll address some of the most common usability issues for people using screen readers on mobile devices.
- Avoid creating complex gestures for the sake of being unique. With so many gestures already in use by screen readers, sites that create additional gestures (or hijack existing ones) create huge usability issues.
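To make the "cleaner, more accessible code" advice concrete, here is a small, hypothetical browser-TypeScript sketch that builds the same control two ways; the control label, image, and handler are invented for illustration.

```typescript
// Sketch: the same "Add to cart" control, built two ways.
// A screen reader announces the second version as "Add to cart, button";
// the first is announced as plain text, with no cue that it is actionable.

function addToCart(): void {
  // illustrative stub; a real handler would update the cart
}

// Less accessible: a click handler on a generic div carries no semantics,
// is not keyboard-focusable, and is skipped by "list all buttons" commands.
const fakeButton = document.createElement("div");
fakeButton.textContent = "Add to cart";
fakeButton.onclick = addToCart;

// Accessible: a real button element gets the announced role, keyboard focus,
// and Enter/Space activation for free.
const realButton = document.createElement("button");
realButton.textContent = "Add to cart";
realButton.addEventListener("click", addToCart);

// Images get alternate text describing what isn't apparent from the page text.
const productPhoto = document.createElement("img");
productPhoto.src = "keyboard.jpg"; // illustrative file name
productPhoto.alt = "Gray wireless keyboard on a light wood desk";

document.body.append(realButton, productPhoto);
```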
Conclusion

Designers, developers, and usability professionals need to understand what the experience of using touchscreen devices is like for people who are blind or have low vision. Beyond that, we need to remember that compliance with accessibility guidelines is not the end goal: usability is. For more tips on how to create usable, accessible designs, see our free report: Usability Guidelines for Accessible Web Design.

Preventing User Errors: Avoiding Unconscious Slips
by PAGE LAUBHEIMER on August 23, 2015
Topics: Heuristic Evaluation, Interaction Design

Summary: Users are often distracted from the task at hand, so prevent unconscious errors by offering suggestions, utilizing constraints, and being flexible.

One of the 10 Usability Heuristics advises that it's important to communicate errors to users gracefully, actionably, and clearly. However, it's even better to prevent users from making errors in the first place.

A crucial point in discussing user errors is where to assign the blame for the error. The term "user error" implies that the user is at fault for having done something wrong. Not so. The designer is at fault for making it too easy for the user to commit the error. Therefore, the solution to user errors is not to scold users, to ask them to try harder, or to give them more extensive training. The answer is to redesign the system to be less error-prone.

Two Types of User Errors

Before discussing how to prevent errors, it's important to note that there are two types of errors that users make: slips and mistakes. (Both are discussed in much greater detail in Don Norman's book The Design of Everyday Things.)

Slips occur when users intend to perform one action, but end up doing another (often similar) action. For example, typing an "i" instead of an "o" counts as a slip; accidentally putting liquid hand soap on one's toothbrush instead of toothpaste is also a slip. Slips are typically made when users are on autopilot, and when they do not fully devote their attention resources to the task at hand.

Mistakes are made when users have goals that are inappropriate for the current problem or task; even if they take the right steps to complete their goals, the steps will result in an error. For example, if I misunderstood the meaning of the oil-pressure warning light in my car and thought it was the tire-pressure monitor, no matter how carefully I added air to my tires, it would not fix the issue with my oil pressure. This would be a mistake, since the goal that I was attempting to accomplish was inappropriate for the situation, even though I made no errors in executing my plan. Mistakes are conscious errors, and often (though not exclusively) arise when a user has incomplete or incorrect information about the task, and develops a mental model that doesn't match how the interface actually works.

This article focuses on preventing unconscious, slip-type errors; a second article will address mistakes.

General Guidelines for Preventing Slips

Slips often happen when users are quite familiar with the goal that they seek to achieve and with the procedure for accomplishing that goal, but accidentally take a wrong step when trying to achieve it. Often, when executing well-practiced tasks, we tend to allocate fewer attentional resources and, as a result, can "slip" and perform the wrong action. Thus, ironically, slips are often made by expert users who are very familiar with the process at hand; unlike new users who are still learning how to use the system, experts feel that they have mastered the task and need to pay less attention to its actual completion. Strategies for preventing slips are centered around gently guiding users so that they stay on the right path and have fewer chances of slipping: assist users by providing the needed level of precision, and encourage them to check for errors.
Include Helpful Constraints

While it's not always a good idea to limit users' choices, in cases where clear rules define acceptable options, it can be a good strategy to constrain the types of input users can make. For example, booking a flight typically involves selecting the dates of travel, and a few rules govern which dates are acceptable. One of the major rules is that a return flight cannot happen before a departure. If users aren't limited in the dates they can choose, they may slip and accidentally select a set of dates for their flight that don't follow the rules. A helpful constraint here forces users to pick a date range that fits. (A small code sketch of this kind of constraint logic appears below, after Choose Good Defaults.)

Southwest's calendar widget for picking flight dates uses helpful constraints to prevent users from accidentally setting a nonsensical date range. Even if users attempt to set the return date before the departure date, the widget forces them to pick a departure date first. In addition, it subtly uses color to provide context about which date is about to be changed (in this case, blue for departure), which helps users see which field they are selecting (instead of having to keep that information in their working memory).

Offer Suggestions

Similarly to how constraints guide users toward the correct use of an interface, suggestions can preempt many slips before the user has the opportunity to make them. On websites that offer thousands of products, search is an effective way of helping users find their proverbial needle in a haystack. However, typing can be inaccurate, especially on touchscreens, where there isn't any tactile (also known as haptic) feedback. While you cannot prevent a user from making typos (which are slip-type errors), you can prevent typos from turning into problems by offering contextual suggestions while the user types. Offering search suggestions also has the benefit of supporting recognition over recall in those situations when users misremember the name of the product or content they're looking for.

Remembering how to spell Etymotic Research is difficult for users searching for high-quality headphones, and typing it accurately is hard as well. Amazon's clickable search suggestions enable users to type less, thereby making fewer slips or mistakes that would produce no results.

Choose Good Defaults

Another type of helpful suggestion is the good default. Especially when users have to perform repetitive actions, or in situations where they need precision, start by offering reasonable defaults that are likely to fit their real-world goals, and then allow them to refine their choices. For example, in a reminder app, a few typical preset options, such as Tomorrow, Next Week, In One Hour, and so on, can prevent typos in dates or times; a reminder to take dinner out of the oven that comes a day late is definitely not helpful.

Google's Inbox app for iOS allows you to "snooze" an email until a later time. The default options are sensible and prevent typing errors for common choices.

Good defaults also help reduce mistakes, because they teach users about reasonable values for the question at hand. They help users better understand the question, and sometimes make them realize that they are on the wrong track.
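Returning to the flight-date example, here is a minimal TypeScript sketch of the constraint logic such a calendar widget might enforce. It is a hypothetical illustration, not any airline's actual code; treating an out-of-order pick as a new departure date mirrors the "forces them to pick a departure date first" behavior described above.

```typescript
// Sketch of a date-range constraint: past dates are unselectable,
// and a return date can never end up before the departure date.

interface DateRange {
  departure: Date | null;
  return: Date | null;
}

function isSelectable(candidate: Date): boolean {
  const today = new Date();
  today.setHours(0, 0, 0, 0);
  return candidate >= today; // past dates stay grayed out and unselectable
}

function pickDate(picked: Date, range: DateRange): DateRange {
  if (!isSelectable(picked)) return range; // ignore taps on grayed-out dates
  if (range.departure === null || picked < range.departure) {
    // Force a valid order: a date earlier than the current departure becomes
    // the new departure (and invalidates any previously chosen return date),
    // rather than producing an impossible itinerary.
    return { departure: picked, return: null };
  }
  return { departure: range.departure, return: picked };
}

// Usage: picking May 20 and then May 10 yields departure May 10, return unset.
let range: DateRange = { departure: null, return: null };
range = pickDate(new Date("2025-05-20"), range);
range = pickDate(new Date("2025-05-10"), range);
```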
Use Forgiving Formatting

Some tasks really do require users to type very detailed or precise information, but forcing people to provide this information in a very specific format can be at odds with good usability practices. If you are asking users to input numerical information into a form, be flexible, and format that information in a way that is easily scannable (by humans, not machines) in order to prevent mistakes. For example, account-registration forms often include a field requesting a phone number. Many users have trouble scanning a long row of digits that isn't broken up with spaces or punctuation, and are less likely to spot mistakes in it. This is why in the US (and in many other countries) we write phone numbers in the format "(777) 555-1212": this format groups digits into smaller chunks that are easier to scan. While your website's database might not allow nonnumeric characters to be stored in a phone number, you surely want your users to notice typos when they enter their phone number.

One solution is to let users type in a way that's natural to them, rather than forcing them to use the format that your application expects. Do some behind-the-scenes data scrubbing to remove parentheses or other characters that users may type, rather than frustrating them with an inflexible format. An even better solution is to format the users' input as they type, like Uber's website does during account creation. Once you begin typing, the form adds the spaces, parentheses, and hyphens where they normally go, and it also ignores additional nonnumeric characters (which acts as a type of helpful constraint, preventing users from adding unnecessary extra parentheses, for example). This helps users understand what characters they should type, and it does the work of reformatting, making it much easier for users to read and double-check their own work.

Uber.com automatically displays the phone number in the desired format as users type, so that they can more easily scan their work to confirm that it's correct.
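Here is a minimal TypeScript sketch of the format-as-you-type idea for US numbers. It illustrates the technique rather than reproducing Uber's actual code; a production version would also preserve the caret position and handle international formats.

```typescript
// Sketch: format a US phone number as the user types.
// Nonnumeric characters are stripped (a helpful constraint), and the digits
// are regrouped as "(777) 555-1212" so users can scan and verify their input.

function formatUsPhone(raw: string): string {
  const digits = raw.replace(/\D/g, "").slice(0, 10); // keep at most 10 digits
  if (digits.length === 0) return "";
  if (digits.length < 4) return `(${digits}`;
  if (digits.length < 7) return `(${digits.slice(0, 3)}) ${digits.slice(3)}`;
  return `(${digits.slice(0, 3)}) ${digits.slice(3, 6)}-${digits.slice(6)}`;
}

// Wire it to an input field (the "#phone" ID is illustrative); the value
// stored in the database can remain digits-only after a final strip.
const phone = document.querySelector<HTMLInputElement>("#phone")!;
phone.addEventListener("input", () => {
  phone.value = formatUsPhone(phone.value);
});
```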
Summary

Slips are common errors that happen when users do not pay full attention to a task or have small memory lapses. Preventing errors of this type is largely a matter of reducing burdens on users and guiding them when precision is required. In the next article in this series, we will explore strategies to prevent users from making mistakes, where their goals have been erroneously formed from a poor model of the interface. In addition, we will examine strategies that apply well to preventing both slips and mistakes.

Very Large Touchscreens: UX Design Differs From Mobile Screens
by KARA PERNICE on August 23, 2015
Topics: Mobile & Tablet, Technology

Summary: Only a few mobile-design skills and design recommendations translate well to designing for very large touchscreens, as found in kiosks and other nonmobile use cases. Users' field of vision, arm motion, affordance, and privacy are a few of the different considerations for such screens, which have up to 380 times the area of a smartphone.

As smartphones continue to improve at rapid rates, tasks that used to be inconceivable on small touchscreens have become (often) quite simple and acceptable. We can read, buy, work, and collaborate... well, sort of. Sharing information via a phone is easy, but sharing the small screen itself with the person by your side is not. If we need to look at the same item at the same time together, a larger screen, or even a large printout, will easily win out over a 4-inch display.

But maybe in the not-too-distant future tiny, powerful projectors will come built into all of our phones and watches, facilitating group viewing on any plain wall. In this scenario, the input device may still be the phone, acting as a remote control. Or maybe we will be able to use the human body or a gestural device, like the Myo Gesture Control Armband, for easier input, enabling us to "touch" the links on the wall and have the UI react.

A black Myo Gesture Control Armband (image from https://store.myo.com/). Users would wear this device on their arms, "train" the UI on the computer, and apps would then recognize the users' gestures.

But until this day arrives, let's consider the very large touchscreens of today, for which the screen is both the input and the output device. I had the pleasure of interviewing Dorothy Shamonsky, Lead UX Designer for ViewPoint, a software provider for touchscreen kiosks. Shamonsky works on the interface design for large touchscreens, including some as large as 72 inches.

Dorothy Shamonsky, Lead UX Designer for ViewPoint, works on the design of large touchscreen kiosks.

To get a sense of how big a 72-inch screen is, stretch your arms straight out by your sides: the screen's diagonal is about the distance from the fingertips of one hand to those of the other. And while such a screen is about 380 times larger than that of the average smartphone, some design principles for a phone touchscreen still apply to the huge touchscreen. Shamonsky provided these and many more insights and design ideas for large, immovable touchscreens; they are described in this article.

Design Recommendations for Both Small and Very Large Touchscreens

Whether designing for a 7-inch or a 70-inch touchscreen, many guidelines hold true. Some of the most important ones include the following:

- Allow natural gestures.
- Minimize the interaction cost of tapping, typing, and moving between screens.
- Offer user feedback via simple animations.
- Make it easy to decipher which elements are tappable.
- Make targets easy to tap.
- Offer legible text and graphics.

What's Different for Very Large Screens

Beyond these commonalities, there are also some important differences between small and very large touchscreens, both in terms of user behavior and design recommendations. Here we discuss a few of them, directed at both the macro level (how to design the screen so users know how they're supposed to interact with it) and the micro level (how to design the specific UI elements so that they are noticeable).

Affordance and Signifiers at the Macro Level

Most people have learned to touch a smartphone screen, as well as screens that are commonly touch enabled at ATMs, gas pumps, ticket kiosks, or museums. But in some environments people do not know that a large screen is touch enabled and, due to this poorly signaled affordance, they are reluctant to touch large displays when they are not sure whether they will enjoy any benefit for the effort. According to Shamonsky's experience, "Displays mounted on a wall or a stand tend to remind people of a TV and don't imply that they are touch enabled. Instead users must rely on kiosk-specific cues such as location, angle of screen, and signage to figure out that a large-screen interface is touch enabled." Shamonsky suggests a particular signifier: position the display at a 45-degree angle on the wall, with the top of the display leaning toward the wall and the bottom toward the user.
"This tends to reassure users that the screen is touch enabled, especially in the absence of a keyboard or mouse. Tabletop displays also signal touch interactivity. But if you want to entertain and attract users from across a room, and encourage shared interaction, a wall-mounted screen is much more dramatic and preferable to a tabletop display."

Large displays offer plenty of screen real estate for interface controls, so designers are not challenged to fit controls into the interface. However, since UI elements need to be larger to be seen and interacted with on a large screen, a big screen can fill up surprisingly quickly. Thus, a very large touchscreen design suffers the same threat of being overfilled that smaller screens do. Designers should avoid clutter and should heed Edward Tufte's recommendation to mind the ratio of content versus other UI elements. (Tufte's data-ink ratio is usually applied to graphs, but can also be telling about screen real estate and content value.) Similarly, our eyetracking research reveals that page density accounts for 8% of the variability in how people look at web pages. In other words, good content, understandable UI elements, and less cluttered pages engage people more than cluttered pages do.

Recommended signifiers for indicating that a large screen is a touchscreen:

- If the display is affixed to a wall, angle the screen 45 degrees.
- Choose a tabletop display (flat or slightly angled).
- Prompt people to touch, with words on the screen and other touchable-looking items on the screen.
- Offer signage near the screen prompting people to touch, such as a modest placard that says, "This is a touchscreen."
- Implement a timeout event that, after a particular amount of idle time, starts an "attract mode" that encourages people to try the device.
- Make the UI engaging and interesting, so that there are often other people using it; people learn how to use the touchscreen by seeing others use it.
- We could say not to clean the screen, or to use a material that retains fingerprints and smudges longer, but that would be gross. Still, fingerprints on the screen are signifiers.
- Measure the amount of clutter in the screen design. Remove screen content with little or no value, so the important actions and features are more visible.

Touch Targets and Signifiers at the Micro Level

On smartphones, interface functionality is sometimes hidden in small controls or hard-to-decipher icons, but it is at least possible to take in the whole screen in one glance. With a very large screen, however, the large field of view makes it difficult to see and notice interface elements. Users must move their heads around, not just shift their gaze, and must engage their necks to see all the parts of the interface. Shamonsky explains, "It seems counterintuitive that interface controls would be harder to find on a large screen, but that is my experience observing users. So the designer has a different challenge with large screens than with small screens, which is to make interface elements noticeable without being obnoxious."

People interacting with very large touchscreens are usually farther away from the screen than they are when using a phone, and this affects what they can see. They are most likely to:

- reach for the interface at extended arm's length
- stand up
- have the flexibility to move very close, or to step back to look at the screen.
Shamonsky notes, "Stepping back does provide a better ability to see the entire display more easily, and there is a tendency for users to do that as they interact with the device."

In addition to the vision considerations, designers need to consider reach and touch accuracy on a very large screen on which people can tap, swipe, flick, drag, pinch closed, and pinch open. Unlike a phone that people can hold comfortably in their hands, a large screen:

- is mounted firmly
- cannot be picked up by the user
- often cannot be tilted or swiveled at all
- may be flat against the wall.

A person using 2 fingers to swipe on the 72-inch ViewPoint touchscreen.

"As a designer, one of my goals was to take full advantage of the visceral appeal of interacting with touch on a very large display," says Shamonsky. When designing links and buttons for computer screens and phones, we already consider Fitts's Law, which says that the time to acquire a target is a function of the distance to, and the size of, the target.
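The article states the relationship in words; in its standard Shannon formulation (an addition here, from the HCI literature rather than from the original article), Fitts's Law is commonly written as:

```latex
% Fitts's Law, Shannon formulation: T is the predicted movement time to
% acquire a target of width W at distance D; a and b are empirically
% fitted constants for the device and user population.
T = a + b \log_2\!\left(1 + \frac{D}{W}\right)
```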
With mobile-phone design, because the screen is so small, all points are almost at the same distance from our fingertips, so we mostly focus on target size. But with very large touchscreens, the distance to the target becomes more relevant. In particular, designers must consider human physical traits and capabilities such as:

- arm reach
- arm motion
- hand touch with the palm or multiple fingertips
- the height of the person.

Of course, target size still remains essential in ensuring accurate reach. Shamonsky shared, "Comfortable target sizes on large displays are more complicated to determine, although the large screen gives the designer much more space to err on the side of larger target sizes. A frequently used target may need to be larger so people don't have to struggle and slow down their arm momentum to hit it accurately. A rarely used target works at a minimum size [because the nuisance associated with the use is not experienced often]."

Another factor to consider is user fatigue, observed Shamonsky. "Although an expansive screen is seductive and offers many advantages over a small screen, it can be tiring to use," she says. "Large touchscreens engage the whole body, since the large space usually requires the use of arms to reach interface controls. The physical effort involved in interacting with a very large screen is significant enough that it becomes noticeably tiring when the task goes beyond casual browsing. In an editorial in Scientific American, David Pogue alluded to the effect of extended touchscreen use as 'gorilla arm'."

When comparing a drag gesture on a very large screen to the same drag gesture on a small screen, interface elements feel harder to move on the large screen. Shamonsky observed, "Tap is not an issue. But since swipe, flick, drag, and pinch and unpinch require continuous effort applied to a screen element, those gestures can feel more strenuous on a large display."

Finally, the angle at which the screen is positioned can also alter how easily these gestures can be made, according to Shamonsky. "Especially when the screen is positioned at a 90-degree angle [hung so it is flat on the wall], it can negatively affect the user's accuracy when dragging, pinching and unpinching, and typing on an onscreen keyboard. Allowing people to move the keyboard affords flexibility."

Recommendations for gestures and text input, and for signifiers indicating controls on a large touchscreen:

- Make large interface elements.
- Add animations and slight movement to elements to attract the user's eye.
- Ensure that text, images, and buttons are legible and decipherable, especially when standing at arm's length from the screen.
- Create larger targets that also account for arm motion, arm reach, and user-height variables.
- Make the onscreen keyboard a movable element, so users may drag it and use it in the area of the screen that is most comfortable for them, independent of their height.
- Adjust the drag, acceleration, and deceleration of interface elements to make them feel lighter and easier to control.

Privacy and Screen Sharing

Sharing the screen occurs naturally when there are multiple people and a very large screen present. Shamonsky explains that sharing "begins with a user seeing a large display across a room and observing others interact with it. As the user approaches the device, she may even engage with the current users and comment to them about what she is observing." Then, depending on the actual size of the device, there is likely enough screen space for 2 or 3 people to interact at the same time.

Two people collaborate using the ViewPoint kiosk at an auto dealership. The 72-inch touchscreen makes used-car inventory more visible and tangible, and, in the end, makes it easier and more fun to find the right car.

The negative flipside of easy screen sharing is that there is little or no privacy for users of large, public touchscreens. Shamonsky says that a screen angled at 45 degrees "does offer a small amount of privacy from those standing farther back, but a vertical screen can only offer privacy of scale."

Recommendations for privacy:

- Consider the type of information you are asking people to enter, and the environment in which they will use the touchscreen. Based on this, decide whether the information should be asked for at all.
- If asking someone to enter something private, for example an email address, consider making the keyboard itself fairly large but displaying the typed text as small as possible (so it is legible only to people standing close to the display).

Freedom and Novelty of a Large Screen

"One of the most striking qualities of a large touchscreen," explains Shamonsky, "is the size of the space in which information is presented, and the option to 'play' at interacting. It is inherently satisfying to have an expansive view of something. It is also appealing to have a large space in which to gesture with your whole arm, to move things around, and to share the space with others."

Shamonsky also explained an unexpected attribute of large touchscreens: drawing out the entertainer in the user. "Obviously, others can observe what you are doing, so you become a performer of sorts with the application, which can be fun," she says.

Recommendations for supporting "performers":

- Ensure that users are as comfortable as possible "performing". You can do this by making them feel competent, with designs that maximize ease of use.
- Adding in cool interface effects can always help a performer look impressive.

Conclusion

The table below summarizes some of the user behaviors with smartphones, large touchscreens such as 24-inch tablets, and very large touchscreens. (Besides their size, the main difference between large and very large touchscreens is that large touchscreens can still be moved around by the user relatively easily.) Notice that large and very large touchscreens present the most similar challenges.
| | Smart Phone | Large Touchscreen | Very Large Touchscreen |
|---|---|---|---|
| Sample device | iPhone | Small kiosk or Nabi Big Tab | Wall-mounted display |
| Typical size (diagonal) | 4.7 inches = 12 cm | 24 inches = 61 cm | 72 inches = 183 cm |
| Easy to share the screen | No | Yes | Yes |
| Unintended touches common | Yes: fat fingers; avoid by designing larger targets | Yes: unintended two-handed touches | No |
| Extra physical effort to see the whole screen | No | Yes: physical proximity makes it necessary to move the head | Yes: physical proximity makes it necessary to move the head, and sometimes step forward or backward |
| Extra physical effort to type (and tap) | No | Yes: arm movement needed | Yes: arm and neck movement needed, and sometimes stepping forward or backward |
| Privacy issues | No | Yes | Yes |

Shamonsky shares other findings based on her research experience with very large touchscreens: "A large touchscreen can look beautiful and is enjoyable to interact with! At the same time, a large display will magnify a poor user experience. If you don't like the way an interface looks at a small size, on a large screen it will be more offensive. Everything about the user experience is exaggerated at the large size—the beauty and the fun, as well as the effort and the frustration. Attempting to use touch on sites and apps that were not designed for touch is, if nothing else, boring. Creating compelling touch interaction requires an understanding of the familiar gestures and how to use them appropriately. Use simple and clear visual and aural feedback to create a sense of tactile feedback. Tune into the joy of a good user experience."

For more information about designing for different screen sizes, consider our Scaling User Interfaces course.