The UX Declaration of Independence from Engineering

(Co-written with Thomas Jefferson; I hope my non-American friends will indulge an American metaphor, the Declaration of Independence.)

When in the course of human events, it becomes necessary for one profession (UXD) to dissolve the bonds which have connected it with another profession (Software Engineering), and to assume the separate and equal station among professions to which the Laws of Nature entitle it, a decent respect to the opinions of software engineering requires that User Experience should declare the causes which impel it to the separation.

We hold these truths to be self-evident, that all products are endowed by their creators with user experiences bearing certain unalienable Rights, that among these are usability, satisfaction, and business feasibility. Furthermore, the user has a right to a user experience that derives from the entire company or organization, not just from what is technically feasible at a given moment.

That to secure these rights, User Experience professionals are engaged by companies and businesses. These professionals derive their just powers from a professional integrity that must not be compromised; otherwise, User Experience design loses whatever right it has to exist.

Software Engineering as a process has had a tyrannical effect on the User Experience professional, forcing them into shorter and shorter deadlines with fewer and fewer resources, to the point that UX professionals often find themselves going through the motions rather than truly designing professional products the way they are capable of creating them.

The history of Software Engineering processes is a history of repeated injuries and usurpations of UX terrain, all having in direct object the establishment of an absolute Tyranny over this profession. To prove this, let Facts be submitted to a candid world:

  • Development methods continually shorten the design process and delivery deadlines.
  • This makes it impossible to do a thorough and adequate design process, forcing us to take all kinds of irresponsible and inappropriate shortcuts.
  • Specifically, Agile development processes attempt to preclude the upfront design and research that good UX processes demand.
  • Development does not use UX metrics as a measure of its success.
  • Consequently there is no business case for following UX best practices.
  • Development keeps the UX bar purposefully low, so that UX accountability is
    • non-existent — even when it is clear that products are failing because of their poor user experiences
    • an afterthought — the product is a success or failure, and after the fact UX is blamed or ignored
    • an anecdote — an arbitrary story or urban legend of use becomes definitional for the user experience
    • unprofessional — as long as the bar is low, poor UX design will yield equal results, making the establishment of UX best practices very difficult
  • Development’s near fetish-like fascination with a release puts artificial blinders on UX processes, resulting in:
    • structurally sub-optimal results
    • corners cut when it really is not necessary
    • undue credence given to artificial arguments against additional UX processes
    • the value of user experience design obscured by forcing it into software engineering’s release focus
  • UX quality now relies on the kindness of strangers, that is to say, on the extent to which a Software Engineering team is or is not enlightened about the value and processes of User Experience Design.

We, therefore, the Representatives of the united User Experience Designers, hold that instead of working under the hegemony of engineering, User Experience activities should work in coordination with Software Engineering, not in tandem behind it.

Among the ongoing processes User Experience should work on independently of Software Engineering are (a partial list; for the longer list of UX processes, see the previous post in this blog):

  • User Research
  • Design Research
  • Requirements gathering (SEs are needed for technical requirements, but that is only one part of the whole requirements picture)
  • Product design
  • Conceptual design which may cover multiple products/channels and multiple releases.

Places where software engineers and user experience professionals should work closely together include:

  • translating a conceptual design to a specific product release cycle
    • product definition
    • product detailed design
    • product design reviews and iterations
  • mentoring developers through a product release
  • evaluating software engineering work for fidelity to the UX concept, using appropriate UX metrics
  • release planning

Software engineering in turn should act as mentors in the UX processes, ensuring that technical feasibility for the short and medium term is tracked and noted. In this way Software Engineering, Product Management and User Experience are truly equal partners in the creation of great products and product experiences.

Signed 2 March 2010

Halcyon days at the EuroIA Conference

Last week I attended the EuroIA conference. I was there primarily to give a talk with my former Google colleague, Greg Hochmuth, on a project we did on online privacy. To be honest, I had low expectations for the conference, thinking it was not going to be very professional. That was my estimation of the IA movement in general. I favored the more rigorous CHI model. This reliance on and faith in CHI is why I have been working so hard to bring practitioners into CHI with the design track work and of course the DUX conference series, etc. I assumed that CHI was where the interesting professional UX work would be done. I did not expect any such thing at an IA conference, which I thought was too narrow and too niche to be interesting.

I was wrong and closed minded, both of which I find annoying.

I was quite surprised to attend a very fine conference with a strong practitioner focus, with competent representatives from industry giving case studies and thought-provoking discussions. There were, of course, more than a few misses. But when you hit a miss at a CHI conference, you have really wasted your time on some inapplicable, pedantic presentation; here, even the misses were interesting, if not earth-shattering.

I was also pleased to see that the attendees had a kind of willful confusion of IA with UX. Eric Reiss, one of the leaders of the conference series, said early on that he was proud they would have no debates on terminology or definitions.

What is IA?

It seems to me that IA (Information Architecture) and HCI (Human-Computer Interaction) are two ways to achieve the same effect. One is information driven, the other is interaction driven. Both strive for but don’t quite achieve UCD. To borrow a Mahler analogy, these two movements seem to dig from opposite sides of the mountain to reach the center.

Setting the stage for the conference was an interesting case-study keynote given by Scott Thomas on his work for the Obama presidential campaign web site. A refreshing talk, one you would probably never hear at CHI, charting the work he did as designer, web developer, and IA for one of the most successful and highest-profile web presences.

It was clear at the conference that there are those who specialize in IA and don’t touch interaction design with a ten-foot pole; however, the majority seem to blissfully switch between the IA, ID, and UX designer labels based on what will get them the job or the most influence. The resulting conference content is interesting and competent, usually not pedantic (there were a couple of regrettable forays into pedantry; oh, I am being pedantic, aren’t I?). I hasten to add that probably 10% of these presentations would have been accepted at CHI.

CHI Bashing

Not that I am in any way bashing CHI (well, I guess I am, sort of). CHI continues to be dominated by academia; that is its reason to exist. So it makes sense that more practitioner-oriented organizations thrive and offer better conference experiences, like EuroIA; SXSW is another such conference. However, there are some design heavyweights very active and present at CHI. People like Bill Gaver, Bill Verplank, Bill Buxton. Hey, are all of them named Bill? So I guess we should also include Bill Card and Bill Dray…

Still, going to a CHI conference is daunting, and if you do not stick to the design or practitioner-focused papers, it is really hit and miss. Then there is also the unfortunate academic who strays into a design paper and lambastes a practitioner for not running double-blind studies on a project with a limited client budget. Ah, it is always embarrassing when people can’t check their egos at the door.

So, it is good that there are several credible alternatives to CHI. I guess this means I need to attend the next IA Summit and see what that’s all about. I don’t think I can take any more good stuff…

This profession

In the end, I had a friendly familiar feeling at EuroIA. A feeling like I had met these people before. It seems that regardless of whether you are at CHI or EuroIA or UPA or wherever, people of our profession(s) share this common empathic passion for our stakeholders. This makes us a particularly caring and sympathetic tribe.

Measuring the User Experience

This week’s post is a review of the book Measuring the User Experience by Tom Tullis and Bill Albert. From time to time other book reviews will follow.

Why a book review

The current state of books on UX is deplorable. Many UX books can’t make up their minds whether they are about a given subject or about the UX world according to Garp. Just looking at my UX bookshelves, I notice there are, for example, many books by authors with a narrow or focused expertise. These authors write books supposedly about a narrow subject, which they sustain for about a chapter or two before deteriorating into their own homemade version of the User Centered Design process that has little if anything to do with the subject of the book they intended to write. The result is a book with grains of truth in a stew of platitudes. A review of just three books, one claiming to be on prototyping, one on designing, and another on UX communications, reveals that all of them cover more or less the same material, such as user research, task analysis, personas, and prototyping, but in such a way that they use both conflicting terminology and conflicting methods.

My ideal UX books are those that take a subject and stick to it. They explain their topic in a way that is process-independent, so that it can plug into whatever processes companies or organizations utilize. The fact of the matter is that no two organizations adopt the same software development process. What they all have in common, whether they are called agile or waterfall, iterative or serial, is that they are all Machiavellian. Therefore, if a book’s material cannot fit into the current Machiavellian software development processes, then the book is largely worthless, even if entertaining (though probably not as entertaining as E.M. Forster).

I think one of the best services I can do, then, is to help people navigate around these literary cataracts and start a series of book reviews. These reviews will try to highlight the best of the UX literary corpus.

Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics by Tom Tullis and Bill Albert

I want to start with one of the brighter lights in our industry, Tom Tullis. I have often wondered why he had not written a book earlier, given the high quality of the contributions he has made to our profession. Well, the wait is over.

It’s true, it is a book on usability metrics. Now, I realize there are some people who hate metrics. These people particularly hate any accountability for their design work. I can’t tell you the hate mail I received, even from large design firms, when, as Interactions editor, we did a special issue on measuring usability, guest-edited by Jeff Sauro. Well, I purchased Measuring the User Experience (MUX, if you will) expecting a more thorough version of that special issue, one that went into the statistical significance of usability testing. I was in for a very welcome surprise: this book does not just cover summative usability statistics but many different ways to collect user experience metrics, and the authors also discuss proper analysis techniques.

The book empowers the reader to make the right decisions about which methods to use and what the metrics can and cannot tell you. As the book states, metrics can help you answer questions such as:

  • Will the users like the product?
  • Is this new product more efficient to use than the current product?
  • How does the usability of this product compare to the competition?
  • What are the most significant usability problems with this product?
  • Are improvements being made from one design iteration to the next?

This is a refreshing change from just looking at time on task, error rates, and task success rates. Though of course these play a role, they are but means to the end of answering these larger questions. Furthermore, the book points out that there is an analysis step that can greatly alter seemingly obvious findings.

I cannot tell you the amount of time and money I have seen wasted as perfectly reasonable and wonderful user research was conducted, only to have its results obfuscated and mutilated beyond use. This book will not just enable the usability tester or researcher to avoid such mistakes; it also empowers a project manager to see to it that a development project designs a solid usability study that fits the goals and needs of the development team.

In their discussion of designing the right usability study, the authors guide you in choosing the right metrics.

First you need to establish whether the goal of your study matches the goals of your users. Then, on that basis, you can look at which metrics apply; the authors identify ten common types of usability studies:

  1. Completing a transaction
  2. Comparing products
  3. Evaluating frequent use of the same product
  4. Evaluating navigation and/or information architecture
  5. Increasing awareness
  6. Problem discovery
  7. Maximizing usability for a critical product
  8. Creating an overall positive user experience
  9. Evaluating the impact of subtle changes
  10. Comparing alternative designs

A key issue they then discuss is looking at budgets and timelines, a.k.a. the Machiavellian business case for the study. On that basis you can tailor the type of study: how many participants, and whether it will be tests, reviews, focus groups, or a combination thereof.

In the conduct of these studies it is also important to track the right metrics. Tullis and Albert identify the following types of metrics:

  • Performance metrics — time on task, error rates, etc.
  • Issue-based metrics — particular problems or successes in the interface, along with their severity and frequency
  • Self-reported metrics — how users report their experience via questionnaires or interviews
  • Behavioral or physical metrics — facial expressions, eye-tracking, etc.

It handles these metrics as they should be handled: as part of an overall strategy, not favoring one over another as innately superior. All too often usability testing consultants are one-trick ponies, prisoners of whatever limited toolset they happen to have learned.

This book allows the reader to assemble all the needed metrics across types to achieve a more holistic view of the user experience, or at least sensitizes them to the fact that they are not looking at the whole picture.
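To make that concrete, here is a minimal sketch of computing a few of the performance metrics above from raw test data (purely illustrative; the data and field names are my own, not from the book):

```python
from statistics import mean

# Hypothetical task-level results from a usability test session.
results = [
    {"participant": "P1", "task_time_s": 42.0, "errors": 1, "success": True},
    {"participant": "P2", "task_time_s": 95.5, "errors": 3, "success": False},
    {"participant": "P3", "task_time_s": 37.2, "errors": 0, "success": True},
    {"participant": "P4", "task_time_s": 61.8, "errors": 2, "success": True},
]

# Classic performance metrics: task success rate, mean time on task,
# and average error count per participant.
success_rate = sum(r["success"] for r in results) / len(results)
mean_time = mean(r["task_time_s"] for r in results)
errors_each = mean(r["errors"] for r in results)

print(f"Task success: {success_rate:.0%}, "
      f"mean time on task: {mean_time:.1f}s, "
      f"errors per participant: {errors_each:.1f}")
```

Even a toy summary like this makes the point: performance metrics are only one slice, and the self-reported and behavioral slices need their own collection instruments.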

What is also amazing is the focus and discipline in the book. I think many other authors would not be able to fight the temptation to then expand the book to include how to perform the different types of evaluations, usability tests, etc. These authors acknowledge there are already books that cover these other related aspects and keep their emphasis purely on the subject matter of their book: measuring the user experience.

Yes, the book also gets into statistics and even shows you how to do simple, straightforward statistical analysis using that panacea for all the world’s known problems, Excel (but that is next week’s topic).

And just in case you’re wondering, the usability score for Amazon is 3.25, while Google’s is 4.13 and the Apple iPhone’s is a mere 2.97. Meanwhile, the web application suite I just finished designing got a perfect 4.627333.

Confusing a Heuristic with a Moral Imperative

Heuristics are an excellent aid in identifying potential problems with a given user interface design. The trouble arises when people come to rely on them as the sole input, as if they could somehow overtake the more rigorous and far more accurate methods of evaluation. So please don’t read what follows as anti-heuristic, but rather as anti-misuse of heuristics.

I have been working more and more with consultants and pseudo-designers who evaluate web applications armed with a ton of heuristics. I can hear them clear across cubeville, clipboards in hand:

“This is terrible, you are inconsistent between these pages, those pages ignore web standards, these other pages behave differently than the others, and oh my gosh look at all these unnecessary graphics, rip these all out. Get rid of the background colors, and ugh, those button colors!”

Concept and user groups can trump heuristics

The fact is, there could be a valid reason for violating every single one of these heuristics. Worse yet, there are evaluators of this type who, without so much as learning the context, go in and tear apart a site for violating standards, UI conventions, and other heuristics of all sorts.

A well-defined and innovative concept will often require breaking a few rules. Moreover, if a concept is tailored to a specific user group to which the evaluator does not belong, then the heuristics are almost all invalid.

Heuristics are defined as (according to my Mac dictionary, and why should we doubt Apple?):

Heuristic (/hjʊˈrɪs.tɪk/) is an adjective for experience-based techniques that help in problem solving, learning and discovery. A heuristic method is particularly used to rapidly come to a solution that is hoped to be close to the best possible answer, or ‘optimal solution’. Heuristics are “rules of thumb, educated guesses, intuitive judgments or simply common sense.”

Well, here are some of these so-called common-sense rules of thumb, with some food for thought alongside them. I am using the list from Jakob Nielsen’s site, just to pick ten basic ones (http://www.useit.com/papers/heuristic/heuristic_list.html). This is not to pick on Jakob; the point here is to discuss the pitfalls when heuristics are used as the sole means of evaluation, and as such every heuristic can be picked apart and discredited. These are just ten examples:

Each heuristic below is followed by Nielsen’s justification and my “Yes, but…”

Visibility of system status
Nielsen: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Yes, but… maybe the user doesn’t and shouldn’t care. This heuristic assumes a user population that actually cares about what is going on. Many users couldn’t care less unless it’s going to cause them a problem. You should have some basic trust built with your users, and that trust may mean informing them only in the case of a problem, or handling the back-end status problems yourself.

Match between system and the real world
Nielsen: The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
Yes, but… not if the purpose of the site is teaching the user a domain or a new task. An example would be Google AdWords, where a novice user does need to learn some basic advertising terminology or the advanced features will be lost on them.

User control and freedom
Nielsen: Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Yes, but… this heuristic seems to justify poor design. User control and freedom come more from safety, which is more than just redo or undo; it’s the ability to let the user explore and play around with the system. This is done through facile interaction design, a heuristic I have never seen listed.

Consistency and standards
Nielsen: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Yes, but… this assumes 1. that the user has no reference point other than platform standards, and 2. that the platform has standards, or usable ones. Again, this justifies lazy design. Standards are a fallback (I say this as someone who has written UX standards for three major software companies); the conceptual design should be leading.

Error prevention
Nielsen: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Yes, but… here is a useless heuristic. What is an error? One man’s error is another man’s exploration. Maybe you should enable errors?

Recognition rather than recall
Nielsen: Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Yes, but… indeed, the memory load should be lightened for the user; the better way to do this, however, is to employ well-established visual and interaction patterns. Worse, this explanation can be very misleading for the naive reader. I have seen many a designer and developer use it to 1. attack progressive disclosure, and 2. create a ridiculously busy screen, throwing all functionality with equal visibility into a “one-stop shopping” kind of screen, or worse, a screen with a huge amount of text explaining how to use the screen. All of which are, from a cognitive-ergonomics perspective, completely unusable.

Flexibility and efficiency of use
Nielsen: Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Yes, but… building in redundancy to support multiple styles of interaction would be a better way of putting this. It needs to be seen in the context of a broader design concept, however. For example, there is often a designer fetish for drag and drop when it is only the designer who wants to perform this action. Also, implementing drag and drop in one place invites the user to try it everywhere, which is very annoying when it does not work as they expect. So pick these accelerators well, and not just for their own sake.

Aesthetic and minimalist design
Nielsen: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Yes, but… the explanation here is at odds with the heuristic. The heuristic seems to cry out for everything to be Scandinavian-style minimalist design, whereas the explanation goes on about text. The visual design should leverage the brand and its ability to communicate. Gratuitous graphics are supposedly bad, unless they delight the target users (think of Google’s doodles on their home page). As far as minimalism goes, I recall Tufte, who said anyone can pull information out; how you pack information into something and keep it intelligible and usable is the real challenge.

Help users recognize, diagnose, and recover from errors
Nielsen: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Yes, but… my only problem here is “precisely indicate the problem.” I am sure Jakob did not mean go into the gory technical details of the problem, but rather concisely describe the issue. E.g. “Your data was not saved.”, not “Your data was sent to the application layer and experienced a time out longer than 3 ms and the system sent back the data in an unusable format.”

My formula for error message writing:

“Short sentence: what happened (forget why). Short sentence: how to fix it. A link, ‘Learn more’ or ‘Why did this happen to such an undeserving user as me?’, can be added for the morbidly curious.”
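A minimal sketch of that formula in code (purely illustrative; the function and messages are my own invention, not from any real library):

```python
def error_message(what_happened, how_to_fix, learn_more_url=None):
    """Compose an error message per the formula: what happened (forget why),
    then how to fix it, with an optional link for the morbidly curious."""
    message = f"{what_happened} {how_to_fix}"
    if learn_more_url:
        message += f" Learn more: {learn_more_url}"
    return message

print(error_message("Your data was not saved.", "Please save it again."))
# -> Your data was not saved. Please save it again.
```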

Help and documentation
Nielsen: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
Yes, but… far from apologizing for help, we should revel in it. Help and documentation should be electronic and in context. For example, micro-help (a question-mark icon or a “What’s this?” link that works on mouseover, or a small popup) often assists the user without interruption.

The mythical 80/20 rule

In a sad day for most digital products and services, an Italian economist, Vilfredo Pareto, observed that 80% of Italy’s wealth was owned by 20% of the population. From that economic observation has come a torrent of the most far-flung interpretations of a non-existent 80-20 rule. There is no 80-20 rule. There never was and never will be. Yet so many developers, designers, and product managers invoke this mythical rule to justify the most outlandish pipe dreams, shoddy work, or just plain laziness. Which is a pity, because it ruins the credibility of a principle that in 20% of the circumstances can be 80% helpful.

The crux of when the 80/20 principle is helpful is when you need to fend off perfectionists. The 80/20 principle helps you illustrate that a minority of factors can produce a majority of effects, a.k.a. the ‘biggest bang for your buck.’ But how can you tell when the principle is being used wisely and when it is not?

80/20 Pipe dreams
Some G-d-forsaken GUI guru once said that 80% of screens could be driven by templates, while only 20% of the screens needed to be designed. This completely unsubstantiated drivel has led to many efforts to “automatically generate a UI.” It has led to millions of wasted dollars and development effort on worthless tools and idiotic processes, all aimed at designing without designers. If the guru was right, reasoned the technocrati, then about 80% of the screens could be generated; you hit the 80/20 rule and your applications will be fine except for 20% of the time. Some more thoughtful product managers would then hire in an army of designers to cover the 20% they thought ‘really needed design.’ But even then, as one product manager on just such a project told me, in his own pipe dream: “I want you to design templates with such a narrow path of movement that a designer can only make the right choice.”

The reality of the matter is more along the lines that 80% of a given screen could be generated while 20% needed to be designed; but oh, the devil is in the details, and often that 20% is where the most difficult design challenges lie. Therefore, the 20% should end up driving the other 80%, not the other way around. [Never mind the fact that this pipe dream totally negates the necessity and power of the conceptual design (see It’s all about the Concept).]

80/20 Shoddy work
Often someone will deliver (or even ask for) 80% of the work they really need to get done. This is usually done to purposefully keep the quality low. For example: a software engineer asks the designer for rough documentation that is quick and easy to read, giving just the 20% of key interactions and leaving the ‘no-brainers’ to the engineer himself. This assures the design bar remains low. With the bar this low, shoddy work can triumph over the design goals. The design fails to deliver, but it was set up to fail, and no one even expected it to succeed in the first place. This way the technology can triumph, reasons the developer, while design takes a systematic back seat.

80/20 Laziness
All too often a designer will end with a rough sketch and miss some of the finer details of the design they need to deliver, again claiming to be delivering according to the 80/20 rule. Often the excuse is: “No need to over-deliver; those developers won’t build it to spec anyway.” Or: “Stuff always comes up during development; it will change anyway. I will just give them 80% and leave them 20% margin to play with.” This is pure laziness. As any good designer will tell you, the devil is in the details. Or, as some of the better designers have pointed out, G-d is in the details, because those small details are often what separates an ordinary design from a truly excellent, well-thought-out design. That last 20% is, again, the last thing you want to leave to a developer or other non-designer. Furthermore, those gnarly details you have solved will go a lot further in helping developers improvise when they have to than if you just leave them a blank space to fill in all on their own.

80/20 rawks

The 80/20 principle works excellently when you need to stop someone going off into the weeds of perfectionism. The software should be bug-free. The software should please every user and let them do everything. The software should have perfect tests. Anything that reeks of perfectionism is liable for the 80/20 rule. For, as we all know, at some moment the waters can get murky; e.g., one user’s bug is another user’s feature. Just make sure, whenever someone pulls an 80/20 on you, or you pull it on someone, that you have an objective measure to back up your 80/20. Yes, 20 percent of the people own 80% of the wealth. Yes, if we provide 20% of the functionality we will make 80% of the users happy, as we can see in these usability tests, etc.
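One way to keep that objective measure honest is to actually compute the split before claiming it. A minimal sketch (the function and the feature-usage counts are hypothetical, my own illustration):

```python
def top_share(values, top_fraction=0.2):
    """Share of the total contributed by the top `top_fraction` of items."""
    ordered = sorted(values, reverse=True)
    k = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

# Hypothetical per-feature usage counts pulled from analytics:
usage = [4100, 2300, 900, 410, 250, 120, 80, 40, 25, 10]
print(f"Top 20% of features account for {top_share(usage):.0%} of usage")
# Only invoke 80/20 if the number actually lands near 80%.
```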

Usability Engineers vs. Designers: the process problem

Another week has come (mine starts on Tuesday; you can do those things when you live in Europe). More and more, we see problems surfacing not from having the wrong people in place but rather the wrong process. My previous post discussed the rampant process problems with prototyping. Here I want to touch on the process issues with usability testing and design. I want you to consider this familiar and completely unnecessary scenario:

A Designer works on a conceptual design with the customer. Then he works out a detailed design into a prototype that can be tested. So far so good. But what goes wrong is that the Usability Engineer is often disconnected from both the design concept and the detailed design. The usability engineer ends up suggesting new designs that totally contradict the conceptual design. The designer is gone. The engineering team implements the changes, and the result is a Frankenstein’s monster that, despite the best UX resources, fails in the marketplace.

The obvious problem is the process disconnect between Designer and Usability Engineer. And the problems are serious. I want to discuss two aspects of these problems and how to resolve them: namely, false negatives in usability tests and how to deal with the Usability Engineer’s design advice.

False negatives

When usability testing is conducted without input from the designer, this can lead to many false negative issues in the usability test. Examples of the errors that can result include:

  • Early tests will report usability issues with conventions that a user is expected to learn over time, or with a different task flow than the one being tested.
  • Early tests, especially lower-fidelity ones, may not catch learnability or system-feedback issues, due to the lack of visual fidelity needed to communicate with the user.
  • Test moderators, not knowing the underlying concept, may inadvertently introduce the topic or task in a way that is at odds with the design, thereby confusing the test subject.

This list is just the tip of the iceberg. These negative side effects are completely avoidable by making sure the Designer and Usability Engineer work together on the usability test script, identifying tasks and their importance, as well as the task order when that is appropriate for a test (for example, when one step is a required gateway, e.g. sign-up). Also, let the Usability Engineer attend some of the conceptual design sessions and, OMG, even participate in the conceptual design, so they gain a thorough understanding of it. Conversely, Designers should observe the usability tests whenever possible. The tests themselves can be so much more inspiring and vivid than even the best-written report.

Usability Engineer’s design advice

It is an expected part of the Usability Engineer’s work to include not just data and analysis, but also design advice or alternative designs. This does not need to be a problem. But without setting expectations, the innocent Product Manager or software engineer confronted with new, contradictory designs can quickly conclude that the UX profession is a screwed-up group who cannot make up their minds.

Among the possible problems with blindly taking a Usability Engineer’s design advice are:

  • The designs may not be ideal solutions for the problems they have discovered
  • The designs often recommend things that will cause usability problems elsewhere by introducing conceptually non-standard interactions
  • The designs ignore larger issues, since the advice focuses on the testing and not on the larger picture (e.g. business and technical requirements, which may lead to a different solution than the one suggested)
    • A common example of this is when the Usability Engineer suggests something that is technically impossible given the requirements or constraints.

These and other issues with Usability Engineer design advice harm everyone’s credibility, designer and usability engineer alike. This is not to say that Usability Engineers should not give their advice. But it is absolutely important to set the right expectations. Usability Engineer design advice should be viewed as input to the problem, not the solution. If the Usability Engineer includes the design rationale, this will often provide the vital information for coming up with a more ideal solution.

The design rationale should enumerate the objective reasons for the alternative design. This allows the designer to bridge from the problem design to a solution based on objective criteria instead of personal taste.

[Objective information either refers to the usability data itself (e.g. only 2 out of 12 users understood this command) or to conceptual data based on requirements (this design does not appeal to our target users, or is not consistent with the image/branding of the company). Both types of information can lead to a solution. Comments like “I don’t like that color” or “It doesn’t look right to me” do not lead to workable solutions.]

Usability data misinforming design

Usability data is rarely communicated with the limitations or shortcomings in the data, and this is a real pity. All too often a usability engineering report reads like a set of demands and commandments, without stipulating under what conditions the advice or analysis holds. Things like significance, persistence, sampling issues, etc. are often underplayed. Again, a faulty process is the problem. Many usability engineers are under pressure to work quickly and also to find dramatic and significant results. This can put a Usability Engineer between a rock and a hard place: asked to review a product with three of the janitor’s friends and then come up with a list of “just the most important recommendations.” Ah, if only life were so easy. Yet we are constantly being put in this position. The client may always be king, but findings that include a little context-setting would help the end users of the usability reports.
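That context-setting can be as simple as putting a confidence interval around a small-sample finding, such as the 2-out-of-12 example above. A minimal sketch using the Wilson score interval (my own illustration, not from any particular report):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# "Only 2 out of 12 users understood this command":
low, high = wilson_interval(2, 12)
print(f"True rate plausibly anywhere from {low:.0%} to {high:.0%}")
# roughly 5% to 45% -- a finding, but hardly a commandment
```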

Lessons learned

  • Designers and Usability Engineers should insist on working together on projects. This means the Usability Engineer is available during the concept design phase and the Designer is available during the test design phase. (With iterative testing, the designer must be available for each design cycle.)
  • The customers should require design and usability engineers to work together. This will often require the usability engineer to come in early for 1-2 days in the conceptual design phase before their main work begins week(s) later. (Yes that also means if the engineer is an external contractor, the customer must pay the Usability Engineer for this work.)
  • Customers should also realize that usability engineers do not provide solutions; they propose challenges and problems that need to be solved.
  • Usability Engineers may be great designers or may be crap designers, but as long as they include an objective design rationale for their proposed solutions, they will always be helpful.

It’s all about the Concept

This editorial is one very dear to my heart, as it touches on the cornerstone of good design and something I miss in all too many HCI designers’ work: the concept. In order to keep to the same points as the original, I have edited it only lightly.

Good design is harder and harder to find these days. It is disheartening when people present a single window or Web page and ask for an evaluation, especially when the question is: “Is this good design?” How can a design review be conducted on static interfaces? What is possible to evaluate? What constitutes good design? Is it possible to judge a design from static screens?

As we said earlier—in fact way earlier in an article co-written with Wendy Mackay in March 2001—when discussing the importance of contextualizing design, the issue is not whether a design is good, but is it a good design of…what?
When starting to review a single screen, your heuristic-laden ego itches to pour out criticisms. A copy command on the file menu! Are you nuts?! You don’t put buttons on the bottom left! Serif fonts! Are you mad? Where did you get that typeface? Spinners are so 1990s! Script typefaces on mobile devices? Split buttons! Are you sure about that? But instead of continuing that tirade, let’s pause for a moment and ask: design of what?
First off, let’s suggest some good questions. Do you really know the context to start judging a design correctly? What aspect of design are you reviewing? The visual design? The information design? The layout design? The interaction design? How do you “see” an interaction design in a static page?
There are of course many ways to represent all of the above. We’re interested in how you do it. Do you divorce these aspects of design, or do you combine them in certain ways? Which combinations have been most successful for you?
It’s difficult to divorce one from the others: All aspects of design must work together in a unified concept. That concept involves rich knowledge preceding the design activity: of the end users, their background, their tasks, their mental models. It doesn’t stop there: You then need to understand your engineering constraints: what your developers’ toolkit can and can’t do, what sort of custom code your design will require, and whether your design needs to absolutely follow standards for future evolution and code maintenance, or if you’re able to leap into new territory and design a new widget or two. Further rich knowledge can and should influence the conceptual design: understanding the business model of the company producing the software. Is this a demo? Can it cut corners, or is it production quality? Is it a step in a long line of dot releases? Is it the first version? Can you take risks? Do you need to reach feature and usability parity with competitors, or do you need to excel and claim best-in-class?
Before you can understand how to design, you need to understand design. The conceptual design is more than any one facet of design; it’s a gestalt that is more than the sum of the parts. Taking this perspective, how do you evaluate that single screen?—Jonathan Arnowitz and Elizabeth Dykstra-Erickson

This is a draft version (preprint) of material which was later published in substantially the same form in my Rant column in ACM’s magazine. The published version is a copyrighted feature of Communications of the ACM. All requests for reproduction and/or distribution of the published version should be directed to the ACM.