Real Interfaces
UI Religious Wars

By: David K. Every
©Copyright 1999


A "real" interface is where program designers decide that rather than create a User Interface using a more abstract metaphor, they sort of cheat, and just make a simulation of a "real" object.

For example, you make a picture of a remote control (for a TV or VCR), make the controls on it "active", and try to make them work just like the real thing.

Now most metaphors do try to borrow from the real world, but there is a difference between abstract metaphors and real world ones. Apple's "Desktop Metaphor" was abstract -- you can move files and folders around on your computer desktop, not completely unlike a real desktop, but many things are not exactly like the real world. For example, you don't drop files onto your printer to get them to print. When you move something from one file cabinet to another it doesn't automatically make a copy (the way moving a file from one drive to another does). Abstract metaphors don't even try to look exactly like the real thing either -- usually opting for simpler iconic or stylized representations (trying to imply this is "like" that -- but not necessarily exactly that).

Real world metaphors usually try to borrow as closely as possible. They don't use stylized representations (like icons); instead they opt for realistic pictures of the objects -- and the behaviors don't just aim to be similar, they try to be "exact", even when that doesn't make sense. There was once a "real world" version of the Finder that had a little picture of an office with a desk. You didn't have a drive; you clicked on the drawers in the desk or file cabinet to file things, and all the programs (tools) you wanted to run were arranged on the desk or around the office. It had a few advantages and lots of disadvantages, wasn't very popular, and people lost interest quickly. About 8 or 9 years later, Microsoft tried to copy it with "Bob(tm)" -- it too was a flop.

There is something of a religious war over these interfaces in human factors (User Interface) circles about whether they are good or bad. The topic definitely seems to polarize people, for many valid reasons (on both sides). I tend to be pragmatic about it (and don't come down on either extreme) -- though I do like to avoid real interfaces when possible and only occasionally find them better than a good abstract metaphor. But understanding the whys can help you understand more about computer-user interface design.

The Balances

Many UI people dislike real interfaces for valid reasons, but there are some real advantages to them as well. Let's just look at the pluses and minuses of each approach.

Because users are usually familiar with the real device (say a remote control), they can easily understand how a "real world" interface will work. For example, they see the off button on their simulated remote, and if they click on it the simulated VCR will turn off. When they manipulate the channel, volume, play, pause and so on, it does exactly what they would expect. They just "get it". The interface is fairly obvious from first look. But looks can be deceiving, and this is why many UI people have problems with so-called "real world" interfaces.
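
To make the point concrete, here is a minimal Python sketch (all names are hypothetical, not from any real product) of why such an interface is so easy to grasp: each picture-button on the simulated remote maps straight onto the action the user already expects from the physical device.

    class SimulatedVCR:
        def __init__(self):
            self.powered = True
            self.channel = 3
            self.volume = 10

        def power_toggle(self):
            self.powered = not self.powered

        def channel_up(self):
            self.channel += 1

        def volume_up(self):
            self.volume = min(self.volume + 1, 20)

    def build_remote(vcr):
        # the on-screen remote is little more than a lookup from button
        # hot-spots to device actions -- which is why users "just get it"
        return {
            "power": vcr.power_toggle,
            "channel_up": vcr.channel_up,
            "volume_up": vcr.volume_up,
        }

    vcr = SimulatedVCR()
    remote = build_remote(vcr)
    remote["power"]()      # clicking the simulated power button turns the VCR off
    print(vcr.powered)     # False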

What if users don't really know the real world interface? Then your simulation will be of little use to them. A sound mixing board is great for people who know how to use one -- but many people may want to control music without understanding the subtleties of mixing board operation, and they will be confused by your interface. Imagine a user of your simple music program who just wants to change the volume on two different channels, or set the balance -- the last thing they want to see is a 36 channel mixing board full of LEDs and an equalizer and so on. Many times a simpler (more abstract) interface can do fine.

Real world interfaces usually take a lot of screen space (relatively). You don't have abstract stylized buttons and controls which can be small (and simple); you need detailed buttons to give them the right effect. Often you can't distort objects and scale them like in an abstract metaphor. In an abstract metaphor you try to make the important things larger than they might be in real life, and unimportant things smaller -- but real objects just are the size/shape they are. You are trying to convey reality, not distort it. If you want to stress a pencil tool, in the abstract you can do more of a cartoon with a big pencil on a little piece of paper, and the user gets the idea that the pencil is what is important in this image. But a real-world metaphor is more about keeping scale (keeping things real) -- that means for the same size piece of paper the pencil is really small, so usually you have to make both much larger to get enough detail in the pencil.

In real world interfaces there is the problem of TMI (Too Much Information). In a real world interface you might have a picture of a Tudor house (or rambler, Victorian, ranch, etc.). With that much detail it can get tricky to know what the critical information is. Users can see the door, the windows, the doorbell, the building materials used, the lawn, the landscaping and so on -- and all the unneeded information just confuses the message. An abstract interface is more likely to have a little stylized cartoon house for "home" -- with all the extraneous information simplified out of the image. It is like trying to convey the idea of a box to someone (as in, "just draw a box") -- it might confuse them if I drew a beautifully detailed picture of a 3 dimensional box, wrapped in shipping paper, bound in twine, and having proper postage affixed (along with a Federal Express label and shipping address). All the extra information is cute -- but it is just noise (visual confusion). The signal (important data) may just be a shape with 4 lines set at right angles to each other.

Real objects have extra information that makes their interface valuable (easier to use and understand) in the first place. When you slide a slider or turn a knob it gives tactile feedback -- it may have little notches for the middle position, or multiple sliders on a mixing board may move together (have friction to keep them together). Resistance, pressure and feel all matter -- and that is very hard to simulate well. Once simulated, how do you turn it off? You can have "notches" or things that have visual friction in your simulated interface (things that snap or move together), but that often sacrifices some control, and users often end up fighting your interface instead of enjoying it. If you add options to turn it on or off, then you've got little checkboxes, controls or preferences that are confusing and not on the real object anyway. So simulating a real world object is a lot harder than people think.
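
As a rough Python sketch (the notch positions and snap radius are invented for the example), here is what simulating one small piece of that "friction" might look like: a virtual slider that snaps to a detent when released near it, plus the preference you suddenly need in order to turn the behavior off.

    SNAP_RADIUS = 3         # how close (in slider units) counts as "near a notch"
    NOTCHES = [0, 50, 100]  # e.g. minimum, centre detent, maximum

    def release_value(raw_value, snapping_enabled=True):
        """Return the value the slider settles at when the user lets go."""
        if not snapping_enabled:
            return raw_value
        for notch in NOTCHES:
            if abs(raw_value - notch) <= SNAP_RADIUS:
                return notch   # simulated "click" into the detent
        return raw_value

    print(release_value(48.7))                          # 50   -- snapped to the centre notch
    print(release_value(48.7, snapping_enabled=False))  # 48.7 -- but now you need a preference for this

The last line is the catch: the real slider never needed a "snapping" checkbox at all.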

Another problem with real world interfaces is that you can't easily extend your interface beyond what the real world object does -- or if you do, you violate the consistency of your own metaphor (remember, it is supposed to behave like the real world). When you put a CD into a CD player you don't get a popup menu on your remote to choose the song you want to play by title -- but that would be nice! An abstract interface can do that much more easily than a real world one. In the real world your remote doesn't keep a list of your favorite channels based on your usage, nor web links for each channel or show, and so on. So the real world interface really binds the designer to real world limitations -- and limits them accordingly. Whenever you try to extend the real world, you are really violating the constraints of the interface.

Users are also less likely to experiment with possibilities on a real world metaphor. They make the assumption that they know what it does (or think they do). But by definition it is a computer, and only a model of the real world. So when you do extend something, they may never know or never try it. Some will see a seam in the remote and assume it is where you open the control to put in batteries (that is how their real remote control works) -- but you had made it a pop-out drawer with more options, and assumed they knew that your control had a drawer. They can be so comfortable with their real world item that they won't think that yours could be any different.

The real world also has many limitations of manufacturing and cost. The behaviors are not the best in the world -- they are just compromises between "good enough" and what can be made cheaply. In the real world I would love it if my VCR remote had a screen of its own (or used an on-screen display) to show me a little chart of all programs for a given date/time, so I could just click on a show and have it automatically scheduled to record for me. But there are practicalities that keep that from quite working in the real world (though they are trying). The abstract metaphor could do this behavior fairly easily -- just show me a TV guide page (in chart mode) that says "click on the show you want to record". Users think, "Wow, this is easy". But on a real world device you usually have to set the start-time and stop-time (and day/date), and quality, and channel. In my abstract metaphor I might love to have a dial on a chisel (virtual sculpture) to make it wider or narrower, or to vary it from flat to "U" or "V" shaped. But on a real world metaphor those controls would probably look confusing (and shouldn't be there) -- it would likely scare people, and they wouldn't be sure what the object is (after all, you just changed reality). The real world just can't keep up with our virtual world -- so why restrict ourselves by creating a real-world interface?

Sometimes there are multiple valid real world interfaces. An example is one that I bump into with video conferencing systems. You'd think people are used to looking at themselves, and they are -- in a mirror, not a camera. When you are on TV (or viewing yourself through a camera) you have one behavior -- when you move to your left, the image on the screen you are watching moves to its left (your right). When you normally see yourself (in a mirror) and you move to your left, your mirror image moves to its right (your left). Notice the exact opposite behaviors. Which is correct? Most users are immediately comfortable with the mirror metaphor; just watch users on a video conferencing system (using the camera metaphor) -- they see themselves on the screen too far to their right (not centered), so they move to the left expecting their mirror image to center, and instead they fly off the right side of the screen. Then the user compensates and moves the other way to center themselves. So the mirror image is better, right? In that metaphor, watch the user hold up a book cover to show the other person what it says, and they can be frustrated because it can't be read (it is flipped horizontally). There are solutions and compromises -- mine has always been to mirror what the near-side viewer sees, but not what the far-side viewer sees. But that solution has problems too -- if there are multiple systems in the same room, users are perplexed by the inconsistency between their screen and everyone else's (no one likes WYSINWYG -- What You See is Not-exactly What You Get). Believe it or not, these subtleties of interface can make or break a product and how well it is liked. So you can't just assume any real world metaphor (or any metaphor) will do; you have to choose the right metaphor to start with.
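
Below is a small Python sketch of the compromise I describe (purely illustrative; frames here are just rows of values rather than real video buffers): the near-side preview is flipped horizontally, while the far side receives the unflipped camera frame.

    def mirror(frame):
        """Flip each row left-to-right for the local self-view."""
        return [row[::-1] for row in frame]

    def present(frame):
        local_preview = mirror(frame)   # near side sees a mirror image (moving left feels natural)
        remote_view = frame             # far side sees the camera image (book covers stay readable)
        return local_preview, remote_view

    frame = [["L", ".", ".", "R"]]      # a toy one-row "frame"
    print(present(frame))               # ([['R', '.', '.', 'L']], [['L', '.', '.', 'R']])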

Murphy's law of interfaces dictates that the user will always assume (or want) the opposite metaphor from you. This applies to all metaphors (not just real world ones). PageMaker used a more lifelike (but slightly abstract) pasteboard metaphor -- you laid down a column or picture, and whatever didn't fit was cut off or reflowed onto later pages/columns. Quark and Ready-Set-Go used what I thought was a superior but even more abstract "Object / Frame" metaphor -- you drew frames and then filled them with text or pictures. Many people loved the more "real life" interface of PageMaker (if they were trained to use a pasteboard). But since I (and many other computer users) were not trained to use a pasteboard, we preferred the more abstract frame metaphor (which was more like other programs) -- and as it turned out, the more abstract interface was also more dynamic and adaptable. Quark seemed to eat up more of the "new" and "power user" market -- while PageMaker gobbled up more of the traditional market (traditional publishers). Your customer base can really surprise you, and your choices may dictate your customer base.

One of the great advantages of real world interfaces is that they are cross platform by nature. Since they use their own controls and interface, they work equally well (or equally poorly) whether you are running them on DOS, Mac, Windows, UNIX and so on. They have their own look and feel. Either the user understands your interface or they do not -- but they understand it equally on whatever platform they are running. The disadvantage is that it doesn't fit with the platform's native interface -- it is equally inconsistent everywhere. Windows and the Mac have certain behaviors and controls that are designed to take advantage of screen real-estate and other things, to be the best computer controls possible (and so all controls of a certain type look the same). Imagine a slider control, which is a great way to set volume -- it shows you its value just by looking at it, it has absolute positioning, it is a standard control, and so on. Remote controls don't have sliders (too expensive, unreliable and easy to bump in the real world) -- so they have thumb wheels or relative up-down buttons. Thumb wheels don't show value very well, and relative controls are annoying and don't give you good feedback as to how loud something is. And these are all nonstandard controls compared to the slider. So real world interfaces are always nonstandard interfaces -- but equally nonstandard on any platform. Each interface is its own little world.
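
Here is a toy Python illustration of that point (the classes and the 0-20 volume range are made up for the example): an absolute slider carries its state visibly and can be set in one gesture, while remote-style relative buttons only nudge a hidden value.

    class VolumeSlider:
        def __init__(self, value=10):
            self.value = value                    # visible at a glance from the thumb position

        def set(self, value):
            self.value = max(0, min(20, value))   # one gesture: drag straight to the level you want

    class RelativeButtons:
        def __init__(self, value=10):
            self.value = value                    # nothing on screen shows the current level

        def up(self):
            self.value = min(20, self.value + 1)  # reaching 17 from 10 takes seven presses

    slider = VolumeSlider()
    slider.set(17)

    buttons = RelativeButtons()
    for _ in range(7):
        buttons.up()

    print(slider.value, buttons.value)            # 17 17 -- same result, very different effort and feedback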

Different goals demand different interfaces. There is always a war between "Ease of Learning" versus "Ease of Use" versus "Power Use". UNIX is the king of sacrificing the first two for the latter -- hard to learn, not intuitive to use (like memorizing dozens of archaic vi commands), but very powerful. The Mac is easier to learn and use, but sometimes takes longer to scale up and get the power features in there (it took many years to get scripting into the Mac). Real world metaphors are easy to learn in that people can walk up and imagine what they will do -- but try to turn a little thumb control on screen and you will find that usability sometimes suffers (not much space to manipulate the control, and so on). Power use is something that can't easily be added to real-world metaphors. For example, I want to pick from a list of my favorite music to play in QuickTime, or tell the controller to play at random from only 5 out of the 10 songs on the CD (or always block a song or two that I really hate). The real world just doesn't have these things. I may get close through a complex series of commands via the remote, but it just isn't the same thing. If I want to pick from a list of my favorites, I don't want a little simulated remote control popping out that I have to open a drawer on, and then pick an icon from a 2 dimensional grid -- I want to say "Open" and type the name of the song I want to hear from an ordered list, or save a favorites file. Many people want something that is quicker and easier to use (and has more features) -- even if it is a tad harder to learn.
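
For what it's worth, here is a hedged Python sketch of those "power" features (the track names and the blocked set are invented for the example) -- trivially expressed as list operations in an abstract interface, but with no natural home on a simulated remote.

    import random

    tracks = ["Track %d" % n for n in range(1, 11)]   # hypothetical 10-song CD
    blocked = {"Track 3", "Track 8"}                  # songs the listener never wants to hear

    def build_queue(tracks, blocked, count=5):
        # filter out the hated songs, then pick a random subset to play
        candidates = [t for t in tracks if t not in blocked]
        return random.sample(candidates, min(count, len(candidates)))

    print(build_queue(tracks, blocked))   # e.g. ['Track 9', 'Track 1', 'Track 5', 'Track 10', 'Track 2']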

Conclusion

So interface wars are often religious debates. There are many valid points on both sides, and lots of tradeoffs. But don't confuse "it is all opinion" with "all opinions are equal". I've seen many people and companies just decide that "it is all opinion" and so make their decisions without thinking about WHY they should pick one over the other. They often just go to user studies to avoid thinking about and knowing the problem themselves.

This is one problem I have with user studies. Companies will poll 10 people who are familiar with one way -- and assume that their opinion has some meaning. The reality is that the user study is probably biased by the sample and the designers. There are many REASONS to make a decision, and users don't know those reasons. It doesn't matter if users prefer a real world interface over your metaphor in the first 20 minutes of use -- that may change after days or weeks of use. One design change in one part of the interface can cascade and affect dozens of others in really bad ways. A real world interface may be great if the version you have will never have to change or grow (it just does what you want) -- like the Apple Calculator, which hasn't changed and just works. But if the features need to change for the next version, then your real world interface becomes a millstone around your neck. Nothing pisses your userbase off more than changing the metaphor. So "just pick one" is the worst way to decide on an interface -- you have to know why it is sometimes right to ignore the users. Easy to market is not easy to use -- too often I've found features that looked OK in theory, but were barely usable designs (this is my biggest problem with Microsoft Windows). It is important for designers to be able to listen to the users -- but it is also important to know when to ignore them. So good UI people are often far more important than good UI studies.

Real world interfaces are very hard to do well, and very limiting. They have some value and are easy to learn -- but you have to watch that you aren't sacrificing usability, features and capabilities for that ease of learning or marketing glitz (attracting users). Real world interfaces almost always force designers to mix their metaphors and have a little "real world" interface mixed in with some native UI extensions -- which is why it is so easy to make the real world interface peculiar and inconsistent (with the rest of the computer interface and with the real object). Abstract metaphors allow far more change over time and more power features, and let designers weight display elements, create more scalable interfaces, and so on. Good abstract interfaces can be as easy (or nearly so) as good real-world interfaces, but often far more powerful. Too many people think it is "easier" to do real-world interfaces -- and it is easier to do them poorly -- but in reality it is much harder to do a real world interface well than to create a nice abstract interface that can grow with your needs. So, because of the many hidden pitfalls of real world interfaces, I try to avoid them.


Why this article exists

This whole article came about because of QuickTime 4.0, which changed its interface from a nice, simple abstract interface to a more "real world" one. A bunch of people sent me email about an "Interface Hall of Shame" article (http://www.iarchitect.com/qtime.htm) trashing QuickTime 4.0's interface changes. Many wanted to know what I thought about the site (Isys Information Architects) and what I thought about the flaws in the new interface.

First let me say that I have read and enjoyed the Isys site for years now and find it one of the best sites on the web for discussions on User Interface details. Basically I agree (to some degree) with every point that article made! I would recommend it to others to think about the tradeoffs in good interface design. I was going to write a similar article, but thanks to their efforts I now don't feel I have to, and in fact they clearly addressed some points that I might have missed. (I'm not sure that is what some people wanted to hear).

In a few ways I think the article is too harsh; it looks only at the many negatives and doesn't stress the few positives and added features. QuickTime has a couple of reasons to go to a cross platform, more "real world" interface -- the most obvious being BECAUSE it is cross platform, and used by a lot of newbies. So while it is a noble goal, QuickTime does so much that in many ways a real-world interface is just an impossible goal (trying to make one interface that does everything your TV, VCR, Slide Projector, Video Conferencing system, CD-JukeBox, Synthesizer, and Stereo do combined, along with file translation, Virtual Reality and many other features). So I think the QuickTime team created a Sisyphean task for themselves. The first beta of QuickTime 4.0 is very short on polish and should probably have used standard controls and behaviors where possible (in other words, I prefer the previous version's interface because of its screen real-estate efficiency and simplicity).

The article also did not weight the negatives according to importance, so minor detractions seem to have major weight, and it may seem as if the author is implying that the interface is unusable. But my interface articles do the same thing. The point of these interface articles is to mention and address ALL the points for the record (to pick all the nits) -- not to imply that some minor inconvenience makes the whole interface unworkable, but to categorize all the problems so that you can weight them and make reasoned interface decisions in the future. So I can hardly hold that against them now, can I? These articles just have to be read from the proper perspective of "pointing out issues", and not the erroneous "trying to bash with one-sided attacks". I think the article did an excellent job of pointing out issues and potential issues -- which was its purpose.

What is scary is that Apple seems to be going towards more of these "real world" interfaces, trying to use things like drawers and physical controls and so on -- and those have many problems and issues. If done really well they can be usable for simple things -- but QuickTime, Sherlock-3 and the like are not simple things. I'd much rather have a scrolling list of "favorite items" than some docked icons in a grid -- and I know that I'd rather have a more powerful abstract interface than a more confusing and limited real-world one.


Created: 06/06/99
Updated: 11/09/02

