Saturday, January 10, 2009
I tend to like having a movie playing while I'm working, preferably one I know well so I can ignore it. Today Alien was up on Netflix Instant. While getting up to make more tea I noticed the early scene where the Nostromo's captain goes into the command area to commune with Mother. What the hell were the designers thinking when they concocted the fake system interface depicted in that scene?

There are a million tiny status lights, all white, set into a white background. None of them has a readable label. The whole surface of the pod is encrusted with incomprehensible but significant miniature beacons. It has frequently been pointed out that some errors on our limited space missions so far have been traced back to ambiguous or confusing information displays in spacecraft cockpits. What evolution did Ridley Scott expect to take place in a few decades that would allow a standard human pilot to instantly discriminate one white light from ten thousand others and act on its information?

Then there's the higher-resolution interface, which turns out to be a 9-inch, 1970s-vintage Tektronix terminal. The human input device also looks like an IBM keyboard with the top plate taken off.

A movie made in the late 70s looks forward to the mistakes of the 60s and the limitations of the 70s. Really, what was the art director being paid for?
Friday, November 21, 2008

I finally got around to reporting this as a bug in iTunes:


When calendars are synced between iCal and iPhone via iTunes, iTunes assigns arbitrary colors to the calendars on iPhone. These colors cannot be changed, and do not match the colors chosen in iCal. On occasion, iTunes will assign two calendars the same arbitrarily-chosen color, making them functionally indistinguishable on iPhone.

This is terrible user interface design because users become accustomed to the 'meaning' of a calendar's color and rely on it to recognize calendars at a glance. The mental 'wrench' involved in translating color recognition between two instances of the same calendar data imposes an unnecessary cognitive load on the user.

Solution

  1. Allow the user to set the color of iPhone calendars.

  2. If the intention is not to allow the user control of the calendar color, for simplicity of implementation on iPhone, then the logical solution is to use the same color as specified in iCal.

  3. If the synchronization API does not expose this value, then the algorithm for assigning colors should take care not to assign the same color to two or more calendars. Colors should be assigned in a predictable way (e.g. red for the first calendar in iCal, green for the second, etc.), so that the user at least has the chance of making a one-time change to their iCal calendars' colors in order to make them the same in both applications. A rough sketch of such an assignment appears below.
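
To make point 3 concrete, here is a minimal sketch, in Python and purely illustrative - the palette, the function name and the calendar names are all made up, and the real sync logic lives inside iTunes - of a deterministic, collision-avoiding color assignment keyed to the order of the calendars in iCal:

    # Hypothetical sketch: give each calendar a predictable, distinct color.
    # PALETTE and the example calendar names are assumptions for illustration only.
    PALETTE = ["red", "green", "blue", "orange", "purple", "brown", "teal", "magenta"]

    def assign_calendar_colors(calendar_names):
        """Map calendars to colors in a stable, predictable order.

        The first calendar always gets PALETTE[0], the second PALETTE[1],
        and so on. No two calendars share a color until the palette is
        exhausted, at which point the assignment wraps around (unavoidable
        once there are more calendars than colors).
        """
        return {
            name: PALETTE[i % len(PALETTE)]
            for i, name in enumerate(calendar_names)
        }

    # The same input order always yields the same colors.
    print(assign_calendar_colors(["Home", "Work", "Birthdays"]))
    # {'Home': 'red', 'Work': 'green', 'Birthdays': 'blue'}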

I filed this as a bug on Radar, and also in the new OpenRadar (here). Apple probably won't take any notice but one has to point this stuff out. 

Monday, August 11, 2008

When I was a poor civil servant (plus ça change) and part-time graduate student I longed to own a Mac. I’d read everything available about them, and nothing I read did anything to dissuade me. What I wanted, as well as the crisp, typographic display and the integration between the applications, was the windowing system. I already knew, somehow, that a Proper Computer™ would be able to show more than one program on the screen at once and let you move between them. Lacking the funds to buy the current model Apple was offering, a Mac SE, I made do with an Atari 520STFM. This was a strange machine with a dual personality: a games machine with aspirations to being a workstation, one which used the non-broken version of Digital Research’s GEM windowing environment. This system had been shamelessly copied from the Mac interface, so much so that Apple sued Digital Research and forced a settlement that prevented later versions of GEM (used on DOS machines, notably those from Amstrad) from using overlapping windows, drive icons on the desktop or other elements of Mac window manager furniture. Apple didn’t cite Atari in its suit, so on the ST machines GEM still kept its original, highly Mac-like functionality. It didn’t multi-task, but even so one program could show more than one window at once. On the desktop, it had multiple resizable and overlapping windows onto the file system, just like the Mac. It had WYSIWYG screen display and printing, a menu bar across the top of the screen, a crisp monochrome display, and was so much like a cut-price Mac that it was sometimes called - after Jack Tramiel, the Atari founder - the Jackintosh.

Later, when I was working in a large accounting firm, I was issued a Compaq Deskpro 386 and acquired a copy of Windows/386. This had the ability to run several Windows applications at once (even though they were painfully ugly and crude by later standards) and to multitask DOS applications, all in separate windows which could overlap each other and be moved around and resized any way you wanted. Then came Windows 3.0, and all the versions after. When I worked at the University of Cambridge I spent lots of time on UNIX computers, using the X Window System and a variety of window managers - twm, the basic one; mwm, the Windows-like Motif one; and my then favorite, fvwm, which allowed you to use multiple virtual desktops on the same screen and flick between them with a simple keyboard command. Along the way I picked up some more powerful Macs at home too, which ran System 6 and System 7 and almost allowed proper multi-tasking of applications and their windows.

All this time screens were getting bigger and higher in resolution, meaning the area available to run lots of programs at once was increasing. And there was a natural hierarchy of attention which translated reasonably well into the vertical stacking order and screen space allotted to programs and their windows. The main thing you were working on was at the top and had the largest window, such as a text editor or word processor or spreadsheet; then behind that and off to one side would be your mail application so you could see new mail arrive and scan its sender and subject without pulling its window to the front, and decide if you needed to read it right now or carry on working. Over on one side in a tiny window would be a chat program, either instant messaging or an IRC client, sized so you could keep an eye on the conversation without having to focus exclusively on it. A couple of terminal windows for fiddling with stuff on remote servers, a call tracking system way in the background or even relegated to another virtual screen, and the odd utility were all shuffled in a loose stack, arranged so each got the screen space and attention it merited and no more.

This is the way I have worked for well over a decade, maybe more, and so have all my peers. It just seemed obvious, intuitive and natural. But when it comes to the mass use of GUIs, we’re the minority.

When users began to get Windows on the desktop, the screens were pretty low resolution - 640 pixels across by 480 high. There wasn’t really room to get more than one application’s window on the screen at once. Even the next generation of screens, at 800 by 600 pixels, was only good for showing more of your word processing document or your spreadsheet. So it was only natural that most users reacted to multi-tasking, windowing operating systems by using them as full-screen task switchers, much like the short-lived DOS application managers - TopView, DESQview - that preceded Windows.

Since TFT display technology became affordable, with transistor-based panels replacing cathode ray tubes on the desktop, users’ screens have followed the trend (described by Gordon Moore, co-founder of Intel) of geometrically increasing transistor density for least cost. This means that a single-task user such as a word processing worker now commands a screen of maybe 1600 by 1200 pixels, which is overkill even if you turn on all of Microsoft Word’s toolbars, palettes and floating interface gadgets. To me, it’s absolutely crazy to use all that area - two megapixels - to display just one application’s window. But that’s exactly what most users do.

That’s not to say that most users just use one application, zoomed out to full screen size. They use many applications, but each one’s window is maximized. Typically they use the Windows Taskbar to track the open windows, and switch between them by pressing each window’s Taskbar button with the mouse. This produces absurd situations, for example: a user composing a short e-mail in Outlook or a similar program finds that the whole text of their message becomes one very long line running from one side of the enormous message window to the other. That’s not a comfortable way to write, especially if you review it before you send it, and if the recipient is reading mail full-screen too it’s equally difficult to read. I’ve seen so many people, smart people, do this that it can’t be wholly due to lack of training, or incuriosity about the possibilities of the user interface. There must be a benefit, but for the longest time I had no idea what it might be. Then the other day it came to me in a 4 AM epiphany (I live high up on the Upper Upper West Side of Manhattan, which is very noisy indeed. Being woken up at 4 AM is normal). In order to explain what I realized, I have to talk about a bit of Human-Computer Interaction (HCI) science.

With command-line driven systems, ergonomics is all about making the commands memorable, short and consistent in their syntax and behavior. With GUI systems, it’s about making elements of the interface obvious and easy to locate, and putting them in the best place for users to find and use them. A very important model for studying this field is Fitts’s law, which predicts the time needed to move to a target area on the screen as a function of the distance to the target and the target’s size. The actual formula for the law is shown just below. The bottom line for our purposes today is that if you make targets wider, people find them easier to hit. A corollary of Fitts’s Law is that things you want to click on should be sized in proportion to their importance or frequency of use.
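
In the Shannon formulation generally used in HCI work, the predicted movement time is

    T = a + b \log_2\!\left(1 + \frac{D}{W}\right)

where T is the time to reach the target, D is the distance to the target, W is the target's width measured along the axis of motion, and a and b are empirically fitted constants.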

This doesn’t, at first glance, seem to have any relevance to the reason Windows users maximize their windows. There is one benefit to maximizing the window that doesn’t have any connection to Fitts’s Law, but is connected with how we learn to do things. Tasks involving hand-eye coordination are easier to learn if the target is in the same place every single time. Recall that every window on the Windows desktop has its own menu bar, at the top of each window. In the “Power User” Windows desktop approach, with its overlapping, variously sized windows, the user has to accurately hit the menu item they want in each individual window. Power Users don’t tend to find this difficult because a) over many years of using GUIs they’ve become horribly adept at accurately slinging the mouse anywhere at all on the screen with pinpoint precision, and b) they make enormous use of keyboard shortcuts instead of menus to accomplish the same tasks a casual user handles with the mouse. However, casual users tend to be mouse-centric, either because they were only trained to use the mouse and menus, or because that’s the primary way to explore the system (experience of earlier computer systems has scared them off experimentally pressing keys to learn what happens). Let a casual user use your Power User Windows or Motif desktop and they’ll uncomfortably hunt around the menus, missing the one they want, until you let them click the Maximize button. As soon as they do this, the ubiquitous File and Edit menus, where 80% of the work gets done in most users’ computing lives, jump to the same location they have in every single window. Now the user can nail them with ease and fluency.

The relevance of Fitts’s Law to maximized windows is this. I said above that things should be wide in proportion to how much they get used. On a Power User desktop, the menus are always pretty narrow and hard for a non-expert to hit reliably. Maximizing the window doesn’t make the actual physical size of the menus any greater. However, the edges of the screen are treated specially by most GUIs. Assuming you only have one screen, if you flick the mouse up so the pointer goes to the top of the screen, and keep shoving the mouse upwards even though the pointer is already at the top, the pointer stays where it is. This makes the menu bar semi-infinitely wide. You can dispense with fine motor control and slow, precise movements when you’re aiming for the top of the screen. You want to save this file quickly? You can whack the mouse up to the File menu, pull it down and hit Save. The speed with which you can acquire the File menu makes a great difference to the perceived ease of use.
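
As a rough illustration with made-up numbers: a menu bar 20 pixels deep sitting 600 pixels away has an index of difficulty of

    ID = \log_2\!\left(1 + \frac{600}{20}\right) \approx 4.95 \text{ bits}

but when the screen edge stops the pointer, the target's effective depth in the direction of travel becomes unbounded, D/W collapses toward zero, and the index of difficulty heads toward zero bits, leaving little more than the constant term a in the movement time. That is what a "semi-infinitely wide" target buys you.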

This is exactly why the Macintosh user interface has always had only one menu bar, no matter how many applications may be running. The early Mac OS designers realized that consistency of positioning and hospitality to an overshooting mouse were going to be crucial in a usable GUI. One of the better-known interface engineers who worked at Apple for a long time, Bruce “Tog” Tognazzini, has written extensively about the use of Fitts’s Law in the design of the Mac UI.

What’s quite wonderful is that I don’t think users are consciously choosing to maximize the windows in order to give themselves this consistency and hospitality. It’s something they’ve done intuitively, the spatial and visual parts of their brain prodding the consciousness into giving the poor overworked centers of proprioception and coordination a break. Windows users are unconsciously adding back to Windows one of the important elements of the Mac UI that Microsoft failed to copy. 

Sunday, April 13, 2008
Integer Handicap
[compound noun]
The period of reduced efficiency while you adjust to the fact that the software vendor has relocated vital application functionality to strange and different parts of the user interface in the next version, seemingly for the hell of it.
Monday, April 07, 2008

First, this widget for planning BART trips. It's a glorious piece of information design. Then the essay on the thinking behind it, Magic Ink. It's like Edward Tufte started writing code for the Mac.


Sunday, January 20, 2008

Over the last few months, my friends and I have been finding that there's something very wrong in the state of Mac remote clients. I don't have a huge number of computers, but I have enough to make the use of VNC (the free screen-sharing system originally developed at the Olivetti Research Lab in Cambridge) necessary. I used to have a mix of Macs and Windows systems, so some years ago I settled on VNC as the lowest common denominator method of connecting to one machine from another. A VNC server runs as a native service on Windows, and there are mainstream clients which work well. On the Mac, it used to be that the canonical server was OSXvnc, now incorporated into Redstone's Vine Server. For a VNC viewer I used Chicken of the VNC. Chicken has a reputation for being very slow but very stable, and my experience certainly bore this out.

Then came OS X 10.4 (Tiger), which exposed the Apple Remote Desktop (ARD) screen sharing service in the Sharing preference pane. ARD uses another version of the VNC code-base. I happily used this on the server end while continuing to use Chicken as a client. A while ago I started using JollysFastVNC because it promised faster, snappier response from the far end. The promises were true, but I began to find that after a few minutes of using the Jolly client, the session would freeze. Restarting the client made no difference, nor did:

  • Stopping and starting the Apple Remote Desktop client on the server
  • Disabling and re-enabling the ethernet interface on the server
  • Unplugging and replugging the Cat5 cable on the server

Not only did it kill remote desktop access, it killed every ethernet service on the server too. The server wasn't talking to the VNC client, but neither was it talking to any other IP host - by name or by IP address. Nor could I renew the server's DHCP-granted IP address, a process that uses broadcast IP datagrams. I couldn't even see the server's MAC address in a client's ARP table, showing that the whole ethernet infrastructure was dead in the water. The only thing that brought my Trappist server back to life was a complete restart. I never got to the bottom of the problem, so I just gave up on Jolly's and went back to Chicken, and the problems went away. Then came Leopard.
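
For what it's worth, the checks I was doing by hand boil down to something like this throwaway sketch, which assumes a Mac or other BSD-ish client with the standard ping and arp tools on the PATH, and a made-up server address:

    # Hypothetical diagnostic sketch: is the server alive at the IP layer,
    # and does its MAC address appear in this client's ARP cache?
    import subprocess

    SERVER_IP = "192.168.1.50"  # placeholder; substitute the real server address

    def ping_once(ip):
        """Return True if a single ICMP echo request gets a reply."""
        result = subprocess.run(["ping", "-c", "1", ip],
                                capture_output=True, text=True)
        return result.returncode == 0

    def arp_entry(ip):
        """Return the raw ARP cache line for the address, if any."""
        result = subprocess.run(["arp", "-n", ip],
                                capture_output=True, text=True)
        return result.stdout.strip()

    print("ping ok:", ping_once(SERVER_IP))
    print("arp cache:", arp_entry(SERVER_IP))

When the server was in its locked-up state, both checks came back empty-handed, which is what pointed the finger at the whole networking layer rather than just the VNC service.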

You probably know that Mac OS X 10.5 "Leopard" has a slick, Apple-written Screen Sharing app. When I upgraded most of my machines to Leopard, I got rid of my third party VNC solutions. I was happy for a good long time controlling my Mac Mini (our media machine) from my MacBook Pro, both running Leopard. Connection times were fast and performance was snappy. Then the same lockups started. I'd be using Screen Sharing over Bonjour (mDNS) to control iTunes and the screen would lock up, and network traffic on the Mini would fall to zero in both directions. As when I was using Jolly's, a complete reboot of the server system was the only thing that cured the problem.

I can't find any reliable information on whether the Apple client uses the same codebase as Jolly's, but it's very odd that both clients offer identically brisk performance and have identically disastrous effects on the server machine. I hope the upcoming OS X 10.5.2 update brings a fix for this issue along with the many others it promises.


Friday, December 07, 2007
I was at Apple today and was given one of their new lightweight aluminum mini-notebooks. Surprisingly, it has a small fixed amount of storage and uses a pen input device. I have put photos on my Flickr page:

Here

Here

Here

Enjoy!
