Wednesday, January 21, 2009

"In a perfect world..."

Most design agencies are structured into silos, with people specialized in different aspects of design and management. I wrote about some of the problems this can create in an earlier post, but there's another aspect to specialization: we expect people in the later stages of the process to be able to fully realize the ideas we have as designers earlier on.

Too many times I have seen an idea implemented either without the interaction/design fully realized, or just plain implemented wrong. The results are almost always disappointing, and sometimes that seemingly small difference between well-polished and rough-around-the-edges is the difference between easy and frustrating. A poorly implemented great idea is often worse than a well-implemented mediocre idea.

This is a tough situation to deal with, because often you won't know if the idea fails in implementation until it has actually been implemented, and at that point it may be too late to try an alternate approach. And even if you know it is risky, sometimes it is worth the risk. My best suggestions?

Make a sample version/functional prototype
Try to create a mini version of your big idea. Problems may rear their heads that you didn't expect, while things you expected to be difficult may work out smoothly after all. Either way, it should tell you whether the idea is worth exploring further or abandoning.

Do it yourself
You can't always do this in a large agency (in fact, you probably rarely can), but one of the benefits of working small, or of being an all-in-one freelancer, is that you know the full skills and knowledge of everyone at every step -- because it might be you in every role. You may not know every single aspect of how to create something in advance, but it's important that you at least know how to learn everything needed, and that you can do so within the scope of your project.

Have a well-thought-out Plan B
Sometimes your Plan B may just be an earlier brainstorm that wasn't fully realized, but just as you should design for user failure, you should design for designer failure too. Unexpected obstacles are common, and the more ready you are with an alternate approach, the more likely your end result will be successful. Trying to patch up a sinking ship that has been battered and bruised may be a mistake -- sometimes you just need to scrap that ship and build another one. If you've already got the blueprints for the second ship, you'll have a head start.

I've half-joked before that I want to become a "reality consultant" -- someone who can look at all the big ideas and plans and offer some cold hard reality about what is or isn't achievable, what will be simple or difficult, and ultimately where energy should be spent to really ensure something good in the end. But until that is a real job, we can all be our own reality consultants and be ready to adapt if our "perfect world" vision doesn't come to fruition.

Wednesday, January 14, 2009

Predicting the future

Recently I've been reading and watching a few older sci-fi works, and I'm particularly intrigued by how they imagined the future would look, and how wrong they frequently were.

Paper
In Philip K. Dick's "Do Androids Dream of Electric Sheep?" (1968), there are numerous references to carbon papers and onionskins -- in fact, I don't think there is any reference to anything resembling a modern computer with an interface. And yet in this same book, which is the inspiration for "Blade Runner," there are androids so nearly identical to humans that it takes a bone marrow sample to determine whether someone is human or android. He was able to envision a future with complicated thinking robots, but not a future in which something more permanent and flexible than paper existed. (Obviously paper is still used, but could you imagine if the entire record of a criminal existed only on a photocopy somewhere, rather than in digital storage?)

Televisions
Watching "Aliens" (1986) and the later, rarely remembered TV series "Earth 2" (1994-95), I am struck by how they both failed to foresee the death of the old cathode-ray monitor. Thin LCD and plasma screens have already well outpaced old-school TVs, and new OLED screens are so thin as to be pliable. It's also interesting to see how whenever "static" or a weak signal is depicted, it is shown through the screen turning fuzzy or lines running across the screen, as one might experience with a weak antenna. Well, as anyone with digital cable or even online video watchers may know, when you get a crappy video, you see pixellation and possibly missed frames.

Gadgetry
Whether it's the phone in "Aliens" (dialed by sticking a plastic card into a slot) or the VR "gear" in "Earth 2" (a bulky headset with awkwardly swinging eyepieces), a lot of sci-fi failed to predict things like the Bluetooth headset (little more than an earbud) or even simple speed-dialing. And interestingly, nearly every piece of sci-fi seems to have thought that video-phones would be the way of the future. Well, the technology has been around for quite some time, and it just never took off. Why? Because we multitask, and if we're talking on the phone, there really is no need to also see the person. We might be doing the dishes or driving or doing some other task. We just don't really need or want that most of the time.

So what are we getting wrong now?
There are undoubtedly a number of things in our current predictions of the future that will begin to look foolishly short-sighted. I think of the standard keyboard and wonder if it will die -- will we continue typing this way at all? Will "typing" even exist as communication? I'm willing to bet that we'll soon be seeing more and more examples of mind-controlled interfaces. I'm not talking about anything magical here, just complicated systems for interpreting electrical impulses in the brain. It may not happen any time soon, but when you set a story far in the future, it's worth considering.

There's also something I've been reading about lately -- devices that are essentially real "transformers," objects that can become other objects: collections of "nano-machines" that rebuild themselves based on the user's need. I don't know the limitations of this, but it wouldn't surprise me to start seeing self-reconstructing devices within the next 20 years.

But, hey, in the 1960s they all thought we'd be taking rockets to the moon for vacation by now. And boy were they wrong.