Uncategorized


UX Insight Elements

Funny how things can pop into your head when you’re not thinking about them. I can’t remember why this occurred to me last week … but it was one of those thoughts I realized I should write down so I could use it later. So I tweeted it. Lots of people kindly “re-tweeted” the thought, which immediately made me self-conscious that it may not explain itself very well. So now I’m blogging about it. Because that’s what we kids do nowadays.

My tweet: User Experience Design is not data-driven, it’s insight-driven. Data is just raw material for insight.

I whipped up a little model to illustrate the larger point: insight comes from a synthesis of talent, expertise, and the fresh understanding we gain through research. It’s a set of ingredients that, when added to our brains and allowed to stew, often over a meal or after a few good nights’ sleep, can bring a designer to those moments of clarity where a direction finally makes sense.

I’ve seen a lot of talk lately about how we shouldn’t be letting data drive our design decisions — that we’re designers, so we should be designing based on best practices, ideas, expertise, and even “taste.” (I have issues with the word “taste” as many people use it, but I don’t have a problem with the idea of “expert intuition,” which is, I think, closer to what a lot of my colleagues mean. In fact, that Ira Glass video that made the rounds on tweets and blogs a few weeks ago puts a better spin on the word “taste”: an aspiration that may, for now, lie beyond one’s actual abilities until work and practice close the gap.)

As for the word “data” — I’m referring to empirical data as well as the recorded results of something less numbers-based, like contextual research. Data is an input to our understanding, but nothing more. Data cannot tell us, directly, how to design anything.

But it’s also ludicrous to ask a client or employer to spend their money based solely on your expertise or … “taste.” Famous interior or clothing designers or architects can perhaps get away with this — because their names carry inherent value, whether their designs are actually useful or not. So far, User Experience design practitioners don’t have this (dubious) luxury. I would argue that we shouldn’t, otherwise we’re not paying much attention to “user experience” to begin with.

Data is valuable, useful, and often essential. Data can be an excellent input for design insight. I’d wager that you should have as much background data as you can get your hands on, unless you have a compelling reason to exclude it. In addition, our clients tend to speak the language of data, so we need to be able to translate our approach into that language.

It’s just that data doesn’t do the job alone. We still need to do the work of interpretation, which requires challenging our presuppositions, blind spots and various biases.

The propensity for the human brain to completely screw stuff up with cognitive bias is, alone, reason enough to put our design ideas through a bit of rigor. Reading through the oft-linked list of cognitive biases on Wikipedia is hopefully enough to caution any of us against the hubris of our own expertise. We need to do the work of seeing the design problem anew, with fresh understanding, putting our assumptions on the table and making sure they’re still viable. To me, at least, that’s a central tenet behind the cultural history of “user experience” design approaches.

But analysis paralysis can also be a serious problem; and data is only as good as its interpretation. Eventually, actual design has to happen. Otherwise you end up with a disjointed palimpsest, a Frankenstein’s Monster of point-of-pain fixes and market-tested features.

We have to be able to do both: use data to inform the fullest possible understanding of the behavior and context of potential users, as well as bring our own experience and talent to the challenge. And that’s hard to do, in the midst of managing client expectations, creating deliverables, and endless meetings and readouts. But who said it was easy?

The UX Tribe

UX Meta-community of practice

I don’t have much to say about this; I just want to see if I can inject a meme into the bloodstream, so to speak.

Just an expanded thought I had recently about the nature of all the design practices in the User Experience space. From the tweets and posts and other chatter that drifted my way from the IxDA conference in Vancouver last week, I heard a few comments around whether or not Interaction Designers and Information Architects are the same, or different, or what. Not to mention Usability professionals, Researchers, Engineers, Interface Programmers, or whatever other labels are involved in the sort of work all these people do.

Here’s what I think is happening. I believe we’re all part of the same tribe, living in the same village — but we happen to gather and tell our stories around different campfires.

And I think that is OK. As long as we don’t mistake the campfires for separate tribes and villages.

The User Experience (UX) space is big enough, complex enough and evolving quickly enough that there are many folds, areas of focus, and centers of gravity for people’s talents and interests. We are all still sorting these things out — and will continue to do so.

Find me a single profession, no matter how old, that doesn’t have these same variations, tensions and spectrums of interest or philosophical approach. If it’s a living, thriving profession, it’ll have all these things. It’s just that some have been around long enough to have a reified image of stasis.

We need different campfires, different stories and circles of lore. It’s good and healthy. But this is a fairly recently converged family of practices that needs to understand what unifies us first, so that our conversations about what separates us can be more constructive.

The IAI is one campfire. IxDA is another. CHI yet another, and so on. Over time, some of these may burn down to mere embers and others will turn into bonfires. That’s OK too. As long as, when it comes time to hunt antelope, we all eat the BBQ together.

And now I’m hungry for BBQ. So I’ll leave it at that.

PS: a couple of presentations where I’ve gone into some of these issues, if you haven’t seen them before: UX As Communities of Practice; Linkosophy.

I don’t know how I missed this before, but I’m glad I ran across it.

If you haven’t seen this very brief clip of Edward Tufte critiquing the iPhone interface, check it out.

A couple of salient quotes:

“The idea is that the content is the interface, the information is the interface, not computer-administrative debris.”

“Here’s the general theory: To clarify, add detail. Imagine that. To clarify, add detail. And … clutter and overload are not an attribute of information, they are failures of design. If the information is in chaos, don’t start throwing out information, instead fix the design.”

When I first heard about the Kozinski story (some mature content in the story), it was on NPR’s All Things Considered. The interviewer spoke with the LA Times reporter, who went on about how the judge had “published” offensive material on a “public website.”

I won’t go into detail on the story itself. But I urge anyone to take the LA Times article with a grain or two of salt. Evidently, the thing got started when someone who had an ax to grind with the judge sent links and info to the media, and said media went on to make it all look as horrible as possible. Indeed, the more we learn about the details in the case, the more it sounds like the LA Times is twisting the truth a great deal. **

To me, though, the content issue isn’t as interesting (or challenging) as the “public website” idea.

Basically, this was a web server with an IP and URL on the Internet that was intended for family to share files on, and whatever else (possibly email server too? I don’t know). It’s the sort of thing that many thousands of people run — I lease one of my own that hosts this blog. But the difference is that Kozinski (or, evidently, his grown son) set it up to be private for just their use. Or at least he thought he had — he didn’t count on a disgruntled individual looking beyond the “index” page (that clearly signaled it as a private site) and discovering other directories where images and what-not were listed.
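For the technically curious, here’s a minimal sketch of how easily that situation arises. This is a toy setup of my own (I don’t know what software actually ran on the Kozinski server); Python’s built-in web server just happens to demonstrate the same default behavior: a directory with an index.html shows only that page, while any directory without one returns an auto-generated file listing to anyone who types its URL.

```python
# A toy illustration only -- hypothetical directory layout, not the
# actual server involved. Python's stock web server shows how "private
# but reachable" happens by default: index.html is all a casual visitor
# sees at the root, but /stuff/ (which has no index.html of its own)
# serves an auto-generated listing of its files to anyone who appends
# that path to the URL.

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Assumed layout:
#   ./index.html        <- "this is a private family site" landing page
#   ./stuff/video1.mp4  <- listed to anyone who requests /stuff/
#   ./stuff/joke2.jpg

if __name__ == "__main__":
    # Serves the current directory tree on http://localhost:8000/
    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```

Nothing in that default setup distinguishes “meant for the public” from “merely reachable,” which is exactly the gap the rest of this post is about.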

Lawrence Lessig has a great post here: The Kozinski mess (Lessig Blog). He makes the case that this wasn’t a ‘public’ site at all, since it wasn’t intended to be public. You could only see this content if you typed various additional directories onto the base URL. Lessig likens it to having a faulty lock on your front door, and someone snooping in your private stuff and then telling about it. (Saying it was an improperly installed lock would be more accurate, IMHO.)

The comments on the page go on and on — much debate about the content and the context, private and public and what those things mean in this situation.

One point I don’t see being made (possibly because I didn’t read it all) is that there’s now a difference between “public” and “published.”

It used to be that anything extremely public — that is, able to be seen by more than just a handful of people — could only get that way if it was published on purpose. It was impossible for more than just the people in physical proximity to hear you, see you or look at your stuff unless you put a lot of time and money into making it that way: publishing a book, setting up a radio or TV station and broadcasting, or (on the low end) using something like a CB radio to purposely send out a public signal (and even then, laws limited the power and reach of such a device).

But the Internet has obliterated that assumption. Now, we can do all kinds of things that are intended for a private context that unwittingly end up more public than we intended. By now almost everyone online has sent an email to more people than they meant to, or accidentally sent a private note to everyone on Twitter. Or perhaps you’ve published a blog article that you only thought a few regular readers would see, but find out that others have read it who were offended because they didn’t get the context?

We need to distinguish between “public” and “published.” We may even need to distinguish between various shades of “published” — the same way we legally distinguish between shades of personal injury — by determining intent.

There’s an informative thread over at Groklaw as well.

**About the supposedly pornographic content, I’ll only say that it sounds like there was no “pornography” as typically understood on the judge’s server, but only content that had accumulated from the many “bad-taste jokes” that get passed around the net all the time. That is, nothing more offensive than you’d see on an episode of Jackass or South Park. Whether or not that sort of thing is your cup of tea, and whether or not you think it is harmfully degrading to any segment of society, is certainly your call. Some of the items described are things that I roll my eyes at as silly, vulgar humor, and then forget about. But describing a video (which is currently on YouTube) where an amorously confused donkey tries to mount a guy who was (inadvisedly) trying to relieve himself in a field as “bestiality” is pretty absurd. Monty Python it ain’t; but Caligula it ain’t either.

Everybody’s linking to this article today, but I had to share a chunk of it that gave me goosebumps. It’s this bit from Leonard Kleinrock:

September 2, 1969, is when the first I.M.P. was connected to the first host, and that happened at U.C.L.A. We didn’t even have a camera or a tape recorder or a written record of that event. I mean, who noticed? Nobody did. . . . on October 29, 1969, at 10:30 in the evening, you will find in a log, a notebook log that I have in my office at U.C.L.A., an entry which says, “Talked to SRI host to host.” If you want to be, shall I say, poetic about it, the September event was when the infant Internet took its first breath.

This is based on a slide I’ve been slipping into decks for over a year now as a “quick aside” comment; but it’s been bugging me enough that I need to get it out into a real blog post. So here goes.

We hear the words Strategy and Innovation thrown around a lot, and often we hear them said together. “We need an innovation strategy.” Or perhaps “We need a more innovative strategy” which, of course, is a different animal. But I don’t hear people questioning much exactly what we mean when we say these things. It’s as if we all agree already on what we mean by strategy and innovation, and that they just fit together automatically.

There’s a problem with this assumption. The more I’ve learned about Communities of Practice, the more I’ve come to understand about how innovation happens. And I’ve come to the conclusion that strategy and innovation aren’t made of the same cloth.

strategy and innovation

1. Strategy is top-down; Innovation is bottom-up

Strategy is a top-down approach. In every context I can think of, strategy is about someone at the top of a hierarchy planning what will happen, or what patterns will be invoked to respond to changes on the ground. Strategy is programmed, the way a computer is programmed. Strategy is authoritative and standardized.

Innovation is an emergent event; it happens when practitioners “on the ground” have worked on something enough to discover a new approach in the messy variety of practitioner effort and conversation. Innovation only happens when there is sufficient variety of thought and action; it works more like natural selection, which requires lots of mutation. Innovation is, by its nature, unorthodox.

2. Strategy is defined in advance; Innovation is recognized after the fact

While a strategy is defined ahead of time, nobody can seem to plan what an innovation will be. In fact, many (or most?) innovations are serendipitous accidents, or emerge from a side-project that wasn’t part of the top-down-defined work load to begin with. This is because the string of events that led to the innovation is never truly a rational, logical or linear process. In fact, we don’t even recognize the result as an innovation until after it’s already happened, because whether something is an innovation or not depends on its usefulness after it’s been experienced in context.

We fill in the narrative afterwards — looking back on what happened, we create a story that explains it for us, because our brains need patterns and stories to make sense of things. We “reify” the outcome and assume there’s a process behind it that can be repeated. (Just think of Hollywood, and how it tries to reproduce the success of surprise-hit films that nobody thought would succeed until they became successful.) I discuss this more in a post here.

3. Strategy plans for success in known circumstances; Innovation emerges from failure in unknown circumstances.

One explicit aim of a strategy is to plan ahead of time to limit the chance of failure. Strategy is great for things that have to be carried out with great precision according to known circumstances, or at least predicted circumstances. Of course strategy is more complex than just paint-by-numbers, but a full-fledged strategy has to have all predictable circumstances accounted for with the equivalent of if-then-else statements. Otherwise, it would be a half-baked strategy. In addition, strategy usually aims for the highest level of efficiency, because carrying something off with the least amount of friction and “wasted” energy often makes the difference between winning and losing.

However, if you dig underneath the veneer of the story behind most innovations, you find that there was trial and error going on behind the scenes, and lots of variety happening before the (often accidental) eureka moment. And even after that eureka moment, the only reason we think of the outcome as an innovation is because it found traction and really worked. For every product or idea that worked, there were many that didn’t. Innovation sprouts from the messy, trial-and-error efforts of practitioners in the trenches. Bell Labs, Xerox PARC and other legendary fonts of innovation were crucibles of this dynamic: whether by design or accident, they had the right conditions for letting their people try and fail often enough and quickly enough to stumble upon the great stuff. And there are few things less efficient than trial and error; innovation, or the activity that results in innovation, is inherently inefficient.
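To put that contrast in the programming metaphor used above, here’s a deliberately simplified sketch (illustrative only, not a real planning tool): the “strategy” function enumerates predicted circumstances ahead of time with if-then-else branches, while the “innovation” loop just generates lots of cheap attempts, lets most of them fail, and only after the fact labels a surviving one a success.

```python
# A toy contrast, nothing more: "strategy" as a pre-programmed set of
# branches vs. "innovation" as inefficient trial and error judged only
# in hindsight. All names and numbers here are made up for illustration.

import random

def strategy(circumstance):
    # Every predicted circumstance accounted for in advance.
    if circumstance == "demand_up":
        return "scale production"
    elif circumstance == "demand_down":
        return "cut costs"
    else:
        return "follow the contingency plan"

def innovate(try_idea, attempts=100):
    # Messy variety: most attempts fail; a "winner" is only recognizable
    # after the results are in.
    outcomes = [try_idea() for _ in range(attempts)]
    survivors = [o for o in outcomes if o["worked"]]
    return max(survivors, key=lambda o: o["value"]) if survivors else None

if __name__ == "__main__":
    print(strategy("demand_up"))
    print(innovate(lambda: {"worked": random.random() < 0.05,
                            "value": random.random()}))
```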

So Innovation and Strategy are incompatible?

Does this mean that all managers can do is cross their fingers and hope innovation happens? No. What it does mean is that having an innovation strategy has nothing to do with planning or strategizing the innovation itself. To misappropriate a quotation from Ecclesiastes, such efforts are all in vain and like “striving after wind.”

Managing for innovation requires a more oblique approach, one which works more directly on creating the right conditions for innovation to occur. And that means setting up mechanisms where practitioners can thrive as a community of practice, and where they can try and fail often enough and quickly enough that great stuff emerges. It also means setting up mechanisms that allow the right people to recognize which outcomes have the best chance of being successes — and therefore, end up being truly innovative.

I’m as tired of hearing about Apple as anyone, but when discussing innovation they always come up. We tend to think of Apple as linear, controlled and very top-down. The popular imagination seems to buy into a mythic understanding of Apple — that Steve Jobs has some kind of preternatural design compass embedded in his brain stem.

Why? Because Jobs treats Apple like theater, and keeps all the messiness behind the curtain. This is one reason why Apple’s legal team is so zealous about tracking down leaks. For people to see the trial and error that happens inside the walls would not only threaten Apple’s intellectual property, it would sully its image. But inside Apple, the strategy for innovation demands that design ideas be generated in multitudes, like fish eggs, because they’re all run through a sort of artificial natural-selection mechanism that kills off the weak and only lets the strongest ideas rise to the top. (See the Business Week article describing Apple’s “10 to 3 to 1” approach.)

Google does the same thing, but they turn the theater part inside-out. They do a modicum of concept-vetting inside the walls, but as soon as possible they push new ideas out into the marketplace (their “Labs” area) and leverage the collective interest and energy of their user base to determine if the idea will work or not, or how it should be refined. (See accounts of this philosophy in a recent Fast Company article.) People don’t mind using something at Google that seems to be only half-successful as a design, because they know it’ll be tweaked and matured quickly. Part of the payoff of using a Google product is the fun of seeing it improved under your very fingertips.

One thing I wonder: to what extent do any of these places treat “strategy” as another design problem to be worked out in the bottom-up, emergent way that they generate their products? I haven’t run across anything that describes such an approach.

At any rate, it’s possible to have an innovation strategy. It’s just that the innovation and the strategy work from different corners of the room. Strategy sets the right conditions, oversees and cultivates the organic mass of activity happening on the floor. It enables, facilitates, and strives to recognize which ideas might fit the market best — or strives to find low-impact ways for ideas to fail in the marketplace in order to winnow down to the ones that succeed. And it’s those ideas that we look back upon and think … wow, that’s innovation.

The granddaddy of the Internet clarifies a popular misconception.

What I’ve Learned: Vint Cerf
Al Gore had seen what happened with the National Interstate and Defense Highways Act of 1956, which his father introduced as a military bill. It was very powerful. Housing went up, suburban boom happened, everybody became mobile. Al was attuned to the power of networking much more than any of his elective colleagues. His initiatives led directly to the commercialization of the Internet. So he really does deserve credit.

Something tells me you won’t hear this quoted on Fox News. (Or from hardly anyone else, probably.)

In the “Linkosophy” talk I gave on Monday, I suggested that a helpful distinction between the practices of IxD & IA might be that IxD’s central concern is within a given context (a screen, device, room, etc) while IA’s central concern is how to connect contexts, and even which contexts are necessary to begin with (though that last bit is likely more a research/meta concern that all UX practices deal with).

But one nagging question on a lot of people’s minds seems to be “where did these come from? haven’t we been doing all this already but with older technology?”

I think we have, and we haven’t.

Both of these practices build on knowledge & techniques that emerged from earlier practice. Card sorting & mental models were around before the IA community coalesced around the challenges of infospace, and people were designing devices & industrial products with their users’ interactions in mind long before anybody was in a community that called itself “Interaction Designers.” That is, there were many techniques, methods, tools and principles already in the world from earlier practice … so what sparked the emergence of these newer practice identities?

The key catalyst for both, it seems to me, was the advent of digital simulation.

For IA, the digital simulation is networked “spaces” … infospace that’s made of bits and not atoms, where people cognitively experience one context’s connection to another as moving through space, even though it’s not physical. We had information, and we had physical architecture, but they weren’t the same thing … the Web (and all web-like things) changed that.

For IxD, the digital simulation is with devices. Before digital simulation, devices were just devices — everything from a deck chair to an umbrella, or a power drill to a jackhammer, was a real, three-dimensional, industrially made product with real switches, real handles, real feedback. We didn’t think of them as “interactive” or having “interfaces” — because three-dimensional reality is *always* interactive, and it needs no “interface” to translate human action into non-physical effects. Designing these things is “Industrial Design” — and it’s been around for quite a while (though, frankly, only a couple of generations).

The original folks who quite consciously organized around the collective banner of “interaction designer” are digital-technology-centric designers. Not to say that they’ve never worked on anything else … but they’re leaders in that practitioner community.

Now, this is just a comment on origins … I’m not saying they’re necessarily stuck there.

But, with the digital-simulation layer soaking into everything around us, is it really so limiting to say that’s the origin and the primary milieu for these practices?

Of course, I’m not trying to build silos here — only clarify for collective self-awareness purposes. It’s helpful, I believe, to have shared understanding of the stories that make up the “history of learning and making” that forms our practices. It helps us have healthier conversations as we go forward.

Since so much of our culture is digitized now, we can grab clippings of it and spread it all over our identities the way we used to decorate our notebooks with stickers in grade school. Movies, music, books, periodicals, friends, and everything else. Everything that has a digital referent or avatar in the pervasive digital layer of our lives is game for this appropriation.

I just ran across a short post on honesty in playlists.

The what-I’m-listening-to thing always strikes me as aspirational rather than documentary. It’s really not “what I’m listening to” but rather “what I would be listening to if I were actually as cool as I want you to think I am.”

And my first thought was: but where, in any other part of our lives, are we that “honest”?

Don’t we all tweak our appearances in many ways — both conscious and unconscious — to improve the image we present to the world? Granted, some of us do it more than others. But everybody does it. Even people who say they’re *not* like this actually are … to choose to be style-free is a statement just as strong as being style-conscious, because it’s done in a social context too, either to impress your other style-free, logo-hating friends, or to define yourself over-against the pop-culture mainstream.

Now, of course it would be dishonest to list favorite movies and books and music that you neither consume nor even really like. But my guess is a very small minority do that.

Our decorations have always been aspirational. Always. From idealizing the hunt in cave-wall drawings, to the beautiful still-life paintings of things they couldn’t afford that hung in middle-class Renaissance homes, all the way to choosing which books to put on the eye-level shelves in your apartment, or making a cool playlist of music for a party. We never expose *everything* in our lives; we always select subsets that tell others particular things about us.

The digital world isn’t going to be any different.

(See earlier post on Flourishing.)

gygax calls in a paladin

IASummit 2008

Meet me at the IA Summit
Some very nice and well-meaning people have asked me to speak as the closing plenary at the IASummit conference this year, in Miami.

This is, as anyone who has been asked to do such a thing will tell you, a mixed blessing.

But I’m slogging through my insanely huge bucket of random thoughts from the last twelve months to surface the stuff that will, I dearly hope, be of interest and value to the crowd. Or, at the very least, keep their hungover cranial contents entertained long enough to stick around for Five-Minute Madness.

“Linkosophy” is a homely title. But it’s a hell of a lot catchier than “Information Architecture’s Role in the UX Context: What Got It Here, What It’s About, and Where It Might Be Headed.” Or some such claptrap.

Here’s the description and a link:

Closing Plenary: Linkosophy
Monday, April 14, 2008, 3:00–4:00 PM

At times, especially in comparison to the industrial and academic disciplines of previous generations, the User Experience family of practices can feel terribly disorganized: so little clarity on roles and responsibilities, so much dithering over semantics and orthodoxy. And in the midst of all this, IA has struggled to explain itself as a practice and a domain of expertise.

But guess what? It turns out all of this is perfectly natural.

To explain why, we’ll use IA as an example to learn about how communities of practice work and why they come to be. Then we’ll dig deeper into describing the “domain” of Information Architecture, and explore the exciting implications for the future of this practice and its role within the bigger picture of User Experience Design.

In addition, I’ve been dragooned (but in a nice way … I just like saying “dragooned”) to participate in a panel about “Presence, identity, and attention in social web architecture” along with Christian Crumlish, Christina Wodtke, and Gene Smith, three people who know a heck of a lot more about this than I do. Normally when people ask me to talk about this topic, I crib stuff from slides those three have already written! Now I have to come up with my own junk. (Leisa Reichelt is another excellent thinker on this “presence” stuff, btw. And since she’s not going to be there, maybe I’ll just crib *her* stuff? heh… just kidding, Leisa. Really.)

Seriously, it should be a fascinating panel — we’ve been discussing it on a mailing list Christian set up, so there should be some sense that we actually prepared for it.

There are some insightful comments on how moderation architectures affect the emergent character of social platforms in Chris Wilson’s article on Slate:
Digg, Wikipedia, and the myth of Web 2.0 democracy.

He explains how the rules structures of Wikipedia and Digg have resulted (ironically) in highly centralized power structures and territorialism. A quote:

While both sites effectively function as oligarchies, they are still democratic in one important sense. Digg and Wikipedia’s elite users aren’t chosen by a corporate board of directors or by divine right. They’re the people who participate the most. Despite the fairy tales about the participatory culture of Web 2.0, direct democracy isn’t feasible at the scale on which these sites operate. Still, it’s curious to note that these sites seem to have the hierarchical structure of the old-guard institutions they’ve sought to supplant.

He goes on to explain how Slashdot’s moderator-selection rules help keep this top-heavy effect from happening, by making moderator status easier to acquire at more levels of involvement, while still keeping enough top-down oversight to hold quality consistently high.
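As a rough sketch of that structural difference (toy rules and numbers of my own, not the sites’ actual algorithms): the first policy grants permanent power to a small, fixed elite of the heaviest participants, while the second hands temporary moderation ability to anyone above a modest participation threshold, so the moderator pool keeps rotating and broadening.

```python
# A toy comparison of the two moderation architectures described above.
# These are made-up rules for illustration, not Digg's or Slashdot's
# actual selection mechanisms.

import random

def elite_moderators(users, top_n=10):
    """Permanent power for the most active users (oligarchy-style)."""
    ranked = sorted(users, key=lambda u: u["activity"], reverse=True)
    return {u["name"] for u in ranked[:top_n]}

def rotating_moderators(users, min_activity=5, sample_size=50):
    """Temporary moderation points spread across a wide eligible pool
    (Slashdot-like in spirit)."""
    eligible = [u["name"] for u in users if u["activity"] >= min_activity]
    return set(random.sample(eligible, min(sample_size, len(eligible))))

if __name__ == "__main__":
    community = [{"name": f"user{i}", "activity": random.randint(0, 100)}
                 for i in range(1000)]
    print(len(elite_moderators(community)))     # always the same small circle
    print(len(rotating_moderators(community)))  # a different broad slice each round
```

The point isn’t the numbers; it’s that the shape of the rules, more than the ideals behind them, determines whether power pools at the top or keeps circulating.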
