
Please visit the new blog at andrewhinton.com!

Inkblurt the Blog has served me well for many years. I was so proud of this homely little portmanteau when I first concocted it over a decade ago. I even used it as my Twitter handle.

[Image: that’s all folks]

Alas, it’s time to simplify. I wasn’t posting much for quite a while, partly because this old WordPress installation has a lot of cruft from years of plugins and my amateur CSS tweaks. I’m way past the point in my life where I enjoy spending an evening maintaining a private CMS installation as a hobby. So I ported the content over to my personal site, where wordpress.com takes care of the cruft for me. You know… software as a service, as the kids like to say. I can’t promise I’ve yet handled (or will ever handle) the redirects and whatnot like I should… but these are cobbler’s children, and I’m a busy cobbler-dad. We’ll see.

Anyway … it’s just bits on a disk, pixels on a screen. But this place felt like a home for me for a long time, when I was just starting to really grow in my career (and my life, for that matter).

Thanks, old blog.

I’ve been doing some writing over at the blog for The Understanding Group.

  • Last month, I posted on e-commerce, and how it’s really just commerce now, but there are still many legacy impediments for retailers.
  • And this week, I wrote a bit about the talk I gave at WebVisions Portland in May (with an interview video, and my slides) on “Happiness Machines.”


I’m very happy to announce I’m joining The Understanding Group as an Information Architect.

I’m a big believer in TUG’s mission: using information architecture to make things “be good.”

Since I’ve been blathering on and on about the importance of IA for over a decade now, I figured I might as well put my career where my mouth is and join up with this exciting new firm that has IA as its organizing principle. It doesn’t hurt that the people are pretty awesome too.

For the time being I’ll still be living in Atlanta, but traveling on occasion to Michigan, NYC and wherever else necessary to collaborate with clients and team members.

But unfortunately, going on to something new means having to leave behind something else.

I want to say that I’ll miss working with the great people at Macquarium. The two years I’ve spent with “MQ” have been among the best of my career, in terms of the practitioners I’ve gotten to know, the clients I’ve been able to partner with, and the fascinating, challenging work I’ve gotten to do.

Macquarium is doing some of the most cutting-edge work I’ve heard of in the cross-channel, service-design and organizational design spaces. I’m very fortunate to have had the chance to be part of their team.


I joined Path on December 1st, 2011. I know this because it says so, under my “path” in the application on my iPhone.
That same day, I posted this message in the app:

“Wondering how Path knew whom to recommend as friends?!?”

I’ve used a lot of social software over the years (technically since 1992 when the Internet was mainly a social platform, before the e-commerce era), and I do this Internet stuff for a living, so I have a pretty solid mental model for where my data is and what is accessing it. But this was one of those moments where I realized something very non-transparent was happening.

How did it know? 

Path was very smartly recommending users on Path to me, even though it knew nothing about me other than my email address and the fact that it was on my phone. I hadn’t given it a Twitter handle; I hadn’t given it the same email address I use on Facebook (which isn’t public anyway). So how did it know?
I recall deciding, in a dinner conversation with co-workers, that it must just be checking my address book on my phone. That bugged me, but I let it slide.
Now, I’m intrigued with why I let it go so easily. I suspect a few reasons:

  • Path had positioned itself as an app for intimate connections with close friends. It set the expectation that it was going to be careful and safe, more closed than most social platforms.
  • It was a very pleasing experience to use the app; I didn’t want to just stop using it, but wanted to keep trying it out.
  • I was busy and in the middle of a million other things, so I didn’t take the time to think much about it beyond that initial note of dismay.
  • I assumed it was only checking names of contacts and running some kind of smart matching algorithm — no idea why I thought this, but I suppose the character of the app caused me to assume it was using a very light touch.

Whatever the reasons, Path set me up to assume a lot about what the app was and what it was going to do. After a few weeks of using it sporadically, I started noticing other strange things, though.

  • It announces, on its own, when I have entered a new geographical area. I had been assuming it was only showing me this information, but then I looked for a preference to set it as public or private and found none. But since I had no way of looking at my own path from someone else’s point of view, I had to ask a colleague: can you see that I just arrived in Atlanta? He said yes, and we talked about how odd that was… no matter how close your circle of friends, you don’t necessarily want them all knowing where you are without saying so.
  • When someone “visited my path” it would tell me so. But it wasn’t entirely clear what that meant. “So and so visited your path” sounds like they walked up to the front of my house and spent a while meditating on my front porch, but in reality they may have just accidentally tapped something they thought would allow them to make a comment but ended up in my “path” instead. And the only way to dismiss this announcement was to tap it, which took me to that person’s path. Were they now going to get a message saying I had visited their path? I didn’t know … but I wondered if it would misrepresent to other users what I’d done.
  • Path also relies on user pictures to convey “who” … if someone just posts a picture, it doesn’t say the name of the person, just their user picture. If the picture isn’t of the person (or is blank) I have no idea who posted it.

All of these issues, and others, add up to what I’ve been calling Context Management — the capabilities that software should be giving us to manage the multifaceted contexts it exposes us to, and that it allows us to create. Some platforms have been getting marginally better at this (Facebook with its groups, Google+ with its circles) but we’re a long way from solving these problems in our software. Since these issues are so common, I mostly gave Path a pass — I was curious to see how it would evolve, and if they’d come up with interesting solutions for context management.

It Gets Worse

And now this news … that Path is actually uploading your entire address book to Path’s servers in order to run matching software and present possible friends.

Once I thought about it for half a minute, I realized, well yeah of course they are. There’s no way the app itself has all the code and data needed to run sophisticated matching against Path’s entire database. They’d have to upload that information, the same way Evernote needs you to upload a picture of a document in order to run optical character recognition. But Evernote actually tells me it’s doing this … that there’s a cloud of my notes, and that I have to sync that picture in order for Evernote to figure out the text. But Path mentioned nothing of the sort. (I haven’t read their license agreement that I probably “signed” at some point, because nobody ever reads that stuff — I’d get nothing else done in life if I actually read the terms & conditions of every piece of software I used; it’s a broken concept; software needs to explain itself in the course of use.)
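To make the mechanics concrete, here is a minimal sketch (in Python, with hypothetical names and data) of how this kind of server-side contact matching can work without ever shipping the raw address book: the client hashes each identifier, and the server intersects those hashes with its registered users. This is an assumption-laden illustration, not how Path actually built it; by all accounts Path uploaded the contact data itself.

```python
import hashlib

def hash_identifier(identifier: str) -> str:
    """Normalize and hash an email or phone number so the raw value
    never has to leave the device."""
    return hashlib.sha256(identifier.strip().lower().encode("utf-8")).hexdigest()

# Client side (hypothetical): hash the address book before uploading.
address_book = ["alice@example.com", "bob@example.com", "+15550100"]
uploaded_hashes = {hash_identifier(c) for c in address_book}

# Server side (hypothetical): the service keeps hashed identifiers for its
# registered users and intersects them with the uploaded set.
registered = {
    hash_identifier("alice@example.com"): "alice_on_path",
    hash_identifier("carol@example.com"): "carol_on_path",
}

recommendations = [user for h, user in registered.items() if h in uploaded_hashes]
print(recommendations)  # ['alice_on_path']
```

Even that lighter-touch variant still deserves an explicit ask for permission; the point is simply that the matching Path needed never required the raw address book in the first place.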

When you read the discussion going on under the post I linked to, you see the Path CEO joining in to explain what they did. He seems like a nice chap, really. He seems to actually care about his users. But he evidently has a massive blind spot on this problem.

The Blind Spot

Here’s the deal: if you’re building an app like Path and look at user adoption as mainly an engineering problem, you’re going to come to a similar conclusion that Path did. To get people to use Path they have to be connected to friends and family, and in order to prime that pump, you have to go ahead and grab contact information from their existing social data. And if you’re going to do that effectively, you’re going to have to upload it to a system that can crunch it all so it surfaces relevant recommendations, making it frictionless for users to start seeding their network within the Path context.

But what Path skipped was the step that most such platforms take: asking your permission to look at and use that information. They essentially made the same mistake Google Buzz and Facebook Beacon did — treating your multilayered, complex social sphere as a database where everyone is suddenly in one bucket of “friends” and assuming that grabbing that information is more important than helping you understand the rules and structures you’ve suddenly agreed to live within.

Using The Right Lenses

For Path, asking your permission to look at your contacts (or your Twitter feed, or whatever else) would add friction to adoption, which isn’t good for growing their user base. So, like Facebook has done so many times, they err on the side of what is best for their growth rather than what is best for users’ peace of mind and control of their contextual reality. It’s not an evil, calculated position. There’s no cackling villain planning how to expose people’s private information.

It’s actually worse than that: it’s well-meaning people looking only through a couple of lenses and simply not seeing the problem, which can be far more dangerous. In this case, the lenses are:

  • Aesthetics (make it beautiful so people want to touch it and look at it),
  • Small-bore interaction design (i.e. delightful & responsive interaction controls),
  • Engineering (very literally meeting a list of decontextualized requirements with functional system capabilities), and
  • Marketing (making the product as viral as possible, for growth and market valuation purposes).

What’s missing?

  • Full-fledged interaction design (considering the entire interaction framework within which the small, delightful interactions take place — creating a coherent language of interaction that actually makes sense rather than merely window-dresses with novelty)
  • Content strategy (in part affecting the narrative around the service that clearly communicates what the user’s expectations should be: is it intimate and “safe” or just another social platform?)
  • Information architecture (a coherent model for the information environment’s structure and structural rules: where the user is, where their information lives, what is being connected, and how user action is affecting contexts beyond the one the user thinks they’re in — a structural understanding largely communicated by content & interaction design, by the way)

I’m sure there’s more. But what you see above is not an anomaly. This is precisely the diagnosis I would give nearly every piece of software I’m seeing launched. Path is just an especially egregious example, in part because its beauty and other qualities stand in such stark contrast to its failings.

Path Fail is UX Fail

This is in part what some of us in the community are calling the failure of “user experience design” culturally: UX has largely become a buzzword for the first list, in the rush to crank out hip, interactively interesting software. But “business rules” which effectively act as the architecture of the platform are driven almost entirely by business concerns; content is mostly overlooked for any functional purposes beyond giving a fun, hip tone to the brand of the platform; and interaction design is mainly being driven by designers more concerned with “taste” performance and “innovative” UI than creating a rigorously considered, coherent experience.

If a game developer released something like this, they’d be crushed. The incoherence alone would make players throw up their hands in frustration and move on to a competitor in a heartbeat; Metacritic would destroy its ability to make sales. How is it, then, that we have such low standards and give such leeway to the applications being released for everything else?

So, there’s my rant. Will I keep using Path? Well … damn… they already have most of my most personal information, so it’s not like leaving them is going to change that. I’m going to ride it out, see if they learn from mistakes, and maybe show the rest of the hip-startup software world what it’s like to fail and truly do better. They have an opportunity here to learn and come back as a real champion of the things I mentioned above. Let’s hope for the best.

As I hinted in a post a couple of weeks ago, I’m writing a book. The topic: Designing Context.
If the phrase sounds a little awkward, that’s on purpose. It’s not something we’re used to talking about yet. But I believe “context” to be a medium of sorts, one we’ve been shaping for years without coming to grips with the full implications of our work.
Although I have written many things, some of them pretty long, I have never written anything this long before. I’m a little freaked out.
But I have to keep reminding myself that the job of this book isn’t to definitively and comprehensively cover everything having to do with its subject. I just want to do a good job getting some fascinating, helpful ideas about this topic into the hands of the community in a nice, readable format that gives me the room to tell the story well.
This isn’t a how-to book, more of a “let’s look at things this way and see what happens” book. It’s also not an academic book; I’m not an academic and still have a 50+ hour-a-week job, so there’s no way I’ll ever have time to read & reference every related/relevant work on the topic, even though that seems to be what I’m trying to do in spite of myself.
And I’m going to be very honest about the fact that it’s largely a book on information architecture: how information shapes & creates context for humans.
Thanks to O’Reilly Media for working with me on getting this thing going, and to Peter Morville for the prodding & encouragement.
Now … time to write.

PS for a better idea of what I’m getting at, here are some previous writings:

My talk for Interaction 12 in Dublin, Ireland.

Another 10-minute, abbreviated talk.

You can see the video on Vimeo.

So, the short version of my point in this post (the “tl;dr” as it were) is this: possibly the most significant value of Second Life is as a pioneering platform for navigating & comprehending the pervasive information dimension in a ubiquitous/pervasively networked physical environment.

That’s already a mouthful … But here’s the longer version, if you’re so inclined …

It’s easy to dismiss Second Life as kitsch now. Even though it’s still up and running, and evidently still providing a fulfilling experience for its dedicated user-base, it no longer has the sparkle of the Next Big Thing that the hype of several years ago brought to it.

I’ll admit, I was quite taken by it when I first heard of it, and I included significant commentary about it in presentations and writings I did at the time. But after only a few months, I started realizing it had serious limitations as a mainstream medium. For one thing, the learning curve for satisfying creation was too steep.

Three-dimensional modeling is hard enough with even the best tools, but Second Life’s composition toolset at the height of its popularity was frustratingly clumsy. Even if it had been state-of-the-art, however, it takes special knowledge & ability to draw in three dimensions. Unlike text-based MUDs, where anyone with a half-decent grasp of language could create relatively convincing characters, objects, and rooms, Second Life required everything to be made explicitly, literally. Prose allows room for gestalt — the reader can fill in the details with imagination. Not in an environment like Second Life, though.

Plus, to make anything interactive, you had to learn a fairly complex scripting language. Not a big deal for practiced coders, but for regular people it was daunting.

So, as Second Life attracted more users, it became more of a hideous tragedy-of-the-commons experience, with acres of random, gaudy crap lying about, and one strange shopping mall after another with people trying to make money on the platform selling clothing, dance moves, cars and houses — things that imaginative players would likely have preferred to make for themselves, but instead had to piece together through an expensive exercise in collage.

At the heart of what made so many end up dismissing the platform, though, was its claim to being the next Web … the new way everyone was supposed to interact digitally online.

I never understood why anyone was making that claim, because it always seemed untenable to me. Second Life was inspired by Neal Stephenson’s virtual reality landscape in Snow Crash (and somewhat more distantly, Gibson’s vision of “cyberspace”), and managed an adroit facsimile of how Stephenson’s fictional world sounded. But Stephenson’s vision was essentially metaphorical.

Still, beyond the metaphor issue, the essential qualities of the Web that made it so ubiquitous were absent from Second Life: the Web is decentralized, not just user-created but non-privatized and widely distributed. It exists on millions of servers run by millions of people, companies, universities and the like. The Web is also made of a technology that’s much simpler for creators to use, and perhaps most importantly, the Web is very open and easily integrated into everything else. Second Life never got very far with being integrated in that way, though it tried. The main problem was that the very experience itself was not easily transferable to other media, devices etc. Even though they tried using a URL-like linking method that could be shared anywhere as text, the *content* of Second Life was essentially “virtual reality” 3D visual experience, something that just doesn’t transfer well to other platforms, as opposed to the text, static images & videos we share so easily across the Web & so many applications & devices.

Well, now that I’ve said all that somewhat negative stuff about the platform, what do I mean by “what we learned”?

It seems to me Second Life is an example of how we sometimes rehearse the future before it happens. In SL, you inhabit a world that’s essentially made of information. Even the physical objects are, in essence, information — code that only pretends to be corporeal, but that can transform itself, disappear, reappear, whatever — a reality that can be changed as quickly as editing a sentence in a word processor.

[Image: Recent version of the SL “Viewer” UI (danielvoyager.wordpress.com)]

While it’s true that our physical world can’t literally be changed that way, the truth is that the information layer that pervades it is becoming more substantial, more meaningful, and more influential in our experience of the world around us.

If “reality” is taken to be the sum total of all the informational and sensory experience we have of our environs, and we acknowledge that the informational (and to some degree sensory, as far as sight and sound go) layer is becoming dominated by digitally mediated, networked experience, then we are living in a place that is not too far off from what Second Life presents us.

Back when I was on some panels about Second Life, I would explain that the most significant aspect of the platform for user experience wasn’t the 3D space we were interacting with, but the “Viewer” — the mediating interface we used for navigating and manipulating that space. Linden Lab continually revised and matured the extensive menu-driven interface and search features to help inhabitants navigate that world, find other players & interest groups, or create layers of permissions rules for all the various properties and objects. It was flawed, frustrating, volatile — but it was tackling some really fascinating, complex problems around how to live in a fluid, information-saturated world where wayfinding had more to do with the information layer *about* the actual places than the “physical” places themselves.

If we admit that the meaning & significance of our physical world is becoming largely driven by networked, digital information, we can’t ignore the fact that Second Life was pioneering the tools we increasingly need for navigating, searching, filtering & finding our way through our “real life” environments.

What a city “means” to us is tied up as much in the information dimension that pervades it — the labels & opinions, statistics & rankings — the stuff that represents it on the grid, as it is in the physical atoms we touch as we walk its sidewalks or drive through its streets, or as we sit in its restaurants and theaters. All those experiences are shaped powerfully by the reviews and tips on Yelp, or the record of a friend having been in a particular spot as noted in Foursquare, or a picture we see on Flickr taken at a particular latitude and longitude. Or the real-time information about where our friends are *right now* and which places are kinda dead tonight. Not to mention the market-generated information about price, quantity & availability.

It’s always been the case that the narrative of a place has as much to do with how we experience the reality of the place as the physical sensations we have of it in person. But now that narrative has been made explicit, as a matter of record, and cumulative as well — from the interactions of everyone who has gone before us there and left some shadow of their presence, thoughts, reactions.

One day it would be interesting to compare all the ways in which various bits of software are helping us navigate this information dimension to the tools invented for inhabiting and comprehending the pure-information simulacra of Second Life. I bet we’d find a lot of similarities.


Unhappiness Machine

I posted the content below over on the Macquarium Blog, but I’m repeating here for posterity, and to first add a couple other thoughts:

1. It’s amazing how easily corporations can fool themselves into feeling good about the experiences they create for their users by making elaborate dreamscapes & public theater — as if the fictions they’re creating somehow make up for the reality of what they deliver (and the hard work it takes to make reality square in any way with that imagined experience). This reminds me a bit of the excellent, well-executed dismemberment of this sort of thinking that Bret Victor posted this past week on the silliness & laziness behind things like the Microsoft “everything is a finger-tap slab” future-porn. Go read it.

2. Viral videos like the Coca-Cola Happiness Machine don’t only fool the originating brand into feeling overconfident — they make the audience seeing the videos mistake the bit of feel-good emotion they receive for substantial experience, and then wonder “how can my own company give such delight?” I’ve seen so many hours burned in brainstorming sessions where people are trying to come up with the answer to that — and they end up with more reality-numbing theatrics rather than fixing difficult problems with their actual product or service delivery.

Post after the cut — but it looks nicer on the MQ Blog ;-)

In Defense of D

DTDT means lots of things

A long time ago, in certain communities of practice in the “user experience” family of practices, an acronym was coined: “DTDT” aka “Defining the Damned Thing”.

For good or ill, it’s been used for years now like a flag on the play in a football game. A discussion gets underway, whether heated or not, and suddenly someone says “hey can we stop defining the damned thing? I have work to do here, and you’re cluttering my [inbox / Twitter feed / ear drums / whatever …]”

Sometimes it has rightly reset a conversation that has gone well off the rails, and that’s fine. But more often, I’ve seen it used to shut down conversations that are actually very healthy, thriving and … necessary.

Why necessary? Because conversation *about* the practice is a healthy, necessary part of being a practitioner, and being in a community of other practitioners. It’s part of maturing a practice into a discipline, and getting beyond merely doing work, and on to being self-aware about how and why you do it.

It used to be that people weren’t supposed to talk about sex either. That tended to result in lots of unhappy, closeted people in unfulfilling relationships and unfulfilled desires. Eventually we learned that talking about sex made sex better. Any healthy 21st century couple needs to have these conversations — what’s sex for? how do you see sex and how is that different from how I see it? Stuff like that. Why do people tend to avoid it? Because it makes them uncomfortable … but discomfort is no reason to shun a healthy conversation.

The same goes for design or any other practice; more often than not, what people in these conversations are trying to do is develop a shared understanding of their practice, developing their professional identities, and challenging each other to see different points of view — some of which may seem mutually exclusive, but turn out to be mutually beneficial, or even interdependent.

I’ll grant that these discussions often have more noise than signal, but that’s the price you pay to get the signal. I’ll also grant that actually “defining” a practice is largely a red herring — a thriving practice continues to evolve and discover new things about itself. Even if a conversation starts out about clean, clinical definition, it doesn’t take long before lots of other more useful (but muddier, messier) stuff is getting sorted out.

It’s ironic to me that so many people in the “UX family” of practitioner communities utterly lionize “Great Figures” of design who are largely known for what they *wrote* and *said* about design as much as for the things they made, and then turn to their peers and demand they stop talking about what their practice means, and just post more pat advice, templates or tutorials.

A while back I was doing a presentation on what neuroscience is teaching us about being designers — how our heads work when we’re making design decisions, trying to be creative, and the rest. And one of the things I learned was the importance of metacognition — the ability to think about thinking. I know people who refuse to do such a thing — they just want to jump in and ACT. But more often than not, they don’t grow, they don’t learn. They just keep doing what they’re used to, usually to the detriment of themselves and the people around them. Do you want to be one of those people? Probably not.

So, enough already. It’s time we defend the D. Next time you hear someone pipe up and say “hey [eyeroll] can we stop the DTDT already?” kindly remind them that mature communities of practice discuss, dream, debate, deliberate, deconstruct and the rest … because ultimately it helps us get better, deeper and stronger at the Doing.

I liked this bit from Peter Hacker, the Wittgenstein scholar, in a recent interview. He’s talking about how any way of seeing the world can take over and put blinders on you, if you become too enamored of it:

The danger, of course, is that you overdo it. You overplay your hand – you make things clearer than they actually are. I constantly try to keep aware of, and beware of, that. I think it’s correct to compare our conceptual scheme to a scaffolding from which we describe things, but by George it’s a pretty messy scaffolding. If it starts looking too tidy and neat that’s a sure sign you’re misdescribing things.

via TPM: The Philosophers’ Magazine | Hacker’s challenge. (emphasis mine)

It strikes me this is true of design as well. There’s no one way to see it, because it’s just as organic and messy as the world in which we do it.

I mean this both in the larger sense of “what is design?” and the smaller sense of “what design is best for this particular situation?”

Over the years, I’ve come to realize that most things are “messy” — and that while any one solution or model might be helpful, I have to ward against letting it take over all my thinking (which is awfully easy to do … it’s pleasant, and much less work, to just dismiss everything that doesn’t fit a given perspective, right?).

The actual subject of the interview is pretty great too … case in point, for me, it warns against buying into the assumptions behind so much recent neuroscience thinking, especially how it’s being translated in the mainstream (though Hacker goes after some hard-core neuroscience as well).

Today it’s official that I’m leaving my current role at Vanguard as of June 25, and starting in a new role as Principal User Experience Architect at Macquarium.

I know everybody says “it’s with mixed feelings” that they do such a thing, but for me it’s definitely not a cliche. Vanguard has been an excellent employer, and for the last 6 1/2 years I’ve been there, I’ve always been able to say I worked there with a great deal of pride. It has some of the smartest, most dedicated user-experience design professionals I’ve ever met, and I’ll miss all of them, as well as the business and technology people I’ve worked closely with over the years.

I’m excited, however, to be starting work with Macquarium on June 28. On a personal level, it’s a great opportunity to work in the region where I live (Charlotte and environs) as well as Atlanta, where the company is headquartered, and where I grew up, and have a lot of family I haven’t gotten to see as often as I’d like. On the professional side, Macquarium is tackling some fascinating design challenges that fit my interests and ambitions very well at this point in my life. I can’t wait to sink my teeth into that juicy work.

I’ve been pretty quiet on the blog for quite a while, partly because leading up to this (very recently emerging) development, I also spoke at a couple of conferences, and got married … it’s been a busy 2010 so far. I’m hoping to be more active here at Inkblurt in the near future … but no promises… I don’t want to jinx it.

In an article called “The Neuroscience of Leadership” (free registration required*), from Strategy + Business a few years ago, the writers explain how new understanding about how the brain works helps us see why it’s so hard for us to fully comprehend new ideas. I keep cycling back to this article since I read it just a few months ago, because it helps me put a lot of things that have perpetually bedeviled me in a better perspective.

One particularly salient bit:

Attention continually reshapes the patterns of the brain. Among the implications: People who practice a specialty every day literally think differently, through different sets of connections, than do people who don’t practice the specialty. In business, professionals in different functions — finance, operations, legal, research and development, marketing, design, and human resources — have physiological differences that prevent them from seeing the world the same way.

Note the word “physiological.” We tend to assume that people’s differences of opinion or perspective are more like software — something with a switch that the person could just flip to the other side, if they simply weren’t so stubborn. The problem is, the brain grows hardware based on repeated patterns of experience. So, while stubbornness may be a factor, it’s not so simple as we might hope to get another person to understand a different perspective.

Recently I’ve had a number of conversations with colleagues about why certain industries or professions seem stuck in a particular mode, unable to see the world changing so drastically around them. For example, why don’t most advertising and marketing professionals get that a website isn’t about getting eyeballs, it’s about creating useful, usable, delightful interactive experiences? And even if they nod along with that sentiment in the beginning, they seem clueless once the work starts?

Or why do some coworkers just not seem to get a point you’re making about a project? Why is it so hard to collaborate on strategy with an engineer or code developer? Why is it so hard for managers to get those they manage to understand the priorities of the organization?

And in these conversations, it’s tempting — and fun! — to somewhat demonize the other crowd, and get pretty negative about our complaints.

While that may feel good (and while my typing this will probably not keep me from sometimes indulging in such a bitch-and-moan session), it doesn’t help us solve the problem. Because what’s at work here is a fundamental difference in how our brains process the world around us. Doing a certain kind of work, in a particular culture of others doing that work, creates a particular architecture in our brains, and continually reinforces it. If your brain grows a hammer, everything looks like a nail; if it grows a set of jumper cables, everything looks like a car battery.

Now … add this understanding to the work Jonathan Haidt and others have done showing that we’re already predisposed toward deep assumptions about fundamental morals and values. Suddenly it’s pretty clear why some of our biggest problems in politics, religion, bigotry and the rest are so damned intractable.

But even if we’re not trying to solve world hunger and political turmoil, even if we’re just trying to get a coworker or client to understand a different way of seeing something, it’s evident that bridging the gap in understanding is not just a peripheral challenge for doing great design work — it may be the most important design problem we face.

I don’t have a ready remedy, by the way. But I do know that one way to start building bridges over these chasms of understanding is to look at ourselves, and be brutally honest about our own limitations.

I almost titled this post “Why Some People Just Don’t Get It” — but I realized that sets the wrong tone right away. “Some People” becomes an easy way to turn others into objects of ridicule, which I’ve done myself even on this blog. It’s easy, and it feels good for a while, but it doesn’t help the situation get better.

As a designer, have you imagined what it’s like to see the world from the other person’s experience? Isn’t that what we mean when we say the “experience” part of “user experience design” — that we design based on an understanding of the experience of the other? What if we treated these differences in point of view as design problems? Are we up to the challenge?

Later Edit:

There have been some excellent comments, some of which have helped me see I could’ve been more clear on a couple of points.

I perhaps overstated the “hardware” point above. I neglected to mention the importance of ‘neuroplasticity’ — and that the very fact we inadvertently carve grooves into the silly-putty of our brains also means we can make new grooves. This is something about the brain that we’ve only come to understand in the last 20-30 years (I grew up learning the brain was frozen at adulthood). The science speaks for itself much better than I can poorly summarize it here.

The concept has become very important to me lately, in my personal life, doing some hard psychological work to undo some of the “wiring” that’s been in my way for too long.

But in our role as designers, we don’t often get to do psychotherapy with clients and coworkers. So we have to design our way to a meeting of minds — and that means 1) fully understanding where the other is coming from, and 2) being sure we challenge our own presuppositions and blind spots. This is always better than just retreating to “those people don’t get it” and checking out on the challenge altogether, which happens a lot.

Thanks for the comments!

* Yet another note: the article is excellent; a shame registration is required, but it only takes a moment, and in this case I think it’s worth the trouble.
