When I ran for the IA Institute board a couple of years ago, I’d never been on a board of anything before. I didn’t run because I wanted to be on a board at all, really. I ran because I had been telling board members stuff I thought they should focus on, and making pronouncements about what I thought the IA Institute should be, and realized I should either join in and help or shut up about it.

So I ran, and thanks to the Institute members who voted for me, I won a slot on the board.

It didn’t take long to realize that the organization I’d helped (in a very small way) get started back in 2002 had grown into a real non-profit with actual responsibilities, programs, infrastructure and staff. What had been an amorphous abstraction soon came into focus as a collection of real, concrete moving parts, powered mainly by volunteers, that were trying to get things done or keep things going.

Now, two years later, I’m rolling off of my term on the board. I chose not to run again this year for a second term only because of personal circumstance, not because I don’t want to be involved (in fact, I want to continue being involved as a volunteer, just in a more focused way).  I’m a big believer in the Institute — both what it’s doing and what it can accomplish in the future.

I keep turning over in my head: what sort of advice should I give to whoever is voted into the board this year? Then I realized: why wait to bring these things up … maybe this would be helpful input for those who are running and voting for the board? So here goes… in no particular order.

Perception rules

The Institute has been around for 8 years now. In “web time” that’s an eternity. That gives the organization a certain patina of permanence and … well, institution-ness … that would lead folks to believe it’s a completely established, solidly funded, fully staffed organization with departments and stuff. But it’s actually still a very shoestring operation. The Institute is still driven 99% by volunteers, with only 2 half-time staff, paid on contract, who live in different cities, and who are very smart, capable people who could probably be making more money doing something else. (Shout-out to Tim & Noreen — and to Staff Emeritus Melissa… you guys all rock.) But I don’t know that we did the best job of making that clear to the community. That has led at times to some misunderstandings about what people should expect from the org.

Less “Can-Do” and more “Jiu Jitsu”

Good intentions and a willingness to work hard and make things happen aren’t enough. In fact, they may be too much. A “can-do” attitude sounds great! But it can result in creating things that can’t be sustained, or chasing ideals that people say they believe in but don’t actually have the motivation to support over time.

Jiu jitsu, on the other hand, takes the energy that’s available and channels it. It’s disciplined in its focus. Overall, I think the org needs to keep heading in that direction — picking the handful of things it can stand for and accomplish very well.

The Institute has a history of having very inventive, imaginative people involved in its board and volunteer efforts, and in its community at large. These are folks who think of great ideas all the time. But not every idea should be taken up as an initiative. Here’s the thing: even most of the *good* ideas can’t be. There just isn’t the bandwidth.

I’d bet any organization whose leadership team changes out every 1-2 years has this challenge. Add a board member’s motivation to “make a mark” to the motivation to please the members & community voices who are asking for (or demanding) things, and before you know it you have a huge list of stuff going on that may or may not still have relevance or value commensurate with the effort it requires.

It’s easy in the heat of the moment of a new idea to say “yeah, we love that, let’s make that happen” … but that enthusiasm is an illusion created by the focus of novelty. I urge the community (members, board, volunteers, everyone) to keep this in mind when thinking “why doesn’t the Institute do X or Y? It seems so obvious!” The response I’ve taken to giving to those requests is: that sounds like a great idea … how’d you like to investigate making that happen for the Institute?

Anything that doesn’t have people interested enough to make it happen *outside the board* probably shouldn’t happen to begin with. The Board is there to sponsor things, make decisions about how money should be spent and what to support — but not do the legwork and heavy lifting. It’s just impossible to do that plus run the organization, for people who have paying, full-time jobs already.

Money & Membership

This is not a wealthy organization. The budget is pretty small. It only charges members $40 a year (still, after 8 years), and other than membership fees, makes a big chunk of its budget from its annual conference (IDEA — go register!). Where does the money go? Lots of it goes to the community — helping to fund conferences, events, grants, and initiatives aimed at helping grow the knowledge & skills of the whole community. It also goes to paying the part-time staff to keep the lights on, fix stuff & enable most of the work that goes on. The benefits are not just for paying members, by the way. Most of what the Institute does is pretty open-source type stuff. Frankly I’ve thought for a while now that we should move away from “membership” and call people “contributors” instead. Because that’s what you’re doing … you’re contributing a small amount of cash in support of the community, and you get access to a closed, relatively quiet mailing list of helpful colleagues as a “thank you” gift.

Whenever I hear somebody complaining about the Institute and “what I get for my forty dollars,” I get a little miffed. But then I realize to some degree the organization sets that expectation. It may be helpful for the next board to think about the membership model — which really may be more about semantics & expectations-setting than policy, who knows.

One thing the Institute has historically been afraid to do is spend money on itself. But then it tries to handle some tasks that would honestly be much better to pay others to handle. (Again, that can-do attitude getting us in trouble.) Historically, the board tried to handle a lot of the financial tasks through a treasurer (banking, recordkeeping, etc). It took a long line of dedicated people who gave a lot of their personal time to handling those tedious tasks. We finally hit a wall where we realized we just weren’t handling the tasks as well as we should as amateurs — we needed help. So we found an excellent 3rd party service provider (recommended by our excellent Board of Advisors) to take care of a lot of that stuff. (And it’s very cost-efficient — I won’t go into why and how here.)

One thing that comes up year after year is that the board should have an annual retreat to ramp up new board members and spend concentrated face-to-face time bonding as a team, deciding on priorities & forming a shared vision. But there’s a lot of fear about spending the money (especially to fly international folks around) and about the perception issue (see above) that the Board is blowing money on junkets or something.

But face time, especially if it’s moderated & structured, could go a long way toward building rapport & accountability and setting things up for success. This should be mandatory and written into the bylaws, with an explanation published on the site of why it’s necessary. IMHO, the lack of it may be the single biggest pitfall that’s gotten in the way of having a fully effective board, at least in my term.

Roads & Bridges

The infrastructure? It’s a hodgepodge of code & 3rd party services strung together over 8 years through heroic effort & ingenuity. A lot of it is pretty old & rickety. But honestly, it’s the 3rd party services that seem to be the biggest problem at times — for example, the 3rd party membership system is messy and inflexible (though some excellent volunteer work is going on to switch to something that will integrate better with other web services).

I can’t tell you how many times over the last 3 years (1 as an advisor, 2 as a director) I’ve heard it said “we could totally do X better if we had the infrastructure” … only to find we just didn’t have the bandwidth or funding to make it happen.

Progress is being made on several fronts, but the Institute needs an organized, passionate & well-led effort to deal with the infrastructure issue from the ground up. I do not mean that the Institute needs some kind of Moon Landing project. It needs to use a few easy-to-maintain mechanisms that take the least effort for maximum effect. One problem is that the infrastructure is supporting a lot of initiatives that have accrued over the last 8 years, some of which are still relevant, some of which may not be, and many of which should be reorganized or combined to better focus efforts (see the Can-do vs Jiu-jitsu bit above).

People will be people

This org, like any non-profit, volunteer-driven organization, is made up of people. And one constant among people is that we all have our flaws, and we all have complicated lives. We all have personalities that some folks like and some folks don’t. We all say things we wish we could take back, and we all do stuff that other people look at and say “WTF?”

While any organization like this is, indeed, made up of people … it’s a mistake to judge the organization as a whole by any handful of individuals involved in it. But it happens anyway.

So, since that’s inevitable, anyone running for a leadership position in an organization like this should be aware: being on the board is going to put you in a spotlight in a way that will probably surprise you. A lot of people pay attention to who’s on the board — and they look to you with expectations you wouldn’t dream other people would have of you. Just be aware of that.

At the same time, remember to have some humility and openness about the people who came before you in your role, and their decisions and the hard work they did. Much of what I tried to do in the last 2 years turned out to be misplaced effort, or just the wrong idea … and some of the stuff that I think is valuable may end up being irrelevant in another year or two. That’s just how it goes. It’s tempting to go into a new role with the attitude of “I’m gonna clean this mess up” and “why the hell did they decide to do it like this?”  Just remember that somebody will likely be thinking that about some of your work & ideas a couple years from now, and give others the break you hope they may give you.

Signing Off

Speaking of people — it’s been an honor & privilege to serve with the folks I worked with over the last two years, and to have been entrusted with a board role by the Institute members. I hope I left the place at least a little better off than when I got there.

I had the privilege of hanging out with Allen Ginsberg for a few days back in a previous life when I wanted to be a full-time poet. At dinner one night, as he was working his way through some fresh fruit he’d had warmed for digestion (he was going macrobiotic because of his “diyabeetus”), he was talking about people he’d known in his past. He said something that stuck with me about his teachers & mentors through the years … I paraphrase: “You know, one thing I’ve learned … you don’t kick the people who came before you in the teeth.”  I think it’s important to keep that rule about the people who come after you as well.

I make this pledge to the incoming leaders & other volunteers: if I have an issue with the Institute (something it’s done or some decision it’s made, something that isn’t working right, or something a person said or did), I’ll strive to avoid venting or even grousing in private, and instead communicate with you and ask “how can I help?” Otherwise, I have no room to complain.

A final note (finally!) … any good I and the other board members did was only building on the excellent efforts of the community members who went before … the previous boards, volunteers & staff. Thanks to all of you for the hard work you put in thus far … and thanks to those of you stepping up to offer your time, passion and ingenuity in the future.

Today it’s official that I’m leaving my current role at Vanguard as of June 25, and starting in a new role as Principal User Experience Architect at Macquarium.

I know everybody says “it’s with mixed feelings” that they do such a thing, but for me it’s definitely not a cliche. Vanguard has been an excellent employer, and for the last 6 1/2 years I’ve been there, I’ve always been able to say I worked there with a great deal of pride. It has some of the smartest, most dedicated user-experience design professionals I’ve ever met, and I’ll miss all of them, as well as the business and technology people I’ve worked closely with over the years.

I’m excited, however, to be starting work with Macquarium on June 28. On a personal level, it’s a great opportunity to work in the region where I live (Charlotte and environs) as well as Atlanta, where the company is headquartered, where I grew up, and where I have a lot of family I haven’t gotten to see as often as I’d like. On the professional side, Macquarium is tackling some fascinating design challenges that fit my interests and ambitions very well at this point in my life. I can’t wait to sink my teeth into that juicy work.

I’ve been pretty quiet on the blog for quite a while, partly because leading up to this (very recently emerging) development, I also spoke at a couple of conferences, and got married … it’s been a busy 2010 so far. I’m hoping to be more active here at Inkblurt in the near future … but no promises… I don’t want to jinx it.

In 2010 I was fortunate to be invited to be a keynote speaker at the Italian IA Summit in Pisa, Italy.

I presented on “Why Information Architecture Matters (To Me)” — a foray into how I see IA as being about creating places for habitation, and some personal background on why I see it that way.

What am I?

So… here we are a year after the 2009 IA Summit in glorious Memphis. At the end of that conference, Jesse James Garrett, one of the more prominent and long-standing members of the community (and, ironically, a co-founder of the IA Institute ;-), made a pronouncement in his closing plenary that “there are no Information Architects” and “there are no Interaction Designers” … “there are only User Experience Designers.”

There has since been some vocal expression of discontent with Jesse’s pronouncement.*

I held off — mostly because I was tired of the conversation about what to call people, and I’ve come to realize it doesn’t get anyone very far. More on that in a minute.

First I want to say: I am an information architect.

I say that for a couple of reasons:

1. My interests and skills in the universe that is Design tack heavily toward using information to create structured systems for human experience. I’m obsessed with the design challenges that come from linking things that couldn’t be linked before the Internet — creating habitats out of digital raw material. That, to me, is the heart of information architecture.

2. I use the term Information Architect because that’s the term that emerged in the community I discovered over 10 years ago, where people were discussing the concerns I mention in (1) above. That’s the community where I forged my identity as a practitioner. In the same way that, if I ever moved to another country, I would always be “American,” there’s a part of my history I can’t shake. Nobody “decided” to call it that — it just happened. And that, after all, is how language works.

Now that I’ve gotten that out of the way, back to Jesse’s talk. I appreciated his attempt to sort of cut the Gordian knot. I can see how, from a left-brain, analytical sort of impulse, it looks like a nice, neat solution to the complications and tensions we’ve seen in the UX space — by which I mean the general field in which various related communities and disciplines overlap & intersect. Although, frankly, I think the tensions and political intrigue he mentioned were pretty well contained and already dying off by attrition on their own … 99.9% of the people in the room, and those who read or heard his talk later, had no idea what he was talking about. (Later that year I met some terrific practitioners in Brazil who call themselves information architects and were genuinely concerned: the term had already become accepted there among government and professional organizations, so if the Americans stopped using it, what would they be called? I told them not to sweat it.)

So like I said — I get the desire to just cut the Gordian knot and say “these differences are illusions! let’s band together and be more formidable as one!” But unfortunately, this particular knot just won’t cut. It probably won’t untangle either. And that’s not necessarily a bad thing.

When I heard Jesse’s pronouncement about “there are no” and “there are only,” I thought it was too bad it would probably end up muddying the effect of his talk … people would hyper-fixate on those statements and miss a lot of the other equally provocative (but probably more useful) comments he made that afternoon.

Why would I say that? Because over the years I’ve come to realize that telling someone what or who they are is counterproductive. Telling people who call themselves X that they should actually call themselves Y — and that a role named X doesn’t actually exist — is like telling someone named Sally that her name is Maude. Or telling a citizen of a country (e.g. USA, Germany, Australia) he’s not a “real” American, German or Australian.

Saying such a thing pushes deep emotional buttons about our identities. Buttons we aren’t even fully aware we have.

There are some kinds of language that our brains treat as special. If you show me a fork and tell me it’s a spoon, my brain will just say “you’re confused, really just look this up in the dictionary, you’ll see you’re wrong.” No sense of being threatened there, little emotional reaction other than amusement and slight concern for your mental health.

But language about our identities is different. That sort of language often reaches right past our frontal cortex and heads straight for the more ancient parts of our brains. The parts that felt fear when our parents left the room when we were infants, or the parts that make us eat whatever is in front of us if we’ve skipped a meal or two, even if we’re really trying to eat healthier that day. It’s the part that translates sensory data into basic emotions about our very existence and survival. Telling someone they aren’t something that they really think they are is like threatening to chop off a limb — or better, a portion of their face, so they won’t quite recognize themselves in the mirror.

Like I said — counterproductive.

Why would I go into such a dissertation on our brains and identity? Because it helps us understand why practitioner communities can get into such a bind over the semantics of their work.

A couple of years ago, I did the closing talk at the IA Summit in Miami. The last section of that talk covered professional identity, and it explains the issue better than I could here. I also posted later about the Title vs Role issue in particular. So I won’t repeat all that here.

In particular — my own analytical side wanted to believe it was possible to separate the “role/practice” of information architecture from the need we have to call ourselves something. But I should’ve added another layer between “Title” and “Role” and called it something like “what we call ourselves to our friends.” It turns out that’s an important layer, and the one that causes us the most grief.

Since I did that talk, I’ve learned it’s a messier issue than I was making of it at the time. It’s helpful, I think, to have some models and shared language for helping us more dispassionately discuss the distinctions between various communities, roles and names. But they only go so far — most of this is going to happen under the surface, in the organic, emergent fog that roils beneath the parts of our professional culture that we can see and rationalize about.

It’s also worth noting that no professional practice that is still living and thriving has finally, completely sorted these issues out. Sure, there are some professions that have definitions for the purpose of licensure or certification — but those are only good for licensure and certification. Just listen to architects arguing over what it means to be an architect (form vs function, etc) or medical practitioners arguing over what it means to be a doctor (holistic vs western, or Nurse Practitioner vs MD).

I’m looking forward to the 2010 IA Summit in Phoenix, and the conversations that we’ll undoubtedly have on these issues. I realize these topics frustrate some (though I suspect the frustration comes mainly from the discomfort I explained above). But these are important, relevant conversations, even if people don’t realize it at the time. They mark the vibrancy of a field of practice, and they’re the natural vehicle for keeping that field on its toes, evolving and doing great work.

* Note: Thanks to Andrea Resmini, Dan Saffer and Dan Klyn for “going there” in their earlier posts, and making me think. If there are any other reactions that I missed, kindly add links in the comments below? Also, thanks Jesse for saying something that’s making us think, talk and debate.

Peter Morville and the IA Institute have joined forces with some excellent sponsors to host a contest. To wit:

In this contest, you are invited to explain information architecture. What is it? Why is it important? What does it mean to you? Some folks may offer a definition in 140 characters or less, while others will use this opportunity to tell a story (using text, pictures, audio, and/or video) about their relationship to IA.

Be sure to note the fine print lower on the Flickr page (where there’s also a link to a free prize!):

Our goals are to engage the information architecture community (by fostering creativity and discussion) and advance the field (by evolving our definitions and sharing our stories). We believe this can be a positive, productive community activity, and a whole lot of fun. We hope you do too!

I’m glad to see most of the chatter around this has been positive. But there are, of course, some nay-sayers — and the nays tend to ask a question along the lines of this: “Why is the IA Institute having to pay people to tell it what Information Architecture is?”

I suspect the contest would come across that way only if you’re already predisposed to think negatively of IA as a practice or the Institute as an organization — or people who self-identify as “Information Architects” in general. This post isn’t addressed to those folks, because I’m not interested in trying to sway their opinions — they’re going to think what they want to think.

But just in case others may be wondering what’s up, here’s the deal.

Information architecture is a relatively new community of practice. As technology and the community evolve, so does the understanding of this term of art.

For some people, IA is old hat — a relic of the days when websites were mere collections of linked text files. For others, IA represents an archaic, even oppressive worldview, where only experts are allowed to organize and label human knowledge. Again, I think these views of IA say more about the people who hold them than the practice of IA itself.

But for the rest of us, this contest is just an opportunity to celebrate the energetic conversations that are already happening anyway — and that happen within any vibrant, growing community of practice. It’s a way to spotlight how much IA has evolved, and bring others into those conversations as well.

Of course, the Institute is interested in these expressions as raw material for how it continues to evolve itself. But why wouldn’t any practice-based organization be interested in what the community has to say about the central concern of its practice?

I’m looking forward to what everyone comes up with. I’m especially excited to learn things I don’t know yet, and discover people I hadn’t met before.

So, go for it! Explain that sucker!

EBAI was awesome


EBAI, The Brazilian Information Architecture Congress (basically the IA Summit or EuroIA of Brazil) was kind and generous enough to invite me to Sao Paulo as a keynote speaker, closing their first day. They gave me a huge chunk of time, so I presented a long version of my Linkosophy talk, expanded with more about designing for Context. It was a terrific experience. Here’s just a smattering of what I discovered:

  • Brazilian user-experience designers tend to use the term Information Architecture (and Architect) for their community of practice — which I think is a fine thing. (I explained we still need to agree what “IA” means in the context of a given design, but who am I to tell them “there are no information architects”?)
  • These people are brilliant. They’re doing and inventing UX design research and methods that really should be shared with the larger, non-Portuguese-speaking world.
  • I wish I knew Portuguese so I could’ve understood even more of what they were presenting about. (Hence my wish it could all be translated to English!)
  • Brazilians have the best method of drinking beer and eating steak ever invented: small portions that keep coming throughout the meal mean your beer is never warm and your steak is always fresh off the grill. Genius!

Thank you, EBAI (and in particular my gracious host, Guilhermo Reis) for an enlightening, delightful experience.

Quick

mother always called it the quick
so that was always still is its name
that bit of fingerflesh around the nail sewn
by magic into the wrapped fingerprint we all have
embossed on our extremities unique index thick
opposing thumb she might catch me gnawing and when she did

she’d say be careful or you’ll be done
chewed it all down into the quick

my teeth pulling at the splinter of skin going thicker
into the sensitive crease it should have a name
that crease but I’ve not heard should have
taught me something the way it hurt like a sewing

needle pressed and wriggled so
it reddens swells maybe bleeds she did
warn me I should have listened I should have
that moment back but watch it skitter away so quickly
what if every moment had a name
we could never remember them no matter how thick

the books where we kept them and no matter how thick
the shelves to keep the books no matter how stiff the spines are sewn
our very lives would burn them history’s fuel a comet trail of names
of moments and minutes hours and days what’s done
and undone it was years before I learned that quick
means alive versus dead and dead the part I’d chew until I had

hit nerve that bleeding cuticle sting that has
a lesson someplace about blood that nothing is thicker
and what we know about moments that nothing is quicker
see how simple children could sing it on a seesaw
tick tock up down until the shrill bathtime mothercall but do
you leave no you play until snatched awake by your full name

hurled from the kitchen door a great net woven of your name
and you’re waving goodbye to the neighbor boy who has
that same blue jacket from last year and in just a minute you don’t
quite see him in the dusk that descended so soon so thick
just the glow of clouds stretched pink raw and sewn
with veins of yellowing light and suddenly your steps are quicker

until you find yourself under the thick blanket with the soft-sewn
edge tucked under your chin quick quick tick tock sleep has
taken you alive even though it doesn’t know your name

roadsigns

I’ve recently run across some stories involving Pixar, Apple and game design company Blizzard Entertainment that serve as great examples of courageous redirection.

What I mean by that phrase is an instance where a design team or company was courageous enough to change direction even after huge investment of time, money and vision.

Changing direction isn’t inherently beneficial, of course. And sometimes it goes awry. But these instances are pretty inspirational, because they resulted in awesomely successful user-experience products.

My colleague Anne Gibson recently shared an article quoting Steve Jobs talking about Toy Story and the iPhone. While I realize we’re all getting tired of comparing ourselves to Apple and Pixar, it’s still worth a listen:

At Pixar when we were making Toy Story, there came a time when we were forced to admit that the story wasn’t great. It just wasn’t great. We stopped production for five months…. We paid them all to twiddle their thumbs while the team perfected the story into what became Toy Story. And if they hadn’t had the courage to stop, there would have never been a Toy Story the way it is, and there probably would have never been a Pixar.

(Odd how Jobs doesn’t mention John Lasseter, who I suspect was the driving force behind this particular redirection.)

Jobs goes on to explain how they never expected to run into one of those defining moments again, but that instead they tend to run into such a moment on every film at Pixar. They’ve gotten better at it, but “there always seems to come a moment where it’s just not working, and it’s so easy to fool yourself – to convince yourself that it is when you know in your heart that it isn’t.”

That’s a weird, sinking feeling, and it’s hard to catch. Any designer (or writer or other craftsperson) has these moments, when you know something is wrong but can’t quite put your finger on what it is, and the momentum of the group and the work already done creates a kind of inertia that pushes you into compromise.

Design is always full of compromise, of course. Real life work has constraints. But sometimes there’s a particular decision that feels ultimately defining in some way, and you have to decide if you want to take the road less traveled.

Jobs continues with a similar situation involving the now-iconic iPhone:

We had a different enclosure design for this iPhone until way too close to the introduction to ever change it. And I came in one Monday morning, I said, ‘I just don’t love this. I can’t convince myself to fall in love with this. And this is the most important product we’ve ever done.’ And we pushed the reset button.

Rather than everyone on the team whining and complaining, they volunteered to put in extra time and effort to change the design while still staying on schedule.

Of course, this is Jobs talking — he’s a master promoter. I’m sure it wasn’t as utopian as he makes out. Plus, from everything we hear, he’s not a boss you want to whine or complain to. If a mid-level manager had come in one day saying “I’m not in love with this” I have to wonder how likely this turnaround would’ve been. Still, an impressive moment.

You might think it’s necessary to have a Steve Jobs around in order to achieve such a redirection. But it’s not.

Another of the most successful products on the planet is Blizzard’s World of Warcraft — the massively multiplayer universe with over 10 million subscribers and growing. This brand has an incredibly loyal following, much of that due to the way Blizzard interacts socially with the fans of their games (including the Starcraft and Diablo franchises).

Gaming news site IGN recently ran a thorough history of Warcraft, a franchise that started about fifteen years ago with an innovative real-time-strategy computer game, “Warcraft: Orcs & Humans.”

A few years after that release, Blizzard tried developing an adventure-style game using the Warcraft concept called Warcraft Adventures. From the article:

Originally slated to release in time for the 1997 holidays, Warcraft Adventures ran late, like so many other Blizzard projects. During its development, Lucas released Curse of Monkey Island – considered by many to be the pinnacle of classic 2D adventures – and announced Grim Fandango, their ambitious first step into 3D. Blizzard’s competition had no intention of waiting up. Their confidence waned as the project neared completion …

As E3 approached, they took a hard look at their product, but their confidence had already been shattered. Curse of Monkey Island’s perfectly executed hand-drawn animation trumped Warcraft Adventures before it was even in beta, and Grim Fandango looked to make it downright obsolete. Days before the show, they made the difficult decision to can the project altogether. It wasn’t that they weren’t proud of the work they had done, but the moment had simply passed, and their chance to wow their fans had gone. It would have been easier and more profitable to simply finish the game up, but their commitment was just that strong. If they didn’t think it was the best, it wouldn’t see the light of day.

Sounds like a total loss, right?

But here’s what they won: Blizzard is now known for providing only the best experiences. People who know the brand do not hesitate to drop $50-60 for a new title as soon as it’s available, reviews unseen.

In addition, the story and art development for Warcraft Adventures later became raw material for World of Warcraft.

I’m aware of some other stories like this, such as how Flickr came from a redirection away from making a computer game … what are some others?

In an article called “The Neuroscience of Leadership” (free registration required*), from Strategy + Business a few years ago, the writers explain how new understanding of how the brain works helps us see why it’s so hard for us to fully comprehend new ideas. I’ve kept cycling back to this article since reading it a few months ago, because it helps me put a lot of things that have perpetually bedeviled me in better perspective.

One particularly salient bit:

Attention continually reshapes the patterns of the brain. Among the implications: People who practice a specialty every day literally think differently, through different sets of connections, than do people who don’t practice the specialty. In business, professionals in different functions — finance, operations, legal, research and development, marketing, design, and human resources — have physiological differences that prevent them from seeing the world the same way.

Note the word “physiological.” We tend to assume that people’s differences of opinion or perspective are more like software — something with a switch that the person could just flip to the other side, if they simply weren’t so stubborn. The problem is, the brain grows hardware based on repeated patterns of experience. So, while stubbornness may be a factor, it’s not so simple as we might hope to get another person to understand a different perspective.

Recently I’ve had a number of conversations with colleagues about why certain industries or professions seem stuck in a particular mode, unable to see the world changing so drastically around them. For example, why don’t most advertising and marketing professionals get that a website isn’t about getting eyeballs, it’s about creating useful, usable, delightful interactive experiences? And even if they nod along with that sentiment in the beginning, they seem clueless once the work starts?

Or why do some coworkers just not seem to get a point you’re making about a project? Why is it so hard to collaborate on strategy with an engineer or code developer? Why is it so hard for managers to get those they manage to understand the priorities of the organization?

And in these conversations, it’s tempting — and fun! — to somewhat demonize the other crowd, and get pretty negative about our complaints.

While that may feel good (and while my typing this will probably not keep me from sometimes indulging in such a bitch-and-moan session), it doesn’t help us solve the problem. Because what’s at work here is a fundamental difference in how our brains process the world around us. Doing a certain kind of work, in a particular culture of others doing that work, creates a particular architecture in our brains and continually reinforces it. If your brain grows a hammer, everything looks like a nail; if it grows a set of jumper cables, everything looks like a car battery.

Now … add this understanding to the work Jonathan Haidt and others have done showing that we’re already predisposed toward deep assumptions about fundamental morals and values. Suddenly it’s pretty clear why some of our biggest problems in politics, religion, bigotry and the rest are so damned intractable.

But even if we’re not trying to solve world hunger and political turmoil, even if we’re just trying to get a coworker or client to understand a different way of seeing something, it’s evident that bridging the gap in understanding is not just a peripheral challenge for doing great design work — it may be the most important design problem we face.

I don’t have a ready remedy, by the way. But I do know that one way to start building bridges over these chasms of understanding is to look at ourselves, and be brutally honest about our own limitations.

I almost titled this post “Why Some People Just Don’t Get It” — but I realized that sets the wrong tone right away. “Some People” becomes an easy way to turn others into objects of ridicule, which I’ve done myself even on this blog. It’s easy, and it feels good for a while, but it doesn’t help the situation get better.

As a designer, have you imagined what it’s like to see the world from the other person’s experience? Isn’t that what we mean when we say the “experience” part of “user experience design” — that we design based on an understanding of the experience of the other? What if we treated these differences in point of view as design problems? Are we up to the challenge?

Later Edit:

There have been some excellent comments, some of which have helped me see I could’ve been more clear on a couple of points.

I perhaps overstated the “hardware” point above. I neglected to mention the importance of ‘neuroplasticity’ — the very fact that we inadvertently carve grooves into the silly-putty of our brains also means we can make new grooves. This is something about the brain that we’ve only come to understand in the last 20-30 years (I grew up learning the brain was frozen at adulthood). The science speaks for itself much better than I can summarize it here.

The concept has become very important to me lately, in my personal life, doing some hard psychological work to undo some of the “wiring” that’s been in my way for too long.

But in our role as designers, we don’t often get to do psychotherapy with clients and coworkers. So we have to design our way to a meeting of minds — and that means 1) fully understanding where the other is coming from, and 2) being sure we challenge our own presuppositions and blind spots. This is always better than just retreating to “those people don’t get it” and checking out on the challenge altogether, which happens a lot.

Thanks for the comments!

* Yet another note: the article is excellent; a shame registration is required, but it only takes a moment, and in this case I think it’s worth the trouble.

I don’t usually get into nitty-gritty interaction design issues like this on my blog. But I recently moved to a new address, and started new web accounts with various services like phone and utilities. And almost all of them are adding new layers of security asking me additional personal questions that they will use later to verify who I am. And entirely too many are asking questions like these, asked by AT&T on their wireless site:

[Screenshot: “favorite”-style security questions on AT&T’s wireless site]

I can’t believe how many of them are using “favorites” questions for security. Why? Because it’s so variable over time, and because it’s not a fully discrete category. Now, I know I’m especially deficient in “favorite” aptitude — if you ask me my favorite band, favorite food, favorite city, I’ll mumble something about “well, I like a lot of them, and there are things about some I like more than others, but I really can’t think of just one favorite…” Most people probably have at least something they can name as a favorite. But because it’s such a fuzzy category, it’s still risky and confusing.

It’s especially risky because we change over time. You might say Italian food is your favorite, but you’ve never had Thai. And when you do, you realize it blows Italian food away — and by the next time you try logging into an account a year later, you can’t remember which cuisine you specified.

Even the question about “who was your best friend as a kid” or “what’s the name of your favorite pet, when you were growing up” — our attitudes toward these things are highly variable. In fact, we hardly ever explicitly decide our favorite friend or pet — unless a computer asks us to. Then we find ourselves, in the moment, deciding “ok, I’ll name Rover as my favorite pet” — but a week later you see a picture in a photo album of your childhood cat “Peaches” and on your next login, it’s error-city.
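The underlying mechanics make the problem concrete. Here’s a minimal sketch (the function names and normalization rules are my own assumptions, not any real site’s implementation) of how these answers are typically checked — an exact match after light normalization. Normalization can absorb a capitalization slip, but it can’t absorb a changed mind:

```python
# Hypothetical sketch of "favorite"-style security-question verification.
# Real sites vary, but the core mechanic is an exact string comparison.

def normalize(answer: str) -> str:
    """Collapse case and surrounding whitespace -- about all a site can safely do."""
    return answer.strip().lower()

def verify_answer(stored: str, provided: str) -> bool:
    """Return True only on an exact match of the normalized answers."""
    return normalize(stored) == normalize(provided)

# At signup, the user names "Rover" as their favorite pet...
stored = "Rover"

print(verify_answer(stored, "rover"))    # True: normalization handles case
print(verify_answer(stored, "Peaches"))  # False: it can't handle a changed mind
```

The system has no way to distinguish an attacker from a legitimate user whose “favorite” has simply drifted — which is exactly why these questions lock people out.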

I suspect one reason this bugs me so much is that it’s an indicator of how the binary mentality behind software can do uncomfortable things to us as non-binary human beings. It’s the same problem Facebook presents when it asks you to select which category your relationship falls into. What if none of them quite fit? Even if one of them technically fits, it reduces your relationship to that data point, without all the rich context that makes the category matter in your own life.

Probably I’m making too much of it, but at least, PLEASE, can we get the word out in the digital design community that these security questions simply do not work?

The excellent Neuroanthropology blog offers up a terrific list of links to recent research & articles covering topics like Design, Research, Addiction and Art Criticism. Check it out!

Jonah Lehrer explains the import of a study described in Science.

The larger implication is that the birth of human culture was triggered by a new kind of connectedness. For the first time, humans lived in dense clusters, and occasionally interacted with other clusters, which allowed their fragile innovations to persist and propagate. The end result was a positive feedback loop of new ideas.

Sounds an awful lot like what the Internet is doing, no?
