Articles by Andrew

Owner of inkblurt.com

I was lucky enough to be part of a panel at this year’s IA Summit that included Andrea Resmini and Jorge Arango (thanks to Jorge for suggesting the idea and including me!). We had at least 100 people show up to hear it, and it seemed to go over well. Eventually there will be a podcast, I believe. Please also read Andrea’s portion and Jorge’s portion, because they are both excellent.

 
Update: There’s now an archive of podcasts from IA Summit 2011! And here’s a direct link to the podcast for this session (mp3). Or see them on iTunes.
 

A while back, I posted a rant about information architecture that invoked the term “cyberspace.” I, of course, received some flak for using that word. It’s played out, people say. It invokes dusty 80s-90s “virtual reality” ideas about a separate plane of existence … Tron-like cyber-city vistas, bulky goggles & body-suits, and dystopian worlds. Ok … yeah, whatever. For most people that’s probably true.

So let’s start from a different angle …

Over the last 20 years or so, we’ve managed to cause the emergence of a massive, global, networked dimension of human experience, enabled by digital technology.

It’s the dimension you visit when you’re sitting in the coffee shop catching up on your Twitter or Facebook feed. You’re “here” in the sense of sitting in the coffee shop. But you’re also “there” in the sense of “hanging out ‘on’ <Twitter/Facebook/Whatever>.”

It’s the dimension brave, unhappy citizens of Libya are “visiting” when they read, in real-time, the real words of regular people in Tunisia and Egypt, that inspire them to action just as powerfully as if those people were protesting right next to them. It may not be the dimension where these people physically march and bleed, but it’s definitely one dimension where the marching and bleeding matter.

I say “dimension” because for me that word doesn’t imply mutual exclusivity between “physical” and “virtual”: you can be in more than one “dimension” at once. It’s a facet of reality, but a facet that runs the length and breadth of that reality. The word “layer” doesn’t work, because “layer” implies a separate stratum. (Even though I’ve used “layer” off and on for a long time too…)

This dimension isn’t carbon-based, but information-based. It’s specifically human, because it’s made for, and bound together with, human semantics and cognition. It’s the place where “knowledge work” mostly happens. But it’s also the place where, more and more, our stories live, and where we look to make sense of our lives and our relationships.

What do we call this thing?

Back in 2006, Wired Magazine had a feature on how “Cyberspace is Dead.” They made the same points about the term that I mention above, and asked some well-known futurist-types to come up with a new term. But none of the terms they floated seems to have stuck. One person suggested “infosphere” … and I myself tried terms like “infospace” in the past. But I don’t hear anyone using those words now.

Even “ubiquitous computing” (Vint Cerf’s suggestion, but the late Mark Weiser’s coinage) has remained a specialized term of art within a relatively small community. Plus, honestly, it doesn’t capture the dimensionality I describe above … it’s fine as a term for the activity of “computing” (hello, antiquated terminology) from anywhere, and for reminding us that computing technology is ubiquitously present, but it doesn’t help us talk about the “where” that emerges from this activity.

There have been excellent books about this sort of dimension, with titles like Everyware, Here Comes Everybody, Linked, Ambient Findability, Smart Things … books with a lot of great ideas, but without a settled term for this thing we’ve made.

Of course, this raises the question: why do we need a term for it? As one of the people quoted in the Wired article says, aren’t we now just talking about “life”? Yeah, maybe that’s OK for most people. We used to say “e-business” because it was important to distinguish internet-based business from regular business … but in only a few years, that distinction has been effaced to meaninglessness. What business *isn’t* now networked in some way?

Still, for people like me who are tasked with designing the frameworks — the rule sets and semantic structures, the links and cross-experiential contexts — I think it’s helpful to have a term of art for this dimension … because it behaves differently from the legacy space we inherited.

It’s important to be able to point at this dimension as a distinct facet of the reality we’re creating, so we can talk about its nature and how best to design for it. Otherwise, we go about making things using assumptions hardwired into our brains from millions of years of physical evolution, and miss out on the particular power (and overlook the dangers) of this new dimension.

So, maybe let’s take a second look at “cyberspace” … could it be redeemed?

At the Institute for the Future, there’s a paper called “Blended Reality” (yet another phrase that hasn’t caught on). In the abstract, there’s a nicely phrased statement [emphasis mine]:

We are creating a new kind of reality, one in which physical and digital environments, media, and interactions are woven together throughout our daily lives. In this world, the virtual and the physical are seamlessly integrated. Cyberspace is not a destination; rather, it is a layer tightly integrated into the world around us.

The writer who coined the term, William Gibson, was quoted in the “Cyberspace is Dead” piece as saying, “I think cyberspace is past its sell-by, but the problem is that everything has become an aspect of, well, cyberspace.” This strikes me, frankly, as a polite way of saying “yeah, I get your point, but I don’t think you get what I mean these days by the term.” Or, another paraphrase: “I agree the way people generally understand the term is dated and feels, well, spoiled like milk … but maybe you need to understand that’s not cyberspace.”

Personally, I think Gibson sees the neon-cyberpunk-cityscape, virtual-reality conception of cyberspace as pretty far off the mark. In articles and interviews I’ve read over the years, he’s referenced it on and off … but seems conscious of the fact that people will misunderstand it, and finds himself explaining his points with other language.

Frankly, though, we haven’t listened closely enough. In the same magazine as the “Cyberspace is Dead” article, seven years prior, Gibson published what I posit to be one of the foundational texts for understanding this … whatever … we’ve wrought. It’s an essay about his experience with purchasing antique watches on eBay, called “My Obsession.” I challenge anyone to read this piece and then come up with a better term for what he describes.

It’s beautiful … so read the whole thing. But I’m going to quote the last portion here in full:

In Istanbul, one chill misty morning in 1970, I stood in Kapali Carsi, the grand bazaar, under a Sony sign bristling with alien futurity, and stared deep into a cube of plate glass filled with tiny, ancient, fascinating things.

Hanging in that ancient venue, a place whose on-site café, I was told, had been open, 24 hours a day, 365 days a year, literally for centuries, the Sony sign – very large, very proto-Blade Runner, illuminated in some way I hadn’t seen before – made a deep impression. I’d been living on a Greek island, an archaeological protectorate where cars were prohibited, vacationing in the past.

The glass cube was one man’s shop. He was a dealer in curios, and from within it he would reluctantly fetch, like the human equivalent of those robotic cranes in amusement arcades, objects I indicated that I wished to examine. He used a long pair of spring-loaded faux-ivory chopsticks, antiques themselves, their warped tips lent traction by wrappings of rubber bands.

And with these he plucked up, and I purchased, a single stone bead of great beauty, the color of apricot, with bright mineral blood at its core, to make a necklace for the girl I’d later marry, and an excessively mechanical Swiss cigarette lighter, circa 1911 or so, broken, its hallmarked silver case crudely soldered with strange, Eastern, aftermarket sigils.

And in that moment, I think, were all the elements of a real futurity: all the elements of the world toward which we were heading – an emerging technology, a map that was about to evert, to swallow the territory it represented. The technology that sign foreshadowed would become the venue, the city itself. And the bazaar within it.

But I’m glad we still have a place for things to change hands. Even here, in this territory the map became.

I’ve written before about how the map has become the territory. But I’d completely forgotten, until today, this piece I read over 10 years ago. Fitting, I suppose, that I should rediscover it now by typing a few words into Google, trying to find an article I vaguely remembered reading once about Gibson and eBay. As he says earlier in the piece quoted above, “We are mapping literally everything, from the human genome to Jaeger two-register chronographs, and our search engines grind increasingly fine.”

Names are important, powerful things. We need a name for this dimension that is the map turned out from itself, to be its own territorial reality. I’m not married to “cyberspace” — I’ll gladly call it something else.

What’s important to me is that we have a way to talk about it, so we can get better at the work of designing and making for it, and within it.

 

Note: Thanks to Andrea Resmini & Luca Rosati for involving me in their work on the upcoming book, Pervasive IA, from which I gleaned the reference to the Institute for the Future article I mentioned above.

Earlier I shared a post about designing context management, and wanted to add an example I’d seen. I knew I’d made this screenshot, but then couldn’t remember where; luckily I found it today hiding in a folder.

This little widget from Plaxo is the only example I’ve noticed where an online platform allows you to view information from different contextual points of view (other than very simple examples like “your public profile” and “preview before publish”).

Plaxo’s function actually allows you to see what you’re sharing with various categories of users via a basic drop-down menu. It’s not rocket science, but it goes miles further than most platforms do with this kind of functionality.

If anybody knows of others, let me know?

Almost a year later, I’m finally posting this presentation to Slideshare. I have no idea what took me so long … but I’m sure that brain science has an answer :-)

I think there’s a lot of potential for design training & evolving methods to incorporate a better understanding of how our brains function when we’re doing all the work of design.

See the program description on the conference site, and download the podcast or read the transcript at Boxes & Arrows.

Also, thanks to Luke W for the excellent summary of my talk.

I’m happy to announce I’m collaborating with my Macquarium colleague, Patrick Quattlebaum, and Happy Cog Philadelphia’s inimitable Kevin Hoffman on presenting an all-day pre-conference workshop for this year’s Information Architecture Summit, in Denver, CO. See more about it (and register to attend!) on the IA Summit site.

One of the things I’ve been fascinated with lately is how important it is to have an explicit understanding of the organizational and personal context not only of your users but also of the corporate environment you’re working in, whether it’s a client’s organization or your own as an internal employee. When engaging over a project, having an understanding of motivations, power structures, systemic incentives and the rest of the mechanisms that make an organization run is immeasurably helpful to knowing how to go about planning and executing that engagement.

It turns out, we have excellent tools at our disposal for understanding the client: UX design methods like contextual inquiry, interviews, collaborative analysis interpretation, personas/scenarios, and the like; all these methods are just as useful for getting the context of the engagement as they are for getting the context of the user base.

Additionally, there are general rules of thumb that tend to be true in most organizations, such as how process starts out as a tool, but calcifies into unnecessary constraint, or how middle management tends to work in a reactive mode, afraid to clarify or question the often-vague direction of their superiors. Not to mention tips on how to introduce UX practice into traditional company hierarchies and workflows.

It’s also fascinating to me how understanding individuals is so interdependent with understanding the organization itself, and vice-versa. The ongoing explosion of new knowledge in social psychology and neuroscience is giving us a lot of insight into what really motivates people, how and why they make their decisions, and the rest. These are among the topics Patrick & I will be covering during our portion of the workshop.

As the glue between the individual, the organization and the work, there are meetings. So half the workshop, led by Kevin Hoffman, will focus specifically on designing the meeting experience. It’s in meetings, after all, where all parties have to come to terms with their context in the organizational dynamics — so Kevin’s techniques for increasing not just the efficiency of meetings but also the human & interpersonal growth that can happen in them will be invaluable. Kevin’s been honing this material for a while now, to rave reviews, and it will be a treat.

I’m really looking forward to the workshop; partly because, as in the past, I’m sure to learn as much or more from the attendees as they learn from the workshop presenters.

Note: a while back, Christian Crumlish & Erin Malone asked me to write a sidebar for a book they were working on … an ambitious tome of design patterns for social software. The book, (Designing Social Interfaces) was published last year, and it’s excellent. I’m proud to be part of it. Christian encouraged contributors to publish their portions online … I’m finally getting around to doing so.

In addition to what I’ve posted below, I’ll point out that there have been several infamous screw-ups with context management since I wrote this … including Google Buzz and Facebook’s Groups, Places and other services.

Also to add: I don’t think we need a new discipline for context management. To my mind, it’s just good information architecture.

——————

There was a time when we could be fairly certain where we were at any given time. Just looking at one’s surroundings would let us know if we were in a public park or a quiet library, a dance hall or a funeral parlor. And our actions and conversations could easily adapt to these contexts: in a library, we’d know not to yell “heads up” and toss a football, and we’d know to avoid doing the hustle during someone’s eulogy.

But as more and more of our lives are lived via the web, and the contexts we inhabit are increasingly made of digits rather than atoms, our long-held assumptions about reality are dissolving under our typing-and-texting fingertips.

A pre-web example of this problem is something most people have experienced: accidentally emailing with “reply all” rather than “reply.”  Most email applications make it brutally easy to click Reply All by accident. In the physical world in which we evolved, the difference between a private conversation and a public one required more physical effort and provided more sensory clues. But in an email application, there’s almost no difference:  the buttons are usually identical and only a few pixels apart.

You’d think we would have learned something from our embarrassments with email, but newer applications aren’t much of an improvement. Twitter, for example, allows basically the same mistake if you use “@” instead of “d.” Not only that, but you have to put a space after the “d.”

Twitter users, as of this writing, are used to seeing at least a few of these errors made by their friends every week, usually followed by another tweet explaining that it was a “mis-tweet” or cursing the d vs. @ convention.

At least with those applications, it’s basically a binary choice for a single piece of data: a message goes either to one recipient or to many. The contexts are straightforward, and relatively transparent. But on many popular social network platforms, the problem becomes exponentially more complicated.

Because of its history, Facebook is an especially good example. Facebook started as a social web application with a built-in context: undergraduates at Harvard. Soon it expanded to other colleges and universities, but its contextual architecture continued to be based on school affiliation. The power of designing for a shared real-world context allowed Facebook’s structure to assume a lot about its users: they would have a lot in common, including their ages, their college culture, and circles of friends.

Facebook’s context provided a safe haven for college students to express themselves with their peers in all their immature, formative glory; for the first time a generation of late-teens unwittingly documented their transition to adulthood in a published format. But it was OK, because anybody on Facebook with them was “there” only because they were already “there” at their college, at that time.

But then, in 2006 when Facebook opened its virtual doors to anyone 13 or over with an email address, everything changed.  Graduates who were now starting their careers found their middle-aged coworkers asking to be friends on Facebook. I recall some of my younger office friends reeling at the thought that their cube-mates and managers might see their photos or read their embarrassing teenage rants “out of context.”

The Facebook example serves a discussion of context well because it’s probably the largest virtual place to have ever so suddenly unhinged itself from its physical place. Its inhabitants, who could previously afford an assumed mental model of “this web place corresponds to the physical place where I spent my college years,” found themselves in a radically different place. A contextual shift that would have required massive physical effort in the physical world was accomplished with a few lines of code and the flip of a switch.

Not that there wasn’t warning. The folks who run Facebook had announced the change was coming. So why weren’t more people ready? In part because such a reality shift doesn’t have much precedent; few people were used to thinking about the implications of such a change. But also because the platform didn’t provide any tools for managing the context conversion.

This lack of tools for managing multiple contexts is behind some of the biggest complaints about Facebook and social network platforms (such as MySpace and LinkedIn). For Facebook, long-time residents realized they would like to still keep up their immature and embarrassing memories from college to share just with their college friends, just like before — they wanted to preserve that context in its own space. But Facebook provided no capabilities for segmenting the experience. It was all or nothing, for every “friend” you added. And then, when Facebook launched its News feed — showing all your activities to your friends, and those of your friends to you — users rebelled in part because they hadn’t been given adequate tools for managing the contexts where their information might appear. This is to say nothing of the disastrous launch of Facebook’s “Beacon” service, where all users were opted in by default to share information about their purchases on other affiliated sites.

On MySpace, the early bugbear was the threat of predator activity and the lack of privacy. Again, the platform was built with the assumption that users were fine with collapsing their contexts into one space, where everything was viewable by every “friend” added. And on LinkedIn, users have often complained the platform doesn’t allow them to keep legitimate peer connections separate from others such as recruiters.

Not all platforms have made these mistakes. The Flickr photo site has long distinguished between Family and Friends, Private and Public. LiveJournal, a pioneering social platform, has provided robust permissions controls to its users for years, allowing creation of many different user-and-group combinations.

However, there’s still an important missing feature, one which should be considered for all social platforms even as they add new context-creation abilities. It’s either impossible or difficult for users to review their profiles and posts from others’ point of view.

Giving users the ability to create new contexts is a great step, but they also need the ability to easily simulate each user-category’s experience of their space. If a user creates a “co-workers” group and tries to carefully expose only their professional information, there’s no straightforward way to view their own space using that filter. With the Reply All problem described earlier, we at least get a chance to proofread our message before hitting the button. But most social platforms don’t even give us that ability.
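To make the idea concrete, here’s a minimal sketch of how a per-group visibility filter might work. Every name and structure here is hypothetical — it’s not any real platform’s API, just an illustration of the pattern:

```python
# Hypothetical sketch of a "View as Different User Type" filter.
# Each profile field records which audience groups may see it.

profile = {
    "name": {"value": "Andrew", "visible_to": {"public", "friends", "coworkers"}},
    "college_photos": {"value": "photos.zip", "visible_to": {"friends"}},
    "resume": {"value": "resume.pdf", "visible_to": {"public", "coworkers"}},
}

def view_as(profile, group):
    """Return only the fields a member of `group` would actually see."""
    return {
        field: data["value"]
        for field, data in profile.items()
        if group in data["visible_to"]
    }

# Simulate the co-worker view: college_photos is filtered out.
print(view_as(profile, "coworkers"))
```

A real platform would layer this over its full permissions model, but the heart of a “view as” feature is just this: render your own space through the same filter a given audience gets, so you can check your exposure before it embarrasses you.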

This function — perhaps call it “View as Different User Type” — is just one example of a whole class of design patterns we still need for managing the mind-bending complexity we’ve created for ourselves on the web. There are certainly others waiting to be explored. For example, what if we had more than just one way to say “no thank you” to an invitation or request, depending on type of person requesting? Or a way to send a friendly explanatory note with your refusal, thereby adding context to an otherwise cold interaction? Or what about the option to simply turn off whole portions of site functionality for some groups and not others? Maybe I’d love to get zombie-throwing-game invitations from my relatives, but not from people I haven’t seen since middle school?

In the rush to allow everyone to do everything online, designers often forget that some of the limitations of physical life are actually helpful, comforting, and even necessary. We’re a social species, but we’re also a nesting species, given to having our little nook in the tribal cave. Maybe we should take a step back and think of these patterns not unlike their originator, Christopher Alexander, did — how have people lived and interacted successfully over many generations? What can we learn from the best of those structures, even in the structureless clouds of cyberspace? Ideally, the result would be the best of both worlds: architectures that fit our ingrained assumptions about the world, while giving us the magical ability to link across divides that were impossible to cross before.

I remember back in 1999 working in a web shop that was a sibling company with a traditional ad firm, and thinking “do they realize that digital means more than just packaging copy & images for a new medium?”

Then over the years since, I’ve continually been amazed that most advertising & marketing pros still don’t seem to get the difference between “attention” and actual “engagement” — between momentary desire and actual usefulness.

Then I read this quote from a veteran advertising creative officer:

Instead of building digital things that had utility, we approached it from a messaging mind-set and put messaging into the space. It took us a while to realize … the digital space is completely different.

via The Future of Advertising | Page 4 | Fast Company.

I guess better late than never …

I actually love advertising at its best. Products and brands need to be able to tell great stories about themselves, and engage people’s emotions & aspirations. It’s easy to dump on advertising & marketing as out of touch and wrong-headed — but that’s lazy, it seems to me.

I appreciated the point Bill Buxton made in a talk I saw online a while back about how important the advertising for the iPod was … that it wasn’t just an added-on way to talk about the product; it was part of the whole product experience, driving much of how people felt about purchasing, using and especially *wearing* an iPod and its distinctive white earphones.

But this distinction between utility and pure message is an important one to understand, partly so we can understand how blurred the line has become between them. Back when the only way to interact with a brand was either to receive its advertising message passively, or to purchase and touch/experience its product or service — and there was precious little between — the lines were pretty clear between the message-maker and the product-creator.

These days, however, there are so many opportunities for engagement through interaction, conversation, utility and actual *use* between the initial message and the product itself.

Look at automobiles, for example: once upon a time, there were ads about cars, and then there were the actual cars … and that was pretty much it. But now we get a chance to build the car online, read about it, imagine ourselves in it with various options, look for reviews about it, research prices … all of that before we actually touch the car itself. By the time you touch the car, so much engagement has already happened on your way to the actual object that your experience is largely shaped — the car is going to feel different to you if that experience was positive than if it was negative (assuming a negative experience didn’t dissuade you from going for a test drive at all).

Granted, to some degree that’s always been the case. The advertising acts like the label on a bottle of wine — shaping the expectation of the experience inside the bottle, which we know can make a huge difference. But the utility experience brings a whole new dimension that affects perception even more: the ability to engage the car interactively rather than passively receiving “messaging” alone. Now it’s even harder to answer the question “where does the messaging end and the car begin?”

I’ve written a lot of stuff over the last few years about information architecture. And I’m working on writing more. But recently I’ve realized there are some things I’ve not actually posted publicly in a straightforward, condensed manner. (And yes, the post below is, for me, condensed.)

WTF is IA?

1. Information architecture is not just about organizing content.

  • In practice, it has never been limited to merely putting content into categories, even though some very old definitions are still floating around the web that define it as such. (And some long-time practitioners are still explaining it this way, even though their actual work goes beyond those bounds.)
  • Every competent information architecture practitioner I’ve ever known has designed to help people make decisions, persuade customers, or encourage sharing and conversation where relevant. There’s no need to coin new terms like “decision architecture” and “persuasion architecture.”
  • This is not to diminish the importance and complexities involved with designing storage and access of content, which is actually pretty damn hard to do well.

2. IA determines the frameworks, pathways and contexts that people (and information) are able to traverse and inhabit in digitally-enabled spaces.

  • Saying information architecture is limited to how people interact with information is like saying traditional architecture is limited to how people interact with wood, stone, concrete and plastic.
  • That is: Information architecture uses information as its raw material the same way building architecture uses physical materials.
  • All of this stuff is essentially made of language, which makes semantic structure centrally important to its design.
  • In cyberspace, where people can go and where information can go are essentially the same thing; where and how people can access information and where and how people can access one another is, again, essentially the same thing. To ignore this is to be doing IA all wrong.

3. The increase of things like ubiquitous computing, augmented reality, emergent/collective organization and “beyond-the-browser” experiences make information architecture even more relevant, not less.

  • The physical world is increasingly on the grid, networked, and online. The distinction between digital and “real” is officially meaningless. This only makes IA more necessary. The digital layer is made of language, and that language shapes our experience of the physical.
  • The more information contexts and pathways are distributed, fragmented, user-generated and decentralized, the more essential it is to design helpful, evolving frameworks, and conditional/responsive semantic structures that enable people to communicate, share, store, retrieve and find “information” (aka not just “content” but services, places, conversations, people and more).
  • Interaction design is essential to all of this, as is graphical design, content strategy and the rest. But those things require useful, relevant contexts and connections, semantic scaffolding and … architecture! … to ensure their success. (And vice versa.)

Why does this need to be explained? Why isn’t this more clear? Several reasons:

1. IA as described above is still pretty new, highly interstitial, and very complex; its materials are invisible, and its effects are, almost by definition, back-stage where nobody notices them (until they suck). We’re still learning how to talk about it. (We need more patience with this — if artists, judges, philosophers and even traditional architects can still disagree among one another about the nature of their fields, there’s no shame in IA following suit.)

2. Information architecture is a phrase claimed by several different camps of people, from Wurmanites (who see it as a sort of hybrid information-design-meets-philosophy-of-life) to the polar-bear-book-is-all-I-need folks, to the information-technology systems architects and others … all of whom would do better to start understanding themselves as points on a spectrum rather than mutually exclusive identities.

3. There are too many legacy definitions of IA hanging around that need to be updated past the “web 1.0” mentality of circa 2000. The official explanations need to catch up with the frontiers the practice has been working in for years now. (I had an opportunity to fix this with the IA Institute and dropped the ball; glad to help the new board & others in any way I can, though.)

4. Leaders in the community have the responsibility to push the practice’s understanding of itself forward: in any field, the majority of members will follow such a lead, but will otherwise remain in stasis. We need to be better boosters of IA, calling it what it is rather than skirting the charge of “defining the damn thing.”

5. Some leaders (and/or loud voices) in the broader design community have, for whatever reason, decided to reject information architecture or, worse, continue stoking some kind of grudge against IA and people who identify as information architects. They need to get over their drama, occasionally give people the benefit of the freakin’ doubt, and move on.

Update:

This has generated a lot of excellent conversation, thanks!

A couple of things to add:

After some prodding on Twitter, I managed to boil down a single-statement explanation of what information architecture is, and a few folks said they liked it, so I’m tacking it on here at the bottom: “IA determines what the information should be, where you and it can go, and why.” Of course, the real juice is in the wide-ranging implications of that statement.

Also Jorge Arango was awesome enough to translate it into Spanish. Thanks, Jorge!

I liked this bit from Peter Hacker, the Wittgenstein scholar, in a recent interview. He’s talking about how any way of seeing the world can take over and put blinders on you, if you become too enamored of it:

The danger, of course, is that you over do it. You overplay your hand – you make things clearer than they actually are. I constantly try to keep aware of, and beware of, that. I think it’s correct to compare our conceptual scheme to a scaffolding from which we describe things, but by George it’s a pretty messy scaffolding. If it starts looking too tidy and neat that’s a sure sign you’re misdescribing things.

via TPM: The Philosophers’ Magazine | Hacker’s challenge. (emphasis mine)

It strikes me this is true of design as well. There’s no one way to see it, because it’s just as organic and messy as the world in which we do it.

I mean this both in the larger sense of “what is design?” and the smaller sense of “what design is best for this particular situation?”

Over the years, I’ve come to realize that most things are “messy” — and that while any one solution or model might be helpful, I have to ward against letting it take over all my thinking (which is awfully easy to do … it’s pleasant, and much less work, to just dismiss everything that doesn’t fit a given perspective, right?).

The actual subject of the interview is pretty great too … case in point, for me, it warns against buying into the assumptions behind so much recent neuroscience thinking, especially how it’s being translated in the mainstream (though Hacker goes after some hard-core neuroscience as well).

When I ran for the IA Institute board a couple of years ago, I’d never been on a board of anything before. I didn’t run because I wanted to be on a board at all, really. I ran because I had been telling board members stuff I thought they should focus on, and making pronouncements about what I thought the IA Institute should be, and realized I should either join in and help or shut up about it.

So I ran, and thanks to the members of the Institute who voted for me, I won a slot on the board.

It didn’t take long to realize that the organization I’d helped (in a very small way) get started back in 2002 had grown into a real non-profit with actual responsibilities, programs, infrastructure and staff. What had been an amorphous abstraction soon came into focus as a collection of real, concrete moving parts, powered mainly by volunteers, that were trying to get things done or keep things going.

Now, two years later, I’m rolling off of my term on the board. I chose not to run again this year for a second term only because of personal circumstance, not because I don’t want to be involved (in fact, I want to continue being involved as a volunteer, just in a more focused way).  I’m a big believer in the Institute — both what it’s doing and what it can accomplish in the future.

I keep turning over in my head: what sort of advice should I give to whoever is voted into the board this year? Then I realized: why wait to bring these things up … maybe this would be helpful input for those who are running and voting for the board? So here goes… in no particular order.

Perception rules

The Institute has been around for 8 years now. In “web time” that’s an eternity.  That gives the organization a certain patina of permanence and … well, institution-ness … that would lead folks to believe it’s a completely established, solidly funded, fully staffed organization with departments and stuff. But it’s actually still a very shoestring-like operation. The Institute is still driven 99% by volunteers, with only 2 half-time staff, paid on contract, who live in different cities, and who are very smart, capable people who could probably be making more money doing something else. (Shout-out to Tim & Noreen — and to Staff Emeritus Melissa… you guys  all rock).  But I don’t know that we did the best job of making that clear to the community. That has led at times to some misunderstandings about what people should expect from the org.

Less “Can-Do” and more “Jiu Jitsu”

Good intentions and a willingness to work hard and make things happen aren't enough. In fact they may be too much. A "can-do" attitude sounds great! But it results in creating things that can't be sustained, or chasing ideals that people say they believe in but don't actually have the motivation to support over time.

Jiu jitsu, on the other hand, takes the energy that’s available and channels it. It’s disciplined in its focus. Overall, I think the org needs to keep heading in that direction — picking the handful of things it can stand for and accomplish very well.

The Institute has a history of having very inventive, imaginative people involved in its board and volunteer efforts, and in its community at large. These are folks who think of great ideas all the time. But not every idea is one that should be followed up on and considered as an initiative. Here’s the thing: even most of the *good* ideas cannot be followed up on and considered an actual initiative. There just isn’t bandwidth.

I’d bet any organization that has a leadership team that changes out every 1-2 years probably has this challenge. Add the motivation to “make a mark” as a board member to the motivation to make members & community voices happy who are asking for (or demanding) things, and before you know it, you have a huge list of stuff going on that may or may not actually still have relevance or value commensurate with the effort it requires.

It’s easy in the heat of the moment of a new idea to say “yeah we love that, let’s make that happen” … but it’s an illusion created by the focus of novelty. I urge the community (members, board, volunteers, everyone) to keep this in mind when thinking “why doesn’t the Institute do X or Y? It seems so obvious!” The response I’ve taken to giving to those requests is: that sounds like a great idea … how’d you like to investigate making that happen for the Institute?

Anything that doesn’t have people interested enough to make it happen *outside the board* probably shouldn’t happen to begin with. The Board is there to sponsor things, make decisions about how money should be spent and what to support — but not do the legwork and heavy lifting. It’s just impossible to do that plus run the organization, for people who have paying, full-time jobs already.

Money & Membership

This is not a wealthy organization. The budget is pretty small. It only charges members $40 a year (still, after 8 years), and other than membership fees, makes a big chunk of its budget from its annual conference (IDEA — go register!). Where does the money go? Lots of it goes to the community — helping to fund conferences, events, grants, and initiatives aimed at helping grow the knowledge & skills of the whole community. It also goes to paying the part-time staff to keep the lights on, fix stuff & enable most of the work that goes on. The benefits are not just for paying members, by the way. Most of what the Institute does is pretty open-source type stuff. Frankly I’ve thought for a while now that we should move away from “membership” and call people “contributors” instead. Because that’s what you’re doing … you’re contributing a small amount of cash in support of the community, and you get access to a closed, relatively quiet mailing list of helpful colleagues as a “thank you” gift.

Whenever I hear somebody complaining about the Institute and “what I get for my forty dollars,” I get a little miffed. But then I realize to some degree the organization sets that expectation. It may be helpful for the next board to think about the membership model — which really may be more about semantics & expectations-setting than policy, who knows.

One thing the Institute has historically been afraid to do is spend money on itself. But then it tries to handle some tasks that would honestly be much better to pay others to handle. (Again, that can-do attitude getting us in trouble.) Historically, the board tried to handle a lot of the financial tasks through a treasurer (banking, recordkeeping, etc). It took a long line of dedicated people who gave a lot of their personal time to handling those tedious tasks. We finally hit a wall where we realized we just weren’t handling the tasks as well as we should as amateurs — we needed help. So we found an excellent 3rd party service provider (recommended by our excellent Board of Advisors) to take care of a lot of that stuff. (And it’s very cost-efficient — I won’t go into why and how here.)

One thing that comes up year after  year is that the board should have an annual retreat to ramp up new board members and spend concentrated face-to-face time bonding as a team, deciding on priorities & getting a shared vision. But there’s a lot of fear about spending the money (especially to fly international folks around) and the perception issue (see above) that the Board is blowing money on junkets or something.

But face time, especially if it’s moderated & structured, could go a long way toward building rapport & accountability and setting things up for success. This should be mandatory and written into the bylaws, with an explanation published on the site of why it’s necessary. IMHO this may be the single biggest pitfall that’s gotten in the way of having a fully effective board, at least in my term.

Roads & Bridges

The infrastructure? It’s a hodgepodge of code & 3rd party services strung together through heroic efforts & ingenuity, over 8 years. A lot of it is pretty old & rickety. But honestly, it’s the 3rd party services that seem to be the biggest problem at times — for example the 3rd party membership system is messy and inflexible (though some excellent volunteer work is going on to switch systems to something that will integrate better with other web services).

I can’t tell you how many times over the last 3 years (1 as an advisor, 2 as a director) I’ve heard it said “we could totally do X better if we had the infrastructure,” and we just didn’t have the bandwidth or funding to move forward with it.

Progress is being made on several fronts, but the Institute needs an organized, passionate & well-led effort to deal with the infrastructure issue from the ground up. I do not mean that the Institute needs some kind of Moon Landing project. It needs to use a few easy-to-maintain mechanisms that take the least effort for maximum effect. One problem is that the infrastructure is supporting a lot of initiatives that have accrued over the last 8 years, some of which are still relevant, some of which may not be, and many of which should be reorganized or combined to better focus efforts (see the Can-do vs Jiu-jitsu bit above).

People will be people

This org, like any non-profit, volunteer-driven organization, is made up of people. And one constant among people is that we all have our flaws, and we all have complicated lives. We all have personalities that some folks like and some folks don’t. We all say things we wish we could take back, and we all do stuff that other people look at and say “WTF?”

While any organization like this is, indeed, made up of people … it’s a mistake to judge the organization as a whole by any handful of individuals involved in it. But it happens anyway.

So, since that’s inevitable, anyone running for a leadership position in an organization like this should be aware: being on the board is going to put you in a spotlight in a way that will probably surprise you. There are a lot of people who pay attention to who’s on such a board, and they look to you with a lot of expectations you wouldn’t dream other people would have of you. Just be aware of that.

At the same time, remember to have some humility and openness about the people who came before you in your role, and their decisions and the hard work they did. Much of what I tried to do in the last 2 years turned out to be misplaced effort, or just the wrong idea … and some of the stuff that I think is valuable may end up being irrelevant in another year or two. That’s just how it goes. It’s tempting to go into a new role with the attitude of “I’m gonna clean this mess up” and “why the hell did they decide to do it like this?”  Just remember that somebody will likely be thinking that about some of your work & ideas a couple years from now, and give others the break you hope they may give you.

Signing Off

Speaking of people — it’s been an honor & privilege to serve with the folks I worked with over the last two years, and to have been entrusted with a board role by the Institute members. I hope I left the place at least a little better off than when I got there.

I had the privilege of hanging out with Allen Ginsberg for a few days back in a previous life when I wanted to be a full-time poet. At dinner one night, as he was working his way through some fresh fruit he’d had warmed for digestion (he was going macrobiotic because of his “diyabeetus”), he was talking about people he’d known in his past. He said something that stuck with me about his teachers & mentors through the years … I paraphrase: “You know, one thing I’ve learned … you don’t kick the people who came before you in the teeth.”  I think it’s important to keep that rule about the people who come after you as well.

I make this pledge to the incoming leaders & other volunteers: if I have an issue with the Institute, something it’s done or some decision it’s made, something that isn’t working right, or something a person said or did, I’ll strive to avoid blurting out an outburst or even grousing in private, and instead communicate with you and ask “how can I help?” Otherwise, I have no room to complain.

A final note (finally!) … any good I and the other board members did was only building on the excellent efforts of the community members who went before … the previous boards, volunteers & staff. Thanks to all of you for the hard work you put in thus far … and thanks to those of you stepping up to offer your time, passion and ingenuity in the future.

Today it’s official that I’m leaving my current role at Vanguard as of June 25, and starting in a new role as Principal User Experience Architect at Macquarium.

I know everybody says “it’s with mixed feelings” that they do such a thing, but for me it’s definitely not a cliche. Vanguard has been an excellent employer, and for the last 6 1/2 years I’ve been there, I’ve always been able to say I worked there with a great deal of pride. It has some of the smartest, most dedicated user-experience design professionals I’ve ever met, and I’ll miss all of them, as well as the business and technology people I’ve worked closely with over the years.

I’m excited, however, to be starting work with Macquarium on June 28. On a personal level, it’s a great opportunity to work in the region where I live (Charlotte and environs) as well as Atlanta, where the company is headquartered, and where I grew up, and have a lot of family I haven’t gotten to see as often as I’d like. On the professional side, Macquarium is tackling some fascinating design challenges that fit my interests and ambitions very well at this point in my life. I can’t wait to sink my teeth into that juicy work.

I’ve been pretty quiet on the blog for quite a while, partly because leading up to this (very recently emerging) development, I also spoke at a couple of conferences, and got married … it’s been a busy 2010 so far. I’m hoping to be more active here at Inkblurt in the near future … but no promises… I don’t want to jinx it.

In 2010 I was fortunate to be invited to be a keynote speaker at the Italian IA Summit in Pisa, Italy.

I presented on “Why Information Architecture Matters (To Me)” — a foray into how I see IA as being about creating places for habitation, and some personal background on why I see it that way.
