
I’ve written a lot of stuff over the last few years about information architecture. And I’m working on writing more. But recently I’ve realized there are some things I’ve not actually posted publicly in a straightforward, condensed manner. (And yes, the post below is, for me, condensed.)

WTF is IA?

1. Information architecture is not just about organizing content.

  • In practice, it has never been limited to merely putting content into categories, even though some very old definitions are still floating around the web that define it as such. (And some long-time practitioners are still explaining it this way, even though their actual work goes beyond those bounds.)
  • Every competent information architecture practitioner I’ve ever known has designed to help people make decisions, persuade customers, or encourage sharing and conversation where relevant. There’s no need to coin new terms like “decision architecture” and “persuasion architecture.”
  • This is not to diminish the importance and complexities involved with designing storage and access of content, which is actually pretty damn hard to do well.

2. IA determines the frameworks, pathways and contexts that people (and information) are able to traverse and inhabit in digitally-enabled spaces.

  • Saying information architecture is limited to how people interact with information is like saying traditional architecture is limited to how people interact with wood, stone, concrete and plastic.
  • That is: Information architecture uses information as its raw material the same way building architecture uses physical materials.
  • All of this stuff is essentially made of language, which makes semantic structure centrally important to its design.
  • In cyberspace, where people can go and where information can go are essentially the same thing; where and how people can access information and where and how people can access one another are, again, essentially the same. To ignore this is to do IA all wrong.

3. The rise of things like ubiquitous computing, augmented reality, emergent/collective organization and “beyond-the-browser” experiences makes information architecture even more relevant, not less.

  • The physical world is increasingly on the grid, networked, and online. The distinction between digital and “real” is officially meaningless. This only makes IA more necessary. The digital layer is made of language, and that language shapes our experience of the physical.
  • The more information contexts and pathways are distributed, fragmented, user-generated and decentralized, the more essential it is to design helpful, evolving frameworks, and conditional/responsive semantic structures that enable people to communicate, share, store, retrieve and find “information” (aka not just “content” but services, places, conversations, people and more).
  • Interaction design is essential to all of this, as is graphical design, content strategy and the rest. But those things require useful, relevant contexts and connections, semantic scaffolding and … architecture! … to ensure their success. (And vice versa.)

Why does this need to be explained? Why isn’t this more clear? Several reasons:

1. IA as described above is still pretty new, highly interstitial, and very complex; its materials are invisible, and its effects are, almost by definition, back-stage where nobody notices them (until they suck). We’re still learning how to talk about it. (We need more patience with this — if artists, judges, philosophers and even traditional architects can still disagree with one another about the nature of their fields, there’s no shame in IA following suit.)

2. Information architecture is a phrase claimed by several different camps of people, from Wurmanites (who see it as a sort of hybrid information-design-meets-philosophy-of-life) to the polar-bear-book-is-all-I-need folks, to the information-technology systems architects and others … all of whom would do better to start understanding themselves as points on a spectrum rather than mutually exclusive identities.

3. There are too many legacy definitions of IA hanging around that need to be updated past the “web 1.0” mentality of circa 2000. The official explanations need to catch up with the frontiers the practice has been working in for years now. (I had an opportunity to fix this with IA Institute and dropped the ball; glad to help the new board & others in any way I can, though.)

4. Leaders in the community have a responsibility to push the practice’s understanding of itself forward: in any field, the majority of members will follow such a lead, but will otherwise remain in stasis. We need to be better boosters of IA, calling it what it is rather than skirting the charge of “defining the damn thing.”

5. Some leaders (and/or loud voices) in the broader design community have, for whatever reason, decided to reject information architecture or, worse, continue stoking some kind of grudge against IA and people who identify as information architects. They need to get over their drama, occasionally give people the benefit of the freakin’ doubt, and move on.

Update:

This has generated a lot of excellent conversation, thanks!

A couple of things to add:

After some prodding on Twitter, I managed to boil down a single-statement explanation of what information architecture is, and a few folks said they liked it, so I’m tacking it on here at the bottom: “IA determines what the information should be, where you and it can go, and why.” Of course, the real juice is in the wide-ranging implications of that statement.

Also Jorge Arango was awesome enough to translate it into Spanish. Thanks, Jorge!

EBAI was awesome


EBAI, The Brazilian Information Architecture Congress (basically the IA Summit or EuroIA of Brazil) was kind and generous enough to invite me to Sao Paulo as a keynote speaker, closing their first day. They gave me a huge chunk of time, so I presented a long version of my Linkosophy talk, expanded with more about designing for Context. It was a terrific experience. Here’s just a smattering of what I discovered:

  • Brazilian user-experience designers tend to use the term Information Architecture (and Architect) for their community of practice — which I think is a fine thing. (I explained we still need to agree on what “IA” means in the context of a given design, but who am I to tell them “there are no information architects”?)
  • These people are brilliant. They’re doing and inventing UX design research and methods that really should be shared with the larger, non-Portuguese-speaking world.
  • I wish I knew Portuguese so I could’ve understood even more of what they were presenting about. (Hence my wish it could all be translated to English!)
  • Brazilians have the best methods of drinking beer and eating steak ever invented: small portions that keep on coming through the meal means your beer is never warm, and your steak is always fresh off the grill. Genius!

Thank you, EBAI (and in particular my gracious host, Guilhermo Reis) for an enlightening, delightful experience.

I don’t usually get into nitty-gritty interaction design issues like this on my blog. But I recently moved to a new address, and started new web accounts with various services like phone and utilities. And almost all of them are adding new layers of security asking me additional personal questions that they will use later to verify who I am. And entirely too many are asking questions like these, asked by AT&T on their wireless site:

[Screenshot: “favorite”-style security questions from AT&T’s wireless site]

I can’t believe how many of them are using “favorites” questions for security. Why? Because it’s so variable over time, and because it’s not a fully discrete category. Now, I know I’m especially deficient in “favorite” aptitude — if you ask me my favorite band, favorite food, favorite city, I’ll mumble something about “well, I like a lot of them, and there are things about some I like more than others, but I really can’t think of just one favorite…” Most people probably have at least something they can name as a favorite. But because it’s such a fuzzy category, it’s still risky and confusing.

It’s especially risky because we change over time. You might say Italian food is your favorite, but you’ve never had Thai. And when you do, you realize it blows Italian food away — and by the next time you try logging into an account a year later, you can’t remember which cuisine you specified.

Even the questions about “who was your best friend as a kid” or “what’s the name of your favorite pet from when you were growing up” fail, because our attitudes toward these things are highly variable. In fact, we hardly ever explicitly decide on a favorite friend or pet — unless a computer asks us to. Then we find ourselves, in the moment, deciding “ok, I’ll name Rover as my favorite pet” — but a week later we see a picture in a photo album of our childhood cat “Peaches,” and on the next login, it’s error-city.

I suspect one reason this bugs me so much is that it’s an indicator of how a binary mentality behind software can do uncomfortable things to us as non-binary human beings. It’s the same problem as Facebook presents when it asks you to select which category your relationship falls into. What if none of them quite fit? Or even if one of them technically fits, it reduces your relationship to that data point, without all the rich context that makes that category matter in your own life.

Probably I’m making too much of it, but at least, PLEASE, can we get the word out in the digital design community that these security questions simply do not work?

UX Insight Elements

Funny how things can pop into your head when you’re not thinking about them. I can’t remember why this occurred to me last week … but it was one of those thoughts I realized I should write down so I could use it later. So I tweeted it. Lots of people kindly “re-tweeted” the thought, which immediately made me self-conscious that it may not explain itself very well. So now I’m blogging about it. Because that’s what we kids do nowadays.

My tweet: User Experience Design is not data-driven, it’s insight-driven. Data is just raw material for insight.

I whipped up a little model to illustrate the larger point: insight comes from a synthesis between talent, expertise, and the fresh understanding we gain through research. It’s a set of ingredients that, when added to our brains and allowed to stew, often over a meal or after a few good nights’ sleep, can bring a designer to those moments of clarity where a direction finally makes sense.

I’ve seen a lot of talk lately about how we shouldn’t be letting data drive our design decisions — that we’re designers, so we should be designing based on best practices, ideas, expertise, and even “taste.” (I have issues with the word “taste” as many people use it, but I don’t have a problem with the idea of “expert intuition,” which I think is closer to what a lot of my colleagues mean. In fact, that Ira Glass video that made the rounds a few weeks ago on many tweets/blogs puts a better spin on the word “taste,” as one’s aspiration that may be, for now, beyond one’s actual abilities without work and practice.)

As for the word “data” — I’m referring to empirical data as well as the recorded results of something less numbers-based, like contextual research. Data is an input to our understanding, but nothing more. Data cannot tell us, directly, how to design anything.

But it’s also ludicrous to ask a client or employer to spend their money based solely on your expertise or … “taste.” Famous interior or clothing designers or architects can perhaps get away with this — because their names carry inherent value, whether their designs are actually useful or not. So far, User Experience design practitioners don’t have this (dubious) luxury. I would argue that we shouldn’t, otherwise we’re not paying much attention to “user experience” to begin with.

Data is valuable, useful, and often essential. Data can be an excellent input for design insight. I’d wager that you should have as much background data as you can get your hands on, unless you have a compelling reason to exclude it. In addition, our clients tend to speak the language of data, so we need to be able to translate our approach into that language.

It’s just that data doesn’t do the job alone. We still need to do the work of interpretation, which requires challenging our presuppositions, blind spots and various biases.

The propensity for the human brain to completely screw stuff up with cognitive bias is, alone, reason enough to put our design ideas through a bit of rigor. Reading through the oft-linked list of cognitive biases on Wikipedia is hopefully enough to caution any of us against the hubris of our own expertise. We need to do the work of seeing the design problem anew, with fresh understanding, putting our assumptions on the table and making sure they’re still viable. To me, at least, that’s a central tenet behind the cultural history of “user experience” design approaches.

But analysis paralysis can also be a serious problem; and data is only as good as its interpretation. Eventually, actual design has to happen. Otherwise you end up with a disjointed palimpsest, a Frankenstein’s Monster of point-of-pain fixes and market-tested features.

We have to be able to do both: use data to inform the fullest possible understanding of the behavior and context of potential users, as well as bring our own experience and talent to the challenge. And that’s hard to do, in the midst of managing client expectations, creating deliverables, and endless meetings and readouts. But who said it was easy?

I’ve been puzzling over what I was getting at last year when I was writing about “flourishing.” For a while now I’ve been clearer about what I meant… and realized it wasn’t the right term. Now I’m trying “mixpression” on for size.

What I meant by “flourishing” is the act of extemporaneously mixing other media besides verbal or written-text language in our communication. That is: people using things like video clips or still images with the same facility and immediacy that they now use verbal/written vocabulary. “Mixpression” is an ungainly portmanteau, I’ll admit. But it’s more accurate.

(Earlier, I think I had this concept overlapping too much with something called “taste performance” — more about which, see bottom of the post.)

Victor Lombardi quotes an insightful bit from Adam Gopnik on his blog today: Noise Between Stations » Images That Sum Up Our Desires.

We are, by turn — and a writer says it with sadness — essentially a society of images: a viral YouTube video, an advertising image, proliferates and sums up our desires; anyone who can’t play the image game has a hard time playing any game at all.
– Adam Gopnik, Angels and Ages: A Short Book About Darwin, Lincoln, and Modern Life, p 33

When I heard Michael Wesch (whom I’ve written about before) at IA Summit earlier this month, he explained how his ethnographic work with YouTube showed people having whole conversations with video clips — either ones they made themselves, or clips from mainstream media, or remixes of them. Conversations, where imagery was the primary currency and text or talk were more like supporting players.

Here’s the thing — I’ve been hearing people bemoan this development for a while now. How people are becoming less literate, or less “literary” anyway, and how humanity is somehow regressing. I felt that way for a bit too. But I’m not so sure now.

If you think about it, this is something we’ve always had the natural propensity to do. Even written language evolved from pictographic expression. We just didn’t have the technology to immediately, cheaply reproduce media and distribute it within our conversations (or to create that media to begin with in such a way that we could then share it so immediately).

Now that the workshop has come and gone, I’m here to say that it went swimmingly, if I do blog so myself.

My colleagues did some great work — hopefully it’ll all be up on Slideshare at some point. But here are the slides I contributed. Alas, there are no “speaker notes” with these — but most of the points are pretty clear. I would love to blog about some of the slides sometime soon — but whenever I promise to blog about something, I almost guarantee I won’t get around to it. So I’ll just say “it would be cool if I blogged about this…” :-)

—————–
Just one more blog plug for the workshop some of us are doing before the IA Summit in Memphis this year.

Links:
See the Pre-Con Page at the Conference Site.
Register Here

For those of you who may be attending the IA Summit in Memphis this year, let me encourage you to look into the IA Institute’s pre-conference workshop called “Beyond Findability: Reframing IA Practice & Strategy for Turbulent Times.”

A few things I want to make clear about the session:

– We’re making it relevant for any UX design people, not just those who self-identify as “Information Architects.” In fact, part of the workshop is about how different practitioner communities can better collaborate & complement other approaches.
– By “Turbulent times” we don’t just mean the economy, but the turbulence of technological change — the incredibly rapid evolution of how people use the stuff we make.
– It’s not a how-to/tutorial-style workshop, but meant to spark some challenging conversation and push the evolution of our professions ahead a little faster.
– There will, however, be some practical take-away content that you should be able to stick on a cube wall and make use of immediately.
– It’s not “anti-Findability” — but looks at what IA in particular brings to design *beyond* the conventional understanding of the practice.
– We’re hoping experienced design professionals will attend, not just newer folks; the content is meant to be somewhat high-level and advanced, but you should be able to get value from it no matter where you are in your career.

Here’s the quickie blurb:

This workshop aims to take your IA practice to a higher level of understanding, performance and impact. Learn about contextual models and scalable frameworks, design collaboration tactics, and how to wield more influence at the “strategy table.”

If you have any specific questions about it, please feel free to hit me up with an email!

*Note: the IA Summit itself is produced by ASIS&T, not the IA Institute.

David Weinberger’s most recent JOHO post shows us some thinking he’s doing about the history (and nature) of “information” as a concept.

The whole thing is great reading, so go and read it.

Some of it explores a point that I touched on in my presentation for IDEA earlier this month: that computers are very literal machines that take the organic, nuanced ambiguities of our lived experience and (by necessity) chop it up into binary “is or is not” data.

Bits have this symbolic quality because, while the universe is made of differences, those differences are not abstract. They are differences in a taste, or a smell, or an extent, or a color, or some property that only registers on a billion dollar piece of equipment. The world’s differences are exactly not abstract: Green, not red. Five kilograms, not ten. There are no differences that are only differences.

The example I gave at IDEA was how on Facebook, you have about six choices to describe the current romantic relationship you’re in: something that normally is described to others through contextual cues (a ring on your finger, the tone of voice and phrasing you use when mentioning the significant other in conversation, how you treat other people of your sig-other’s gender, etc). These cues give us incredibly rich textures for understanding the contours of another person’s romantic life; but Facebook (again, out of necessity) has to limit your choices to a handful of terms in a drop-down menu — terms that the system renders as mutually exclusive, by the fact that you can only select one.

More and more of the substance of our lives is being housed, communicated & experienced (by ourselves and others) in the Network. And the Network is made of computers that render everything into binary choices. Granted, we’re making things more fine-grained in many systems, and giving people a chance to add more context, but that can only go so far.
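
To make the drop-down problem concrete, here’s a minimal sketch (in Python, with hypothetical labels — not Facebook’s actual schema) of the kind of data structure behind such a menu. The type itself enforces that a relationship is exactly one of a handful of mutually exclusive strings:

```python
from enum import Enum

# Hypothetical labels, for illustration only -- not Facebook's real list.
class RelationshipStatus(Enum):
    SINGLE = "Single"
    IN_A_RELATIONSHIP = "In a relationship"
    ENGAGED = "Engaged"
    MARRIED = "Married"
    ITS_COMPLICATED = "It's complicated"

# One person, one value. Any contextual nuance that doesn't match a
# label is simply unrepresentable in the schema.
my_status = RelationshipStatus.ITS_COMPLICATED
```

The finer-grained fixes I mentioned amount to adding more members to the enum — more choices, but still a forced choice.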

Weinberger uses photography as an example:

We turn a visual scene into bits in our camera because we care about the visual differences at that moment, for some human motive. We bit-ify the scene by attending to one set of differences — visible differences — because of some personal motivation. The bits that we capture depend entirely on what level of precision we care about, which we can adjust on a camera by setting the resolution. To do the bit-ifying abstraction, we need analog equipment that stores the bits in a particular and very real medium. Bits are a construction, an abstraction, a tool, in a way that, say, atoms are not. They exist because they stand for something that is not made of bits.

All this speaks to the implications of Simulation, something I’m obsessing about lately as it relates especially to Context. (And which I won’t go into here… not another tangent!)

Dave’s example reminds me of something I remember Neil Young complaining about years ago (in Guitar Player magazine) in terms of what we lose when we put music into a digital medium. He likened it to looking out a screen door at the richly contoured world outside — but each tiny square in the screen turns what is seen through its confines into an estimated, averaged “pixel” of visible information. In all that averaging, something vital is inevitably lost. (I couldn’t find the magazine interview, but I did find him saying something similar in the New York Times in 1997: “When you do an analog recording, and you take it to digital, you lose everything. You turn a universe of sounds into an average. The music becomes more abrupt and more agitating, and all of the subtleties are gone.”)
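
Young’s averaging complaint is, at bottom, quantization. Here’s a toy sketch (my own illustration, not his) of how rounding a continuous signal to a fixed number of levels collapses nearby values into one:

```python
def quantize(samples, levels):
    """Round each sample (in the 0.0-1.0 range) to the nearest of
    `levels` evenly spaced steps -- a crude model of digitizing."""
    step = 1.0 / (levels - 1)
    return [round(s / step) * step for s in samples]

signal = [0.12, 0.34, 0.345, 0.90]    # a "universe of sounds"
coarse = quantize(signal, levels=4)   # low resolution: 0.34 and 0.345 collapse together
fine = quantize(signal, levels=256)   # higher resolution keeps them distinct
```

More levels lose less — that’s the trajectory digital audio has taken — but the result is still a grid of averages rather than the continuum itself.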

Of course, since that interview (probably 15 years ago) digital music has become much more advanced — reconstructing incredibly dense, high-resolution information about an analog original. Is that the answer, for the same thing that’s happening to our analog lives as they’re gradually soaked up by the great digital Network sponge? Higher and higher resolution until it’s almost real? Maybe. But in every case where we’re supposed to decide on an input to that system (such as which label describes our relationship), we’re being asked to turn something ineffable into language — not only our own, expressively ambiguous language, but the predefined language of a binary system.

Given that many of our lives are increasingly experienced and mediated via the digital layer, the question arises: to what degree will it change the way we think about identity, humanity, even love?

This excellent report came out a couple of weeks ago. It shows that the ubiquity and importance of video games, and game culture, is even bigger than many of us imagined. I explored some of this in a presentation a few years ago: Clues to the Future. I’m itching to keep running with some of those ideas, especially now that they’re being taken more seriously in business & technology circles (not by my doing, of course, but just from increased exposure in mainstream publications and the like).

Pew Internet: Teens, Video Games and Civics

The first national survey of its kind finds that virtually all American teens play computer, console, or cell phone games and that the gaming experience is rich and varied, with a significant amount of social interaction and potential for civic engagement….

The primary findings in the survey of 1,102 youth ages 12-17 include —

* Game playing is universal, with almost all teens playing games and at least half playing games on a given day.
* Game playing experiences are diverse, with the most popular games falling into the racing, puzzle, sports, action and adventure categories.
* Game playing is also social, with most teens playing games with others at least some of the time, and it can incorporate many aspects of civic and political life.

I’m especially interested in the universality of game playing. It reinforces more than ever the idea that the language of games is going to be an increasingly universal language. The design patterns, goal-based behaviors, playfulness — these are things that have to be considered over the next 5-10 years as software design accommodates these kids as they grow up.

The social aspect is also key: we have an upcoming generation that expects their online & software-based experiences to integrate into their larger lives; they don’t assume that various applications and contexts are separate, and feel pleasantly surprised (or disturbed) to discover they’re connected. They’ll have a different set of assumptions about connectedness.

Motivation

Months ago, I posted the first part of something I’d been presenting on for over a year: a simple way of thinking about social design choices. I called it the “Cultivation Equation for Social Design.” I should’ve known better, but I said at the end of that post that I’d be posting the rest soon … then proceeded to put it off for a very long time. At any rate, here’s the second part, about Motivation. The third part (about Moderation) will be forthcoming, eventually, but I make no promises on timing.

Catching up on the AP blog, I saw Kate Rutter’s excellent post: Build your very own seat at the strategy table, complete with a papercraft “table” with helpful reminders! It’s about designers gaining a place at the “strategy table” — where the people who run things tend to dwell.

I had written something about this a while back, about Strategy & Innovation being “Strange Bedfellows.” But Kate’s post brought up something I hadn’t really focused on yet.

So I commented there, and now I’m repeating here: practitioners’ best work is at the level of practice.

They make things, and they make things better, based on the concrete experience of the things themselves. The strategy table, however, has traditionally been populated by those who are pretty far removed from the street-level effects of their decisions, working at the level of ideology. (Not that ideology is inherently a bad thing — most of it is wisdom learned over time; it just gets too calcified and/or used in the wrong context at times.) This is one reason why so many strategists love data rather than first-hand experience: they can (too often) see the data however they need to, based on whatever ideological glasses they’re wearing.

When designers leave the context of hands-on, concrete problem solving and try to mix it up with the abstraction/ideology crowd, they’re no longer in their element. So they have to *bring* their element along with them.

Take that concrete, messy, human design problem, and drop it on the table with a *thud* — just be ready to have some “data” and business speak ready to translate for the audience. And then dive in and get to work on the thing itself, right in front of them. That’s bringing “design thinking” into the strategy room — because “design thinking” is “design doing.”

In the midst of all the other things keeping me busy and away from blogging, some very nice people nominated me to serve on the Board of Advisors for the IA Institute. I’m flattered and honored, and a bit intimidated. But if elected, I’ll give it my best shot.

They asked for a bio and position statement. Here’s the position bit I sent them:

This [Information Architecture] community has excelled at creating a “shared history of learning” over the last 10 years. We’ve seen it bring essential elements to the emergence of User Experience Design, in the form of methods, tools, knowledge, and especially people. I think the IAI has been essential to how the community has developed, thanks to the hard work of its volunteers and staff creating excellent initiatives for mentorship, careers and other important needs.

The next big challenge is for the IAI to become more than a sum of its parts. How can it become a more influential, vital presence in the UX community? How can it serve as an amplifier for the amazing knowledge and insight we have among our members and colleagues? How can it evolve understanding of IA among business and design peers? And how can we better coexist and collaborate with those peers and practices?

From the beginning, IA has grappled with one of the most important challenges designers now face: how to define and link contexts usefully, usably and ethically in a digital hyper-linked world. I don’t see that challenge becoming any easier in the years ahead. In fact, the digital world is only becoming more pervasive, strange and exciting.

As a board member, my focus will be to help the IA Institute grow as a valued, authoritative resource for that future.

UPDATE:

Already, I feel the urge to further explain.


Chris Brogan has a great post about 100 Personal Branding Tactics Using Social Media, with some helpful tips on creating that thing we keep hearing about “the Personal Brand.”

I’ve always struggled with this, though. I’ve been doing this “blogging” thing a long time. In fact, my first “home page” was a text-only index file. Why? Because there weren’t any graphical Web browsers yet. And even once there were, the only people who were online to look at any such thing were net-heads like myself. There was already a sense of informality and mutual understanding, and “netizens” seemed to prize a level of authenticity above almost anything else. Anything that looked like a personal “brand” was suspect.

[Image: a cattle brand]

So, something about the DNA of my initial forays into personal expression on the ‘net has stuck with me. Namely, that it’s my little corner of the world, where I say what’s on my mind, take it or leave it, with very little concern about my brand or what-not. I am not saying this is a good thing. It just is.

Over the years, though, I’ve become more conscious of the shift in context. It’s like I had a little corner lot in a small town, with a ramshackle house and flotsam in the yard, and ten years later I look out to see somebody developed a new subdivision around me, with McMansions, chemically enhanced lawns, and joggers wearing those special clothes that you only wear if you’re really *into* jogging. You know what I mean.

And now I’m just not sure where my blog stands in all this. I don’t keep up with it often, but if I do it’s not because I’ve set a goal for myself, it’s just because my brainfartery is more active (and long-form) than usual. I feel the need to have a more polished, disciplined blog-presence, with all the right trimmings … but then I’d miss having this thing here. And I know for a fact that if I had both, I’d be so short-circuited about which I should post on, I’d end up doing nothing with either of them.

Or maybe I’m just lazy?

Note: One of Brogan’s awesome tips is to add some visual interest with each post; hence a CC licensed image from mharrsch.
