userexperience


Earlier I shared a post about designing context management, and wanted to add an example I’d seen. I knew I’d made this screenshot, but then couldn’t remember where; luckily I found it today hiding in a folder.

This little widget from Plaxo is the only example I’ve noticed where an online platform allows you to view information from different contextual points of view (other than very simple examples like “your public profile” and “preview before publish”).

Plaxo’s widget lets you see what you’re sharing with various categories of users via a basic drop-down menu. It’s not rocket science, but it goes miles further than most platforms do with this kind of functionality.

If anybody knows of others, let me know.

Context Management

Note: a while back, Christian Crumlish & Erin Malone asked me to write a sidebar for a book they were working on … an ambitious tome of design patterns for social software. The book, Designing Social Interfaces, was published last year, and it’s excellent. I’m proud to be part of it. Christian encouraged contributors to publish their portions online … I’m finally getting around to doing so.

In addition to what I’ve posted below, I’ll point out that there have been several infamous screw-ups with context management since I wrote this … including Google Buzz and Facebook’s Groups, Places and other services.

Also to add: I don’t think we need a new discipline for context management. To my mind, it’s just good information architecture.

——————

There was a time when we could be fairly certain where we were at any given time. Just looking at our surroundings would tell us whether we were in a public park or a quiet library, a dance hall or a funeral parlor. And our actions and conversations could easily adapt to these contexts: in a library, we’d know not to yell “heads up” and toss a football, and we’d know to avoid doing the hustle during someone’s eulogy.

But as more and more of our lives are lived via the web, and the contexts we inhabit are increasingly made of digits rather than atoms, our long-held assumptions about reality are dissolving under our typing-and-texting fingertips.

A pre-web example of this problem is something most people have experienced: accidentally emailing with “reply all” rather than “reply.”  Most email applications make it brutally easy to click Reply All by accident. In the physical world in which we evolved, the difference between a private conversation and a public one required more physical effort and provided more sensory clues. But in an email application, there’s almost no difference:  the buttons are usually identical and only a few pixels apart.

You’d think we would have learned something from our embarrassments with email, but newer applications aren’t much of an improvement. Twitter, for example, allows essentially the same mistake: typing “@” when you meant “d,” turning what was meant as a private direct message into a public reply. Not only that, but you have to put a space after the “d.”

Twitter users, as of this writing, are used to seeing at least a few of these errors from their friends every week, usually followed by another tweet explaining that it was a “mis-tweet” or cursing the d-versus-@ convention.

At least with those applications, it’s basically a binary choice for a single piece of data: one message goes to either one recipient or many. The contexts are straightforward and relatively transparent. But on many popular social network platforms, the problem becomes exponentially more complicated.

Because of its history, Facebook is an especially good example. Facebook started as a social web application with a built-in context: undergraduates at Harvard. Soon it expanded to other colleges and universities, but its contextual architecture continued to be based on school affiliation. The power of designing for a shared real-world context allowed Facebook’s structure to assume a lot about its users: they would have a lot in common, including their ages, their college culture, and circles of friends.

Facebook’s context provided a safe haven for college students to express themselves with their peers in all their immature, formative glory; for the first time a generation of late-teens unwittingly documented their transition to adulthood in a published format. But it was OK, because anybody on Facebook with them was “there” only because they were already “there” at their college, at that time.

But then, in 2006 when Facebook opened its virtual doors to anyone 13 or over with an email address, everything changed.  Graduates who were now starting their careers found their middle-aged coworkers asking to be friends on Facebook. I recall some of my younger office friends reeling at the thought that their cube-mates and managers might see their photos or read their embarrassing teenage rants “out of context.”

The Facebook example serves a discussion of context well because it’s probably the largest virtual place to have ever so suddenly unhinged itself from its physical place. Its inhabitants, who could previously afford an assumed mental model of “this web place corresponds to the physical place where I spent my college years,” found themselves in a radically different place. A contextual shift that would have required massive physical effort in the physical world was accomplished with a few lines of code and the flip of a switch.

Not that there wasn’t warning. The folks who run Facebook had announced the change was coming. So why weren’t more people ready? In part because such a reality shift doesn’t have much precedent; few people were used to thinking about the implications of such a change. But also because the platform didn’t provide any tools for managing the context conversion.

This lack of tools for managing multiple contexts is behind some of the biggest complaints about Facebook and other social network platforms (such as MySpace and LinkedIn). On Facebook, long-time residents realized they still wanted to keep their immature and embarrassing college memories around to share just with their college friends, as before — they wanted to preserve that context in its own space. But Facebook provided no way to segment the experience. It was all or nothing, for every “friend” you added. And then, when Facebook launched its News Feed — showing all your activities to your friends, and theirs to you — users rebelled in part because they hadn’t been given adequate tools for managing the contexts where their information might appear. This is to say nothing of the disastrous launch of Facebook’s “Beacon” service, which opted all users in by default to sharing information about their purchases on affiliated sites.

On MySpace, the early bugbear was the threat of predator activity and the lack of privacy. Again, the platform was built with the assumption that users were fine with collapsing their contexts into one space, where everything was viewable by every “friend” added. And on LinkedIn, users have often complained the platform doesn’t allow them to keep legitimate peer connections separate from others such as recruiters.

Not all platforms have made these mistakes. The Flickr photo site has long distinguished between Family and Friends, Private and Public. LiveJournal, a pioneering social platform, has provided robust permissions controls to its users for years, allowing creation of many different user-and-group combinations.
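To make that kind of group-scoped permission model concrete, here’s a minimal sketch in Python. It’s loosely in the spirit of LiveJournal’s user-defined groups, not anyone’s actual implementation, and every name in it is hypothetical: each post carries a set of audience groups, and a viewer sees it only if the post is public, they wrote it, or they belong to one of those groups.

```python
# A minimal sketch of group-scoped visibility. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str
    audience: set[str] = field(default_factory=set)  # group names; empty set = public

@dataclass
class Account:
    username: str
    groups: dict[str, set[str]] = field(default_factory=dict)  # group name -> member usernames

    def can_view(self, post: Post, viewer: str) -> bool:
        # A viewer sees a post if it is public, if they wrote it, or if they
        # belong to at least one of the groups the post is shared with.
        if not post.audience or viewer == post.author:
            return True
        return any(viewer in self.groups.get(g, set()) for g in post.audience)

# Usage: the same post is visible to a college friend but not to a co-worker.
me = Account("andrew", groups={
    "college-friends": {"pat", "sam"},
    "co-workers": {"taylor"},
})
rant = Post("andrew", "embarrassing teenage rant", audience={"college-friends"})
print(me.can_view(rant, "pat"))     # True
print(me.can_view(rant, "taylor"))  # False
```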

However, there’s still an important feature missing, one that should be considered for all social platforms even as they add new context-creation abilities: it’s difficult or impossible for users to review their profiles and posts from another person’s point of view.

Giving users the ability to create new contexts is a great step, but they also need the ability to easily simulate each user-category’s experience of their space. If a user creates a “co-workers” group and tries to carefully expose only their professional information, there’s no straightforward way to view their own space using that filter. With the Reply All problem described earlier, we at least get a chance to proof-read our message before hitting the button. But most social platforms don’t even give us that ability.
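As a rough illustration of what a “View as Different User Type” control might do (a sketch over assumed data structures, not any platform’s real API), the preview is simply the same visibility rules applied from the perspective of a chosen group, so owners can proof their space before sharing:

```python
# A sketch of a "view as" preview: render the owner's space as a chosen
# audience group would see it. Field and variable names are hypothetical.

def view_as(profile: dict, posts: list, group: str) -> dict:
    """Return only the profile fields and posts visible to `group`.

    `profile` maps field name -> (value, allowed_groups); an empty
    allowed_groups set means the field is public. Each post is a dict
    with a `body` and an `audience` set of group names.
    """
    visible_fields = {
        name: value
        for name, (value, allowed) in profile.items()
        if not allowed or group in allowed
    }
    visible_posts = [
        p["body"] for p in posts
        if not p["audience"] or group in p["audience"]
    ]
    return {"fields": visible_fields, "posts": visible_posts}

# Usage: previewing as "co-workers" hides the college-only material.
profile = {
    "name": ("Andrew", set()),                       # public
    "employer": ("Acme Co.", {"co-workers"}),
    "party photos": ("album-1999", {"college-friends"}),
}
posts = [
    {"body": "quarterly report shipped", "audience": {"co-workers"}},
    {"body": "embarrassing teenage rant", "audience": {"college-friends"}},
]
print(view_as(profile, posts, "co-workers"))
# {'fields': {'name': 'Andrew', 'employer': 'Acme Co.'}, 'posts': ['quarterly report shipped']}
```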

This function — perhaps call it “View as Different User Type” — is just one example of a whole class of design patterns we still need for managing the mind-bending complexity we’ve created for ourselves on the web. There are certainly others waiting to be explored. For example, what if we had more than just one way to say “no thank you” to an invitation or request, depending on the type of person asking? Or a way to send a friendly explanatory note with your refusal, thereby adding context to an otherwise cold interaction? Or what about the option to simply turn off whole portions of site functionality for some groups and not others? Maybe I’d love to get zombie-throwing-game invitations from my relatives, but not from people I haven’t seen since middle school.

In the rush to allow everyone to do everything online, designers often forget that some of the limitations of physical life are actually helpful, comforting, and even necessary. We’re a social species, but we’re also a nesting species, given to having our little nook in the tribal cave. Maybe we should take a step back and think of these patterns the way their originator, Christopher Alexander, did — how have people lived and interacted successfully over many generations? What can we learn from the best of those structures, even in the structureless clouds of cyberspace? Ideally, the result would be the best of both worlds: architectures that fit our ingrained assumptions about the world, while giving us the magical ability to link across divides that were impossible to cross before.

It appears someone has posted the now-classic Nightline episode about Ideo (called “The Deep Dive”) to YouTube. I hope it’s legit and that Disney/ABC isn’t going to make somebody take it down. But here’s the link, in hopes that doesn’t happen.

About 10 years ago, I started a job as an “Internet Copywriter” at a small web consultancy in North Carolina. By then, I’d already been steeped in the ’net for seven or eight years, but mainly as a side interest. My day jobs had involved the web, but not centrally, and my most meaningful learning experiences designing for the web had been side projects done for fun. When I started at the new web company, I knew there would need to be more to my role than just “concepting” and writing copy next to an art director, advertising-style. Our job was to make things people could *use*, not just look at or be inspired to action by. But to be frank, I had little background in paid design work.

I’d been designing software of one kind or another off and on for a while, in part-time jobs while in graduate school. For example, creating a client database application to make my life easier in an office manager job (and then having to make it easy enough for the computer-phobic clerical staff to use as well). But I’d approached it as a tinkerer and co-user — making things I myself would be using, and iterating on them over time. (I’d taken a 3-dimensional design class in college, but it was more artistically focused — I had yet to learn much at all about industrial design, and had not yet discovered the nascent IA community, usability crowd, etc.)

Then I happened upon a Nightline broadcast (which, oddly, I never used to watch — who knows why I had it on at that point) in which they engaged the design company Ideo. And I was blown away. It made perfect sense… here was a company that had codified an approach to design that I had been groping toward intuitively but had not fully grasped or articulated. It brought into sharp focus a number of crucial principles, such as behavioral observation and structured creative anarchy.

I immediately asked my new employer to let me order the video and share it with them. It served as a catalyst for finding out more about such approaches to design.

Since then, I’ve of course become less enamored of these videos… after a while you start to see the sleight-of-hand that an edited, idealized profile creates, and how it was probably the best PR event Ideo ever had. And ten years gives us the hindsight to see that Ideo’s supposedly genius shopping cart didn’t exactly catch on — in retrospect it was a fairly flawed design in many ways (in a busy grocery store, how many carts can reasonably be left at the end-caps while shoppers walk about with the hand-baskets?).

But for anyone who isn’t familiar with the essence of what many people I know call “user experience design,” this show is still an excellent teaching tool. You can see people viscerally react to it — the sudden realization of how messy design is by nature, how much it depends on physically experiencing your potential users, how the culture needed for creative collaboration has to be cultivated and protected from the Cartesian efficiencies and expectations of the traditional business world, and how important it is to have effective liaisons between those cultures, as well as a wise approach to structuring the necessary turbulence that creative work brings.

Then again, maybe everybody doesn’t see all that … but I’ve seen it happen.

What I find amazing, however, is this: even back then, they were saying this was the most-requested video order from ABC. This movie has been shown countless times in meetings and management retreats. And yet, the basic approach is still so rare to find. The Cartesian efficiencies and expectations form a powerful presence. What it comes down to is this: making room for this kind of work to be done well is hard work itself.

And that’s why Ideo is still in business.

I’ve been puzzling over what I was getting at last year when I wrote about “flourishing.” For a while now I’ve been clearer about what I meant… and I’ve realized it wasn’t the right term. Now I’m trying “mixpression” on for size.

What I meant by “flourishing” is the act of extemporaneously mixing other media besides verbal or written-text language in our communication. That is: people using things like video clips or still images with the same facility and immediacy that they now use verbal/written vocabulary. “Mixpression” is an ungainly portmanteau, I’ll admit. But it’s more accurate.

(Earlier, I think I had this concept overlapping too much with something called “taste performance” — more about which, see bottom of the post.)

Victor Lombardi quotes an insightful bit from Adam Gopnik on his blog today: Noise Between Stations » Images That Sum Up Our Desires.

We are, by turn — and a writer says it with sadness — essentially a society of images: a viral YouTube video, an advertising image, proliferates and sums up our desires; anyone who can’t play the image game has a hard time playing any game at all.
– Adam Gopnik, Angels and Ages: A Short Book About Darwin, Lincoln, and Modern Life, p 33

When I heard Michael Wesch (whom I’ve written about before) at IA Summit earlier this month, he explained how his ethnographic work with YouTube showed people having whole conversations with video clips — either ones they made themselves, or clips from mainstream media, or remixes of them. Conversations, where imagery was the primary currency and text or talk were more like supporting players.

Here’s the thing — I’ve been hearing people bemoan this development for a while now. How people are becoming less literate, or less “literary” anyway, and how humanity is somehow regressing. I felt that way for a bit too. But I’m not so sure now.

If you think about it, this is something we’ve always had the natural propensity to do. Even written language evolved from pictographic expression. We just didn’t have the technology to immediately, cheaply reproduce media and distribute it within our conversations (or to create that media to begin with in such a way that we could then share it so immediately).

Here’s the presentation I did for IA Summit 2009 in Memphis, TN. It’s an update of what I did for IDEA 2008; it’s not hugely different, but I think it pulls the ideas together a little better. The PDF is downloadable from SlideShare. The notes are legible only at full screen or in the PDF.