Search Results

Your search for “context” returned the following results.

Context Collapse

First of all, I didn’t realize that Michael Wesch had a blog. Now that I’ve found it, I have a lot of back-reading to do.

But here’s a recent post of his on the subject of Context, as it relates to webcams and YouTube-like expression: Digital Ethnography — Context Collapse

The problem is not lack of context. It is context collapse: an infinite number of contexts collapsing upon one another into that single moment of recording. The images, actions, and words captured by the lens at any moment can be transported to anywhere on the planet and preserved (the performer must assume) for all time. The little glass lens becomes the gateway to a black hole sucking all of time and space – virtually all possible contexts – in upon itself.

By the way, I’m working on a talk on context for IDEA Conference. Are you registered yet?

As part of a larger, more political point in his column, George Will explains something about structuring systems so as to “nudge” people toward a particular behavior pattern without mandating anything: George F. Will: Nudge Against the Fudge

Such is the power of inertia in human behavior, and the tendency of individuals to emulate others’ behavior, that there can be huge social consequences from the clever framing of the choices that nudgeable people—almost all of us—make. Choice architects understand that every choice is made in a context, and that contexts are not “neutral”—they inevitably encourage certain outcomes. Organizing the context can promote outcomes beneficial to choosers and, cumulatively, to society.

It describes the thesis behind the book “Nudge: Improving Decisions About Health, Wealth, and Happiness,” from a couple of people who just happen to also be advising Obama.

Will’s examples are things like automatic-yet-optional enrollment in an employer’s 401k, or automatic-yet-optional defaulting organ-donor checkboxes on drivers’ licenses.

But, beyond the implications for government (which I think are fascinating, but don’t have time to get into right now), I think this is an excellent way of articulating something I’ve been trying to explain for quite a while about digital environments. Basically: even in digital environments, there are ways to ‘nudge’ people’s decisions — both explicit and tacit — through the way you shape the focus of an interface, the default choices, the recommended paths. And you can do all of that while still giving people plenty of freedom.

To the more libertarian or paranoid folks, this might sound horribly big-brother. But that objection only holds if there’s a choice between a system and no system at all. The assumption is that — as with government — anarchy isn’t an option and you have to build *something*. Once you acknowledge that you have to build it, you have to make these decisions anyway. Why not make them with some coherent, whole understanding of the healthiest, most beneficial outcomes?

The question then becomes, what is “beneficial” and to whom? That’ll be driven by a given organization’s goals and values. But the technique is neutral — and should be considered in the design of any system.
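To make the technique concrete in interface terms, here's a minimal, hypothetical sketch of choice architecture in code: the same free choice, framed with two different defaults. (The names and structure are my own invention for illustration, not from any particular product.)

```python
# Hypothetical sketch: the same free choice, framed two ways.
# The "nudge" lives entirely in which option is pre-selected;
# the user can always override it.

from dataclasses import dataclass

@dataclass
class EnrollmentChoice:
    label: str
    enrolled_by_default: bool  # this default is the nudge

# Opt-in framing: inertia works against enrollment.
opt_in = EnrollmentChoice("Enroll me in the 401(k) plan",
                          enrolled_by_default=False)

# Opt-out framing: same freedom, very different aggregate outcome.
opt_out = EnrollmentChoice("Keep my automatic 401(k) enrollment",
                           enrolled_by_default=True)

def render_checkbox(choice: EnrollmentChoice) -> str:
    checked = "checked" if choice.enrolled_by_default else ""
    return f'<label><input type="checkbox" {checked}> {choice.label}</label>'

print(render_checkbox(opt_in))
print(render_checkbox(opt_out))
```

Neither version mandates anything; the structure of the context does the nudging.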

Ubiquitous computing research from AIGA

AIGA – Augmenting the City: The Design of a Context-Aware Mobile Web Site

The produced solution augments the city through web-based access to a digital layer of information about people, places and activities adapted to users’ physical and social context and their history of social interactions in the city.

I spotted an article in the latest Metropolis Magazine touting a new, “innovative” approach to “make research a fundamental factor in all phases of product development.” (The article isn’t on the site, though.)

I’m glad the approach is getting press, but I was a little surprised to see it called “innovative,” since I thought it was already fairly widely known.

IIT’s website luckily has a PDF of the article that you’d normally have to buy the magazine to see.

Institute of Design : In The News

Here’s a quick link to the PDF download. (Not large at all.)

I’ve been pretty busy since my last blog post in December, when Understanding Context launched. Some really great work with clients, lots of travel, and a number of appearances at events have kept me happily occupied. Some highlights:

Talks and Things

O’Reilly: Webcast for Understanding Context, presented on June 10. Luckily, a quick registration lets you watch the whole thing for free!

IA Summit: where I co-facilitated a workshop on Practical Conceptual Modeling with my TUG colleagues Kaarin Hoff and Joe Elmendorf. (See the excellent post Kaarin created at TUG, summarizing choice bits of the workshop.)

SXSW Workshop: I taught an invited workshop at SXSW with my colleague Dan Klyn, on “Information Architecture Essentials” — which was wildly successful and well-reviewed. We’re happy to say we’ll be teaching versions of this workshop again this year, at IA Summit Italy, and WebVisions Chicago!

UX Lisbon: where I taught a workshop on analyzing and modeling context for user experiences (which I also taught in abbreviated form at IA Summit, and which I’ll be reprising at UX Week later this summer).

UX Podcast: While in Lisbon, I had the pleasure of doing a joint podcast interview with Abby Covert, hosted by the nice folks at UX Podcast.

Upcoming Appearances

As mentioned above, there are some upcoming happenings — I encourage you to sign up for any that aren’t already sold out!

Throughout 2013 and part of 2014, I gave various versions of a talk entitled “The World is the Screen”. (The subtitle varied.)

The general contention of the talk: as planners and makers of digital things and places that are increasingly woven into the fabric of the world around us, we have to expand our focus to understanding the whole environment that people inhabit, not just specific devices and interfaces.

As part of that mission, we need to bring a more rigorous perspective to understanding our materials. Potters and masons and painters, as they mature in their work, come to understand their materials better and more deeply than they would expect the users of their creations to understand them. I argue that our primary material is information … but we don’t have a good, shared concept of what we mean when we say “information.”

Rather than trying to define information in just one way, I picked three major ways in which information affects our world, and the characteristics behind each of those modes. Ultimately, I’m trying to create some foundations for maturing how we understand our work, and how it is more about environments than objects (though objects are certainly critical in the context of the whole).

Anyway … the last version of the talk I gave was at ConveyUX in Seattle. It’s a shorter version, but I think it’s the most concise and clear one. So I’m embedding it below. [Other, prior (and longer) versions are also on Speakerdeck – one from IA Summit 2013, and one from Blend Conference 2013. I also posted about it at The Understanding Group.]

I haven’t been inkblurting much here for a few months. There are a few reasons.

1. I’ve been writing and revising a book that I’ve been hammering away at for the last two years. I started writing it based on hunches about its subject, and vaguely literary aspirations of a “thought piece” sort of nonfiction tome that would be just so fascinating… only to discover that I really didn’t know what the hell I was writing about, and had to learn some actual science and stuff before I could say anything with any credibility. I mean, I’ve been doing information architecture and interaction design for a pretty long time, so I had that credibility, but when it comes to things like embodied cognition or how language works, well … it feels like I’ve been going back to grad school. But I’m glad I did the work, and it’s turning out nicely, at least from what I can tell from my bleary-eyed perspective, knotted like a homunculus in my digital bunker, gutting my overlong, meandering first draft, and wrangling what’s left into something I hope will do the job. Writing, man. Whaddya do?

2. I’ve also been posting the occasional bit over at the blog for my delightful employer, TUG (The Understanding Group). A few of them: some thoughts on how no project is ever just what we see on its face, so we should design the “meta” side of the project as much as the thing the project is supposedly for; another on how information architecture and business strategy have a long relationship that’s becoming even more interdependent; and a couple of posts about stuff I presented at Midwest UX, including a workshop I co-led with my colleague Dan Eizans on Making Places with IA & Content Strategy, and my solo talk about maps and territories and how language creates places. I’m fairly obsessed with this whole language-as-infrastructure thing, which is leading me to also do a talk on that topic at IA Summit this March in San Diego.

3. Speaking of IA Summit, I’m proud and pleased to be co-teaching a new workshop there with the brilliant and wise Jorge Arango. It’s about Information Architecture Essentials, and proceeds will go to the Information Architecture Institute. We hope the content will be enlightening and useful: a nice overview of some basic IA stuff, but also of where IA is headed as a practice & discipline. We think old hands will get something out of it, not just newcomers. And fear not: although there will be “theory,” we’re packing it with practical goodness, and structuring it along a typical project timeline. Groundedness FTW.

I’ve been thinking a lot lately about responsiveness in design, and how we can build systems that work well in so many different contexts, on so many different devices, for so many different scenarios. So many of our map-like ways of predicting and designing for complexity are starting to stretch at the seams. I have to think we are soon reaching a point where our maps simply will not scale.

Then there are the secret-sauce, “smart” solutions that promise they can take care of the problem. It seems to happen on at least every other project: one or more stakeholders are convinced that the way to make their site/app/system truly responsive to user needs is to employ some kind of high-tech, cutting-edge technology.

This can range from Clippy-like “helpers” that magically know what the user needs, to “conversation engines” that try to model a literal conversational interaction with users (like Jellyvision), to established technologies like the “collaborative filtering” technique pioneered by places like Amazon.
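For what it's worth, the core idea behind collaborative filtering is less exotic than its secret-sauce reputation suggests. Here's a toy, item-based sketch with invented data; real systems are vastly more elaborate, but the principle is just "recommend what co-occurs with what you already have":

```python
# Toy item-based collaborative filtering, with invented data.
# Recommend items that co-occur with items the user already has.

from collections import Counter
from itertools import combinations

purchases = {
    "ann":  {"book", "lamp", "desk"},
    "bob":  {"book", "lamp"},
    "cara": {"lamp", "chair"},
}

def co_occurrence(history: dict[str, set[str]]) -> Counter:
    """Count how often each ordered pair of items appears together."""
    counts: Counter = Counter()
    for items in history.values():
        for a, b in combinations(sorted(items), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

def recommend(user: str, history: dict[str, set[str]], top: int = 3) -> list[str]:
    """Score unowned items by their co-occurrence with the user's items."""
    owned = history[user]
    counts = co_occurrence(history)
    scores: Counter = Counter()
    for item in owned:
        for (a, b), n in counts.items():
            if a == item and b not in owned:
                scores[b] += n
    return [item for item, _ in scores.most_common(top)]

print(recommend("bob", purchases))  # ['desk', 'chair']: what bob's neighbors have
```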

Most of the time, these sorts of solutions hold out more promise than they can fulfill. They aren’t bad ideas — even Clippy had merit as a concept. But to my mind, more often than not, these fancy approaches to the problem are a bit like building a 747 to take people across a river — when all that’s needed is a good old-fashioned bridge. That is, most of the time the software in question isn’t doing the basics. Build a bridge first, then let’s talk about the airliner.

Of course, there are genuine design challenges that do seem to still need that super-duper genius-system approach. But I still think more “primitive” methods can do most of the work: combinations of simple mechanisms and structures that can actually handle a great deal of complexity.

We have a cognitive bias that makes us think that anything that seems to respond to a situation in a “smart” way must be “thinking” its way through the solution. But it turns out, that’s not how nature solves complex problems — it’s not even really how our bodies and brains work.

I think the best kind of responsiveness would follow the model we see in nature — a sort of “embodied” responsiveness.

I’ve been learning a lot about this through research for the book on designing context I’m working on now. There’s a lot to say about this … a lot … but I need to spend my time writing the book rather than a blog post, so I’ll try to explain by pointing to a couple of examples that may help illustrate what I mean.

Consider two robots.

One is Honda’s famous Asimo. It’s a humanoid robot that is intricately programmed to handle situations … for which it is programmed. It senses the world, models the world in its brain and then tells the body what to do. This is, by the way, pretty much how we’ve assumed people get around in the world: the brain models a representation of the world around us and tells our body to do X or Y. What this means in practice, however, is that Asimo has a hard time getting around in the wild. Modeling the world and telling the limbs what to do based on that theoretical model is a lot of brain work, so Asimo has some major limitations in the number of situations it can handle.  In fact, it falls down a lot (as in this video) if the terrain isn’t predictable and regular, or if there’s some tiny error that throws it off. Even when Asimo’s software is capable of handling an irregularity, it often can’t process the anomaly fast enough to make the body react in time. This, in spite of the fact that Asimo has one of the most advanced “brains” ever put into a robot.

Another robot, nicknamed Big Dog, comes from a company called Boston Dynamics. This robot is not pre-programmed to calculate its every move. Instead, its body is engineered to respond in smart, contextually relevant ways to the terrain. Big Dog’s brain is actually very small and primitive, but the architecture of its body is such that its very structure handles irregularity with ease, as seen in this video where, about 30 seconds in, someone tries to kick it over and it rights itself.

The reason Big Dog can handle unpredictable situations is that its intelligence is embodied. It isn’t performing computations in a brain — the body is structured in such a way that it “figures out” the situation by the very nature of its joints, angles and articulation. The brain is just along for the ride, providing a simple network for the body to talk to itself. As it turns out, this is actually much more like how humans get around — our bodies handle a lot more of our ‘smartness’ than we realize.
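A crude way to feel the difference in code: a toy spring-damper "leg" recovering from a shove it never predicted. The numbers below are invented for illustration and have nothing to do with either robot's actual control software; the point is that the recovery behavior lives in the structure's parameters, not in a planner.

```python
# Toy sketch of "embodied" response: a passive spring-damper leg
# recovering from an unplanned shove, with no world model at all.
# All parameters are invented for illustration.

STIFFNESS = 20.0  # spring constant: the "shape" of the body's response
DAMPING = 4.0     # friction that bleeds off the disturbance
DT = 0.05         # integration time step

def settle(position: float, velocity: float, steps: int = 60) -> float:
    """Integrate a spring-damper pulling position back toward 0 (balance)."""
    for _ in range(steps):
        accel = -STIFFNESS * position - DAMPING * velocity
        velocity += accel * DT
        position += velocity * DT
    return position

# An unexpected kick displaces the "leg" by 0.5 units...
print(f"after the kick: {settle(position=0.5, velocity=0.0):.4f}")
# ...and the structure returns it to near zero. No planner, no terrain
# model: the recovery is built into stiffness and damping themselves.
```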

I won’t go into much more description here. (And if you want to know more, check this excellent blog post on the topic of the robots, which links/leads to more great writing on embodied/extended cognition & related topics.)

The point I’m getting at is that there’s something to be learned here in terms of how we design information environments. Rather than trying to pre-program and map out every possible scenario, we need systems that respond intelligently by the very nature of their architectures.

A long time ago, I did a presentation where I blurted out that eventually we will have to rely on compasses more than maps. I’m now starting to get a better idea of what I meant. Simple rules, simple structures, that combine to be a “nonlinear dynamical system.” The system should perceive the user’s actions and behaviors and, rather than trying to model in some theoretical brain-like way what the user needs, the system’s body (for lack of a better way to put it) should be engineered so that its mechanisms bend, bounce and react in such a way that the user feels as if the system is being pretty smart anyway.
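As a hypothetical software analogue, here's about the simplest "bend and react" mechanism I can think of: a menu that drifts toward a user's behavior through one local rule, with no user model and no prediction. (The rule and the names are mine, purely for illustration.)

```python
# Hypothetical sketch: a menu that "bends" toward the user's behavior
# through one simple local rule: decay everything a little, boost what
# was just used, re-sort. No user model, no prediction.

def reorder(menu: list[str], usage: dict[str, float], clicked: str,
            decay: float = 0.9) -> list[str]:
    for item in usage:
        usage[item] *= decay               # old habits fade slowly
    usage[clicked] = usage.get(clicked, 0.0) + 1.0
    return sorted(menu, key=lambda item: -usage.get(item, 0.0))

menu = ["home", "reports", "settings", "export"]
usage: dict[str, float] = {}
for click in ["export", "export", "reports", "export"]:
    menu = reorder(menu, usage, click)

print(menu)  # ['export', 'reports', 'home', 'settings']
```

The system never "understands" the user; its structure just flexes in response to use, and the result feels smarter than the rule deserves.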

At some point I’d like to have some good examples for this, but the ones I’m working on most diligently at the moment are NDA-bound. When I have time I’ll see if I can “anonymize” some work well enough to share. In the meantime, keep an eye on those robots.


I joined Path on December 1st, 2011. I know this because it says so, under my “path” in the application on my iPhone.
That same day, I posted this message in the app:

“Wondering how Path knew whom to recommend as friends?!?”

I’ve used a lot of social software over the years (technically since 1992 when the Internet was mainly a social platform, before the e-commerce era), and I do this Internet stuff for a living, so I have a pretty solid mental model for where my data is and what is accessing it. But this was one of those moments where I realized something very non-transparent was happening.

How did it know? 

Path was very smartly recommending users on Path to me, even though it knew nothing about me other than my email address and the fact that it was on my phone. I hadn’t given it a Twitter handle; I hadn’t given it the same email address I use on Facebook (which isn’t public anyway). So how did it know?
I recall deciding, in a dinner conversation with co-workers, that it must just be checking the address book on my phone. That bugged me, but I let it slide.
Now, I’m intrigued with why I let it go so easily. I suspect a few reasons:

  • Path had positioned itself as an app for intimate connections with close friends. It set the expectation that it was going to be careful and safe, more closed than most social platforms.
  • It was a very pleasing experience to use the app; I didn’t want to just stop using it, but wanted to keep trying it out.
  • I was busy and in the middle of a million other things, so I didn’t take the time to think much about it beyond that initial note of dismay.
  • I assumed it was only checking names of contacts and running some kind of smart matching algorithm — no idea why I thought this, but I suppose the character of the app caused me to assume it was using a very light touch.

Whatever the reasons, Path set me up to assume a lot about what the app was and what it was going to do. After a few weeks of using it sporadically, I started noticing other strange things, though.

  • It announces, on its own, when I have entered a new geographical area. I had been assuming it was only showing me this information, but then I looked for a preference to set it as public or private and found none. But since I had no way of looking at my own path from someone else’s point of view, I had to ask a colleague: can you see that I just arrived in Atlanta? He said yes, and we talked about how odd that was… no matter how close your circle of friends, you don’t necessarily want them all knowing where you are without saying so.
  • When someone “visited my path” it would tell me so. But it wasn’t entirely clear what that meant. “So and so visited your path” sounds like they walked up to the front of my house and spent a while meditating on my front porch, but in reality they may have just accidentally tapped something they thought would allow them to make a comment but ended up in my “path” instead. And the only way to dismiss this announcement was to tap it, which took me to that person’s path. Were they now going to get a message saying I had visited their path? I didn’t know … but I wondered if it would give other users a misleading picture of what I’d done.
  • Path also relies on user pictures to convey “who” … if someone just posts a picture, it doesn’t say the name of the person, just their user picture. If the picture isn’t of the person (or is blank) I have no idea who posted it.

All of these issues, and others, add up to what I’ve been calling Context Management — the capabilities that software should be giving us to manage the multifaceted contexts it exposes us to, and that it allows us to create. Some platforms have been getting marginally better at this (Facebook with its groups, Google + with its circles) but we’re a long way from solving these problems in our software. Since these issues are so common, I mostly gave Path a pass — I was curious to see how it would evolve, and if they’d come up with interesting solutions for context management.

It Gets Worse

And now this news … that Path is actually uploading your entire address book to Path’s servers in order to run matching software and present possible friends.

Once I thought about it for half a minute, I realized, well yeah of course they are. There’s no way the app itself has all the code and data needed to run sophisticated matching against Path’s entire database. They’d have to upload that information, the same way Evernote needs you to upload a picture of a document in order to run optical character recognition. But Evernote actually tells me it’s doing this … that there’s a cloud of my notes, and that I have to sync that picture in order for Evernote to figure out the text. But Path mentioned nothing of the sort. (I haven’t read their license agreement that I probably “signed” at some point, because nobody ever reads that stuff — I’d get nothing else done in life if I actually read the terms & conditions of every piece of software I used; it’s a broken concept; software needs to explain itself in the course of use.)

When you read the discussion going on under the post I linked to, you see the Path CEO joining in to explain what they did. He seems like a nice chap, really. He seems to actually care about his users. But he evidently has a massive blind spot on this problem.

The Blind Spot

Here’s the deal: if you’re building an app like Path and look at user adoption as mainly an engineering problem, you’re going to come to the same conclusion Path did. To get people to use Path, they have to be connected to friends and family, and in order to prime that pump, you have to go ahead and grab contact information from their existing social data. And if you’re going to do that effectively, you’re going to have to upload it to a system that can crunch it all so it surfaces relevant recommendations, making it frictionless for users to start seeding their network within the Path context.

But what Path skipped was the step that most such platforms take: asking your permission to look at and use that information. They essentially made the same mistake Google Buzz and Facebook Beacon did — treating your multilayered, complex social sphere as a database where everyone is suddenly in one bucket of “friends” and assuming that grabbing that information is more important than helping you understand the rules and structures you’ve suddenly agreed to live within.
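To be concrete about what "asking first" can look like, here is a minimal, hypothetical sketch of a consent-gated contact match, where contacts are hashed on the device before anything is uploaded. This is the general pattern, not Path's (or anyone's) actual code; note too that unsalted hashes of phone numbers can be reversed by brute force, so this is a floor, not a ceiling.

```python
# Hypothetical consent-first contact matching. Not any real app's API.
# Raw names and numbers never leave the phone; only digests do, and
# only after the user explicitly says yes.

import hashlib

def hash_contact(phone_number: str) -> str:
    # Caveat: an unsalted hash of a phone number is easy to brute-force;
    # real systems want salting or private-set-intersection protocols.
    return hashlib.sha256(phone_number.encode()).hexdigest()

def upload_for_matching(hashed_contacts: list[str]) -> list[str]:
    # Stand-in for the server round-trip: imagine matching these digests
    # against hashes of existing members' numbers.
    return []

def find_friends(contacts: list[str], user_consented: bool) -> list[str]:
    if not user_consented:
        return []  # the step Path skipped: no consent, no upload
    hashed = [hash_contact(c) for c in contacts]
    return upload_for_matching(hashed)

print(find_friends(["+15555550123"], user_consented=False))  # []
```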

Using The Right Lenses

For Path, asking your permission to look at your contacts (or your Twitter feed, or whatever else) would add friction to adoption, which isn’t good for growing their user base. So, like Facebook has done so many times, they err on the side of what is best for their growth rather than what is best for users’ peace of mind and control of their contextual reality. It’s not an evil, calculated position. There’s no cackling villain planning how to expose people’s private information.

It’s actually worse than that: it’s well-meaning people looking only through a couple of lenses and simply not seeing the problem, which can be far more dangerous. In this case, the lenses are:

  • Aesthetics (make it beautiful so people want to touch it and look at it),
  • Small-bore interaction design (i.e. delightful & responsive interaction controls),
  • Engineering (very literally meeting a list of decontextualized requirements with functional system capabilities), and
  • Marketing (making the product as viral as possible, for growth and market valuation purposes).

What’s missing?

  • Full-fledged interaction design (considering the entire interaction framework within which the small, delightful interactions take place — creating a coherent language of interaction that actually makes sense rather than merely window-dresses with novelty)
  • Content strategy (in part affecting the narrative around the service that clearly communicates what the user’s expectations should be: is it intimate and “safe” or just another social platform?)
  • Information architecture (a coherent model for the information environment’s structure and structural rules: where the user is, where their information lives, what is being connected, and how user action is affecting contexts beyond the one the user thinks they’re in — a structural understanding largely communicated by content & interaction design, by the way)

I’m sure there’s more. But what you see above is not an anomaly. This is precisely the diagnosis I would give nearly every piece of software I’m seeing launched. Path is just an especially egregious example, in part because its beauty and other qualities stand in such stark contrast to its failings.

Path Fail is UX Fail

This is in part what some of us in the community are calling the failure of “user experience design” culturally: UX has largely become a buzzword for the first list, in the rush to crank out hip, interactively interesting software. But “business rules,” which effectively act as the architecture of the platform, are driven almost entirely by business concerns; content is mostly overlooked for any functional purpose beyond giving a fun, hip tone to the brand of the platform; and interaction design is mainly driven by designers more concerned with performances of “taste” and “innovative” UI than with creating a rigorously considered, coherent experience.

If a game developer released something like this, they’d be crushed. The incoherence alone would make players throw up their hands in frustration and move on to a competitor in a heartbeat; Metacritic would destroy its ability to make sales. How is it, then, that we have such low standards and give such leeway to the applications being released for everything else?

So, there’s my rant. Will I keep using Path? Well … damn… they already have most of my most personal information, so it’s not like leaving them is going to change that. I’m going to ride it out, see if they learn from mistakes, and maybe show the rest of the hip-startup software world what it’s like to fail and truly do better. They have an opportunity here to learn and come back as a real champion of the things I mentioned above. Let’s hope for the best.

From the point of view of a binary mindset, identity is a pretty simple thing. You, an object = [unique identifier]. You as an object represented in a database should be known by that identifier and none other, or else the data is a mess.

The problem is, people are a mess. A glorious mess. And identity is not a binary thing. It’s much more fluid, variegated and organic than we are comfortable admitting to ourselves.
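To put the contrast in data-model terms: the binary mindset gives you one row per person and one canonical name, while a more humane model treats names as facets, each scoped to a context. Here's a hypothetical sketch (my own invented structure, not any platform's actual schema):

```python
# Hypothetical schema sketch: one account, many context-scoped personas.
# The "binary mindset" version collapses all of this into a single row
# with a single canonical name.

from dataclasses import dataclass, field

@dataclass
class Persona:
    display_name: str  # "J. Smith", "nightowl", "DJ Anonymoose"...
    context: str       # where this facet of the self gets used

@dataclass
class Account:
    account_id: str                      # the only truly unique thing
    personas: list[Persona] = field(default_factory=list)

    def name_in(self, context: str) -> str:
        for p in self.personas:
            if p.context == context:
                return p.display_name
        return "anonymous"  # sharing no facet here is a valid choice too

me = Account("u-1017", [
    Persona("J. Smith", context="work"),
    Persona("nightowl", context="support-forum"),
])

print(me.name_in("support-forum"))  # 'nightowl': real, just not "legal"
```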

Lately there’s been some controversy over policies at Facebook and the newly ascendant Google + that demand people use their “real” names. Both companies have gone so far as to actually pull the plug on people who they suspect of not following those guidelines.

But this is actually a pretty wrong-headed thing to do. Not only does the marketplace of ideas have a long, grand tradition of the use of pseudonyms (see my post here from a couple years ago), but people have complex, multifaceted lives that often require they not put their “public identification attribute” (i.e. their ‘real name’) out there on every expression of themselves online.

There are a lot of stories emerging, such as this one about gender-diverse people who feel at risk when forced to expose their real names. These stories are showing us the canaries in the proverbial coal mine — the people first affected by these policies — dropping off in droves.

But millions of others will feel the same pressures in more subtle ways too. Danah Boyd has done excellent work on this subject, and her recent post explains the problem as well as anyone, calling the policies essentially an “abuse of power.”

I’m sure it comes across as abusive, but I do think it’s mostly unwitting. I think it’s a symptom of an engineering mindset (object has name, and that name should be used for object) and a naive belief in transparency as an unalloyed “good.” But on an internet where your name can be searched and found in *any* context in which you have ever expressed yourself, what about those conversations you want to be able to have without everyone knowing? What about the parts of yourself you want to be able to explore and discover using other facets of your personality? (Sherry Turkle’s early work is great on this subject.)

I can’t help but think a Humanities & Social Sciences influence is so very lacking among the code-focused, engineering-cultured wizards behind these massive information environments. There’s a great article by Paul Adams, formerly of Google (and Google +), discussing the social psychology angle and how it influenced “Circles,” how Facebook got it somewhat wrong with “Groups,” and why he ended up at Facebook anyway. But voices like his seem to be in the minority among those who are actually making this stuff.

Seeing people as complex coalescences of stories, histories, desires, relationships and behaviors means giving up on a nice, clean entity-relationship-diagram-friendly way of seeing the world. It means having to work harder on the soft, fuzzy complicated stuff between people than the buckets you want people to put themselves in. We’re a long way from a healthy, shared understanding of how to make these environments human enough.

UPDATE:
I realize now that I neglected to mention the prevailing theory of why platforms are requiring real names: marketing purposes. That could very well be. But that, too, is just another cultural force in play. And I think there’s a valid topic to be addressed regarding the binary-minded approach to handling things like personal identity.

There’s an excellent post on the subject at The Atlantic. It highlights a site called My Name is Me, which describes itself as “Supporting your freedom to choose the name you use on social networks and other online services.”

On Cyberspace

A while back, I posted a rant about information architecture that invoked the term “cyberspace.” I, of course, received some flak for using that word. It’s played out, people say. It invokes dusty 80s-90s “virtual reality” ideas about a separate plane of existence … Tron-like cyber-city vistas, bulky goggles & body-suits, and dystopian worlds. Ok…yeah, whatever. For most people that’s probably true.

So let’s start from a different angle …

Over the last 20 years or so, we’ve managed to cause the emergence of a massive, global, networked dimension of human experience, enabled by digital technology.

It’s the dimension you visit when you’re sitting in the coffee shop catching up on your Twitter or Facebook feed. You’re “here” in the sense of sitting in the coffee shop. But you’re also “there” in the sense of “hanging out ‘on’ <Twitter/Facebook/Whatever>.”

It’s the dimension brave, unhappy citizens of Libya are “visiting” when they read, in real-time, the real words of regular people in Tunisia and Egypt, that inspire them to action just as powerfully as if those people were protesting right next to them. It may not be the dimension where these people physically march and bleed, but it’s definitely one dimension where the marching and bleeding matter.

I say “dimension” because for me that word doesn’t imply mutual exclusivity between “physical” and “virtual”: you can be in more than one “dimension” at once. It’s a facet of reality, but a facet that runs the length and breadth of that reality. The word “layer” doesn’t work, because “layer” implies a separate stratum. (Even though I’ve used “layer” off and on for a long time too…)

This dimension isn’t carbon-based, but information-based. It’s specifically human, because it’s made for, and bound together with, human semantics and cognition. It’s the place where “knowledge work” mostly happens. But it’s also the place where, more and more, our stories live, and where we look to make sense of our lives and our relationships.

What do we call this thing?

Back in 2006, Wired Magazine had a feature on how “Cyberspace is Dead.” They made the same points about the term that I mention above, and asked some well-known futurist-types to come up with a new term. But none of the terms they mentioned have seemed to stick. One person suggested “infosphere” … and I myself tried terms like “infospace” in the past. But I don’t hear anyone using those words now.

Even “ubiquitous computing” (Vint Cerf’s suggestion, but the late Mark Weiser’s coinage) has remained a specialized term of art within a relatively small community. Plus, honestly, it doesn’t capture the dimensionality I describe above … it’s fine as a term for the activity of  “computing” (hello, antiquated terminology) from anywhere, and for reminding us that computing technology is ubiquitously present, but doesn’t help us talk about the “where” that emerges from this activity.

There have been excellent books about this sort of dimension, with titles like Everyware, Here Comes Everybody, Linked, Ambient Findability, Smart Things … books with a lot of great ideas, but without a settled term for this thing we’ve made.

Of course, this raises the question: why do we need a term for it? As one of the people quoted in the Wired article says, aren’t we now just talking about “life”? Yeah, maybe that’s OK for most people. We used to say “e-business” because it was important to distinguish internet-based business from regular business … but in only a few years, that distinction has been effaced to meaninglessness. What business *isn’t* now networked in some way?

Still, for people like me who are tasked with designing the frameworks — the rule sets and semantic structures, the links and cross-experiential contexts — I think it’s helpful to have a term of art for this dimension, because it behaves differently from the legacy space we inherited.

It’s important to be able to point at this dimension as a distinct facet of the reality we’re creating, so we can talk about its nature and how best to design for it. Otherwise, we go about making things using assumptions hardwired into our brains from millions of years of physical evolution, and miss out on the particular power (and overlook the dangers) of this new dimension.

So, maybe let’s take a second look at “cyberspace” … could it be redeemed?

At the Institute for the Future, there’s a paper called “Blended Reality” (yet another phrase that hasn’t caught on). In the abstract, there’s a nicely phrased statement [emphasis mine]:

We are creating a new kind of reality, one in which physical and digital environments, media, and interactions are woven together throughout our daily lives. In this world, the virtual and the physical are seamlessly integrated. Cyberspace is not a destination; rather, it is a layer tightly integrated into the world around us.

The writer who coined the term, William Gibson, was quoted in the “Cyberspace is Dead” piece as saying, “I think cyberspace is past its sell-by, but the problem is that everything has become an aspect of, well, cyberspace.” This strikes me, frankly, as a polite way of saying “yeah, I get your point, but I don’t think you get what I mean these days by the term.” Or, another paraphrase: “I agree the way people generally understand the term is dated and feels, well, spoiled like milk … but maybe you need to understand that’s not cyberspace.”

Personally, I think Gibson sees the neon-cyberpunk-cityscape, virtual-reality conception of cyberspace as pretty far off the mark. In articles and interviews I’ve read over the years, he’s referenced it on and off … but seems conscious of the fact that people will misunderstand it, and finds himself explaining his points with other language.

Frankly, though, we haven’t listened closely enough. In the same magazine as the “Cyberspace is Dead” article, seven years prior, Gibson published what I posit to be one of the foundational texts for understanding this … whatever … we’ve wrought. It’s an essay about his experience purchasing antique watches on eBay, called “My Obsession.” I challenge anyone to read this piece and then come up with a better term for what he describes.

It’s beautiful … so read the whole thing. But I’m going to quote the last portion here in full:

In Istanbul, one chill misty morning in 1970, I stood in Kapali Carsi, the grand bazaar, under a Sony sign bristling with alien futurity, and stared deep into a cube of plate glass filled with tiny, ancient, fascinating things.

Hanging in that ancient venue, a place whose on-site café, I was told, had been open, 24 hours a day, 365 days a year, literally for centuries, the Sony sign – very large, very proto-Blade Runner, illuminated in some way I hadn’t seen before – made a deep impression. I’d been living on a Greek island, an archaeological protectorate where cars were prohibited, vacationing in the past.

The glass cube was one man’s shop. He was a dealer in curios, and from within it he would reluctantly fetch, like the human equivalent of those robotic cranes in amusement arcades, objects I indicated that I wished to examine. He used a long pair of spring-loaded faux-ivory chopsticks, antiques themselves, their warped tips lent traction by wrappings of rubber bands.

And with these he plucked up, and I purchased, a single stone bead of great beauty, the color of apricot, with bright mineral blood at its core, to make a necklace for the girl I’d later marry, and an excessively mechanical Swiss cigarette lighter, circa 1911 or so, broken, its hallmarked silver case crudely soldered with strange, Eastern, aftermarket sigils.

And in that moment, I think, were all the elements of a real futurity: all the elements of the world toward which we were heading – an emerging technology, a map that was about to evert, to swallow the territory it represented. The technology that sign foreshadowed would become the venue, the city itself. And the bazaar within it.

But I’m glad we still have a place for things to change hands. Even here, in this territory the map became.

I’ve written before about how the map has become the territory. But I’d completely forgotten, until today, this piece I read over 10 years ago. Fitting, I suppose, that I should rediscover it now by typing a few words into Google, trying to find an article I vaguely remembered reading once about Gibson and eBay. As he says earlier in the piece quoted above, “We are mapping literally everything, from the human genome to Jaeger two-register chronographs, and our search engines grind increasingly fine.”

Names are important, powerful things. We need a name for this dimension that is the map turned out from itself, to be its own territorial reality. I’m not married to “cyberspace” — I’ll gladly call it something else.

What’s important to me is that we have a way to talk about it, so we can get better at the work of designing and making for it, and within it.


Note: Thanks to Andrea Resmini & Luca Rosati for involving me in their work on the upcoming book, Pervasive IA, from which I gleaned the reference to the Institute for the Future article I mentioned above.

I’m happy to announce I’m collaborating with my Macquarium colleague, Patrick Quattlebaum, and Happy Cog Philadelphia’s inimitable Kevin Hoffman on presenting an all-day pre-conference workshop for this year’s Information Architecture Summit, in Denver, CO. See more about it (and register to attend!) on the IA Summit site.

One of the things I’ve been fascinated with lately is how important it is to have an explicit understanding of the organizational and personal context not only of your users, but of the corporate environment you’re working in, whether it’s your client’s or your own as an internal employee. When engaging over a project, having an understanding of motivations, power structures, systemic incentives and the rest of the mechanisms that make an organization run is immeasurably helpful to knowing how to go about planning and executing that engagement.

It turns out, we have excellent tools at our disposal for understanding the client: UX design methods like contextual inquiry, interviews, collaborative analysis interpretation, personas/scenarios, and the like; all these methods are just as useful for getting the context of the engagement as they are for getting the context of the user base.

Additionally, there are general rules of thumb that tend to be true in most organizations, such as how process starts out as a tool, but calcifies into unnecessary constraint, or how middle management tends to work in a reactive mode, afraid to clarify or question the often-vague direction of their superiors. Not to mention tips on how to introduce UX practice into traditional company hierarchies and workflows.

It’s also fascinating to me how understanding individuals is so interdependent with understanding the organization itself, and vice-versa. The ongoing explosion of new knowledge in social psychology and neuroscience  is giving us a lot of insight into what really motivates people, how and why they make their decisions, and the rest. These are among the topics Patrick & I will be covering during our portion of the workshop.

As the glue between the individual, the organization and the work, there are meetings. So half the workshop, led by Kevin Hoffman, will focus specifically on designing the meeting experience. It’s in meetings, after all, where all parties have to come to terms with their context in the organizational dynamics — so Kevin’s techniques for increasing not just the efficiency of meetings, but the human & interpersonal growth that can happen in them, will be invaluable. Kevin’s been honing this material for a while now, to rave reviews, and it will be a treat.

I’m really looking forward to the workshop; partly because, as in the past, I’m sure to learn as much or more from the attendees as they learn from the workshop presenters.
