My talk for Interaction 12 in Dublin, Ireland.
Another 10-minute, abbreviated talk.
Information Architecture, User Experience & Other Obsessions
I’ve been presenting on this topic for quite a while. It’s officially an obsession. And I’m happy to say there’s actually a lot of attention being paid to context lately, and that is a good thing. But it’s mainly from the perspective of designing for existing contexts in the world, and accommodating or responding appropriately to them.
For example, the ubicomp community has been researching this issue for many years — if computing is no longer tied to a few discrete devices and is essentially happening everywhere, in all sorts of parts of our environment, how can we make sure it responds in relevant, even considerate ways to its users?
Likewise, the mobile community has been abuzz about the context of particular devices, and how to design code and UI that shapes the experience based on the device’s form factor, and how to balance the strengths of native apps vs web apps.
And the Content Strategy practitioner community has been adroitly handling the challenges of writing for the existing audience, situational & media contexts that content may be published or syndicated into.
All of these are worthy subjects for our attention, and very complex challenges for us to figure out. I’m on board with any and all of these efforts.
But I genuinely think there’s a related, but different issue that is still a blind spot: we don’t only have to worry about designing for existing contexts, we also have to understand that we are often designing context itself.
In essence, we’ve created a new dimension, an information dimension that we walk around in simultaneously with the one where we evolved as a species; and this dimension can significantly change the meaning of our actions and interactions, with the change of a software rule, a link name or a label. There are no longer clear boundaries between “here” and “there” and reality is increasingly getting bent into disorienting shapes by this pervasive layer of language & soft-machinery.
My thinking on this central point has evolved over the last four to five years, since I first started presenting on the topic publicly. I’ve since been including a discussion of context design in almost every talk or article I’ve written.
I’m posting below my 10-minute “punchy idea” version developed for the WebVisions conference (iterations of this were given in Portland, Atlanta & New York City).
I’m also working on a book manuscript on the topic, but more on that later as it takes more shape (and as the publisher details are ironed out).
I’m really looking forward to delving into the topic with the attention and breadth it needs for the book project (with trepidation & anxiety, but mostly the positive kind ;-).
Of course, any and all suggestions, thoughts, conversations or critiques are welcome.
PS: as I was finishing up this post, John Seely Brown (whom I consider a patron saint) tweeted this bit: “context is something we constantly underplay… with today’s tools we can now create context almost as easily as content.” Synchronicity? More likely just a result of his writing soaking into my subconscious over the last 12-13 years. But quite validating to read, regardless :-)
I’m pasting the SlideShare-extracted notes below for reference.
I’m using this post to give a home to a video clip from the show M*A*S*H. I sometimes use the clip in presentations, but it doesn’t seem to be compatible with YouTube, so I’m putting it here instead. QuickTime m4v format; just click the link to view: French Toast
So, the short version of my point in this post (the “tl;dr” as it were) is this: possibly the most significant value of Second Life is as a pioneering platform for navigating & comprehending the pervasive information dimension in a ubiquitous/pervasively networked physical environment.
It’s easy to dismiss Second Life as kitsch now. Even though it’s still up and running, and evidently still providing a fulfilling experience for its dedicated user-base, it no longer has the sparkle of the Next Big Thing that the hype of several years ago brought to it.
I’ll admit, I was quite taken by it when I first heard of it, and I included significant commentary about it in presentations and writings I did at the time. But after only a few months, I started realizing it had serious limitations as a mainstream medium. For one thing, the learning curve for satisfying creation was too steep.
Three-dimensional modeling is hard enough with even the best tools, but Second Life’s composition toolset at the height of its popularity was frustratingly clumsy. Even if it had been state-of-the-art, however, it takes special knowledge & ability to draw in three dimensions. Unlike text-based MUDs, where anyone with a half-decent grasp of language could create relatively convincing characters, objects, and rooms, Second Life required everything to be made explicitly, literally. Prose allows room for gestalt — the reader can fill in the details with imagination. Not in an environment like Second Life, though.
Plus, to make anything interactive, you had to learn a fairly complex scripting language. Not a big deal for practiced coders, but for regular people it was daunting.
So, as Second Life attracted more users, it became more of a hideous tragedy-of-the-commons experience, with acres of random, gaudy crap lying about, and one strange shopping mall after another with people trying to make money on the platform selling clothing, dance moves, cars and houses — things that imaginative players would likely have preferred to make for themselves, but instead had to piece together through an expensive exercise in collage.
At the heart of what made so many end up dismissing the platform, though, was its claim to being the next Web … the new way everyone was supposed to interact digitally online.
I never understood why anyone was making that claim, because it always seemed untenable to me. Second Life was inspired by Neal Stephenson’s virtual reality landscape in Snow Crash (and somewhat more distantly, Gibson’s vision of “cyberspace”), and managed an adroit facsimile of how Stephenson’s fictional world sounded. But Stephenson’s vision was essentially metaphorical.
Still, beyond the metaphor issue, the essential qualities of the Web that made it so ubiquitous were absent from Second Life: the Web is decentralized, not just user-created but non-privatized and widely distributed. It exists on millions of servers run by millions of people, companies, universities and the like. The Web is also made of a technology that’s much simpler for creators to use, and perhaps most importantly, the Web is very open and easily integrated into everything else. Second Life never got very far with being integrated in that way, though it tried. The main problem was that the experience itself was not easily transferable to other media, devices etc. Even though they tried using a URL-like linking method that could be shared anywhere as text, the *content* of Second Life was essentially a “virtual reality” 3D visual experience — something that just doesn’t transfer well to other platforms, as opposed to the text, static images & videos we share so easily across the Web and so many applications & devices.
Well, now that I’ve said all that somewhat negative stuff about the platform, what do I mean by “what we learned”?
It seems to me Second Life is an example of how we sometimes rehearse the future before it happens. In SL, you inhabit a world that’s essentially made of information. Even the physical objects are, in essence, information — code that only pretends to be corporeal, but that can transform itself, disappear, reappear, whatever — a reality that can be changed as quickly as editing a sentence in a word processor.
While it’s true that our physical world can’t literally be changed that way, the truth is that the information layer that pervades it is becoming more substantial, more meaningful, and more influential in our experience of the world around us.
If “reality” is taken to be the sum total of all the informational and sensory experience we have of our environs, and we acknowledge that the informational (and to some degree sensory, as far as sight and sound go) layer is becoming dominated by digitally mediated, networked experience, then we are living in a place that is not too far off from what Second Life presents us.
Back when I was on some panels about Second Life, I would explain that the most significant aspect of the platform for user experience wasn’t the 3D space we were interacting with, but the “Viewer” — the mediating interface we used for navigating and manipulating that space. Linden Lab continually revised and matured the extensive menu-driven interface and search features to help inhabitants navigate that world, find other players & interest groups, or create layers of permissions rules for all the various properties and objects. It was flawed, frustrating, volatile — but it was tackling some really fascinating, complex problems around how to live in a fluid, information-saturated world where wayfinding had more to do with the information layer *about* the actual places than with the “physical” places themselves.
If we admit that the meaning & significance of our physical world is becoming largely driven by networked, digital information, we can’t ignore the fact that Second Life was pioneering the tools we increasingly need for navigating, searching, filtering & finding our way through our “real life” environments.
What a city “means” to us is tied up as much in the information dimension that pervades it — the labels & opinions, statistics & rankings, the stuff that represents it on the grid — as it is in the physical atoms we touch as we walk its sidewalks or drive through its streets, or as we sit in its restaurants and theaters. All those experiences are shaped powerfully by the reviews and tips on Yelp, or the record of a friend having been in a particular spot as recorded in Foursquare, or a picture we see on Flickr taken at a particular latitude and longitude. Or the real-time information about where our friends are *right now* and which places are kinda dead tonight. Not to mention the market-generated information about price, quantity & availability.
It’s always been the case that the narrative of a place has as much to do with how we experience the reality of the place as the physical sensations we have of it in person. But now that narrative has been made explicit, as a matter of record, and cumulative as well — from the interactions of everyone who has gone before us there and left some shadow of their presence, thoughts, reactions.
One day it would be interesting to compare all the ways in which various bits of software are helping us navigate this information dimension to the tools invented for inhabiting and comprehending the pure-information simulacra of Second Life. I bet we’d find a lot of similarities.
I posted the content below over on the Macquarium Blog, but I’m repeating here for posterity, and to first add a couple other thoughts:
1. It’s amazing how easily corporations can fool themselves into feeling good about the experiences they create for their users by making elaborate dreamscapes & public theater — as if the fictions they’re creating somehow make up for the reality of what they deliver (and the hard work it takes to make reality square in any way with that imagined experience). This reminds me a bit of Bret Victor’s excellent, well-executed dismemberment of this sort of thinking, posted this past week, about the silliness & laziness behind things like the Microsoft “everything is a finger-tap slab” future-porn. Go read it.
2. Viral videos like the Coca-Cola Happiness Machine don’t only fool the originating brand into feeling overconfident — they make the audience seeing the videos mistake the bit of feel-good emotion they receive for substantial experience, and then wonder “how can my own company give such delight?” I’ve seen so many hours burned in brainstorming sessions where people are trying to come up with the answer to that — and they end up with more reality-numbing theatrics rather than fixing difficult problems with their actual product or service delivery.
A long time ago, in certain communities of practice in the “user experience” family of practices, an acronym was coined: “DTDT” aka “Defining the Damned Thing”.
For good or ill, it’s been used for years now like a flag on the play in a football game. A discussion gets underway, whether heated or not, and suddenly someone says “hey can we stop defining the damned thing? I have work to do here, and you’re cluttering my [inbox / Twitter feed / ear drums / whatever ...]”
Sometimes it rightly has reset a conversation that has gone well off the rails, and that’s fine. But more often, I’ve seen it used to shut down conversations that are actually very healthy, thriving and … necessary.
Why necessary? Because conversation *about* the practice is a healthy, necessary part of being a practitioner, and being in a community of other practitioners. It’s part of maturing a practice into a discipline, and getting beyond merely doing work, and on to being self-aware about how and why you do it.
It used to be that people weren’t supposed to talk about sex either. That tended to result in lots of unhappy, closeted people in unfulfilling relationships and unfulfilled desires. Eventually we learned that talking about sex made sex better. Any healthy 21st century couple needs to have these conversations — what’s sex for? how do you see sex and how is that different from how I see it? Stuff like that. Why do people tend to avoid it? Because it makes them uncomfortable … but discomfort is no reason to shun a healthy conversation.
The same goes for design or any other practice; more often than not, what people in these conversations are trying to do is develop a shared understanding of their practice, developing their professional identities, and challenging each other to see different points of view — some of which may seem mutually exclusive, but turn out to be mutually beneficial, or even interdependent.
I’ll grant that these discussions often have more noise than signal, but that’s the price you pay to get the signal. I’ll also grant that actually “defining” a practice is largely a red herring — a thriving practice continues to evolve and discover new things about itself. Even if a conversation starts out about clean, clinical definition, it doesn’t take long before lots of other more useful (but muddier, messier) stuff is getting sorted out.
It’s ironic to me that so many people in the “UX family” of practitioner communities utterly lionize “Great Figures” of design who are known as much for what they *wrote* and *said* about design as for the things they made — and then turn to their peers and demand they stop talking about what their practice means, and just post more pat advice, templates or tutorials.
A while back I was doing a presentation on what neuroscience is teaching us about being designers — how our heads work when we’re making design decisions, trying to be creative, and the rest. And one of the things I learned was the importance of metacognition — the ability to think about thinking. I know people who refuse to do such a thing — they just want to jump in and ACT. But more often than not, they don’t grow, they don’t learn. They just keep doing what they’re used to, usually to the detriment of themselves and the people around them. Do you want to be one of those people? Probably not.
So, enough already. It’s time we defend the D. Next time you hear someone pipe up and say “hey [eyeroll] can we stop the DTDT already?” kindly remind them that mature communities of practice discuss, dream, debate, deliberate, deconstruct and the rest … because ultimately it helps us get better, deeper and stronger at the Doing.
There are two things in particular that everyone struggles with on Twitter. Here are my humble suggestions as to how Twitter can do something about it.
1. The Asymmetrical Direct-Message Conundrum
What it is: User A is following user B, but User B is not following User A. User B direct-messages User A, and when User A tries to reply to that direct message, they cannot, because User B is not following them.
Fix: Give User B a way to set a message that will DM User A with some contact info automatically. Something like “Unfortunately I can’t receive direct messages from you, but please contact me at email@example.com.” A more complicated fix that might help would be to allow User B to set an optional exception for receiving direct messages for anyone User B has direct-messaged (but whom User B is not following), for a given amount of time or a number of messages. It’s not perfect, but it will handle the majority of these occurrences.
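The proposed exception could be expressed as a simple rule on top of the normal follow check. Here’s a rough sketch (all names, the seven-day window, and the data model are my own illustrative assumptions, not Twitter’s actual implementation):

```python
from datetime import datetime, timedelta

# Hypothetical window during which User A may reply to a DM from User B
# even though B doesn't follow A.
DM_REPLY_WINDOW = timedelta(days=7)

def can_send_dm(sender, recipient, follows, dm_log, now=None):
    """Decide whether `sender` may DM `recipient`.

    follows: set of (follower, followee) pairs.
    dm_log: dict mapping (from_user, to_user) -> datetime of the last DM sent.
    """
    now = now or datetime.utcnow()
    # Normal rule: the recipient must follow the sender.
    if (recipient, sender) in follows:
        return True
    # Proposed exception: the recipient recently DM'd the sender,
    # so a reply is allowed for a limited time.
    last = dm_log.get((recipient, sender))
    return last is not None and (now - last) <= DM_REPLY_WINDOW
```

Under this rule, User A can reply to User B for a week after B’s message, and the asymmetry resolves itself without B having to follow A.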
2. The “DM FAIL”
What it is: User A means to send a direct message to User B, but accidentally tweets it to the whole wide world.
There are a couple of variations:
a) The SMS Reflex Response: User A gets a text from Twitter with a direct message from User B; User A types a reply and hits “send” before realizing it’s from Twitter and should’ve had “d username” (or now “m username” ?!?) typed before it.
b) The Prefix Fumble: User A is in same situation as above, but does realize it’s a text from Twitter — however, since they’re so used to thinking of Twitter usernames in the form of “@username” they type that out, forgetting they should be using the other prefix instead.
Fix: allow me to turn *off* the ability to create a tweet via SMS; and reply to my SMS text with a “hey you can’t do that” reminder if I forget I have it turned off and try doing it anyway. Let me turn it on and off via SMS text with commands, so if I’m stuck on a phone where I need to tweet that way, I can still do it. But so many people have smart-phones with Twitter apps, there’s no reason why I can’t receive SMS from Twitter without being able to create via SMS as well.
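The gateway logic for that fix is simple enough to sketch. This is just an illustration of the proposed behavior — the command names and defaults are invented, not anything Twitter actually supports:

```python
# Hypothetical sketch: SMS tweet creation is off by default; incoming texts
# get a reminder instead of becoming public tweets, and simple SMS commands
# toggle the setting for phones without a Twitter app.
class SmsGateway:
    def __init__(self):
        self.sms_posting_enabled = {}  # user -> bool (default: off)

    def handle_incoming_sms(self, user, text):
        cmd = text.strip().upper()
        if cmd == "POSTING ON":
            self.sms_posting_enabled[user] = True
            return "SMS posting is now ON."
        if cmd == "POSTING OFF":
            self.sms_posting_enabled[user] = False
            return "SMS posting is now OFF."
        if not self.sms_posting_enabled.get(user, False):
            # The "hey you can't do that" reminder, instead of a DM FAIL.
            return ("SMS posting is off, so this was NOT tweeted. "
                    "Text POSTING ON to enable it.")
        return post_tweet(user, text)

def post_tweet(user, text):
    # Stand-in for the real publishing step.
    return f"Tweeted for {user}: {text}"
```

The key property is that an accidental reply-to-a-text bounces back as a private reminder rather than going out to the whole wide world.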
There you go, Twitter! My gift to you :-)
(By the by, I have no illusions that I’m the only one thinking about how to solve for these problems, and the bright designers at Twitter probably already have better solutions. But … you know, I thought I’d share, just in case … )
To celebrate the recent publication of Resmini & Rosati’s “Pervasive Information Architecture,” I’m reprinting, here, my contribution to the book. Thank you, Andrea & Luca, for asking me to add my own small part to the work!
It’s strange how, over time, some things that were once rare and wondrous can become commonplace and practically unnoticed, even though they have as much or more power as they ever had. Consider things like these: fire; the lever; the wheel; antibiotics; irrigation; agriculture; the semiconductor; the book. Ironically, it’s their inestimable value that causes these inventions to be absorbed into culture so thoroughly that they become part of the fabric of societies adopting them, where their power is taken for granted.
Add to that list two more items, one very old and one very new: the map and the hyperlink.
Those of us who are surrounded by inexpensive maps tend to think of them as banal, everyday objects – a commoditized utility. And the popular conception of mapmaking is that of an antiquated, tedious craft, like bookbinding or working a letterpress – something one would only do as a hobby, since after all, the whole globe has been mapped by satellites at this point; and we can generate all manner of maps for free from the Internet.
But the ubiquity of maps also shows us how powerful they remain. And the ease with which we can take them for granted belies the depth of skill, talent and dedicated focus it takes for maps (and even mapping software and devices) to be designed and maintained. It’s easy to scoff at cartography as a has-been discipline – until you’re trying to get somewhere, or understand a new place, and the map is poorly made.
Consider as well the hyperlink. A much younger invention than the map, the hyperlink was invented in the mid-1960s. For years it was a rare creature living only in technology labs, until around 1987 when it was moderately popularized in Apple’s HyperCard application. Even then, it was something used mainly by hobbyists and educators and a few interactive-fiction authors; a niche technology. But when Tim Berners-Lee placed that tiny creature in the world-wide substrate of the Internet, it bloomed into the most powerful cultural engine in human history.
And yet, within only a handful of years, people began taking the hyperlink for granted, as if it had always been around. Even now, among the digital classes, mention of “the web” is often met with a sniff of derision. “Oh that old thing — that’s so 1999.” And, “the web is obsolete – what matters now are mobile devices, augmented reality, apps and touch interfaces.”
One has to ask, however, what good would any of the apps, mobile devices and augmented reality be without digital links?
Where these well-meaning people go wrong is to assume the hyperlink is just a homely little clickable bit of text in a browser. The browser is an effective medium for hyperlinked experience, but it’s only one of many. The hyperlink is more than just a clicked bit of text in a browser window — it’s a core element for the digital dimension; it’s the mechanism that empowers regular people to point across time and space and suddenly be in a new place, and to create links that point the way for others as well.
Once people have this ability, they absorb it into their lives. They assume it will be available to them like roads, or language, or air. They become so used to having it, they forget they’re using it — even when dazzled by their shiny new mobile devices, augmented reality software and touch-screen interfaces. They forget that the central, driving force that makes those technologies most meaningful is how they enable connections — to stories, knowledge, family, friends. And those connections are all, essentially, hyperlinks: pointers to other places in cyberspace. Links between conversations and those conversing — links anybody can create for anybody to use.
This ability is now so ubiquitous, it’s virtually invisible. The interface is visible, the device is tangible, but the links and the teeming, semantic latticeworks they create are just short of corporeal. Like gravity, we can see its physical effects, but not the force itself. And yet these systems of links — these architectures of information — are now central to daily life. Communities rely on them to constructively channel member activity. Businesses trust systems of links to connect their customers with products and their business partners with processes. People depend on them for the most mundane tasks — like checking the weather — to the most important, such as learning about a life-changing diagnosis.
In fact, the hyperlink and the map have a lot in common. They both describe territories and point the way through them. They both present information that enables exploration and discovery. But there is a crucial difference: maps describe a separate reality, while hyperlinks create the very territory they describe.
Each link is a new path — and a collection of paths is a new geography. The meaningful connections we create between ourselves and the things in our lives were once merely spoken words, static text or thoughts sloshing around in our heads. Now they’re structural — instantiated as part of a digital infrastructure that’s increasingly interwoven with our physical lives. When you add an old friend on a social network, you create a link unlike any link you would have made by merely sending a letter or calling them on the phone. It’s a new path from the place that represents your friend to the place that represents you. Two islands that were once related only in stories and memories, now connected by a bridge.
Or think of how you use a photograph. Until recently, it was something you’d either frame and display on a shelf, carry in your wallet, or keep stored in a closet. But online you can upload that photo where it has its own unique location. By creating the place, you create the ability to link to it — and the links create paths, which add to the ever-expanding geography of cyberspace.
Another important difference between the hyperlinks and traditional maps is that digital space allows us to create maps with conditional logic. We can create rules that cause a place to respond to, interact with, and be rearranged by its inhabitants. A blog can allow links to add comments or have them turned off; a store can allow product links to rearrange themselves on shelves in response to the shopper’s area of interest; a phone app can add a link to your physical location or not, at the flick of a settings switch. These are architectural structures for informational mediums; the machinery that enables everyday activity in the living web of the networked dimension.
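A toy sketch makes the idea concrete: which links a visitor sees in a “place” can be decided by rules the place’s owner (or the visitor) sets, rather than being fixed the way a printed map is. Everything here — the field names, the rules — is illustrative, not any particular platform’s model:

```python
# A "map with conditional logic": the links that make up a place are
# computed from rules, so the territory rearranges itself per visitor.
def visible_links(page, visitor):
    links = list(page["links"])
    # Owner's rule: comments can be on or off.
    if page.get("comments_enabled"):
        links.append("add-comment")
    # Visitor's rule: attach a location link only if sharing is switched on.
    if visitor.get("share_location"):
        links.append("checked-in-at:" + visitor["location"])
    return links

blog = {"links": ["home", "archive"], "comments_enabled": True}
```

The same page yields different geographies for different inhabitants — which is exactly what a static map can never do.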
The great challenge of information architecture is to design mechanisms that have deep implications for human experience, using a raw material no one can see except in its effects. It’s to create living, jointed, functioning frameworks out of something as disembodied as language, and yet create places suitable for very real, physical purposes. Information architecture uses maps and paths to create livable habitats in the air around us, folded into our daily lives — a new geography somehow separate, yet inseparable, from what came before.
I was lucky enough to be part of a panel at this year’s IA Summit that included Andrea Resmini and Jorge Arango (thanks to Jorge for suggesting the idea and including me!). We had at least 100 show up to hear it, and it seemed to go over well. Eventually there will be a podcast, I believe. Please also read Andrea’s portion, and Jorge’s portion, because they are both excellent.
A while back, I posted a rant about information architecture that invoked the term “cyberspace.” I, of course, received some flack for using that word. It’s played out, people say. It invokes dusty 80s-90s “virtual reality” ideas about a separate plane of existence … Tron-like cyber-city vistas, bulky goggles & body-suits, and dystopian worlds. Ok…yeah, whatever. For most people that’s probably true.
So let’s start from a different angle …
Over the last 20 years or so, we’ve managed to cause the emergence of a massive, global, networked dimension of human experience, enabled by digital technology.
It’s the dimension you visit when you’re sitting in the coffee shop catching up on your Twitter or Facebook feed. You’re “here” in the sense of sitting in the coffee shop. But you’re also “there” in the sense of “hanging out ‘on’ <Twitter/Facebook/Whatever>.”
It’s the dimension brave, unhappy citizens of Libya are “visiting” when they read, in real-time, the real words of regular people in Tunisia and Egypt, that inspire them to action just as powerfully as if those people were protesting right next to them. It may not be the dimension where these people physically march and bleed, but it’s definitely one dimension where the marching and bleeding matter.
I say “dimension” because for me that word doesn’t imply mutual exclusivity between “physical” and “virtual”: you can be in more than one “dimension” at once. It’s a facet of reality, but a facet that runs the length and breadth of that reality. The word “layer” doesn’t work, because “layer” implies a separate stratum. (Even though I’ve used “layer” off and on for a long time too…)
This dimension isn’t carbon-based, but information-based. It’s specifically human, because it’s made for, and bound together with, human semantics and cognition. It’s the place where “knowledge work” mostly happens. But it’s also the place where, more and more, our stories live, and where we look to make sense of our lives and our relationships.
What do we call this thing?
Back in 2006, Wired Magazine had a feature on how “Cyberspace is Dead.” They made the same points about the term that I mention above, and asked some well-known futurist-types to come up with a new term. But none of the terms they mentioned have seemed to stick. One person suggests “infosphere” … and I myself tried terms like “infospace” in the past. But I don’t hear anyone using those words now.
Even “ubiquitous computing” (Vint Cerf’s suggestion, but the late Mark Weiser’s coinage) has remained a specialized term of art within a relatively small community. Plus, honestly, it doesn’t capture the dimensionality I describe above … it’s fine as a term for the activity of “computing” (hello, antiquated terminology) from anywhere, and for reminding us that computing technology is ubiquitously present, but doesn’t help us talk about the “where” that emerges from this activity.
There have been excellent books about this sort of dimension, with titles like Everyware, Here Comes Everybody, Linked, Ambient Findability, Smart Things … books with a lot of great ideas, but without a settled term for this thing we’ve made.
Of course, this raises the question: why do we need a term for it? As one of the people quoted in the Wired article says, aren’t we now just talking about “life”? Yeah, maybe that’s OK for most people. We used to say “e-business” because it was important to distinguish internet-based business from regular business … but in only a few years, that distinction has been effaced to meaninglessness. What business *isn’t* now networked in some way?
Still, for people like me who are tasked with designing the frameworks — the rule sets and semantic structures, the links and cross-experiential contexts — I think it’s helpful to have a term of art for this dimension … because it behaves differently from the legacy space we inherited.
It’s important to be able to point at this dimension as a distinct facet of the reality we’re creating, so we can talk about its nature and how best to design for it. Otherwise, we go about making things using assumptions hardwired into our brains from millions of years of physical evolution, and miss out on the particular power (and overlook the dangers) of this new dimension.
So, maybe let’s take a second look at “cyberspace” … could it be redeemed?
At the Institute for the Future, there’s a paper called “Blended Reality” (yet another phrase that hasn’t caught on). In the abstract, there’s a nicely phrased statement [emphasis mine]:
We are creating a new kind of reality, one in which physical and digital environments, media, and interactions are woven together throughout our daily lives. In this world, the virtual and the physical are seamlessly integrated. Cyberspace is not a destination; rather, it is a layer tightly integrated into the world around us.
The writer who coined the term, William Gibson, was quoted in the “Cyberspace is Dead” piece as saying, “I think cyberspace is past its sell-by, but the problem is that everything has become an aspect of, well, cyberspace.” This strikes me, frankly, as a polite way of saying “yeah, I get your point, but I don’t think you get what I mean these days by the term.” Or, another paraphrase: “I agree the way people generally understand the term is dated and feels, well, spoiled like milk … but maybe you need to understand that’s not cyberspace.”
Personally, I think Gibson sees the neon-cyberpunk-cityscape, virtual-reality conception of cyberspace as pretty far off the mark. In articles and interviews I’ve read over the years, he’s referenced it on and off … but seems conscious of the fact that people will misunderstand it, and finds himself explaining his points with other language.
Frankly, though, we haven’t listened closely enough. In the same magazine as the “Cyberspace is Dead” article, seven years prior, Gibson posted what I posit to be one of the foundational texts for understanding this… whatever … we’ve wrought. It’s an essay about his experience with purchasing antique watches on eBay, called “My Obsession.” I challenge anyone to read this piece and then come up with a better term for what he describes.
It’s beautiful … so read the whole thing. But I’m going to quote the last portion here in full:
In Istanbul, one chill misty morning in 1970, I stood in Kapali Carsi, the grand bazaar, under a Sony sign bristling with alien futurity, and stared deep into a cube of plate glass filled with tiny, ancient, fascinating things.
Hanging in that ancient venue, a place whose on-site café, I was told, had been open, 24 hours a day, 365 days a year, literally for centuries, the Sony sign – very large, very proto-Blade Runner, illuminated in some way I hadn’t seen before – made a deep impression. I’d been living on a Greek island, an archaeological protectorate where cars were prohibited, vacationing in the past.
The glass cube was one man’s shop. He was a dealer in curios, and from within it he would reluctantly fetch, like the human equivalent of those robotic cranes in amusement arcades, objects I indicated that I wished to examine. He used a long pair of spring-loaded faux-ivory chopsticks, antiques themselves, their warped tips lent traction by wrappings of rubber bands.
And with these he plucked up, and I purchased, a single stone bead of great beauty, the color of apricot, with bright mineral blood at its core, to make a necklace for the girl I’d later marry, and an excessively mechanical Swiss cigarette lighter, circa 1911 or so, broken, its hallmarked silver case crudely soldered with strange, Eastern, aftermarket sigils.
And in that moment, I think, were all the elements of a real futurity: all the elements of the world toward which we were heading – an emerging technology, a map that was about to evert, to swallow the territory it represented. The technology that sign foreshadowed would become the venue, the city itself. And the bazaar within it.
But I’m glad we still have a place for things to change hands. Even here, in this territory the map became.
I’ve written before about how the map has become the territory. But I’d completely forgotten, until today, this piece I read over 10 years ago. Fitting, I suppose, that I should rediscover it now by typing a few words into Google, trying to find an article I vaguely remembered reading once about Gibson and eBay. As he says earlier in the piece quoted above, “We are mapping literally everything, from the human genome to Jaeger two-register chronographs, and our search engines grind increasingly fine.”
Names are important, powerful things. We need a name for this dimension that is the map turned out from itself, to be its own territorial reality. I’m not married to “cyberspace” — I’ll gladly call it something else.
What’s important to me is that we have a way to talk about it, so we can get better at the work of designing and making for it, and within it.
Note: Thanks to Andrea Resmini & Luca Rosati for involving me in their work on the upcoming book, Pervasive IA, from which I gleaned the reference to the Institute for the Future article I mentioned above.
Earlier I shared a post about designing context management, and wanted to add an example I’d seen. I knew I’d made this screenshot, but then couldn’t remember where; luckily I found it today hiding in a folder.
This little widget from Plaxo is the only example I’ve noticed where an online platform allows you to view information from different contextual points of view (other than very simple examples like “your public profile” and “preview before publish”).
Plaxo’s function actually allows you to see what you’re sharing with various categories of users with a basic drop-down menu. It’s not rocket science, but it goes miles further than most platforms for this kind of functionality.
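The underlying mechanism is easy to imagine: a per-field sharing policy, plus a “view as” function that renders the profile exactly as a given category of viewer would see it. This is purely my own sketch of how such a widget might work, not Plaxo’s actual implementation:

```python
# Illustrative "view as" sketch: preview your profile from another
# viewer-category's contextual point of view. All data is made up.
PROFILE = {"name": "Pat Example", "email": "pat@example.com",
           "phone": "555-0100", "birthday": "May 1"}

# Hypothetical policy: which viewer categories may see each field.
SHARING = {"name":     {"public", "friends", "business"},
           "email":    {"friends", "business"},
           "phone":    {"friends"},
           "birthday": {"friends"}}

def view_as(profile, sharing, category):
    """Return the profile as a viewer in `category` would see it."""
    return {field: value for field, value in profile.items()
            if category in sharing.get(field, set())}
```

The drop-down in the widget would just call something like `view_as(PROFILE, SHARING, "business")` for each category — not rocket science, as noted, but a real aid to context management.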
If anybody knows of others, let me know?