
When I first heard about the Kozinski story (some mature content in the story), it was on NPR’s All Things Considered. The interviewer spoke with the LA Times reporter, who went on about how the judge had “published” offensive material on a “public website.”

I won’t go into detail on the story itself. But I urge anyone to take the LA Times article with a grain or two of salt. Evidently, the thing got started when someone who had an ax to grind with the judge sent links and info to the media, and said media went on to make it all look as horrible as possible. And the more we learn about the details of the case, the more it sounds like the LA Times is twisting the truth a great deal. **

To me, though, the content issue isn’t as interesting (or challenging) as the “public website” idea.

Basically, this was a web server with an IP and URL on the Internet that was intended for family to share files on, and whatever else (possibly email server too? I don’t know). It’s the sort of thing that many thousands of people run — I lease one of my own that hosts this blog. But the difference is that Kozinski (or, evidently, his grown son) set it up to be private for just their use. Or at least he thought he had — he didn’t count on a disgruntled individual looking beyond the “index” page (that clearly signaled it as a private site) and discovering other directories where images and what-not were listed.

Lawrence Lessig has a great post here: The Kozinski mess (Lessig Blog). He makes the case that this wasn’t a ‘public’ site at all, since it wasn’t intended to be public. You could only see this content if you typed various additional directories onto the base URL. Lessig likens it to having a faulty lock on your front door, and someone snooping through your private stuff and then telling everyone about it. (Saying it was an improperly installed lock would be more accurate, IMHO.)
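A concrete aside for the technically curious: in web-server terms, the “lock” here is usually just directory indexing. If a directory has no index page, many servers will happily generate a listing of its contents for anyone who requests that directory’s URL. Here’s a minimal sketch using Python’s standard-library http.server; it is obviously not the software the Kozinski server actually ran (I have no idea what that was), just an illustration of the default behavior and of where the “lock” would properly go. The handler name and the port are made up.

```python
# Minimal sketch of the "unlocked directory" behavior, using only the
# Python standard library. If a requested directory has no index.html,
# the default handler generates a file listing for anyone who knows
# (or guesses) the URL -- which is roughly what happened here.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class NoListingHandler(SimpleHTTPRequestHandler):
    """Same file serving, but with the 'lock' installed properly:
    directories without an index page return 403 instead of a listing."""
    def list_directory(self, path):
        self.send_error(403, "Directory listing disabled")
        return None

if __name__ == "__main__":
    # Swap SimpleHTTPRequestHandler back in here to see the default
    # behavior: requesting /stuff/ lists every file under ./stuff.
    HTTPServer(("0.0.0.0", 8000), NoListingHandler).serve_forever()
```

On a stock Apache setup the equivalent fix is turning off auto-generated indexes (the Options -Indexes directive), which is roughly what “installing the lock properly” would have meant.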

The comments on the page go on and on — much debate about the content and the context, private and public and what those things mean in this situation.

One point I don’t see being made (possibly because I didn’t read it all) is that there’s now a difference between “public” and “published.”

It used to be that anything extremely public — that is, able to be seen by more than just a handful of people — could only get that way if it was published on purpose. It was impossible for more than just the people in physical proximity to hear you, see you or look at your stuff unless you put a lot of time and money into making it that way: publishing a book, setting up a radio or TV station and broadcasting, or (on the low end) using something like a CB radio to purposely send out a public signal (and even then, laws limited the power and reach of such a device).

But the Internet has obliterated that assumption. Now we can do all kinds of things intended for a private context that unwittingly end up more public than we meant them to be. By now almost everyone online has sent an email to more people than they intended, or accidentally sent a private note to everyone on Twitter. Or perhaps you’ve published a blog post that you thought only a few regular readers would see, only to find that others read it and were offended because they didn’t get the context?

We need to distinguish between “public” and “published.” We may even need to distinguish between various shades of “published” — the same way we legally distinguish between shades of personal injury — by determining intent.

There’s an informative thread over at Groklaw as well.

**About the supposedly pornographic content, I’ll only say that it sounds like there was no “pornography” as typically understood on the judge’s server, but only content that had accumulated from the many “bad-taste jokes” that get passed around the net all the time. That is, nothing more offensive than you’d see on an episode of Jackass or South Park. Whether or not that sort of thing is your cup of tea, and whether or not you think it is harmfully degrading to some segment of society, is certainly yours to decide. Some of the items described are things that I roll my eyes at as silly, vulgar humor, and then forget about. But describing a video (which is currently on YouTube) where an amorously confused donkey tries to mount a guy who was (inadvisedly) trying to relieve himself in a field as “bestiality” is pretty absurd. Monty Python it ain’t; but Caligula it ain’t either.

IDEA 2008


I’d like to encourage everyone to attend IDEA 2008, a conference (organized by the IA Institute) that’s been getting rave reviews from attendees since it started in 2006. It’s described as “A conference on designing complex information spaces of all kinds” — and it’s happening in grand old Chicago, October 7-8, 2008.

Speakers on the roster include people from game design, interaction design and new-generation advertising/marketing, and the list is growing, including (for some reason) my own self. I think I’m going to be talking about how context works in digital spaces … but I have until October, so who knows what it’ll turn into?

IDEA is less about the speakers, though, than the topics they spark, and the intimate setting of a few hundred folks all seeing the same presentations and having plenty of excuses to converse, dialog and generally brou some haha.

This is based on a slide I’ve been slipping into decks for over a year now as a “quick aside” comment; but it’s been bugging me enough that I need to get it out into a real blog post. So here goes.

We hear the words Strategy and Innovation thrown around a lot, and often we hear them said together. “We need an innovation strategy.” Or perhaps “We need a more innovative strategy” which, of course, is a different animal. But I don’t hear people questioning what exactly we mean when we say these things. It’s as if we all agree already on what we mean by strategy and innovation, and that they just fit together automatically.

There’s a problem with this assumption. The more I’ve learned about Communities of Practice, the more I’ve come to understand about how innovation happens. And I’ve come to the conclusion that strategy and innovation aren’t made of the same cloth.


1. Strategy is top-down; Innovation is bottom-up

Strategy is a top-down approach. In every context I can think of, strategy is about someone at the top of a hierarchy planning what will happen, or what patterns will be invoked to respond to changes on the ground. Strategy is programmed, the way a computer is programmed. Strategy is authoritative and standardized.

Innovation is an emergent event; it happens when practitioners “on the ground” have worked on something enough to discover a new approach in the messy variety of practitioner effort and conversation. Innovation only happens when there is sufficient variety of thought and action; it works more like natural selection, which requires lots of mutation. Innovation is, by its nature, unorthodox.

2. Strategy is defined in advance; Innovation is recognized after the fact

While a strategy is defined ahead of time, nobody seems able to plan what an innovation will be. In fact, many (or most?) innovations are serendipitous accidents, or emerge from a side-project that wasn’t part of the top-down-defined workload to begin with. This is because the string of events that leads to an innovation is never a truly rational, logical or linear process. In fact, we don’t even recognize the result as an innovation until after it’s already happened, because whether something is an innovation or not depends on its usefulness after it’s been experienced in context.

We fill in the narrative afterwards — looking back on what happened, we create a story that explains it for us, because our brains need patterns and stories to make sense of things. We “reify” the outcome and assume there’s a process behind it that can be repeated. (Just think of Hollywood, and how it tries to reproduce the success of surprise-hit films that nobody thought would succeed until they became successful.) I discuss this more in a post here.

3. Strategy plans for success in known circumstances; Innovation emerges from failure in unknown circumstances

One explicit aim of a strategy is to plan ahead of time to limit the chance of failure. Strategy is great for things that have to be carried out with great precision according to known circumstances, or at least predicted circumstances. Of course strategy is more complex than just paint-by-numbers, but a full-fledged strategy has to have all predictable circumstances accounted for with the equivalent of if-then-else statements. Otherwise, it would be a half-baked strategy. In addition, strategy usually aims for the highest level of efficiency, because carrying something off with the least amount of friction and “wasted” energy often makes the difference between winning and losing.

However, if you dig underneath the veneer of the story behind most innovations, you find that there was trial and error going on behind the scenes, and lots of variety happening before the (often accidental) eureka moment. And even after that eureka moment, the only reason we think of the outcome as an innovation is because it found traction and really worked. For every product or idea that worked, there were many that didn’t. Innovation sprouts from the messy, trial-and-error efforts of practitioners in the trenches. Bell Labs, Xerox PARC and other legendary fonts of innovation were crucibles of this dynamic: whether by design or accident, they had the right conditions for letting their people try and fail often enough and quickly enough to stumble upon the great stuff. And there are few things less efficient than trial and error; innovation, or the activity that results in innovation, is inherently inefficient.

So Innovation and Strategy are incompatible?

Does this mean that all managers can do is cross their fingers and hope innovation happens? No. What it does mean is that having an innovation strategy has nothing to do with planning or strategizing the innovation itself. To misappropriate a quotation from Ecclesiastes, such efforts are all in vain and like “striving after wind.”

Managing for innovation requires a more oblique approach, one which works more directly on creating the right conditions for innovation to occur. And that means setting up mechanisms where practitioners can thrive as a community of practice, and where they can try and fail often enough and quickly enough that great stuff emerges. It also means setting up mechanisms that allow the right people to recognize which outcomes have the best chance of being successes — and therefore, end up being truly innovative.

I’m as tired of hearing about Apple as anyone, but when discussing innovation it always comes up. We tend to think of Apple as linear, controlled and very top-down. The popular imagination seems to buy into a mythic understanding of Apple — that Steve Jobs has some kind of preternatural design compass embedded in his brain stem.

Why? Because Jobs treats Apple like theater, and keeps all the messiness behind the curtain. This is one reason why Apple’s legal team is so zealous about tracking down leaks. For people to see the trial and error that happens inside the walls would not only threaten Apple’s intellectual property, it would sully its image. But inside Apple, the strategy for innovation demands that design ideas be generated in multitudes like fish eggs, because they’re all run through a sort of artificial natural-selection mechanism that kills off the weak and only lets the strongest ideas rise to the top. (See the Business Week article describing Apple’s “10 to 3 to 1” approach.)

Google does the same thing, but they turn the theater part inside-out. They do a modicum of concept-vetting inside the walls, but as soon as possible they push new ideas out into the marketplace (their “Labs” area) and leverage the collective interest and energy of their user base to determine if the idea will work or not, or how it should be refined. (See accounts of this philosophy in a recent Fast Company article.) People don’t mind using something at Google that seems to be only half-successful as a design, because they know it’ll be tweaked and matured quickly. Part of the payoff of using a Google product is the fun of seeing it improved under your very fingertips.

One thing I wonder: to what extent do any of these places treat “strategy” as another design problem to be worked out in the bottom-up, emergent way that they generate their products? I haven’t run across anything that describes such an approach.

At any rate, it’s possible to have an innovation strategy. It’s just that the innovation and the strategy work from different corners of the room. Strategy sets the right conditions, oversees and cultivates the organic mass of activity happening on the floor. It enables, facilitates, and strives to recognize which ideas might fit the market best — or strives to find low-impact ways for ideas to fail in the marketplace in order to winnow down to the ones that succeed. And it’s those ideas that we look back upon and think … wow, that’s innovation.

In the “Linkosophy” talk I gave on Monday, I suggested that a helpful distinction between the practices of IxD & IA might be that IxD’s central concern is within a given context (a screen, device, room, etc) while IA’s central concern is how to connect contexts, and even which contexts are necessary to begin with (though that last bit is likely more a research/meta concern that all UX practices deal with).

But one nagging question on a lot of people’s minds seems to be “where did these come from? haven’t we been doing all this already but with older technology?”

I think we have, and we haven’t.

Both of these practices build on earlier knowledge & techniques that emerged from practices that came before. Card sorting & mental models were around before the IA community coalesced around the challenges of infospace, and people were designing devices & industrial products with their users’ interactions in mind long before anybody was in a community that called itself “Interaction Designers.” That is, there were many techniques, methods, tools and principles already in the world from earlier practice … but what happened that sparked the emergence of these newer practice identities?

The key catalyst for both, it seems to me, was the advent of digital simulation.

For IA, the digital simulation is networked “spaces” … infospace that’s made of bits and not atoms, where people cognitively experience one context’s connection to another as moving through space, even though it’s not physical. We had information, and we had physical architecture, but they weren’t the same thing … the Web (and all web-like things) changed that.

For IxD, the digital simulation is with devices. Before digital simulation, devices were just devices — anything from a deck chair to an umbrella, or a power drill to a jackhammer, was a three-dimensional, real, industrially made product that had real switches, real handles, real feedback. We didn’t think of them as “interactive” or having “interfaces” — because three-dimensional reality is *always* interactive, and it needs no “interface” to translate human action into non-physical effects. Designing these things is “Industrial Design” — and it’s been around for quite a while (though, frankly, only a couple of generations).

The original folks who quite consciously organized around the collective banner of “interaction designer” are digital-technology-centric designers. Not to say that they’ve never worked on anything else … but they’re leaders in that practitioner community.

Now, this is just a comment on origins … I’m not saying they’re necessarily stuck there.

But, with the digital-simulation layer soaking into everything around us, is it really so limiting to say that’s the origin and the primary milieu for these practices?

Of course, I’m not trying to build silos here — only clarify for collective self-awareness purposes. It’s helpful, I believe, to have shared understanding of the stories that make up the “history of learning and making” that forms our practices. It helps us have healthier conversations as we go forward.

Hey, I’m Andrew! You can read more about who I am on my About page.

If I had a “Follow” button on my forehead, and you met me in person and pushed that button, I’d likely give you a card that had the following text written upon it:

Here’s some explanation about how I use Twitter. It’s probably more than you want to read, and that’s ok. This is more a personal experiment in exploring network etiquette than anything else. If you’re curious about it and read it, let me know what you think?

Disclaimers

  • I use Twitter for personal expression & connection; self-promotion & “personal brand” not so much (that’s more my blog’s job, but even there not so much).
  • I hate not being able to follow everyone I want to, but it’s just too overwhelming. There’s little rhyme/reason to whom I follow or not. Please don’t be offended if I don’t follow you back, or if I stop following for a while and then start again, or whatever. I’d expect you to do the same to me. All of you are terribly interesting and awesome people, but I have limited attention.
  • Please don’t assume I’ll notice an @ mention within any time span. I sometimes go days without looking.
  • Direct-messages are fine, but emails are even better and more reliable for most things (imho).
  • If you’re twittering more than 10 tweets a day, I may have to stop following just so I can keep up with other folks.
  • If you add my feed, I will certainly check to see who you are, but if there’s zero identifying information on your profile, why would I add you back?

A Few Guidelines for Myself (that I humbly consider useful for everybody else too ;-)

  • I’ll try to keep tweets to about 10 or less a day, to avoid clogging my friends’ feeds.
  • I’ll avoid doing scads of “@” replies, since Twitter isn’t a great conversation mechanism, but is pretty ok as an occasional comment-on-a-tweet mechanism.
  • I won’t use any automated mechanism to track who “unfollows” me. And if I notice you dropped me, I won’t think about it much. Not that I don’t care; just seems a waste of time worrying about it.
  • I won’t try to game Twitter, or work around my followers’ settings (such as defeating their @mentions filter by putting something before the @, forcing them to see replies they’d otherwise have filtered out).
  • I’ll avoid doing long-form commentary or “live-blogging” using Twitter, since it’s not a great platform for that (RSS feed readers give the user the choice to read each poster’s feed separately; Twitter feed readers do not, and allow over-tweeting to crowd out other voices on my friends’ feeds.)
  • I’ll post links to things only now and then, since I know Twitter is very often used in (and was intended for) mobile contexts that often don’t have access to useful web browsers; and when I do, I’ll give some context, rather than just “this is cool …”
  • I will avoid using anything that automatically Tweets or direct-messages through my account; these things simply offend me (e.g. if I point to a blog post of mine, I’ll actually type a freaking tweet about it).
  • In spite of my best intentions, I’ll probably break these guidelines now and then, but hopefully not too much, whatever “too much” is.

Thanks for indulging my curmudgeonly Twitter diatribe. Good day!

Since so much of our culture is digitized now, we can grab clippings of it and spread it all over our identities the way we used to decorate our notebooks with stickers in grade school. Movies, music, books, periodicals, friends, and everything else. Everything that has a digital referent or avatar in the pervasive digital layer of our lives is game for this appropriation.

I just ran across a short post on honesty in playlists.

The what-I’m-listening-to thing always strikes me as aspirational rather than documentary. It’s really not “what I’m listening to” but rather “what I would be listening to if I were actually as cool as I want you to think I am.”

And my first thought was: but where, in any other part of our lives, are we that “honest”?

Don’t we all tweak our appearances in many ways — both conscious and unconscious — to improve the image we present to the world? Granted, some of us do it more than others. But everybody does it. Even people who say they’re *not* like this actually are … to choose to be style-free is a statement just as strong as being style-conscious, because it’s done in a social context too, either to impress your other style-free, logo-hating friends, or to define yourself over-against the pop-culture mainstream.

Now, of course it would be dishonest to list favorite movies and books and music that you neither consume nor even really like. But my guess is a very small minority do that.

Our decorations have always been aspirational. Always. From idealizing the hunt in cave wall drawings, to hanging still-life paintings of things you couldn’t afford in middle-class Renaissance homes, all the way to choosing which books to put on the eye-level shelves in your apartment, or making a cool playlist of music for a party. We never expose *everything* in our lives; we always select subsets that tell others particular things about us.

The digital world isn’t going to be any different.

(See earlier post on Flourishing.)

IASummit 2008

Some very nice and well-meaning people have asked me to speak as the closing plenary at the IASummit conference this year, in Miami.

This is, as anyone who has been asked to do such a thing will tell you, a mixed blessing.

But I’m slogging through my insanely huge bucket of random thoughts from the last twelve months to surface the stuff that will, I dearly hope, be of interest and value to the crowd. Or, at the very least, keep their hungover cranial contents entertained long enough to stick around for Five-Minute Madness.

“Linkosophy” is a homely title. But it’s a hell of a lot catchier than “Information Architecture’s Role in the UX Context: What Got It Here, What It’s About, and Where It Might Be Headed.” Or some such claptrap.

Here’s the description and a link:

Closing Plenary: Linkosophy
Monday April 14 2008, 3:00 – 4:00PM

At times, especially in comparison to the industrial and academic disciplines of previous generations, the User Experience family of practices can feel terribly disorganized: so little clarity on roles and responsibilities, so much dithering over semantics and orthodoxy. And in the midst of all this, IA has struggled to explain itself as a practice and a domain of expertise.

But guess what? It turns out all of this is perfectly natural.

To explain why, we’ll use IA as an example to learn about how communities of practice work and why they come to be. Then we’ll dig deeper into describing the “domain” of Information Architecture, and explore the exciting implications for the future of this practice and its role within the bigger picture of User Experience Design.

In addition, I’ve been dragooned (but in a nice way … I just like saying “dragooned”) to participate in a panel about “Presence, identity, and attention in social web architecture” along with Christian Crumlish, Christina Wodtke, and Gene Smith, three people who know a heck of a lot more about this than I do. Normally when people ask me to talk about this topic, I crib stuff from slides those three have already written! Now I have to come up with my own junk. (Leisa Reichelt is another excellent thinker on this “presence” stuff, btw. And since she’s not going to be there, maybe I’ll just crib *her* stuff? heh… just kidding, Leisa. Really.)

Seriously, it should be a fascinating panel — we’ve been discussing it on a mailing list Christian set up, so there should be some sense that we actually prepared for it.

There’s been a lot of talk over the last couple of years about a collective Eureka moment where we’ve all come to realize that the Internet, the Web, and designing User Experiences for those platforms are “really about People … not products and information.”

I think it’s great that more folks are coming to this realization.

But in the same breath, some of these folks will then say that Information Architecture is hopelessly out of touch with this reality … that IA is ‘dead’ or that there’s no such thing as an information architecture, since it’s all user-driven nowadays. I’m not going to point to specific instances, because I’m not posting this to start more flame wars … just to finally state something I wish I’d blogged over a year ago.

What these (I’m sure well-meaning) people don’t seem to grasp is that the IA community has been focused on social infrastructures for a very long time. Some of the most successful writing and design has come from members of this practitioner community — witness Epinions, Slideshare, and PublicSquare just to name a few platforms. Members of this community have published books and blogs at the forefront of social design thinking.

In fact, in the much-maligned “Manifesto” the IAI posted back in 2002, there was this language:

* One goal of information architecture is to shape information into an environment that allows users to create, manage and share its very substance in a framework that provides semantic relevance.
* Another goal of information architecture is to shape the environment to enable users to better communicate, collaborate and experience one another.
* The latter goal is more fundamental than the former: information exists only in communities of meaning. Without other people, information no longer has context, and no longer informs.

I’ll take the blame for some of the corny language in that document — but hey, it was a manifesto for crying out loud … a bit of purple prose is par for the course.

The point is, this has always been part of our community’s focus. If people don’t realize that, they’ve not been paying attention.

There. It feels good to get things off one’s chest, no? :-) Ok… carry on.

As networked social applications mature, they’re evolving more nuanced ways of constructing and maintaining an identity. Two of the major factors in online identity are How you present yourself, and Who you know.

How you present yourself: “Flourishing”

Flourishing is how we ornament ourselves and display ourselves to others. Think of peacocks flourishing their tail-feathers. It’s done to communicate something about oneself — to attract partners, distinguish oneself from the pack toward some end, or even dissuade the advances of enemies.

I don’t know if this behavior has another name, or if someone else has called it this yet. But it’s the best name I can think of for the technologically enhanced version of this behavior.

Humans have always used personal ornament to say something about themselves, from ancient tattoos and piercings, “war paint,” various kinds of dress, engagement and wedding rings, to larger things like their cars and homes. We’ve long used personal ornament to signal to others “I am X” in order to automatically set initial terms of any conversation or encounter.

It expands our context, making physical statements about us that our bodies alone cannot communicate. Often these choices are controlled overtly or subtly by cultural norms. But in cultures where individual identity is given some play-room, these choices can become highly distinctive.

So, how has digital networked life changed this behavior? For a while, I’ve thought it’s fascinating how we can now decorate ourselves not only with things we’ve had to buy or make, but with a virtual version of almost anything we can think of, from any medium. My online identity as represented by one or more ‘avatars’ (whether that’s an avatar in an environment like Second Life, or a MySpace profile that serves a similar, though 2-D purpose) can be draped with all manner of cultural effluvia. I can express myself with songs, movie clips, pictures of products I love (even if I can’t afford them). Our ability to express ourselves with bits of our culture has increased to vertiginous heights.

Just as I started blogging about this thing that’s been on my mind for a while, I thought I’d look to see if anyone has done real work on it. I’m sure there’s a lot of it out there, but one piece I ran across was a paper from Hugo Liu at MIT, entitled “Social Network Profiles as Taste Performances,” which discusses this development at some length. From the introduction:

The materials of social identity have changed. Up through the 19th century in European society, identity was largely determined by a handful of circumstances such as profession, social class, and church membership (Simmel, 1908/1971a). With the rise of consumer culture in the late 20th century, possessions and consumptive choices were also brought into the fold of identity. One is what one eats; or rather, one is what one consumes—books, music, movies, and a plenitude of other cultural materials (McCracken, 2006).

… In the pseudonymous and text-heavy online world, there is even greater room for identity experimentation, as one does not fully exist online until one writes oneself into being through “textual performances” (Sundén, 2003).

One of the newest stages for online textual performance of self is the Social Network Profile (SNP). The virtual materials of this performance are cultural signs—a user’s self-described favorite books, music, movies, television interests, and so forth—composed together into a taste statement that is “performed” through the profile. By utilizing the medium of social network sites for taste performance, users can display their status and distinction to an audience comprised of friends, co-workers, potential love interests, and the Web public.

The article concerns itself mainly with users’ lists of “favorites” from things like music, movies and books, and how these clusters signal particular things about the individual.

What I mean by “flourishing” is this very activity, but expanded into all media. Thanks to ever-present broadband and the ability to digitize almost anything into a representative sample, users can decorate themselves with “quotes” of music, movies, posters, celebrity pictures, news feeds, etc. Virtual bling.

I think it was a major reason for MySpace’s popularity, especially the ability to not just *list* these things, but to bring them fully into the profile, as songs that play as soon as you load the profile page, or movie and music-video and YouTube clips.

This ability has been present for years in a more nascent form in physical life — the custom ring-tone. Evidently, announcing to all those around you something about yourself by the song or sound you use for your ring-tone is so important to people that it generates billions of US dollars in revenue.

Here’s what I’m thinking: are we far from the day when it’s not just ring-tones, but video-enabled fabric in our clothes, and sound-emitting handbags and sunglasses? What will the ability to “flourish” to others mean when we have all of this raw material to sample from, just like hip-hop artists have been doing for years?

For now, it’s only possible to any large extent online. But maybe that’s enough, and the cultural-quoting handbags won’t even be necessary? Eventually, the digital social network will become such a normal part of our lives that having a profile in the ether is as common and expected as phone numbers in the phone book used to be (in fact, people in their teens and 20s are already more likely to look for a Web profile than even consider looking in a giant paper phone-book).

As physical and digital spaces merge, and the distinction becomes less meaningful, that’s really all it’ll take.

Who you know: “Friending”

Alex Wright has a nice column in the NYT, Friending, Ancient or Otherwise, about research showing common patterns between prehistoric human social behavior and the rise of social-network applications.

Academic researchers are starting to examine that question by taking an unusual tack: exploring the parallels between online social networks and tribal societies. In the collective patter of profile-surfing, messaging and “friending,” they see the resurgence of ancient patterns of oral communication.
“Orality is the base of all human experience,” says Lance Strate, a communications professor at Fordham University and devoted MySpace user. He says he is convinced that the popularity of social networks stems from their appeal to deep-seated, prehistoric patterns of human communication. “We evolved with speech,” he says. “We didn’t evolve with writing.”

I’m fascinated with the idea that recent technology is actually tapping into ancient behavior patterns in the human animal. I like entertaining the idea that something inside us craves this kind of interaction, because it’s part of our DNA somehow, and so we’ve collectively created the Internet to get back to it.

It’s not terribly far-fetched. Most organisms that find their natural patterns challenged in some way manage to return to those patterns by adaptation. At least, in my very limited understanding of evolution, that’s what happens, right? And a big chunk of the human race has been relegated to non-tribal community structures for only a tiny fraction of its evolutionary history — makes sense that we’d find a way back.

Regardless of the causes (and my harebrained conjecture aside), who you have as friends is vital to your identity, both your internal sense of self and the character you present externally to the world. “It’s not what you know, it’s who you know” is an old adage, and there’s a lot of truth to it, even if you just admit that you can know a heck of a lot, but it won’t get you anywhere without social connection to make you relevant.

What digital networks have done is make “friendship” something literal, and somewhat binary, when in fact friendship is a highly variable and messy business. Online, a “friend” could be just about anyone from a friend-of-a-friend, to someone you ran into once at a conference, to someone from high school you haven’t actually spoken to in 10 years but who, just for grins, is on your Facebook list.

Systems are starting to become more sophisticated in this regard — we can now choose ‘top friends’ and organize friends into categories on some sites, but that still forces us to put people into oversimplified categories that don’t reflect the variability over time that actually exists in these relationships. Someone you were friends with and saw weekly six months ago may have a new job or new interests, or joined a new church or gym, and now you’re still “people who keep up with each other” but nothing like you were. Or maybe you just had a fight with a friend and things have soured, but not completely split — and months later it’s all good again?
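A quick sketch for the technically inclined of what I mean: most social sites store friendship as a single yes/no flag, while even a slightly less binary model would need something like a weighted, time-decaying relationship record. Everything in this sketch (the field names, the decay rule) is invented for illustration; no actual site works this way as far as I know.

```python
# Hypothetical sketch only: a less-binary "friend" record, versus the
# single boolean most social sites actually store. Names and the decay
# rule are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Relationship:
    person_id: str
    contexts: set = field(default_factory=set)   # e.g. {"work", "gym", "high school"}
    closeness: float = 0.5                       # 0.0 (stranger) .. 1.0 (close friend)
    last_interaction: datetime = field(default_factory=datetime.utcnow)

    def current_closeness(self) -> float:
        """Closeness fades as interactions stop -- the 'saw them weekly six
        months ago, new job since then' case described above."""
        months_idle = (datetime.utcnow() - self.last_interaction) / timedelta(days=30)
        return max(0.0, self.closeness - 0.05 * months_idle)

# The binary model most sites actually store, for contrast:
is_friend: bool = True
```

Even this is a caricature, of course, which is rather the point: the real thing doesn’t reduce to a few fields.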

The more we use networks for sharing, communicating, complaining, commiserating and confessing to our social connections, the more vexing it’s going to be to keep all these distinctions in check. I doubt any software system can really reflect the actual emotional variety in our friendships — if for no other reason than that no matter how amazing the system is, it still depends on our consciously updating it.

So that makes me wonder: which is going to change most? The systems, or the way we conceive of friendship? I wonder how the activity of friendship itself will feel, look and behave in ten or fifteen years for people who grew up with social networks? Will they meet new friends and immediately be considering what “filter” that friend might be safe to see on a personal blog? Will people change the way they create and maintain relationships in order to adapt to the limitations of the systems or vice-versa (or both)?

Moral Dimensions

Without going into a lot of detail about it (no time!), I wanted to quote from this article discussing the ideas of Jonathan Haidt. It’s actually supposed to be a review of George Lakoff’s writing on political language, but it gets further into Haidt’s ideas and research as a better alternative. The article’s author isn’t so kind to dear Lakoff (whose earlier work is very influential among many of my IA friends).

Essentially, the article draws a distinction between Lakoff’s idea that people act based on their metaphorical-linguistic interpretation of the world and Haidt’s psycho-evolutionary (?) view that there are deeper things than what we think of as language that guide us individually and socially. And Haidt is working to name those things, and figure out how they function.

Oddly enough, I remembered once I’d gotten a paragraph into this post that I linked to and wrote about Haidt a couple of years before. But I hadn’t really looked into it much further. Now I’m really wanting to read more of his work.

Haidt maps five major scales against which we can categorize (or measure) our moral responses. One of those scales, the one that seems least changeable or approachable by reason, describes our visceral reaction of elevation or disgust in the presence of certain things we find taboo, without our necessarily being able to explain why in a purely rational or utilitarian way.

Will Wilkinson — What’s the Frequency Lakoff?

Most intriguing is the possibility of systematic left-right differences on the purity dimension, which Haidt pegs as the source of religious emotion. In a fascinating chapter in his illuminating recent book, The Happiness Hypothesis, Haidt explains how a primal biological system—the disgust system—designed to keep us clear of rotten meat, expanded over our evolutionary history to encompass sexual norms, physical deformations, and much more. …

The flipside of disgust is the emotion Haidt calls “elevation,” based in a sense of purification and transcendence of our animal incarnation. Cultures the world over picture humanity as midway on a ladder of being between the demonically disgusting and the divinely pure. Most world religions express it through taboos of food, body, and sex, and in rituals of de-animalizing purification and sacralization. The warm, open sense of elevation and the shivering nausea of disgust are high and low notes in the same emotional key.

Haidt’s suggestion is partly that morally broad-band conservatives are better able to exploit the emotional logic of religiosity by deploying rhetoric and imagery that calls on powerful sentiments of elevation and disgust. A bit deaf to the divine, narrow-band liberals are at a disadvantage to stir religious Americans. And there are a lot of religious Americans out there.

I like this approach because it doesn’t refute the linguistic approach so much as explain it in a larger context. (Lakoff has come under criticism for possibly over-simplifying how people live by metaphor — I’ll leave that debate to the experts.)

And it explains how people can have a real change of heart in their lives, how their morals can shift. Just this week, the mayor of San Diego decided to reverse a view he’d held for years, both personally and as a campaign promise, to veto any marriage-equality bill. Evidently one of his scales overrode the other — he was caught in a classic Euthyphro conundrum between loyalty to his party and loyalty to the reality of his daughter. Unlike with Euthyphro, family won out. Or perhaps the particular experience of his daughter convinced him that the general assumption of homosexuality as evil is flawed? Who knows.

Whatever the cause, once you get a bit of a handle on Haidt’s model, you can almost see the bars in the chart shifting in front of you when you hear of such a change in someone.

And you can see very plainly how Karl Rove and others have masterfully manipulated this tendency. They have an intuitive grasp of this gut-level “disgust/elevation” complex, and how to use it to get voters to act. I wonder, too, if it helps explain the weird fixation “socially conservative” people of all stripes had with “The Passion of the Christ”? Just think — that extreme level of detailed violence to a human being ramping up the disgust meter, with the elevation meter being cranked just as high from the sense of transcendent salvation and martyr’s love that the gruesome ritual killing represented. What a combination.

The downside to Democrats here is that they can’t fake it. According to Wilkinson, there’s no way to just word-massage their way into this emotional dynamic with the public on the current dominant issues that tap into it. In his words, “Their best long-term hopes rest in moving the fight to a battlefield with more favorable terrain.”

(PS: I dig Wilkinson’s blog name too — a nice oblique reference to Wittgenstein, who said the aim of Philosophy is to “shew the fly the way out of the bottle.”)

Edited to Add: There’s a nice writeup on Haidt in the Times here.

I only just heard about the Google Image Labeler via the IAI mailing list.

Here’s a description:

You’ll be randomly paired with a partner who’s online and using the feature. Over a two-minute period, you and your partner will be shown the same set of images and asked to provide as many labels as possible to describe each image you see. When your label matches your partner’s label, you’ll earn points depending on how specific your label is. You’ll be shown more images until time runs out. After time expires, you can explore the images you’ve seen and the websites where those images were found. And we’ll show you the points you’ve earned throughout the session.

So, Google didn’t just assume people would tag images for the heck of it. They built in a points system. I have no idea if the points even mean anything outside of this context, but it’s interesting to see a game mechanic of points incentive, in a contest-like format, being used to jump-start the collective intelligence gathering.
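For the curious, the mechanic in the quoted description is simple enough to sketch: both players label the same image, only matching labels count, and more specific labels earn more points. The scoring numbers and the “specificity” proxy below are made up (Google doesn’t say what it actually awards); only the structure follows the description above.

```python
# Toy sketch of the matching mechanic described in the quote above.
# The specificity scoring is invented (simple proxy: generic words score
# less, longer/rarer terms score more); Google's real values aren't public.

COMMON_WORDS = {"photo", "image", "picture", "person", "thing"}

def specificity_points(label: str) -> int:
    """Rough, made-up proxy for 'how specific is this label'."""
    if label in COMMON_WORDS:
        return 50
    return 100 + 10 * max(0, len(label) - 5)

def score_round(player_a: list[str], player_b: list[str]) -> int:
    """Both players label the same image; only labels they agree on count."""
    matches = set(player_a) & set(player_b)
    return sum(specificity_points(label) for label in matches)

# Example: the pair agrees on "dog" and "golden retriever", but not "park".
print(score_round(["dog", "golden retriever", "park"],
                  ["dog", "golden retriever", "leash"]))
```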

POSTSCRIPT:

Later in the day, I hear from James Boekbinder that this system was invented (if he has it right) by a computer scientist named Luis von Ahn, and Google bought it. He points to a great presentation von Ahn has on Google Video about his approach.

Von Ahn’s description says that people sometimes play the game 40 hours a week, while I’m hearing from other sources that research showed users putting a lot of effort into it for a short time, then dropping off and not coming back (possibly because there’s no persistent or transferable value to the ‘points’ given in the game?).

Wired has a great story explaining the profound implications of Google Maps and Google Earth, mainly due to the fact that these maps don’t have to come from only one point of view, but can capture the collective frame of reference from millions of users across the globe: Google Maps Is Changing the Way We See the World.

This quote captures what I think is the main world-changing factor:

The annotations weren’t created by Google, nor by some official mapping agency. Instead, they are the products of a volunteer army of amateur cartographers. “It didn’t take sophisticated software,” Hanke says. “What it took was a substrate — the satellite imagery of Earth — in an accessible form and a simple authoring language for people to create and share stuff. Once that software existed, the urge to describe and annotate just took off.”

Some of the article is a little more utopian than fits reality, but that’s just Wired. Still, you can’t deny that it really does change, forever, the way the human geographical world describes itself. I think the main thing, for me, is the stories: because we’re not stuck with a single, 2-dimensional map that can only speak from one or a few frames of reference, we can now see a given spot of the earth and learn of its human context — the stories that happened there to regular people, or people you might not otherwise know or understand.
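A small aside on what that “simple authoring language” looks like in practice. Hanke is presumably talking about KML, the XML format Google Earth reads; a volunteer’s annotation is really just a placemark with a story attached, like the sketch below. The place name, coordinates and story here are invented for illustration.

```python
# A hedged illustration of the kind of annotation a volunteer cartographer
# can share -- presumably KML, the XML format Google Earth reads. The
# placemark content below is invented.
PLACEMARK = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Grandpa's old hardware store</name>
    <description>Torn down in 1994; this corner is where he taught me to
      sharpen a chisel.</description>
    <Point>
      <!-- KML lists longitude first, then latitude -->
      <coordinates>-87.6298,41.8781</coordinates>
    </Point>
  </Placemark>
</kml>"""

with open("my_story.kml", "w", encoding="utf-8") as f:
    f.write(PLACEMARK)   # open this file in Google Earth to see the pin
```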

It really is amazing what happens when you have the right banana.
