context


The 2013 World IA Day was a huge success. In only its second year, it drew big crowds in 20+ locations (15 of them official). Congratulations to everyone involved in organizing the day, and to the intrepid board members of the IA Institute who decided to risk transforming the more US-based IDEA conference into this terrific, global, community-driven event.

I was fortunate to be asked to speak at the event in Ann Arbor, MI, where I talked about how information shapes context — the topic I’ve been writing a book about for a while now. I’ll probably continue having new permutations of this talk for quite some time, but here’s a snapshot at least, describing some central ideas I’m fleshing out in the book. I’m calling this “beta 2” — since it has somewhat different and/or updated content vs the one I did for CHI Atlanta back in the fall of 2012.

Video and Slides-with-notes embedded below. Enjoy!

 

 

I’ve been presenting on this topic for quite a while. It’s officially an obsession. And I’m happy to say there’s actually a lot of attention being paid to context lately, and that is a good thing. But it’s mainly from the perspective of designing for existing contexts in the world, and accommodating or responding appropriately to them.

For example, the ubicomp community has been researching this issue for many years — if computing is no longer tied to a few discrete devices and is essentially happening everywhere, in all sorts of parts of our environment, how can we make sure it responds in relevant, even considerate ways to its users?

Likewise, the mobile community has been abuzz about the context of particular devices, and how to design code and UI that shapes the experience based on the device’s form factor, and how to balance the strengths of native apps vs web apps.

And the Content Strategy practitioner community has been adroitly handling the challenges of writing for the existing audience, situational & media contexts that content may be published or syndicated into.

All of these are worthy subjects for our attention, and very complex challenges for us to figure out. I’m on board with any and all of these efforts.

But I genuinely think there’s a related, but different issue that is still a blind spot: we don’t only have to worry about designing for existing contexts, we also have to understand that we are often designing context itself.

In essence, we’ve created a new dimension, an information dimension that we walk around in simultaneously with the one where we evolved as a species; and this dimension can significantly change the meaning of our actions and interactions, with the change of a software rule, a link name or a label. There are no longer clear boundaries between “here” and “there” and reality is increasingly getting bent into disorienting shapes by this pervasive layer of language & soft-machinery.

My thinking on this central point has evolved over the last four to five years, since I first started presenting on the topic publicly. I’ve since been including a discussion of context design in almost every talk or article I’ve written.

I’m posting below my 10-minute “punchy idea” version developed for the WebVisions conference (iterations of this were given in Portland, Atlanta & New York City).

I’m also working on a book manuscript on the topic, but more on that later as it takes more shape (and as the publisher details are ironed out).

I’m really looking forward to delving into the topic with the attention and breadth it needs for the book project (with trepidation & anxiety, but mostly the positive kind ;-).

Of course, any and all suggestions, thoughts, conversations or critiques are welcome.

PS: as I was finishing up this post, John Seely Brown (whom I consider a patron saint) tweeted this bit: “context is something we constantly underplay… with today’s tools we can now create context almost as easily as content.” Synchronicity? More likely just a result of his writing soaking into my subconscious over the last 12-13 years. But quite validating to read, regardless :-)

I’m pasting the SlideShare-extracted notes below for reference.

There are two things in particular that everyone struggles with on Twitter. Here are my humble suggestions as to how Twitter can do something about it.

1. The Asymmetrical Direct-Message Conundrum

What it is: User A is following user B, but User B is not following User A. User B direct-messages User A, and when User A tries to reply to that direct message, they cannot, because User B is not following them.

Fix: Give User B a way to set an automatic message that is DM’d to User A with some contact info, something like “Unfortunately I can’t receive direct messages from you, but please contact me at blahblah@domain.blah.” A more complicated fix that might help would be to let User B set an optional exception that accepts direct messages from anyone User B has direct-messaged (but isn’t following), for a given amount of time or a number of messages. It’s not perfect, but it would handle the majority of these occurrences.
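To make that exception rule concrete, here’s a minimal sketch in Python. This is purely my own illustration, not anything Twitter actually implements; the names, data structures, and time window are all made up:

```python
from datetime import datetime, timedelta

# Hypothetical window during which the exception applies.
EXCEPTION_WINDOW = timedelta(days=7)

def can_dm(sender, recipient, follows, recent_dms):
    """Return True if `sender` may send a direct message to `recipient`.

    follows: set of (follower, followed) username pairs
    recent_dms: dict mapping (from_user, to_user) -> datetime of last DM
    """
    # Normal rule: you may DM someone only if they follow you.
    if (recipient, sender) in follows:
        return True
    # Proposed exception: if the recipient recently DM'd the sender,
    # allow replies for a limited time (or a capped number of messages).
    last = recent_dms.get((recipient, sender))
    if last is not None and datetime.now() - last <= EXCEPTION_WINDOW:
        return True
    return False

# User A follows User B; User B doesn't follow A but DM'd A yesterday:
follows = {("user_a", "user_b")}
recent_dms = {("user_b", "user_a"): datetime.now() - timedelta(days=1)}
print(can_dm("user_a", "user_b", follows, recent_dms))  # True under the exception
```

The automatic contact-info message would just be the other half of the same check: when this returns False, send whatever canned DM User B has configured.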

2. The “DM FAIL”

What it is: User A means to send a direct message to User B, but accidentally tweets it to the whole wide world.

There are a couple of variations:
a) The SMS Reflex Response: User A gets a text from Twitter with a direct message from User B; User A types a reply and hits “send” before realizing that the reply goes back to Twitter and should’ve had “d username” (or now “m username” ?!?) typed before it.

b) The Prefix Fumble: User A is in same situation as above, but does realize it’s a text from Twitter — however, since they’re so used to thinking of Twitter usernames in the form of “@username” they type that out, forgetting they should be using the other prefix instead.

Fix: allow me to turn *off* the ability to create a tweet via SMS, and reply to my SMS text with a “hey, you can’t do that” reminder if I forget I have it turned off and try doing it anyway. Let me toggle it on and off with SMS commands, so if I’m stuck on a phone where I need to tweet that way, I still can. And since so many people have smartphones with Twitter apps, there’s no reason I shouldn’t be able to receive SMS from Twitter without also being able to create tweets via SMS.
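Here’s a rough sketch of how the incoming-SMS handling might branch. Again, this is just my own illustration of the idea; the command words and function names are invented, not any real Twitter feature:

```python
# Hypothetical handler for a text message arriving at Twitter's SMS number.
# "SMS ON" / "SMS OFF" are made-up command words for toggling posting.

def handle_incoming_sms(user, text, settings, post_tweet, send_sms):
    body = text.strip()
    command = body.upper()

    if command == "SMS OFF":
        settings.setdefault(user, {})["sms_posting"] = False
        send_sms(user, "Okay: you'll still receive texts, but texting us won't post tweets.")
    elif command == "SMS ON":
        settings.setdefault(user, {})["sms_posting"] = True
        send_sms(user, "SMS posting is back on.")
    elif not settings.get(user, {}).get("sms_posting", True):
        # Posting is off: remind the user instead of tweeting their text.
        send_sms(user, "Heads up: SMS posting is off, so that wasn't tweeted. Text SMS ON to re-enable.")
    else:
        post_tweet(user, body)
```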

There you go, Twitter! My gift to you :-)

(By the by, I have no illusions that I’m the only one thinking about how to solve for these problems, and the bright designers at Twitter probably already have better solutions. But … you know, I thought I’d share, just in case … )

On Cyberspace

A while back, I posted a rant about information architecture that invoked the term “cyberspace.” I, of course, received some flak for using that word. It’s played out, people say. It invokes dusty 80s-90s “virtual reality” ideas about a separate plane of existence … Tron-like cyber-city vistas, bulky goggles & body-suits, and dystopian worlds. Ok…yeah, whatever. For most people that’s probably true.

So let’s start from a different angle …

Over the last 20 years or so, we’ve managed to cause the emergence of a massive, global, networked dimension of human experience, enabled by digital technology.

It’s the dimension you visit when you’re sitting in the coffee shop catching up on your Twitter or Facebook feed. You’re “here” in the sense of sitting in the coffee shop. But you’re also “there” in the sense of “hanging out ‘on’ <Twitter/Facebook/Whatever>.”

It’s the dimension brave, unhappy citizens of Libya are “visiting” when they read, in real-time, the real words of regular people in Tunisia and Egypt, that inspire them to action just as powerfully as if those people were protesting right next to them. It may not be the dimension where these people physically march and bleed, but it’s definitely one dimension where the marching and bleeding matter.

I say “dimension” because for me that word doesn’t imply mutual exclusivity between “physical” and “virtual”: you can be in more than one “dimension” at once. It’s a facet of reality, but a facet that runs the length and breadth of that reality. The word “layer” doesn’t work, because “layer” implies a separate stratum. (Even though I’ve used “layer” off and on for a long time too…)

This dimension isn’t carbon-based, but information-based. It’s specifically human, because it’s made for, and bound together with, human semantics and cognition. It’s the place where “knowledge work” mostly happens. But it’s also the place where, more and more, our stories live, and where we look to make sense of our lives and our relationships.

What do we call this thing?

Back in 2006, Wired Magazine had a feature on how “Cyberspace is Dead.” They made the same points about the term that I mention above, and asked some well-known futurist-types to come up with a new term. But none of the terms they mentioned seem to have stuck. One person suggested “infosphere” … and I myself tried terms like “infospace” in the past. But I don’t hear anyone using those words now.

Even “ubiquitous computing” (Vint Cerf’s suggestion, but the late Mark Weiser’s coinage) has remained a specialized term of art within a relatively small community. Plus, honestly, it doesn’t capture the dimensionality I describe above … it’s fine as a term for the activity of  “computing” (hello, antiquated terminology) from anywhere, and for reminding us that computing technology is ubiquitously present, but doesn’t help us talk about the “where” that emerges from this activity.

There have been excellent books about this sort of dimension, with titles like Everyware, Here Comes Everybody, Linked, Ambient Findability, Smart Things … books with a lot of great ideas, but without a settled term for this thing we’ve made.

Of course, this raises the question: why do we need a term for it? As one of the people quoted in the Wired article says, aren’t we now just talking about “life”? Yeah, maybe that’s OK for most people. We used to say “e-business” because it was important to distinguish internet-based business from regular business … but in only a few years, that distinction has been effaced to meaninglessness. What business *isn’t* now networked in some way?

Still, for people like me who are tasked with designing the frameworks (the rule sets and semantic structures, the links and cross-experiential contexts), I think it’s helpful to have a term of art for this dimension, because it behaves differently from the legacy space we inherited.

It’s important to be able to point at this dimension as a distinct facet of the reality we’re creating, so we can talk about its nature and how best to design for it. Otherwise, we go about making things using assumptions hardwired into our brains from millions of years of physical evolution, and miss out on the particular power (and overlook the dangers) of this new dimension.

So, maybe let’s take a second look at “cyberspace” … could it be redeemed?

At the Institute for the Future, there’s a paper called “Blended Reality” (yet another phrase that hasn’t caught on). In the abstract, there’s a nicely phrased statement [emphasis mine]:

We are creating a new kind of reality, one in which physical and digital environments, media, and interactions are woven together throughout our daily lives. In this world, the virtual and the physical are seamlessly integrated. Cyberspace is not a destination; rather, it is a layer tightly integrated into the world around us.

The writer who coined the term, William Gibson, was quoted in the “Cyberspace is Dead” piece as saying, “I think cyberspace is past its sell-by, but the problem is that everything has become an aspect of, well, cyberspace.” This strikes me, frankly, as a polite way of saying “yeah I get your point, but I don’t think you get what I mean these days by the term.” Or, another paraphrase: “I agree the way people generally understand the term is dated and feels, well, spoiled like milk … but maybe you need to understand that’s not cyberspace …”

Personally, I think Gibson sees the neon-cyberpunk-cityscape, virtual-reality conception of cyberspace as pretty far off the mark. In articles and interviews I’ve read over the years, he’s referenced it on and off … but seems conscious of the fact that people will misunderstand it, and finds himself explaining his points with other language.

Frankly, though, we haven’t listened closely enough. In the same magazine as the “Cyberspace is Dead” article, seven years prior, Gibson published what I posit to be one of the foundational texts for understanding this … whatever … we’ve wrought. It’s an essay about his experience purchasing antique watches on eBay, called “My Obsession.” I challenge anyone to read this piece and then come up with a better term for what he describes.

It’s beautiful … so read the whole thing. But I’m going to quote the last portion here in full:

In Istanbul, one chill misty morning in 1970, I stood in Kapali Carsi, the grand bazaar, under a Sony sign bristling with alien futurity, and stared deep into a cube of plate glass filled with tiny, ancient, fascinating things.

Hanging in that ancient venue, a place whose on-site café, I was told, had been open, 24 hours a day, 365 days a year, literally for centuries, the Sony sign – very large, very proto-Blade Runner, illuminated in some way I hadn’t seen before – made a deep impression. I’d been living on a Greek island, an archaeological protectorate where cars were prohibited, vacationing in the past.

The glass cube was one man’s shop. He was a dealer in curios, and from within it he would reluctantly fetch, like the human equivalent of those robotic cranes in amusement arcades, objects I indicated that I wished to examine. He used a long pair of spring-loaded faux-ivory chopsticks, antiques themselves, their warped tips lent traction by wrappings of rubber bands.

And with these he plucked up, and I purchased, a single stone bead of great beauty, the color of apricot, with bright mineral blood at its core, to make a necklace for the girl I’d later marry, and an excessively mechanical Swiss cigarette lighter, circa 1911 or so, broken, its hallmarked silver case crudely soldered with strange, Eastern, aftermarket sigils.

And in that moment, I think, were all the elements of a real futurity: all the elements of the world toward which we were heading – an emerging technology, a map that was about to evert, to swallow the territory it represented. The technology that sign foreshadowed would become the venue, the city itself. And the bazaar within it.

But I’m glad we still have a place for things to change hands. Even here, in this territory the map became.

I’ve written before about how the map has become the territory. But I’d completely forgotten, until today, this piece I read over 10 years ago. Fitting, I suppose, that I should rediscover it now by typing a few words into Google, trying to find an article I vaguely remembered reading once about Gibson and eBay. As he says earlier in the piece quoted above, “We are mapping literally everything, from the human genome to Jaeger two-register chronographs, and our search engines grind increasingly fine.”

Names are important, powerful things. We need a name for this dimension that is the map turned out from itself, to be its own territorial reality. I’m not married to “cyberspace” — I’ll gladly call it something else.

What’s important to me is that we have a way to talk about it, so we can get better at the work of designing and making for it, and within it.

 

Note: Thanks to Andrea Resmini & Luca Rosati for involving me in their work on the upcoming book, Pervasive IA, from which I gleaned the reference to the Institute for the Future article I mentioned above.

Earlier I shared a post about designing context management, and wanted to add an example I’d seen. I knew I’d made this screenshot, but then couldn’t remember where; luckily I found it today hiding in a folder.

This little widget from Plaxo is the only example I’ve noticed where an online platform allows you to view information from different contextual points of view (other than very simple examples like “your public profile” and “preview before publish”).

Plaxo’s function actually allows you to see what you’re sharing with various categories of users with a basic drop-down menu. It’s not rocket science, but it goes miles further than most platforms for this kind of functionality.

If anybody knows of others, let me know?

Context Management

Note: a while back, Christian Crumlish & Erin Malone asked me to write a sidebar for a book they were working on … an ambitious tome of design patterns for social software. The book, (Designing Social Interfaces) was published last year, and it’s excellent. I’m proud to be part of it. Christian encouraged contributors to publish their portions online … I’m finally getting around to doing so.

In addition to what I’ve posted below, I’ll point out that there have been several infamous screw-ups with context management since I wrote this … including Google Buzz and Facebook’s Groups, Places and other services.

Also to add: I don’t think we need a new discipline for context management. To my mind, it’s just good information architecture.

——————

There was a time when we could be fairly certain of where we were at any given moment. Just looking at our surroundings would let us know if we were in a public park or a quiet library, a dance hall or a funeral parlor. And our actions and conversations could easily adapt to these contexts: in a library, we’d know not to yell “heads up” and toss a football, and we’d know to avoid doing the hustle during someone’s eulogy.

But as more and more of our lives are lived via the web, and the contexts we inhabit are increasingly made of digits rather than atoms, our long-held assumptions about reality are dissolving under our typing-and-texting fingertips.

A pre-web example of this problem is something most people have experienced: accidentally emailing with “reply all” rather than “reply.”  Most email applications make it brutally easy to click Reply All by accident. In the physical world in which we evolved, the difference between a private conversation and a public one required more physical effort and provided more sensory clues. But in an email application, there’s almost no difference:  the buttons are usually identical and only a few pixels apart.

You’d think we would have learned something from our embarrassments with email, but newer applications aren’t much of an improvement. Twitter, for example, allows basically the same mistake if you use “@” instead of “d.” Not only that, but you have to put a space after the “d.”

Twitter users, as of this writing, are used to seeing at least a few of these errors made by their friends every week, usually followed by another tweet explaining that it was a “mis-tweet” or cursing the d-vs-@ convention.

At least with those applications, it’s basically a binary choice for a single piece of data: one message goes either to a single recipient or to many; the contexts are straightforward, and relatively transparent. But on many popular social network platforms, the problem becomes exponentially more complicated.

Because of its history, Facebook is an especially good example. Facebook started as a social web application with a built-in context: undergraduates at Harvard. Soon it expanded to other colleges and universities, but its contextual architecture continued to be based on school affiliation. The power of designing for a shared real-world context allowed Facebook’s structure to assume a lot about its users: they would have a lot in common, including their ages, their college culture, and circles of friends.

Facebook’s context provided a safe haven for college students to express themselves with their peers in all their immature, formative glory; for the first time a generation of late-teens unwittingly documented their transition to adulthood in a published format. But it was OK, because anybody on Facebook with them was “there” only because they were already “there” at their college, at that time.

But then, in 2006 when Facebook opened its virtual doors to anyone 13 or over with an email address, everything changed.  Graduates who were now starting their careers found their middle-aged coworkers asking to be friends on Facebook. I recall some of my younger office friends reeling at the thought that their cube-mates and managers might see their photos or read their embarrassing teenage rants “out of context.”

The Facebook example serves a discussion of context well because it’s probably the largest virtual place to have ever so suddenly unhinged itself from its physical place. Its inhabitants, who could previously afford an assumed mental model of “this web place corresponds to the physical place where I spent my college years,” found themselves in a radically different place. A contextual shift that would have required massive physical effort in the physical world was accomplished with a few lines of code and the flip of a switch.

Not that there wasn’t warning. The folks who run Facebook had announced the change was coming. So why weren’t more people ready? In part because such a reality shift doesn’t have much precedent; few people were used to thinking about the implications of such a change. But also because the platform didn’t provide any tools for managing the context conversion.

This lack of tools for managing multiple contexts is behind some of the biggest complaints about Facebook and social network platforms (such as MySpace and LinkedIn). For Facebook, long-time residents realized they would like to still keep up their immature and embarrassing memories from college to share just with their college friends, just like before — they wanted to preserve that context in its own space. But Facebook provided no capabilities for segmenting the experience. It was all or nothing, for every “friend” you added. And then, when Facebook launched its News feed — showing all your activities to your friends, and those of your friends to you — users rebelled in part because they hadn’t been given adequate tools for managing the contexts where their information might appear. This is to say nothing of the disastrous launch of Facebook’s “Beacon” service, where all users were opted in by default to share information about their purchases on other affiliated sites.

On MySpace, the early bugbear was the threat of predator activity and the lack of privacy. Again, the platform was built with the assumption that users were fine with collapsing their contexts into one space, where everything was viewable by every “friend” added. And on LinkedIn, users have often complained the platform doesn’t allow them to keep legitimate peer connections separate from others such as recruiters.

Not all platforms have made these mistakes. The Flickr photo site has long distinguished between Family and Friends, Private and Public. LiveJournal, a pioneering social platform, has provided robust permissions controls to its users for years, allowing creation of many different user-and-group combinations.

However, there’s still an important missing feature, one which should be considered for all social platforms even as they add new context-creation abilities. It’s either impossible or difficult for users to review their profiles and posts from others’ point of view.

Giving users the ability to create new contexts is a great step, but they also need the ability to easily simulate each user-category’s experience of their space. If a user creates a “co-workers” group and tries to carefully expose only their professional information, there’s no straightforward way to view their own space using that filter. With the Reply All problem described earlier, we at least get a chance to proof-read our message before hitting the button. But most social platforms don’t even give us that ability.

This function — perhaps call it “View as Different User Type” — is just one example of a whole class of design patterns we still need for managing the mind-bending complexity we’ve created for ourselves on the web. There are certainly others waiting to be explored. For example, what if we had more than just one way to say “no thank you” to an invitation or request, depending on the type of person requesting? Or a way to send a friendly explanatory note with your refusal, thereby adding context to an otherwise cold interaction? Or what about the option to simply turn off whole portions of site functionality for some groups and not others? Maybe I’d love to get zombie-throwing-game invitations from my relatives, but not from people I haven’t seen since middle school?
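To make the “View as Different User Type” idea concrete, here’s a minimal sketch of an audience filter over a profile. The field names, groups, and data structure are all hypothetical; real platforms have far richer permission models:

```python
# Toy profile where each field carries its own audience list (hypothetical data).
PROFILE = {
    "name":         {"value": "Pat Example",              "visible_to": {"public", "friends", "co-workers"}},
    "college_pics": {"value": "[38 embarrassing photos]", "visible_to": {"friends"}},
    "resume":       {"value": "[work history]",           "visible_to": {"public", "co-workers"}},
    "relationship": {"value": "It's complicated",         "visible_to": {"friends"}},
}

def view_as(profile, audience):
    """Return only the fields the given audience group would actually see."""
    return {field: data["value"]
            for field, data in profile.items()
            if audience in data["visible_to"]}

# Simulate what a co-worker would see before exposing anything new:
print(view_as(PROFILE, "co-workers"))   # name and resume only; no college pics
```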

In the rush to allow everyone to do everything online, designers often forget that some of the limitations of physical life are actually helpful, comforting, and even necessary. We’re a social species, but we’re also a nesting species, given to having our little nook in the tribal cave. Maybe we should take a step back and think of these patterns not unlike their originator, Mr Alexander, did — how have people lived and interacted successfully over many generations? What can we learn from the best of those structures, even in the structureless clouds of cyberspace? Ideally, the result would be the best of both worlds: architectures that fit our ingrained assumptions about the world, while giving us the magical ability to link across divides that were impossible to cross before.

Here’s the presentation I did for the IA Summit 2009 in Memphis, TN. It’s an update of what I did for IDEA 2008; it’s not hugely different, but I think it pulls the ideas together a little better. The PDF is downloadable from SlideShare. The notes are legible only at full-screen or on the PDF.

David Weinberger’s most recent JOHO post shows us some thinking he’s doing about the history (and nature) of “information” as a concept.

The whole thing is great reading, so go and read it.

Some of it explores a point that I touched on in my presentation for IDEA earlier this month: that computers are very literal machines that take the organic, nuanced ambiguities of our lived experience and (by necessity) chop them up into binary “is or is not” data.

Bits have this symbolic quality because, while the universe is made of differences, those differences are not abstract. They are differences in a taste, or a smell, or an extent, or a color, or some property that only registers on a billion dollar piece of equipment. The world’s differences are exactly not abstract: Green, not red. Five kilograms, not ten. There are no differences that are only differences.

The example I gave at IDEA was how on Facebook, you have about six choices to describe the current romantic relationship you’re in: something that normally is described to others through contextual cues (a ring on your finger, the tone of voice and phrasing you use when mentioning the significant other in conversation, how you treat other people of your sig-other’s gender, etc). These cues give us incredibly rich textures for understanding the contours of another person’s romantic life; but Facebook (again, out of necessity) has to limit your choices to a handful of terms in a drop-down menu — terms that the system renders as mutually exclusive, by the fact that you can only select one.
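In data terms, that drop-down reduces the whole texture of a relationship to one value of a single-select enumeration. A toy sketch of my own (the exact labels don’t matter, and this is not Facebook’s actual data model):

```python
from enum import Enum

# One field, mutually exclusive values: you get to pick exactly one.
class RelationshipStatus(Enum):
    SINGLE = "Single"
    IN_A_RELATIONSHIP = "In a relationship"
    ENGAGED = "Engaged"
    MARRIED = "Married"
    ITS_COMPLICATED = "It's complicated"
    IN_AN_OPEN_RELATIONSHIP = "In an open relationship"

# The system stores just this one token; the tone of voice, the ring,
# the ambiguity -- everything else -- is discarded.
my_status = RelationshipStatus.ITS_COMPLICATED
```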

More and more of the substance of our lives is being housed, communicated & experienced (by ourselves and others) in the Network. And the Network is made of computers that render everything into binary choices. Granted, we’re making things more fine-grained in many systems, and giving people a chance to add more context, but that can only go so far.

Weinberger uses photography as an example:

We turn a visual scene into bits in our camera because we care about the visual differences at that moment, for some human motive. We bit-ify the scene by attending to one set of differences — visible differences — because of some personal motivation. The bits that we capture depend entirely on what level of precision we care about, which we can adjust on a camera by setting the resolution. To do the bit-ifying abstraction, we need analog equipment that stores the bits in a particular and very real medium. Bits are a construction, an abstraction, a tool, in a way that, say, atoms are not. They exist because they stand for something that is not made of bits.

All this speaks to the implications of Simulation, something I’m obsessing about lately as it relates especially to Context. (And which I won’t go into here… not another tangent!)

Dave’s example reminds me of something I remember Neil Young complaining about years ago (in Guitar Player magazine) in terms of what we lose when we put music into a digital medium. He likened it to looking out a screen door at the richly contoured world outside — but each tiny square in the screen turns what is seen through its confines into an estimated average “pixel” of visible information. In all that averaging, something vital is inevitably lost. (I couldn’t find the magazine interview, but I did find him saying something similar in the New York Times in 1997: “When you do an analog recording, and you take it to digital, you lose everything. You turn a universe of sounds into an average. The music becomes more abrupt and more agitating, and all of the subtleties are gone.”)
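The screen-door image maps pretty directly onto what simple averaging does to a signal. A tiny sketch of my own (not from the interview) showing detail disappearing into the average:

```python
# An exaggerated "signal" that alternates between low and high values.
signal = [0.0, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6]

# Average each pair of neighboring samples -- one "pixel" per screen square.
downsampled = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

print(downsampled)  # [0.45, 0.45, 0.45, 0.45] -- the jagged detail is gone;
                    # only a flat average survives.
```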

Of course, since that interview (probably 15 years ago) digital music has become much more advanced — reconstructing incredibly dense, high-resolution information about an analog original. Is that the answer, for the same thing that’s happening to our analog lives as they’re gradually soaked up by the great digital Network sponge? Higher and higher resolution until it’s almost real? Maybe. But in every case where we’re supposed to decide on an input to that system (such as which label describes our relationship), we’re being asked to turn something ineffable into language — not only our own, expressively ambiguous language, but the predefined language of a binary system.

Given that many of our lives are increasingly experienced and mediated via the digital layer, the question arises: to what degree will it change the way we think about identity, humanity, even love?

Context Collapse

First of all, I didn’t realize that Michael Wesch had a blog. Now that I’ve found it, I have a lot of back-reading to do.

But here’s a recent post on the subject of Context, as it relates to web-cams and YouTube-like expression. Digital Ethnography — Context Collapse

The problem is not lack of context. It is context collapse: an infinite number of contexts collapsing upon one another into that single moment of recording. The images, actions, and words captured by the lens at any moment can be transported to anywhere on the planet and preserved (the performer must assume) for all time. The little glass lens becomes the gateway to a blackhole sucking all of time and space – virtually all possible contexts – in upon itself.

By the way, I’m working on a talk on context for IDEA Conference. Are you registered yet?

When I first heard about the Kozinski story (some mature content in the story), it was on NPR’s All Things Considered. The interviewer spoke with the LA Times reporter, who went on about how the judge had “published” offensive material on a “public website.”

I won’t go into detail on the story itself. But I urge anyone to take the LA Times article with a grain or two of salt. Evidently, the thing got started when someone who had an ax to grind with the judge sent links and info to the media, and said media went on to make it all look as horrible as possible. However, the more we learn about the details in the case, the more it sounds like the LA Times is twisting the truth a great deal. **

To me, though, the content issue isn’t as interesting (or challenging) as the “public website” idea.

Basically, this was a web server with an IP and URL on the Internet that was intended for family to share files on, and whatever else (possibly email server too? I don’t know). It’s the sort of thing that many thousands of people run — I lease one of my own that hosts this blog. But the difference is that Kozinski (or, evidently, his grown son) set it up to be private for just their use. Or at least he thought he had — he didn’t count on a disgruntled individual looking beyond the “index” page (that clearly signaled it as a private site) and discovering other directories where images and what-not were listed.
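For anyone curious about the mechanics, the trap is easy to reproduce with any web server that auto-generates directory listings. Here’s a minimal sketch using Python’s built-in http.server; the directory layout is invented for illustration and has nothing to do with the actual server in the story:

```python
# Imagine a directory tree like this (names are made up):
#
#   site/
#     index.html   <- "This is a private family site."
#     stuff/       <- no index.html here
#       video1.mpg
#       joke.jpg
#
# Serve it the simplest possible way:

import http.server
import socketserver

PORT = 8000  # run this from inside the "site" directory

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    # "/" serves index.html and looks private; "/stuff/" has no index file,
    # so the handler auto-generates a listing of everything inside it.
    # Nothing on the index page links to /stuff/, but nothing stops a
    # visitor from typing the path either -- which is roughly what happened.
    httpd.serve_forever()
```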

Lawrence Lessig has a great post here: The Kozinski mess (Lessig Blog). He makes the case that this wasn’t a ‘public’ site at all, since it wasn’t intended to be public. You could only see this content if you typed various additional directories onto the base URL. Lessig likens it to having a faulty lock on your front door, and someone snooping in your private stuff and then telling about it. (Saying it was an improperly installed lock would be more accurate, IMHO.)

The comments on the page go on and on — much debate about the content and the context, private and public and what those things mean in this situation.

One point I don’t see being made (possibly because I didn’t read it all) is that there’s now a difference between “public” and “published.”

It used to be that anything extremely public — that is, able to be seen by more than just a handful of people — could only be there if it was published that way on purpose. It was impossible for more than just the people in physical proximity to hear you, see you or look at your stuff unless you put a lot of time and money into making it that way: publishing a book, setting up a radio or TV station and broadcasting, or (on the low end) using something like a CB radio to purposely send out a public signal (and even then, laws limited the power and reach of such a device).

But the Internet has obliterated that assumption. Now, we can do all kinds of things intended for a private context that unwittingly end up more public than we meant. By now almost everyone online has sent an email to more people than they intended, or accidentally sent a private note to everyone on Twitter. Or perhaps you’ve published a blog article you thought only a few regular readers would see, and then found out that others read it and were offended because they didn’t get the context.

We need to distinguish between “public” and “published.” We may even need to distinguish between various shades of “published” — the same way we legally distinguish between shades of personal injury — by determining intent.

There’s an informative thread over at Groklaw as well.

**About the supposedly pornographic content, I’ll only say that it sounds like there was no “pornography” as typically understood on the judge’s server, but only content that had accumulated from the many “bad-taste jokes” that get passed around the net all the time. That is, nothing more offensive than you’d see on an episode of Jackass or South Park. Whether or not that sort of thing is your cup of tea, and whether or not you think it is harmfully degrading to any segment of society, is certainly yours to decide. Some of the items described are things that I roll my eyes at as silly, vulgar humor, and then forget about. But describing a video (which is currently on YouTube) where an amorously confused donkey tries to mount a guy who was (inadvisedly) trying to relieve himself in a field as “bestiality” is pretty absurd. Monty Python it ain’t; but Caligula it ain’t either.