socialsoftware


From the point of view of a binary mindset, identity is a pretty simple thing. You, an object = [unique identifier]. You as an object represented in a database should be known by that identifier and none other, or else the data is a mess.

The problem is, people are a mess. A glorious mess. And identity is not a binary thing. It’s much more fluid, variegated and organic than we are comfortable admitting to ourselves.
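The contrast between the two mindsets can be sketched in a few lines of code (a purely illustrative toy; all class, field, and facet names here are hypothetical):

```python
# Illustrative toy: the "binary mindset" models a person as one record with
# one canonical name, while a more humane model treats identity as a set of
# context-dependent facets. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class BinaryUser:
    """One person = one identifier = one name, everywhere."""
    user_id: str
    real_name: str


@dataclass
class FacetedUser:
    """One person, many context-dependent presentations."""
    user_id: str
    facets: dict = field(default_factory=dict)  # context -> display name

    def name_in(self, context: str) -> str:
        # Fall back to a default facet rather than forcing one "real" name.
        return self.facets.get(context, self.facets["default"])


binary = BinaryUser("u42", "Jane Doe")
faceted = FacetedUser("u42", facets={
    "default": "Jane Doe",
    "gaming-forum": "nightowl",
    "support-group": "quiet_lurker",
})

print(binary.real_name)                 # the same name in every context
print(faceted.name_in("gaming-forum"))  # a pseudonym where it matters
print(faceted.name_in("book-club"))     # the default name elsewhere
```

The second model is messier to store and query, which is exactly the point: the mess is the person, not a data-quality bug.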

Lately there’s been some controversy over policies at Facebook and the newly ascendant Google+ that demand people use their “real” names. Both companies have gone so far as to pull the plug on accounts they suspect of not following those guidelines.

But this is actually a pretty wrong-headed thing to do. Not only does the marketplace of ideas have a long, grand tradition of the use of pseudonyms (see my post here from a couple years ago), but people have complex, multifaceted lives that often require they not put their “public identification attribute” (i.e. their ‘real name’) out there on every expression of themselves online.

There are a lot of stories emerging, such as this one about gender-diverse people who feel at risk having to expose their real names, that are showing us the canaries in the proverbial coal mine — the ones first affected by these policies — dropping off in droves.

But millions of others will feel the same pressures in more subtle ways too. Danah Boyd has done excellent work on this subject, and her recent post explains the problem as well as anyone, calling the policies essentially an “abuse of power.”

I’m sure it comes across as abusive, but I do think it’s mostly unwitting. I think it’s a symptom of an engineering mindset (object has name, and that name should be used for object) and a naive belief in transparency as an unalloyed “good.” But on an internet where your name can be searched and found in *any* context in which you have ever expressed yourself, what about those conversations you want to be able to have without everyone knowing? What about the parts of yourself you want to be able to explore and discover using other facets of your personality? (Sherry Turkle’s early work is great on this subject.)

I can’t help but think a Humanities & Social Sciences influence is so very lacking among the code-focused, engineering-cultured wizards behind these massive information environments. There’s a great article by Paul Adams, formerly of Google (and Google+), discussing the social psychology angle and how it influenced “Circles,” how Facebook got it somewhat wrong with “Groups,” and why he ended up at Facebook anyway. But voices like his seem to be in the minority among those who are actually making this stuff.

Seeing people as complex coalescences of stories, histories, desires, relationships and behaviors means giving up on a nice, entity-relationship-diagram-friendly way of seeing the world. It means having to work harder on the soft, fuzzy complicated stuff between people than the buckets you want people to put themselves in. We’re a long way from a healthy, shared understanding of how to make these environments human enough.

UPDATE:
I realize now that I neglected to mention the prevailing theory of why platforms are requiring real names: marketing purposes. That could very well be. But that, too, is just another cultural force in play. And I think there’s a valid topic to be addressed regarding the binary-minded approach to handling things like personal identity.

There’s an excellent post on the subject at The Atlantic. It highlights a site called My Name is Me, which describes itself as “Supporting your freedom to choose the name you use on social networks and other online services.”

Note: a while back, Christian Crumlish & Erin Malone asked me to write a sidebar for a book they were working on … an ambitious tome of design patterns for social software. The book, (Designing Social Interfaces) was published last year, and it’s excellent. I’m proud to be part of it. Christian encouraged contributors to publish their portions online … I’m finally getting around to doing so.

In addition to what I’ve posted below, I’ll point out that there have been several infamous screw-ups with context management since I wrote this … including Google Buzz and Facebook’s Groups, Places and other services.

Also to add: I don’t think we need a new discipline for context management. To my mind, it’s just good information architecture.

——————

There was a time when we could be fairly certain where we were at any given time. Just looking at one’s surroundings would let us know if we were in a public park or a quiet library, a dance hall or a funeral parlor. And our actions and conversations could easily adapt to these contexts: in a library, we’d know not to yell “heads up” and toss a football, and we’d know to avoid doing the hustle during someone’s eulogy.

But as more and more of our lives are lived via the web, and the contexts we inhabit are increasingly made of digits rather than atoms, our long-held assumptions about reality are dissolving under our typing-and-texting fingertips.

A pre-web example of this problem is something most people have experienced: accidentally emailing with “reply all” rather than “reply.”  Most email applications make it brutally easy to click Reply All by accident. In the physical world in which we evolved, the difference between a private conversation and a public one required more physical effort and provided more sensory clues. But in an email application, there’s almost no difference:  the buttons are usually identical and only a few pixels apart.

You’d think we would have learned something from our embarrassments with email, but newer applications aren’t much of an improvement. Twitter, for example, allows basically the same mistake if you use “@” instead of “d.” Not only that, but you have to put a space after the “d.”

Twitter users, as of this writing, are used to seeing at least a few of these errors made by their friends every week, usually followed by another tweet explaining that it was a “mis-tweet” or cursing the d-vs-@ convention.

At least with those applications, it’s basically a binary choice for a single piece of data: one message goes to either one recipient or many; the contexts are straightforward, and relatively transparent. But on many popular social network platforms, the problem becomes exponentially more complicated.

Because of its history, Facebook is an especially good example. Facebook started as a social web application with a built-in context: undergraduates at Harvard. Soon it expanded to other colleges and universities, but its contextual architecture continued to be based on school affiliation. The power of designing for a shared real-world context allowed Facebook’s structure to assume a lot about its users: they would have a lot in common, including their ages, their college culture, and circles of friends.

Facebook’s context provided a safe haven for college students to express themselves with their peers in all their immature, formative glory; for the first time a generation of late-teens unwittingly documented their transition to adulthood in a published format. But it was OK, because anybody on Facebook with them was “there” only because they were already “there” at their college, at that time.

But then, in 2006 when Facebook opened its virtual doors to anyone 13 or over with an email address, everything changed.  Graduates who were now starting their careers found their middle-aged coworkers asking to be friends on Facebook. I recall some of my younger office friends reeling at the thought that their cube-mates and managers might see their photos or read their embarrassing teenage rants “out of context.”

The Facebook example serves a discussion of context well because it’s probably the largest virtual place to have ever so suddenly unhinged itself from its physical place. Its inhabitants, who could previously afford an assumed mental model of “this web place corresponds to the physical place where I spent my college years,” found themselves in a radically different place. A contextual shift that would have required massive physical effort in the physical world was accomplished with a few lines of code and the flip of a switch.

Not that there wasn’t warning. The folks who run Facebook had announced the change was coming. So why weren’t more people ready? In part because such a reality shift doesn’t have much precedent; few people were used to thinking about the implications of such a change. But also because the platform didn’t provide any tools for managing the context conversion.

This lack of tools for managing multiple contexts is behind some of the biggest complaints about Facebook and social network platforms (such as MySpace and LinkedIn). For Facebook, long-time residents realized they would like to still keep up their immature and embarrassing memories from college to share just with their college friends, just like before — they wanted to preserve that context in its own space. But Facebook provided no capabilities for segmenting the experience. It was all or nothing, for every “friend” you added. And then, when Facebook launched its News feed — showing all your activities to your friends, and those of your friends to you — users rebelled in part because they hadn’t been given adequate tools for managing the contexts where their information might appear. This is to say nothing of the disastrous launch of Facebook’s “Beacon” service, where all users were opted in by default to share information about their purchases on other affiliated sites.

On MySpace, the early bugbear was the threat of predator activity and the lack of privacy. Again, the platform was built with the assumption that users were fine with collapsing their contexts into one space, where everything was viewable by every “friend” added. And on LinkedIn, users have often complained the platform doesn’t allow them to keep legitimate peer connections separate from others such as recruiters.

Not all platforms have made these mistakes. The Flickr photo site has long distinguished between Family and Friends, Private and Public. LiveJournal, a pioneering social platform, has provided robust permissions controls to its users for years, allowing creation of many different user-and-group combinations.

However, there’s still an important missing feature, one which should be considered for all social platforms even as they add new context-creation abilities. It’s either impossible or difficult for users to review their profiles and posts from others’ point of view.

Giving users the ability to create new contexts is a great step, but they also need the ability to easily simulate each user-category’s experience of their space. If a user creates a “co-workers” group and tries to carefully expose only their professional information, there’s no straightforward way to view their own space using that filter. With the Reply All problem described earlier, we at least get a chance to proof-read our message before hitting the button. But most social platforms don’t even give us that ability.
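Such a preview could work roughly like this (a minimal hypothetical sketch, not any real platform’s API; the field names, group names, and data layout are all invented for illustration):

```python
# Hypothetical sketch of a "View as Different User Type" feature: each
# profile field is tagged with the audience groups allowed to see it, and
# the owner can preview the profile exactly as any group would see it.

PROFILE = {
    "name":         {"value": "Alex",       "audiences": {"public", "friends", "co-workers"}},
    "job_title":    {"value": "Designer",   "audiences": {"friends", "co-workers"}},
    "party_photos": {"value": "album-2006", "audiences": {"friends"}},
}


def view_as(profile: dict, audience: str) -> dict:
    """Return only the fields visible to the given audience group."""
    return {
        fname: entry["value"]
        for fname, entry in profile.items()
        if audience in entry["audiences"]
    }


# The owner previews their own profile through the "co-workers" filter;
# the embarrassing college album stays hidden.
print(view_as(PROFILE, "co-workers"))
```

The design point is that the same visibility filter the platform applies when rendering a page for others is exposed to the owner as a first-class preview tool.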

This function — perhaps call it “View as Different User Type” — is just one example of a whole class of design patterns we still need for managing the mind-bending complexity we’ve created for ourselves on the web. There are certainly others waiting to be explored. For example, what if we had more than just one way to say “no thank you” to an invitation or request, depending on the type of person requesting? Or a way to send a friendly explanatory note with your refusal, thereby adding context to an otherwise cold interaction? Or what about the option to simply turn off whole portions of site functionality for some groups and not others? Maybe I’d love to get zombie-throwing-game invitations from my relatives, but not from people I haven’t seen since middle school?

In the rush to allow everyone to do everything online, designers often forget that some of the limitations of physical life are actually helpful, comforting, and even necessary. We’re a social species, but we’re also a nesting species, given to having our little nook in the tribal cave. Maybe we should take a step back and think of these patterns not unlike their originator, Christopher Alexander, did — how have people lived and interacted successfully over many generations? What can we learn from the best of those structures, even in the structureless clouds of cyberspace? Ideally, the result would be the best of both worlds: architectures that fit our ingrained assumptions about the world, while giving us the magical ability to link across divides that were impossible to cross before.

Erin Malone points to an article on the challenges of managing the Flickr community in the SF Chronicle:

"People bring their human relationships to Flickr, and we end up having to police them," Champ says. …

Lest your inner libertarian object to such interventions, Champ is quick to correct the idea that the community would ultimately find its own balance.

"The amount of time it would take for the community to self-regulate — I don't think it could sustain itself in the meantime," she says. "Anyway, I can't think of any successful online community where the nice, quiet, reasonable voices defeat the loud, angry ones on their own."

This struck me as uncannily relevant to what’s going on right now in the US economy.

Once social platforms like Flickr reach a certain size, they really do become a weird amalgam of City & Economy, and they require governance. Heather Champ (Flickr’s estimable community manager) points out that, even if you truly believe a collective crowd like this will self-regulate, much damage will be done on the way to finding that balance.

Isn’t that precisely the perennial tension we have in terms of free-market economics?

It seems to me that User Experience design is increasingly needing to learn from Economics and Political Science — and it may even have a thing or two to teach them, as well.

I have lots of thoughts on this, but too many to get down here … just wanted to bring it up because I think it’s so damned fascinating.

This excellent report came out a couple of weeks ago. It shows that the ubiquity and importance of video games, and game culture, is even bigger than many of us imagined. I explored some of this in a presentation a few years ago: Clues to the Future. I’m itching to keep running with some of those ideas, especially now that they’re being taken more seriously in business & technology circles (not by my doing, of course, but just from increased exposure in mainstream publications and the like).

Pew Internet: Teens, Video Games and Civics

The first national survey of its kind finds that virtually all American teens play computer, console, or cell phone games and that the gaming experience is rich and varied, with a significant amount of social interaction and potential for civic engagement….

The primary findings in the survey of 1,102 youth ages 12-17 include:

* Game playing is universal, with almost all teens playing games and at least half playing games on a given day.
* Game playing experiences are diverse, with the most popular games falling into the racing, puzzle, sports, action and adventure categories.
* Game playing is also social: most teens play games with others at least some of the time, and play can incorporate many aspects of civic and political life.

I’m especially interested in the universality of game playing. It reinforces more than ever the idea that the language of games is going to be an increasingly universal language. The design patterns, goal-based behaviors, playfulness — these are things that have to be considered over the next 5-10 years as software design accommodates these kids as they grow up.

The social aspect is also key: we have an upcoming generation that expects their online & software-based experiences to integrate into their larger lives; they don’t assume that various applications and contexts are separate, and feel pleasantly surprised (or disturbed) to discover they’re connected. They’ll have a different set of assumptions about connectedness.

There’s been a lot of writing here and there about social networks and privacy, but I especially like how this professor from UNCG (one of my alma maters) puts it in this article from the Washington Post:

“It’s the postmodern nightmare — to have all of your selves collide,” says Rebecca G. Adams, a sociologist at the University of North Carolina at Greensboro who edits Personal Relationships, the journal of the International Association for Relationship Research. … “If you really welcome all of your friends from all of the different aspects of your life and they interact with each other and communicate in ways that everyone can read,” Adams says, “you get held accountable for the person you are in all of these groups, instead of just one of them.”

It’s a pretty smart article. I also liked the phrase “participatory surveillance” for describing what happens socially online.

Chris Brogan has a great post about 100 Personal Branding Tactics Using Social Media, with some helpful tips on creating that thing we keep hearing about: the “Personal Brand.”

I’ve always struggled with this, though. I’ve been doing this “blogging” thing a long time. In fact, my first “home page” was a text-only index file. Why? Because there weren’t any graphical Web browsers yet. And even once there were, the only people who were online to look at any such thing were net-heads like myself. There was already a sense of informality and mutual understanding, and “netizens” seemed to prize a level of authenticity above almost anything else. Anything that looked like a personal “brand” was suspect.

[Image: cattle brand]

So, something about the DNA of my initial forays into personal expression on the ‘net has stuck with me. Namely, that it’s my little corner of the world, where I say what’s on my mind, take it or leave it, with very little concern about my brand or what-not. I am not saying this is a good thing. It just is.

Over the years, though, I’ve become more conscious of the shift in context. It’s like I had a little corner lot in a small town, with a ramshackle house and flotsam in the yard, and ten years later I look out to see somebody developed a new subdivision around me, with McMansions, chemically enhanced lawns, and joggers wearing those special clothes that you only wear if you’re really *into* jogging. You know what I mean.

And now I’m just not sure where my blog stands in all this. I don’t keep up with it often, but if I do it’s not because I’ve set a goal for myself, it’s just because my brainfartery is more active (and long-form) than usual. I feel the need to have a more polished, disciplined blog-presence, with the right trimmings … but then I’d miss having this thing here. And I know for a fact that if I had both, I’d be so short-circuited about which I should post on, I’d end up doing nothing with either of them.

Or maybe I’m just lazy?

Note: One of Brogan’s awesome tips is to add some visual interest with each post; hence a CC licensed image from mharrsch.

Hey, I’m Andrew! You can read more about who I am on my About page.

If I had a “Follow” button on my forehead, and you met me in person and pushed that button, I’d likely give you a card that had the following text written upon it:

Here’s some explanation about how I use Twitter. It’s probably more than you want to read, and that’s ok. This is more a personal experiment in exploring network etiquette than anything else. If you’re curious about it and read it, let me know what you think?

Disclaimers

  • I use Twitter for personal expression & connection; self-promotion & “personal brand” not so much (that’s more my blog’s job, but even there not so much).
  • I hate not being able to follow everyone I want to, but it’s just too overwhelming. There’s little rhyme/reason to whom I follow or not. Please don’t be offended if I don’t follow you back, or if I stop following for a while and then start again, or whatever. I’d expect you to do the same to me. All of you are terribly interesting and awesome people, but I have limited attention.
  • Please don’t assume I’ll notice an @ mention within any time span. I sometimes go days without looking.
  • Direct-messages are fine, but emails are even better and more reliable for most things (imho).
  • If you’re twittering more than 10 tweets a day, I may have to stop following just so I can keep up with other folks.
  • If you add my feed, I will certainly check to see who you are, but if there’s zero identifying information on your profile, why would I add you back?

A Few Guidelines for Myself (that I humbly consider useful for everybody else too ;-)

  • I’ll try to keep tweets to about 10 or less a day, to avoid clogging my friends’ feeds.
  • I’ll avoid doing scads of “@” replies, since Twitter isn’t a great conversation mechanism, but is pretty ok as an occasional comment-on-a-tweet mechanism.
  • I won’t use any automated mechanism to track who “unfollows” me. And if I notice you dropped me, I won’t think about it much. Not that I don’t care; just seems a waste of time worrying about it.
  • I won’t try to game Twitter, or work around my followers’ settings (such as defeating their @mentions filter by putting something before the @, forcing them to see replies they’d otherwise not have to skip).
  • I’ll avoid doing long-form commentary or “live-blogging” using Twitter, since it’s not a great platform for that (RSS feed readers give the user the choice to read each poster’s feed separately; Twitter feed readers do not, and allow over-tweeting to crowd out other voices on my friends’ feeds.)
  • I’ll post links to things only now and then, since I know Twitter is very often used in (and was intended for) mobile contexts that often don’t have access to useful web browsers; and when I do, I’ll give some context, rather than just “this is cool …”
  • I will avoid using anything that automatically Tweets or direct-messages through my account; these things simply offend me (e.g. if I point to a blog post of mine, I’ll actually type a freaking tweet about it).
  • In spite of my best intentions, I’ll probably break these guidelines now and then, but hopefully not too much, whatever “too much” is.

Thanks for indulging my curmudgeonly Twitter diatribe. Good day!

I wasn’t aware there was such debate over what makes a blog a blog, and a wiki a wiki. But Jordan Frank over at Traction Software makes a sensible distinction, one that I could’ve sworn everybody took for granted?

What is a Blog? A Wiki?

And that, finally, brings me to a baseline definition for both blogs and wikis:
A system for posting, editing, and managing a collection of hypertext pages (generally pertaining to a certain topic or purpose)…
Blog: …displayed as a set of pages in time order…
Wiki: …displayed by page as a set of linked pages…
…and optionally including comments, tags or categories or labels, permalinks, and RSS (or other notification mechanisms) among other features.
Both “blog” and “wiki” style presentations can make pages editable by a single individual or editable by a group (where group can include the general public, people who register, or a selected group). In the enterprise context, more advanced version control, audit trail, display flexibility, search, permission controls, and IT integration hooks may also be present.
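Jordan’s baseline definition can be illustrated with a toy data model (a hypothetical sketch, not any real platform’s code): the same collection of hypertext pages is a “blog” when displayed in time order and a “wiki” when displayed as a set of linked pages.

```python
# Toy sketch of the definition above: one page collection, two presentations.
# The page data and structure are invented for illustration.

from datetime import date

pages = [
    {"title": "Hello World", "posted": date(2008, 1, 5), "links": []},
    {"title": "About",       "posted": date(2008, 1, 1), "links": ["Hello World"]},
    {"title": "Follow-up",   "posted": date(2008, 2, 9), "links": ["About"]},
]


def as_blog(pages):
    """Blog view: the same pages, displayed in reverse time order."""
    return [p["title"] for p in sorted(pages, key=lambda p: p["posted"], reverse=True)]


def as_wiki(pages, start):
    """Wiki view: the same pages, reached by following links from a start page."""
    by_title = {p["title"]: p for p in pages}
    seen, stack, order = set(), [start], []
    while stack:
        title = stack.pop()
        if title in seen or title not in by_title:
            continue
        seen.add(title)
        order.append(title)
        stack.extend(by_title[title]["links"])
    return order


print(as_blog(pages))               # ['Follow-up', 'Hello World', 'About']
print(as_wiki(pages, "Follow-up"))  # ['Follow-up', 'About', 'Hello World']
```

Nothing about the storage changes between the two views; only the traversal does, which is why the same underlying system can often be presented either way.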

He goes into the history of various debates over the terms, which I found enlightening. Mainly because they show that people invest the idea of “blog” or “wiki” with lots of philosophical and political baggage and emotional resonance.

Evidently some folks believed “A BLOG is what it is because it allows comments and conversation!” But that seems silly to me, since to some degree the grandfather of blogs was “Robot Wisdom” where a slightly obsessive polymath simply posted quick links (a “log” — like a ship captain’s log — of his travels on the web, hence “web log”) and little one-line comments on them. I’m happy to see that, as of this moment, he’s still at it. And it doesn’t have any comment capability whatsoever.

In fact, it’s very lean on opinion or exposition of any kind! But it is, in essence, what Jordan defines above — a system for posting a collection of pages (or, I would actually say, ‘entries’) in time order. Quintessential “weblogness.”

Now, I suppose some could argue that somewhere between “weblog” and the truncated nickname “blog” things shift, and blogs are properly understood as something more discursive? But I don’t think so. I think the DNA of a blog means it’s essentially a series of posts giving snapshots of what is on the mind of the blog’s writer, both posted and presented in chronological order. That might be a ‘collective’ writer — a group blog. But it’s what it is, nonetheless.

But that doesn’t mean the emotional attachment, philosophical significance and political impact aren’t just as important — they’re just not part of the definition. :-)

[Edited to add: while it's true that a wiki & blog *can* both make pages editable by one author or a group, in *practice* a blog tends to be about individual voices writing "posts" identified with author bylines, while a wiki tends to be about multiple authors writing each "article" through aggregated effort. Blogs & wikis started with these uses in their DNA, and the vast majority of them follow this pattern. For example, most blog platforms display the name of a post's author by default, while most wikis don't bother displaying author names on articles, because there's an assumption the articles will be written & refined over time by multiple users.]