Human Systems


There are references everywhere — I saw it on the news while I was travelling — but here’s an article at USA Today:

IPSWICH, England — Tear down the traffic lights, remove the road markings and sell off the signs: Less is definitely more when it comes to traffic management, some European engineers believe.

They say drivers tend to proceed more cautiously on roads that are stripped of all but the most essential markings — and that helps cut the number of accidents in congested areas.

“It’s counterintuitive, but it works,” said urban planner Ben Hamilton-Baillie, who heads the British arm of a four-year European project, Shared Spaces, to test the viability of what some planners call “naked roads.”

I often get confused driving around trying to parse the many signs everywhere, and I wonder if they’re really helping — and marvel at how ugly they are. It didn’t occur to me that a more ‘zen’ approach might be better, and possibly even safer. It’s fascinating how, when you take away some of the cues, you force people to *think* as they drive. (As long as you leave just enough other cues to keep them somewhat managed.)

In some ways it’s kind of a wikipediazation of public roadway signage. Rather than dictating every move, put just enough of the right cues out there to get people to structure their own behavior appropriately.

Confessions of an Aca/Fan: The Official Weblog of Henry Jenkins: Confronting the Challenges of Participatory Culture: Media Education for the 21st Century (Part One)
Participatory Culture

For the moment, let’s define participatory culture as one:
1. With relatively low barriers to artistic expression and civic engagement
2. With strong support for creating and sharing one’s creations with others
3. With some type of informal mentorship whereby what is known by the most experienced is passed along to novices
4. Where members believe that their contributions matter
5. Where members feel some degree of social connection with one another (at the least they care what other people think about what they have created).

(via Terra Nova)

Two remarkable things get said in the recent Boing Boing post “Disney exec: Piracy is just a business model.”

First, Disney’s co-exec chair admits they’ve had an enlightened paradigm shift on piracy:

“We understand now that piracy is a business model,” said Sweeney, twice voted Hollywood’s most powerful woman by the Hollywood Reporter. “It exists to serve a need in the market for consumers who want TV content on demand. Pirates compete the same way we do — through quality, price and availability. We don’t like the model, but we realise it’s competitive enough to make it a major competitor going forward.”

Pretty amazing, that. They realize this isn’t so much criminal activity as it is the collective effort of their own customers emerging as a competitive entity, one that routes around the impediments of traditional media delivery.

But evidently she also said Disney’s strategy is primarily about content because “content drives everything else.” And Cory Doctorow (who posted this at BB) makes a stellar point:

Content isn’t king. If I sent you to a desert island and gave you the choice of taking your friends or your movies, you’d choose your friends — if you chose the movies, we’d call you a sociopath. Conversation is king. Content is just something to talk about.

I love that… he nails it.

For example, American Idol’s content isn’t what makes it a sensation — it’s the fact that it inspires conversations among people. It’s set up as a participatory exercise — regular people competing, and regular people voting. The same with sports, serial drama television and even video games. Content can be engineered to be more or less conducive to conversation — and I guess in that way it ‘drives everything else’ — but that has as much to do with the nuances of delivery (the ‘architecture’ of the content, if you will) as it does with the content itself.

And in that sense, you could see piracy as not only a business model, but another form of discourse. Piracy is a sort of conversation — people share things because they’re seeking social capital, influence, validation, or even just shared communal experience.

It’s pretty obvious to most people who watch users act and react that a lot of what they do is based on somewhat primal and/or emotionally driven impulses. And I’m sure there’s a lot of neuroscience out there that explains how this works, but I hadn’t encountered any until I read the article “Mind Games” in last week’s New Yorker.

Here are a couple of salient bits:

The first scenario [in the MRI study] corresponds to the theoretical ideal: investors facing a set of known risks. The second setup was more like the real world: the players knew something about what might happen, but not very much. As the researchers expected, the players’ brains reacted to the two scenarios differently. With less information to go on, the players exhibited substantially more activity in the amygdala and in the orbitofrontal cortex, which is believed to modulate activity in the amygdala. “The brain doesn’t like ambiguous situations,” Camerer said to me. “When it can’t figure out what is happening, the amygdala transmits fear to the orbitofrontal cortex.”

The results of the experiment suggested that when people are confronted with ambiguity their emotions can overpower their reasoning, leading them to reject risky propositions. This raises the intriguing possibility that people who are less fearful than others might make better investors . . .

Today, most economists agree that, left alone, people will act in their own best interest, and that the market will coördinate their actions to produce outcomes beneficial to all.

Neuroeconomics potentially challenges both parts of this argument. If emotional responses often trump reason, there can be no presumption that people act in their own best interest. And if markets reflect the decisions that people make when their limbic structures are particularly active, there is little reason to suppose that market outcomes can’t be improved upon.
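To make the study’s risk-versus-ambiguity distinction concrete, here’s a toy sketch. The framing (a maximally ambiguity-averse player who acts on the worst plausible odds) and the numbers are my own illustration, not the researchers’ actual model:

```python
# Toy illustration of risk vs. ambiguity (my own framing, not the study's model).
# Both bets pay $100 or nothing. Under risk, the probability of winning is a
# known 50%. Under ambiguity, the player only knows it lies somewhere
# between 20% and 80%.

def expected_value(p_win, payoff=100.0):
    """Expected dollar value of a bet that pays `payoff` with probability `p_win`."""
    return p_win * payoff

risky_bet = expected_value(0.50)       # known odds: $50.00

# An ambiguity-averse player evaluates the vague bet at its worst
# plausible probability, even though the average case is identical:
ambiguous_bet = expected_value(0.20)   # worst case in the 20-80% range: $20.00

print(risky_bet, ambiguous_bet)  # 50.0 20.0 — same payoffs, but ambiguity "feels" worse
```

Two bets with identical payoffs, and yet the vague one gets discounted — which is roughly what the amygdala activity in the scanner seems to be doing.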

Part of the article also describes how the researchers administered oxytocin (a hormone generated during pleasurable and intimate activities) via nasal inhalers. I have to quote this too because it’s so fascinating.

Trust plays a key role in many economic transactions, from buying a secondhand car to choosing a college. In the simplest version of the trust game, one player gives some money to another player, who invests it on his behalf and then decides how much to return to him and how much to keep. The more the first player invests, the more he stands to gain, but the more he has to trust the second player. If the players trust each other, both will do well. If they don’t, neither will end up with much money.

Fehr and his collaborators divided a group of student volunteers into two groups. The members of one group were each given six puffs of the nasal spray Syntocinon, which contains oxytocin, a hormone that the brain produces during breast-feeding, sexual intercourse, and other intimate types of social bonding. The members of the other group were given a placebo spray.

Scientists believe that oxytocin is connected to stress reduction, enhanced sociability, and, possibly, falling in love. The researchers hypothesized that oxytocin would make people more trusting, and their results appear to support this claim. Of the twenty-nine students who were given oxytocin, thirteen invested the maximum money allowed, compared with just six out of twenty-nine in the control group. “That’s a pretty remarkable finding,” Camerer told me. “If you asked most economists how they would produce more trust in a game, they would say change the payoffs or get the participants to play the game repeatedly: those are the standard tools. If you said, ‘Try spraying oxytocin in the nostrils,’ they would say, ‘I don’t know what you’re talking about.’ You’re tricking the brain, and it seems to work.”
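The payoff structure of that trust game is simple enough to sketch out. Here’s a minimal version — the endowment and multiplier values are my own illustrative assumptions, since the article doesn’t give the exact parameters Fehr’s group used:

```python
# Minimal sketch of the trust game's payoff structure. The endowment and
# multiplier are illustrative assumptions; the article doesn't specify them.

def trust_game(invested, returned, endowment=10, multiplier=3):
    """Player 1 invests part of an endowment; the experimenter multiplies it;
    Player 2 then decides how much of the multiplied pot to send back."""
    assert 0 <= invested <= endowment
    pot = invested * multiplier
    assert 0 <= returned <= pot
    player1 = endowment - invested + returned  # keeps the remainder, plus repayment
    player2 = pot - returned                   # keeps whatever isn't returned
    return player1, player2

# Mutual trust beats mutual distrust for both players:
print(trust_game(invested=10, returned=15))  # (15, 15) — both do well
print(trust_game(invested=0, returned=0))    # (10, 0)  — neither ends up with much
```

The tension is easy to see: Player 1 maximizes the joint pot by investing everything, but only if Player 2 can be trusted to split it fairly.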

I wonder what this tells us about how much focus we should place on the emotional responses people have to what we’ve designed — especially when it comes to systems they use to make important decisions, decisions they may feel anxious or confused about.

Also, I wonder what this means for information architecture specifically, since so much of our most basic daily work is about reducing semantic ambiguity. To what degree does the user’s emotional context affect their ability to reason through what we’re giving them? And, in a Heisenbergian twist, to what degree does the ambiguity of choice within the designed experience exacerbate the user’s emotional state?

For a year or so now, “innovation” has been bobbing around at the very top of the memepool. Everybody wants to bottle the stuff and mix it into their corporate water supplies.

I’ve been on the bandwagon too, I confess. It fascinates me — where do ideas come from and how do they end up seeing the light of day? How does an idea become relevant and actionable?

There’s a recent commercial for FedEx in which a group of pensive executives sits around a conference table: a salt-and-pepper-haired, square-jawed CEO (I assume) at the head of the group, and a weak-chinned, rumpled, dorky underling next to him. The CEO asks how they can cut costs (I’m paraphrasing), and the young dorky guy recommends one of FedEx’s new services. He’s ignored. But then the CEO says exactly the same thing, and everybody nods in agreement and congratulates him on his genius.

The whole setup is a big cliche. We’ve seen it time and again in sitcoms and elsewhere. But what makes this rendition different is how it points out the difference in delivery and context.

In looking for a transcript of this thing, I found another blog that summarizes it nicely, so I’ll point to it and quote here.

The group loudly concurs as the camera moves to the face of the worker who proposed the idea in the first place. Perplexed, he declares, “You just said what I just said only you did this,” as he mimics his boss’s hand motions.
The boss looks not at him, but straight ahead, and says, “No, I did this,” as he repeats his hand motion. The group of sycophants proclaims, “Bingo, Got it, Great.” The camera captures the contributor, who has a sour grimace on his face.

(Thanks Joanne Cini for the handy recap.)

What it also captures is the reaction of an older colleague sitting next to the grimacing dork-guy: he gives him a little nod showing a mixture of pity, complicity in what just happened, and a sort of weariness that seems to say, “Yeah, see? That’s how it works, young fella.”

It’s a particularly insightful bit of comedy. It lampoons the fact that so much of how ideas happen in a group environment depends on context, delivery, and perception (and here I’m going to pick on business, but it happens everywhere in slightly different flavors). Dork-guy not only doesn’t get the language that’s being used (physical and tonal), but doesn’t “see” it well enough to even be able to imitate it correctly. He doesn’t have the literacy in that language that the others in the room do, and feels suddenly as if he’s surrounded by aliens. Of course, they all perceive him as alien (or just clueless) as well.

I know I’m reading a lot into this slight character, but I can’t help it. By the way, I’m not trying to insult him by calling him dork-guy — it’s just the way he’s set up in the commercial; I think the dork in all of us identifies with him. I definitely do.

In fact, I know from personal experience that, in dork-guy’s internal value matrix, none of the posturing amounts to a hill of beans. He and his friends probably make fun of people who put so much weight on external signals — they think of it as a shallow veneer. Like most nerdy people, he assumes that your gestures, haircut or tone of voice don’t affect whether you win the chess match or not. But in the corporate game of social capital, “presence” is an essential part of winning.

Ok, so back to innovation. Among those who talk and think about innovation, there’s a tension between Collective Intelligence (CI) and Individual Genius (IG). To some degree there are those who favor one over the other, but I think most people who think seriously about innovation and try to do anything about it struggle with the tension within themselves. How do we create the right conditions for CI and IG to work in synergy?

The Collective Intelligence side has lots of things in its favor, especially lately. With so many collective, emergent activities happening on the Web, people now have the tools to tap into CI like never before — when else in history did we have the ability for people all over the world to collaborate almost instantaneously in rapid conversation, discussion and idea-vetting? Open Source philosophy and the “Wisdom of Crowds” have really found their moment in our culture.

I’m a big believer too, frankly. I’m not an especially rabid social constructivist, but I’m certainly a convert. Innovation (outside of the occasional bit that’s just for an individual privately) derives its value from communal context. And most innovations that we encounter daily were, in one way or another, vetted, refined and amplified by collaboration.

Still, I also realize that the Eureka Moments don’t happen in multiple minds all at once. There’s usually someone who blurts out the Eureka thought that catalyzes a whole new conversation from that “so perfect it should’ve been obvious” insight. Sometimes, of course, an individual can’t find anyone who hears and understands the Eureka thought, and their Individual Genius goes on its lonely course until either they do find the right context that “gets” their idea or it just never goes anywhere.

This tension between IG and CI is rich for discussion and theorizing, but I’m not going to do much of that here. It’s all just a very long setup for me to write down something that was on my mind today.

In order for individuals to care enough to have their Eureka thoughts, they have to be in a fertile, receptive environment that encourages that mindset. People new to a company often have a lot of that passion, but it can be drained away long before their 401(k) matching is vested. But is what these people are after personal glory? Well, yeah, that’s part of it. But they also want to be the person who thought of the thing that changed everybody’s lives for the better. They want to be able to walk around and see the results of that idea. Both of these incentives are crucial ingredients in the care and feeding of the delicate balance that brings forth innovation.

Take the FedEx commercial from above. The guy had the idea and he’ll see it executed. Why wouldn’t he be gratified to see the savings in the company’s bottom line and to see people happier? Because that’s only part of his incentive. The other part is for his boss, at the quarterly budget meeting, to look over and say, “X over there had a great idea to use this service, and look what it saved us; everybody give a round of applause to X!” A bonus or promotion wouldn’t hurt either, but public acknowledgement of an idea’s origins goes a very, very long way.

I’ve worked in a number of different business and academic environments, and they vary widely in how they handle this bit of etiquette. And it is a kind of etiquette. It’s not much different from what I did above, where I thanked the source of the text I quoted. Maybe it’s my academic experience that drilled this into me, but it’s just the right thing to do to acknowledge your sources.

In some of my employment situations, I’ve been in meetings where an idea I’ve been evangelizing for months finally emerges from the lips of one of my superiors, and it’s stated as if it just came to them out of the blue. Maybe I’m naive, but I usually assume the person just didn’t remember they’d heard it first from me. But even if that’s the case, it’s a failure of leadership. (I’ve heard it done not just to my ideas but to others’ too. I also fully acknowledge I could be just as guilty of this as anyone, because I’m relatively absent-minded, but I consciously work to be sure I point out how anything I do was supported or enhanced by others.) It’s a well-known strategy to subliminally get a boss to think something is his or her own idea in order to make sure it happens, but if that strategy is the rule rather than the exception, it’s a strong indicator of an unhealthy place for ideas and innovation (not to mention people).

But the FedEx commercial does bring a harsh lesson to bear — a lesson I’m still struggling to learn. No matter how good an idea is, it’s only as effective as the manner in which it’s communicated. Sometimes you have no control over this; it’s just built into the wiring. In the (admittedly exaggerated, but not by much) situation in the FedEx commercial, it’s obvious that most of dork-guy’s problem is that he works in a codependent culture full of sycophants who mollycoddle a narcissistic boss.

But perhaps as much as half of dork-guy’s problem is that he’s dork-guy. It’s possible that there are some idyllic work environments where everyone respects and celebrates the contributions of everyone else, no matter what their personal quirks. But chances are it’s either a kindergarten classroom or a non-profit organization. And I happen to be a big fan of both! I’m just saying: I’m learning that if you want to play in certain environments, you have to play by their rules, both written and unwritten. And I think we all know that the ratio of unwritten to written is something like ten to one.

In dork-guy’s company, sitting up straight, having a good haircut and a pressed shirt mean a lot. But what means even more is saying what you have to say with confidence and an air of calm inevitability. Granted, his boss probably would still steal the idea, but his colleagues would start thinking of him as a leader and, over time, maybe he’d manage to claw his way higher up the ladder. I’m not celebrating this worldview, by the way. But I’m not condemning it either. It just is. (Much has been written hither and yon about how gender and ethnicity complicate things even further; speaking with confidence as a woman can come off negatively in some environments, and for some cultural and ethnic backgrounds it would be very rude. Whole books cover this better than I can here, but it’s worth mentioning.)

Well, it may be a common reality, but it certainly isn’t the best way to get innovation out of a community of coworkers. In environments like that, great ideas flower in spite of where they are, not because of it. The sad thing is, too many workplaces assume that “oh we had four great ideas happen last year, so we must have an excellent environment for innovation,” not realizing that they’re killing off hundreds of possibly better seedlings in the process.

I’ve managed smaller teams on occasion, sometimes officially and sometimes not, but I haven’t been responsible for whole departments or large teams. Managing people isn’t easy. It’s damn hard. It’s easy for me to sit at my laptop and second-guess other people with responsibilities I’ve never shared. That said, sometimes I’m amazed at how ignorant and self-destructive some management teams can be as a group. They can talk about innovation or quality or whatever buzzword du jour, and they can institute all sorts of new activities, pronouncements and processes to further said buzzword, but do nothing about the major rifts in their own ranks that painfully hinder their workers from collaborating or sharing knowledge. They reinforce (either on purpose or unwittingly) cultural norms that alienate the eccentric-but-talented and give comfort to the bland-but-mediocre. They crow about thinking outside the box while perpetuating a hierarchical corporate system that’s one of the most primitive boxes around.

Ok, that last bit was a rant. Mea culpa.

My personal take-away from all this hand-wringing? I can’t blame the ‘system’ or ‘the man’ for anything until I’ve done an honest job of playing by the un/written rules of my environment. It’s either that, or play a new game. To me, it’s an interesting challenge if I look at it that way; otherwise it’s just disheartening. I figure either I’ll succeed or I’ll get so tired of beating myself against the cubicle partitions, I’ll give up and find a new game to play.

Still, eventually? It’d be great to change the environment itself. Maybe I should go stand in front of my bathroom mirror and practice saying that with authority? First, I have to starch my shirts.

Mao Mao Mao

There’s been a lot of buzz over the last week or so about Jaron Lanier’s “DIGITAL MAOISM: The Hazards of the New Online Collectivism” [http://edge.org/3rd_culture/lanier06/lanier06_index.html], in which he warns of a sort of irrational exuberance about “collective intelligence.”

I found myself taking mental notes as I read it, ticking off what I agreed and disagreed with and why. But then I read Douglas Rushkoff’s response:
http://edge.org/discourse/digital_maoism.html#rushkoff

And I realized he’d already expressed everything in my tick-list, and then some, and better than I would’ve.

Lanier’s essay and all the responses to it at Edge are excellent reading for anyone who thinks deeply about what the Internet means to the social fabric, culture, learning and history.

Just a couple of personal reactions:

I found myself feeling a little chastened reading Lanier’s essay. I already knew what it was about and was ready to disagree with most of his points, but I ended up realizing I had been guilty of some of the foolishness he calls us on, and agreeing with most of what he says.

But then I thought about what I’ve actually believed on the subject and realized I don’t think I’ve ever thought or said that the collective is superior to the individual — only that “architectures of participation” allow even more individuals to participate in the marketplace of ideas in ways they simply couldn’t have before. Lanier runs the risk of equating “collective intelligence” with “collectivism” — which is a bit like equating free-market capitalism with Social Darwinism (itself a misnomer).

His main bugbear is Wikipedia. I agree there’s too much hype and not enough understanding of the realities of Wikipedia’s actual creation, use and relevance. But I think that’ll sort itself out over time. It’s still very new. Wikipedia doesn’t replace (and never will) truly authoritative, peer-reviewed-by-experts information sources. Even if people currently reference it as if it were the highest authority, over time we’ll all learn to be more authority-literate and realize what’s OK to reference at Wikipedia and what isn’t. (Just as War of the Worlds tricked thousands in the earlier days of radio — you really couldn’t imagine that happening now, could you?)

One thing Lanier doesn’t seem to realize, though, is that Wikipedia isn’t faceless. Underneath its somewhat anonymous exterior is an underculture of named content creators who discuss, argue, compromise and whatever else in order to make the content that ends up on the site. Within that community, people *do* have recognizable personalities. In the constrained medium of textual threaded forums, some of them manage to be leaders who build consensus and marshal qualitative improvement. They’re far from anonymous, and the “hive” they’re part of is much closer to a meritocracy than Lanier seems to think.

Not that Wikipedia’s perfect, and not that it meets the qualifications of conventional “authoritative” information sources. But we’re all figuring out what the new qualifications are for this kind of knowledge-share.

At any rate, his essay is very good, and it raises questions we have to consider.

Back from the IA Summit, and my brain is full… brimming and spilling over.

One thing I came away with was a newly energized zeal to preach the wisdom of Information Architecture as a practice of creating digital spaces for people to collaborate, live, work and play in. The focus isn’t on the individual-to-interface interaction (or the individual-to-retrieved-information interaction), but on the interactions between the individual and other individuals or groups.

Focusing on tags or taxonomies or even “organization” itself is focusing on the particular raw materials we use to get the social-engineering result. A city planner’s job isn’t defined by “deciding where to put streets and sewers.” But knowing where those go is central to their job of making urban spaces conducive to particular kinds of living — commerce, residence, play, etc.

That is, an urban planner’s real focus is human systems. But the materials used to affect human systems are concrete, steel, electricity, signage, roads and the rest. Lots of specialties are required, and knowledge of many of them is necessary.

Anyway, I ran across this today: Here’s an Idea: Let Everyone Have Ideas – New York Times

The concept is maybe a little cheesy, but evidently it works. This software is essentially “an internal market where any employee can propose [an idea]. These proposals become stocks, complete with ticker symbols, discussion lists and e-mail alerts. Employees buy or sell the stocks, and prices change to reflect the sentiments of the company’s engineers, computer scientists and project managers — as well as its marketers, accountants and even the receptionist.”
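From that description, the mechanics might look something like this toy sketch — the class, the linear price rule, the ticker and all the other names here are my own assumptions, not the actual software the article describes:

```python
# Toy sketch of an internal "idea market." The price rule and all names are
# illustrative assumptions, not the actual product described in the article.

class IdeaMarket:
    def __init__(self):
        self.stocks = {}  # ticker symbol -> {"idea": ..., "price": ...}

    def propose(self, ticker, idea, opening_price=10.0):
        """Any employee can list an idea as a stock with its own ticker symbol."""
        self.stocks[ticker] = {"idea": idea, "price": opening_price}

    def trade(self, ticker, shares):
        """Positive shares = buy, negative = sell. The price tracks sentiment:
        each net share bought nudges it up; each share sold nudges it down."""
        stock = self.stocks[ticker]
        stock["price"] = max(0.0, stock["price"] + 0.1 * shares)

market = IdeaMarket()
market.propose("SHIP2", "Cross-dock spare parts instead of warehousing them")
market.trade("SHIP2", +40)   # the engineers pile in
market.trade("SHIP2", -5)    # the receptionist is skeptical
print(market.stocks["SHIP2"]["price"])  # 13.5 — sentiment is bullish
```

A real system would presumably match actual buy and sell orders rather than nudging the price a fixed amount per share, but even a rule this crude shows how a price can aggregate the sentiment of everyone from the engineers to the receptionist.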

It seems to me an excellent example of information architecture — creating this application to enable, encourage and refine the collective idea-making wisdom of a whole organization. Getting the labeling right in such an application isn’t the focus of “IA” anymore, to me. That’s taxonomy or controlled-vocabulary or interaction-design work — essential to the success of the architecture, of course.

But the Information Architecture is the larger issue of understanding what structures are made out of those materials (vocabularies, search tools, labels) to enable and encourage the human system inhabiting that structured environment.

In discussing some weird policies in the World of Warcraft online game, Cory Doctorow nicely articulates an important insight about environments like WoW:

Online games are incredibly, deeply moving social software that have hit on a perfect formula for getting players to devote themselves to play: make play into a set of social grooming negotiations. Big chunks of our brains are devoted to figuring out how to socialize with one another — it’s how our primate ancestors enabled the cooperation that turned them into evolutionary winners.

http://www.boingboing.net/2006/01/27/world_of_warcraft_do.html

Virtual worlds can have a deep emotional impact on people. This is as true of an old-fashioned BBS or discussion forum like The Well as it is of MMOGs (massively multiplayer online games) like the recently deceased Asheron’s Call 2.

Unfortunately, the more resources it takes to run a particular world, the more money it has to make. If it doesn’t stay in the black, it dies. Someone posted a sad little log of their last moments with friends in this world here.

Things like this intrigue me to no end. I realize that this wasn’t a truly real world that disappeared. That is, the people behind the avatars/characters they played are still alive, sitting at their screens. They had plenty of time to contact one another and make sure they could all meet again in some other game, so it wasn’t necessarily like a tragic sudden diaspora (though some people do go through such an experience if the world they’ve counted on has suddenly had the plug pulled).

Still, the human mind (and heart?) only needs a few things to make a virtual place feel emotionally significant, if not ‘real.’ Reading the log linked above, you see that the participants do have perspective on their reality, even if you think their pining is a little ren-faire cheesy. But they can’t help being attached to the places they formed friendships in, played and talked in, for so long. It seems a little like leaving college — if you made meaningful friendships there, you can never really go back to that context again, even if you keep up with friends afterward. Except instead of graduation, you stand in the quad and part of you “dies” along with the whole campus.

I think the discussion linked above about The Well articulates pretty well just what these kinds of communities can mean to people. Further discussion and inquiry goes on all over the ’net, including a site called “Project Daedalus” about the psychology of MMORPGs. (Edited to add: I also found a new publication called “Games & Culture” with at least one article specific to serious academic study of MMOGs. And I’m sure there are plenty more at places like Academic Gamers and Gamasutra.)

What Web 2.0 Means

I’m not much of a joiner. I’m not saying I’m too good for it. I just don’t take to it naturally.

So I tend to be a little Johnny-come-lately to the fresh stuff the cool kids are doing.

For example, when I kept seeing “Web 2.0” mentioned a while back, I didn’t really think about it much; I thought maybe I’d misunderstood. Since Verizon was telling me my phone could now do WAP 2.0, I wondered if it had something to do with that.

See? I’m the guy at the party who was lost in thought (wondering why the ficus in the corner looks like Karl Marx if you squint just right) and who looks up after everybody’s finished laughing at something and says, “What was that again?”

So, when I finally realize what the hype is, I tend to already be a little contrary, if only to rescue my pride. (Oh, well, that wasn’t such a funny joke anyway, I’ll go back to squinting at the ficus, thank you.)

After a while, though, I started realizing that Web 2.0 is a lot like the Mirror of Erised in the first Harry Potter novel. People look into it and see what they want to see, but it’s really just a reflection of their own desires. They look around at others and assume they all see the same thing. (This is just the first example I could think of for this trope: a common one in literature and especially in science fiction.)

People can go on for quite a while assuming they’re seeing the same thing, before realizing that there’s a divergence.

I’ve seen this happen in projects at work many times, in fact. A project charter comes out, and several stakeholders have their own ideas in their heads about what it “means” — sometimes it takes getting halfway through the project before it dawns on some of them that there are differences of opinion. On occasion they’ll assume the others have gone off the mark, rather than realizing that nobody was on the same mark to begin with.

I don’t want to completely disparage the Web 2.0 meme, only to be realistic about it. Unlike the Mirror of Erised (“Erised” is “desire” backwards), Web 2.0 is just a term, not even an object. So it lends itself especially well to multiple interpretations.

A couple of weeks ago, this post by Nicholas Carr went up: The amorality of Web 2.0. It’s generated a lot of discussion. Carr basically tries to stick a pin into the inflated bubble of exuberance around the dream of the participation model. He shows how Wikipedia isn’t actually all that well written or accurate, for example. He takes to task Kevin Kelly’s Wired article (referenced in my blog a few days ago) about the new dawning age of the collectively wired consciousness.

I think it’s important to be a devil’s advocate about this stuff when so many people are waxing religiously poetic (myself included at times). I wondered if Carr really understood what he was talking about at certain points — for example, doing a core sample of Wikipedia and judging the quality of the whole based on entries about Bill Gates and Jane Fonda sort of misses the point of what Wikipedia does in toto. (But in the comments to his post, I see he recognizes a difference between value and quality, and that he understands the problems around “authority” of texts.) Still, it’s a useful bit of polemic. One thing it helps us do is remember that the ‘net is only what we make it, and that sitting back and believing the collective conscious is going to head into nirvana without any setbacks or commercial influence is dangerously naive.

At any rate, all we’re doing with all this “Web 2.0” talk is coming to the realization that 1) the Web isn’t about a specific technology or model of browsing — all these methods and technologies are temporary or will evolve very quickly — and that 2) it’s not, at its core, really about buying crap and looking things up; it’s about connecting people with other people.

So I guess my problem with the term “Web 2.0” is that it’s actually about more than the Web. It’s about internetworking that reduces the inertia of time and space and creates new modes of civilization. Not utopian modes — just new ones. (And not even, really, that completely new — just newly global, massive and immediate for human beings.) And it’s not about “2.0” but about “n” where “n” is any number up to infinity.

But then again, I’m wrong. I can’t tell people what “Web 2.0” means because what it means is up to the person thinking about it. Because Web 2.0 is, after all, a sign or cypher, an avatar, for whatever hopes and dreams people have for infospace. On an individual level, it represents what each person’s own idiosyncratic obsessions might be (AJAX for one person, Wiki-hivemind for the next). And on a larger scale, for the community at large, it’s a shorthand way of saying “we’re done with the old model, we’re ready for a new one.” It’s a realization that, hey, it’s bigger and stranger than we realized. It’s also messy, and a real mix of mediocrity and brilliance. Just like we are.

Wladawsky-Berger writes about the big picture of the Internet and the rise of collaborative work … here he references a lecture he heard:

Irving Wladawsky-Berger: The Economic and Social Foundations of Collaborative Innovation

[In his lecture] Professor Benkler is essentially saying that collaborative innovation is a serious mode of economic production that has arisen because the Internet and related technologies and standards now permit large numbers of individuals to organize themselves for productive work, in a decentralized, non-market way. A similar argument has been made by Steven Weber, Professor of Political Sciences at UC Berkeley and Director of Berkeley’s Institute of International Studies, in his writings, and in particular in his recently published book The Success of Open Source.

This is an excellent article… makes a lot of great points.

I think one thing, though, that a lot of “cross-corporate collaboration” thinking is missing is that so many corporations need the same thing just within their own walls — cross-silo collaboration. Most major American corporations are like collections of companies with a shared logo.

This is a post I wrote only a couple of days after Katrina first hit the Gulf Coast (September 1, according to my timestamp), but I didn’t put it up because it seemed a little early to be opining about quasi-political technical philosophy in the midst of an emergency.

Now that I’m seeing others post about it (example here), I suddenly recalled my unpublished post… so here it is…

Stewart Brand and others have promoted the idea of open architectures, simple open systems, for meeting human needs more readily, efficiently and sustainably (and more humanely and intimately for that matter).

The Katrina situation shows how simple web structures that allow great emergence and complexity with social interaction can be useful in a pinch.

For example, the Katrina Help Wiki: Main Page – KatrinaHelp

There’s also the use of Craigslist for lost-and-found listings and housing coordination.

Craigslist is a beautiful example: it’s so open, easy to use, and simplified that it becomes the path of least resistance. People can check it easily on slow connections or even (I think) their phones — precisely why more commercial, glitzy, complex sites aren’t being used for the purpose.

Is Craigslist making money from it? Not directly. They make their money from paid job postings. But when New Orleans rebuilds and people need to hire workers, I wonder which site will occur to them first.

[Edited to add: Craig wrote in the comments that they “have no plans at all to charge in New Orleans… and have provided free job postings related to Katrina, and have actually lost money on that.” My point above was only that organizations that don’t take advantage of adversity, but show generosity, come out better in the end with more loyal and trusting constituents.]
