An unmoderated community is its own worst enemy

Here is a very interesting talk by Clay Shirky, the writer on Internet technologies and society.

In the talk, he points out that over the history of online collaborative groups using social software, a predictable pattern emerges time after time in open, unmoderated groups: the behaviour of the group will, if unchecked, subvert the purpose of the group.

He mentions three behaviours:

The first is that the group will drift away from discussing the purpose of the group into jokey, rowdy or flirtatious behaviour.

The second is that the group will drift away from discussing the purpose of the group into ranting about a “common enemy”.

The third is that the group will select a person, document or principle to “venerate” to the point where it becomes unchallengeable. The second and third behaviours are, I think, the reasons behind the silo mentality that creeps into online groups.

What happens as a result of these tendencies is that the conversation becomes trivial, entrenched or off-topic to the extent that the group cannot deliver its purpose. Either the group is shut down, or it stays at a trivial or entrenched level, or else it develops a system of internal governance, such as a moderator, tiers of contribution, a constitution or a charter. This system of internal governance acts to protect the aims of the group from the behaviours of the individuals (the group’s “own worst enemy”) and is generally the group’s only chance of survival.

Clay’s point is that you cannot separate the social issues within the group from the technological issues. The technology becomes a new playground on which the old battles are fought. And yet, he says, time and again organisations will introduce a new technology, expect certain behaviours to emerge as a result, and be surprised and frustrated that “the users don’t behave like they should”.

He concludes three things:

  1. As you can’t separate the social from the technical issues, ensure the group addresses the social issues from the start. This is where the bedrocks of Communities of Practice come in – the facilitator/moderator, the community charter, the behaviour ground rules.
  2. There will always be a core group. Clay calls these “members” as opposed to “users”, and they are the people who care about the purpose of the group.
  3. The core group has rights that trump the rights of the individual. Generally it is the core group that writes and “enforces” the charter.

I think this is an excellent reminder NOT to just introduce a social technology and “expect knowledge to be shared”. History has shown, time and again, that this does not happen.

Instead, you need to plan for the social dynamics: operate the group as a community of practice, appoint a core team and a leader from the start, and develop a charter that sets out the purpose of the group and lays down some ground rules to protect the group from its own worst enemy – itself.

View Original Source (nickmilton.com) Here.


Shared by: Nick Milton
