Last time we talked about user content, and how it would inevitably have to be brought in and properly managed. Now we’ll look at the next big thing we’ll surely be talking about in ten or fifteen years: Population Control.
This one is funny because it immediately gets people’s panties in a bunch, and forces them to think. That’s why I like it so much. It’s also a bit more nebulous and theoretical than user content, so we’ll have to draw from a few places in order to offer something presentable: namely, the history and evolution of both the real world and virtual worlds, human nature, and a nice dose of prophet complex.
Different as they might be, the real world in which we have lived (one way or another) for millions of years shares a lot of similarities with the virtual worlds, games and spaces we’re now creating day in and day out. When you distill those similarities into easily understandable points, you end up with a little list that shows how both the real and the virtual worlds:
– Are populated by human minds.
– Sometimes share, depending on how the virtual world is made, common rules and a common set of observable phenomena.
– Essentially have finite resources.
– Have populations that grow steadily when conditions are favorable, and stagnate when conditions are unfavorable.
– Have repeating patterns, created by their respective populations, that can be observed time and time again. Sometimes these patterns are essentially the same in both.
… and so on. Once we understand this, it’s only one more step to understand the issue of population control in virtual worlds, and how it will become first relevant, and then probably essential.
In any social group, real or virtual, two things can be easily observed: One, that there will always be a segment of that group that will not follow the established rules, to different degrees. Two, that as the size of the group increases, the size of the unruly segment increases proportionally as well. This is easy to understand and we see it every day; there will always be bad apples, and if in a group of 100 people you have 10 bad apples, it’s very likely that in a group of 1000 you’d have 100 bad apples. Statistically, if nothing else. This is nothing new, it’s human nature as we have been observing it for thousands of years, and it has translated almost verbatim to the context of virtual worlds.
So we’re beginning to see the problem we’ll have in ten or fifteen years’ time: to be blunt, the larger your virtual world and the more people it holds, the more bad apples you’ll have around, until the number of bad apples by itself outgrows your capacity to deal with them using current methods. Even a casual observation of the real world supports this: the Police Department in, say, a city like New York pretty much has to be larger and consume more resources than the Police Department in a community like, say, Punxsutawney, PA.
Small communities (even virtual ones) are quite good at moderating themselves and keeping their bad apples under control; crime is lower in smaller communities by their very nature, and you don’t even have to bring the socioeconomic context into the picture to see why. But there comes a point, as population increases and communities grow, where self-regulation breaks. The number of bad apples simply outgrows the community’s capacity to deal with them.
This fact was the very basis for the creation of the first proto-Police arrangements in Greece, Rome and China: their communities grew too numerous to keep order by themselves and guarantee the law was followed, so a special group of citizens (or sometimes even slaves) was tasked with doing it. In time the role became more professionalized and structured, but that was (and is) the principle: active action beyond self-moderation, so the community doesn’t cease to function completely. We empower some select and (hopefully) able individuals to enforce the law, so the community can keep its cohesion and continue functioning without being unreasonably burdened by the problem.
But, I hear you say, aren’t GMs in virtual worlds already the Police Department equivalent? I’d say yes and no. Yes, in the sense that they are the enforcers of the law: your common, garden-variety virtual world citizen or player is simply not given the authority, nor the means, to enforce virtual law. However, if we consider GMs the Police Department equivalent, we’d also have to concede that more often than not they are woefully understaffed (your typical server population-to-GM ratio is not very conducive to effective and expedient enforcement) and that they are, by default, a reactive entity instead of an active one.
At some point in the future, the increased size of our virtual worlds will prompt a reevaluation of the GM role as we understand it now, and we’ll probably see it transformed into something closer to a Police Officer of sorts. Moderation and the application of the community’s virtual law will have to be brought closer to the citizens almost by necessity, as mandated by the community’s size; the nebulous, invisible moderation we have now will no longer cut it. As population grows, specific and repeated incidents will necessitate, for example, the assignment of more or less permanent, dedicated law enforcement to particular spots and situations, to prevent trouble by sheer presence. For example: a gang of bad apples is forcibly keeping other players from a certain area or resource, and is strong enough that citizen action cannot dislodge them. Under current standards of moderation, we have to wait until (x) number of complaints is reached, an investigation takes place, appropriate measures are conceived and taken, and so on. An active Police system would first dislodge the gang, and then increase Police presence at the spot (in person or through remote surveillance) so they cannot come back.
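The contrast between the two models can be sketched in a few lines of pseudocode-ish Python. Everything here is invented for illustration: the complaint threshold, the function names, and the returned actions are hypothetical, not any real game’s moderation API.

```python
# A toy sketch of the two enforcement models described above.
# COMPLAINT_THRESHOLD is an assumed, illustrative number.
COMPLAINT_THRESHOLD = 50

def reactive_moderation(complaints: int) -> str:
    """Today's model: nothing happens until enough reports pile up."""
    if complaints < COMPLAINT_THRESHOLD:
        return "wait"
    return "investigate, then act"

def active_policing(trouble_spot_known: bool) -> str:
    """The speculated model: intervene first, then hold the spot by presence."""
    if trouble_spot_known:
        return "dislodge, then station presence"
    return "patrol"
```

The difference is where the cost lands: the reactive model is cheap until an incident snowballs, while the active model pays a constant presence cost to keep the incident from recurring at all.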
If this sounds far-fetched and a complete waste of resources, you’re still thinking in the context of today’s virtual worlds: worlds sharded into servers, in which the total community size is usually between 3,000 and 10,000 individuals tops. With a community that size, yes, it would be a waste to permanently assign resources to keep a trouble spot trouble-free. But when you see things from the perspective of future virtual worlds with populations of 50,000, 100,000 or more concurrent (and we’re not very far from that. Didn’t EVE have something like 15K or 20K concurrent once? I forgot the number. But we’re getting there), then it’s not a waste. It becomes a necessity.
Take WoW, for example. The average server size is, what… 3,000 to 5,000 players? Something like that? Let’s say 5,000. Suppose 10% of those are bad apples (not an outlandish assumption, really, when you consider that not all bad apples are bad apples 24/7). That’s 500 bad apples, and things can get bad enough as it is in terms of rule breaking, exploiting, verbal/channel abuse, etc. Double the community size, and suddenly you have to take care of 1,000 of them. Grow it to the 100,000 mark (as is coming), and suddenly you have 10,000 players who will break the rules at their earliest opportunity. Self-moderation is impossible under those circumstances, and our modern, rather passive GM enforcement just won’t cut it.
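The arithmetic above is just a linear scaling, and it’s worth seeing how fast it runs away. A minimal sketch, assuming the article’s illustrative 10% rate (which is an assumption, not a measured figure):

```python
# Back-of-the-envelope math: if bad apples are a roughly fixed fraction
# of the population, their absolute number scales linearly with it.
BAD_APPLE_RATE = 0.10  # the article's illustrative assumption

def bad_apples(population: int, rate: float = BAD_APPLE_RATE) -> int:
    """Expected number of rule-breakers in a community of a given size."""
    return int(population * rate)

for pop in (5_000, 10_000, 100_000):
    print(f"{pop:>7} players -> ~{bad_apples(pop):>6} bad apples")
```

The point of the sketch is that nothing about the rate has to change for the problem to explode; population growth alone does it.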
Yes, you can always shard your worlds in an artificial attempt to keep your community size manageable. But we can guess with some certainty that we’re moving toward one-server worlds as technology allows. Not only that: we can also make a safe bet that as soon as someone creates a very successful world or game with only one server, and that is pointed to as a reason for its success, everyone else who can will go for it too.
So when I put my prophet hat on, what do I see in 20 years or so? Unsharded, extremely large worlds where the community size is so big at any given time that active enforcement of world rules will be required. I could see GMs stepping out of their nebulous veneer of invisible existence, and coming closer to the people and communities they help keep working, in the shape of actual law enforcement that is visually present and able to passively prevent law breaking or abuse just by being there. I could even see jails or time-out boxes for repeat abusers or law breakers. Why not ban them like we do now? Because banning, while exemplary to the rest of would-be law breakers, is also invisible. A stronger deterrent, a more visual one, would be needed.
I purposely left for the end a very important cue, one that ties everything together and gives this prophecy a nice push. That is, quite simply, the way people behave behind the anonymity the Internet provides. This is an issue real world communities do not have, and virtual communities suffer from doubly: people are jackasses on the Internet simply because they can be, and consequences are non-existent.
So even if you don’t buy the social argument, or the size argument, or the historical argument, there’s no escaping this fact: the very nature of virtual worlds makes jackasses of people who, in different circumstances, wouldn’t be, simply because they can get away with it. And the more of these “weekend jackasses” there are, the stronger the sentiment to make it hard for them to get away with it will be.
Yes, in many cases it’s a pain in the butt to have Police around, and to have to maintain them. But once your community gets large enough, sooner or later there comes a point where you have to accept that it’s worse not to have them. Active enforcement is coming.