Seering et al. propose three simultaneous processes that define moderation and its relation to online communities:
(1) Over the course of weeks or months, new moderators are chosen, learn through daily interactions, and develop a moderation philosophy.
(2) Moderators interact on a daily basis with users and make individual short-term decisions about specific incidents, ranging from warnings to light penalties and eventually to bans if necessary.
(3) Finally, throughout the life cycle of the community, moderators make important decisions about policies that impact how the community evolves, usually in reaction to problems that emerge. [Seering et al. 2019]
Moderation can be thought of as an act of public work, defined by Boyte and Kari as an activity of cooperative citizenship that creates social as well as material culture. danah boyd likewise considers volunteer moderation as creating, maintaining, and defining “networked publics”: imagined collective spaces that “allow people to gather for social, cultural, and civic purposes.” [Matias 2019a]
Online communities tend to follow a pattern wherein a “group of early members consolidate and exercise a monopoly of power within the organization as their interests diverge from the collective’s.” [Matias 2019a]
When people feel that they belong in a space, they are more influenced by its posted norms; the same holds when behavior is less private and more closely monitored, and when norms are more explicitly enforced. [Matias 2019b]