Preliminaries

First we load the required libraries. The library used for the empirical game-theoretic analysis is empiricalGameTheory.

require(knitr)
require(fBasics)
require(moments)
require(empiricalGameTheory)

Evolutionary Game Theory

In an evolutionary game-theoretic model, pairs of agents are chosen at random from an idealised infinite population. By assuming an infinite population and no mutation we can specify the dynamics of the system using a simple ordinary differential equation. The standard approach is to use the replicator dynamics equation (Weibull, 1997) to model how the frequency of each strategy in the larger population changes over time in response to the within-group payoffs:

\(\dot{m}_i = \left[u(e_i,\vec{m}) - u(\vec{m},\vec{m})\right]m_i\)

where \(\vec{m}\) is a mixed-strategy vector, \(u(\vec{m},\vec{m})\) is the mean payoff when all players play \(\vec{m}\), \(u(e_i,\vec{m})\) is the average payoff to pure strategy \(i\) when all players play \(\vec{m}\), and \(\dot{m}_i\) is the first derivative of \(m_i\) with respect to time. Strategies that gain above-average payoff become more likely to be played, so the equation models a simple co-evolutionary process of adaptation.
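To make this concrete, the right-hand side of the replicator equation can be written directly in R. The following is a minimal sketch, assuming a hypothetical function payoff(i, m) that returns \(u(e_i, \vec{m})\) for pure strategy \(i\) against the mixture \(\vec{m}\):

replicator.rhs <- function(m, payoff) {
  u.pure <- sapply(seq_along(m), function(i) payoff(i, m))  # u(e_i, m) for each pure strategy
  u.mean <- sum(m * u.pure)                                 # u(m, m), the population mean payoff
  (u.pure - u.mean) * m                                     # dm_i/dt for each strategy
}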

Heuristic Payoff Matrices

When computing payoffs in the above equation, the empirical game-theory method uses a payoff matrix which is calibrated by running very many simulations. The payoff matrix is said to be heuristic because several simplifying assumptions are made in the interests of tractability. We can make one important simplification by assuming that the game is symmetric, and therefore that the payoff to a given strategy depends only on the number of agents within the group adopting each strategy. Thus for a game with \(j\) strategies, we represent each entry of the payoff matrix as a pair of vectors \((\vec{p}, \vec{q})\). The vector \(\vec{p} = \left( p_1, \ldots, p_j \right)\) represents the group composition, where \(p_i\) specifies the number of agents who are playing the \(i^{\mathrm{th}}\) strategy. Each composition \(\vec{p} \in P\) is mapped onto an outcome vector \(\vec{q} \in Q\) of the form \(\vec{q} = \left( q_1, \ldots, q_j \right)\), where \(q_i\) specifies the expected payoff to the \(i^{\mathrm{th}}\) strategy.

For a game with \(n\) agents and \(j\) strategies, the number of entries in the payoff matrix is given by \(s = \binom{n+j-1}{n}\). For example, for \(n=10\) agents and \(j=5\) strategies, we have a payoff matrix with \(s = 1001\) entries.
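This is just the number of multisets of size \(n\) drawn from \(j\) strategies, which we can verify directly in R:

n <- 10  # number of agents
j <- 5   # number of strategies
choose(n + j - 1, n)
## [1] 1001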

For each entry in the payoff matrix we estimate the expected payoff to each strategy by running \(5 \times 10^5\) simulations and taking the mean fitness, rounded to the precision indicated by the corresponding standard error.
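As an illustration of this rounding rule (a sketch of one possible implementation, not the package's own code), we can round the mean of a vector of fitness samples to the number of decimal places resolved by its standard error:

round.to.se <- function(samples) {
  se <- sd(samples) / sqrt(length(samples))  # standard error of the mean
  digits <- max(0, -floor(log10(se)))        # decimal places resolved by the SE
  round(mean(samples), digits)
}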

For example, if we have an entry in the payoff matrix \(\vec{p} = \left( 5, 4, 1, 0, 0 \right)\) then we would run \(5 \times 10^5\) simulations with \(n=5+4+1+0+0=10\) agents, five of which would be initialised to use the first strategy, four to use the second, one to use the third, and none to use the remaining strategies.

Paper-Rock-Scissors Example

Initially, rather than using simulations to populate the payoff matrix, we will consider a simple example where the payoffs are known a priori, without having to resort to simulation.

The game of paper-rock-scissors is a standard normal-form game with two players and three strategies, labelled rock (\(R\)), paper (\(P\)) and scissors (\(S\)). If we write the number of players adopting a particular strategy as \(N_X\), and the payoff to the player(s) adopting strategy \(X\) as \(u(e_X)\), then the payoff matrix for this game can be specified according to the table below.

data(payoff.matrix.rps)
kable(payoff.matrix.rps, col.names = c('$N_R$', '$N_P$', '$N_S$', '$u(e_R)$', '$u(e_P)$', '$u(e_S)$'))
| \(N_R\) | \(N_P\) | \(N_S\) | \(u(e_R)\) | \(u(e_P)\) | \(u(e_S)\) |
|--------:|--------:|--------:|-----------:|-----------:|-----------:|
|       0 |       0 |       2 |          0 |          0 |          0 |
|       0 |       1 |       1 |          0 |         -1 |          1 |
|       2 |       0 |       0 |          0 |          0 |          0 |
|       1 |       1 |       0 |         -1 |          1 |          0 |
|       0 |       2 |       0 |          0 |          0 |          0 |
|       1 |       0 |       1 |          1 |          0 |         -1 |
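For reference, the same table can be constructed by hand as a data frame; the column layout below is an assumption about how payoff.matrix.rps is organised:

payoff.matrix.rps.manual <- data.frame(
  N_R = c(0, 0, 2, 1, 0, 1),
  N_P = c(0, 1, 0, 1, 2, 0),
  N_S = c(2, 1, 0, 0, 0, 1),
  u_R = c(0,  0, 0, -1, 0,  1),
  u_P = c(0, -1, 0,  1, 0,  0),
  u_S = c(0,  1, 0,  0, 0, -1))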

The function HeuristicPayoff can be used to estimate the function \(u(e_i, \vec{m})\). For example, suppose we want the payoff to pure strategy \(e_R\) against the pure strategy \(e_P\): we can express paper as a mixed strategy in which the probability of playing paper is \(1\), i.e. \(\vec{m} = (0, 1, 0)\). In R:

# Indices of the pure strategies in the payoff matrix
R <- 1
P <- 2
S <- 3
HeuristicPayoff(payoff.matrix.rps, R, c(0, 1, 0))
## [1] -1
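This agrees with the table above: in the entry with one rock player and one paper player we have \(u(e_R) = -1\).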

We can generalise this to expected payoffs when the second argument is a proper mixed strategy, i.e. a vector giving the probability with which each pure strategy is adopted.

For example, the payoff to rock when our opponent plays rock with probability \(\frac{1}{5}\), paper with probability \(\frac{1}{5}\), and scissors with probability \(\frac{3}{5}\) can be written as \(u(e_R, (0.2, 0.2, 0.6))\), and in R:

HeuristicPayoff(payoff.matrix.rps, R, c(0.2, 0.2, 0.6))
## [1] 0.4
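This matches the expectation computed by hand from the table: \(0.2 \times 0 + 0.2 \times (-1) + 0.6 \times 1 = 0.4\).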

We can encapsulate this inside an R object belonging to the class HeuristicGame:

game.rps <- HeuristicGameFromPayoffMatrix(payoff.matrix.rps, strategies = c('R', 'P', 'S'))

We can also solve the replicator-dynamics differential equation using finite-difference methods implemented in R’s deSolve library. The method ComputeTrajectory will take an initial condition in the form of a mixed strategy representing the initial fraction of the population playing each pure strategy, and then iteratively integrate the equation. So if we start with \(x_0 = (0.2, 0.2, 0.6)\), we can see how the population evolves as a time series:

# Integrate the dynamics over t in [0, 100] from the initial mixture (0.2, 0.2, 0.6)
t <- seq(0, 100, by = 0.1)
population.frequencies <- as.ts(ComputeTrajectory(game.rps, c(0.2, 0.2, 0.6), times = t))
ts.plot(population.frequencies, lty = 1:3)
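Equivalently, we could pass the replicator right-hand side to deSolve's ode solver ourselves. The following is a minimal sketch; the rps.payoffs matrix and deriv function are illustrative and not part of the empiricalGameTheory package:

require(deSolve)
# Pairwise payoffs: rows give the focal pure strategy (R, P, S), columns the opponent's
rps.payoffs <- matrix(c( 0, -1,  1,
                         1,  0, -1,
                        -1,  1,  0), nrow = 3, byrow = TRUE)
deriv <- function(time, m, parms) {
  u.pure <- as.vector(rps.payoffs %*% m)  # u(e_i, m) for each pure strategy
  list((u.pure - sum(m * u.pure)) * m)    # replicator dynamics
}
trajectory <- ode(y = c(R = 0.2, P = 0.2, S = 0.6), times = t, func = deriv, parms = NULL)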

If we do this for a large sample of randomly chosen initial conditions, then we can plot the phase space of the corresponding trajectories in two dimensions:

# Draw a sample of random initial mixed strategies (assumed input format:
# a list of mixed-strategy vectors summing to one)
initial.values.random <- lapply(1:100, function(i) { x <- runif(3); x / sum(x) })

# Compute trajectories for all initial conditions over the specified time interval,
# caching the result since the computation is expensive
if (!exists('game.rps.analysed'))
  game.rps.analysed <- Analyse(game.rps, initial.values = initial.values.random, times = t, parallel = TRUE)
result <- plot(game.rps.analysed)
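For paper-rock-scissors the replicator dynamics has an interior fixed point at \(\left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right)\), and the resulting phase plot shows the trajectories cycling around this mixed equilibrium.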