## Tuesday, November 14, 2006

### You Have Been Numberized. Topic: Calculus/Statistics.

Introduction: Consider devising a rating system in which matches consisting of an arbitrary number of "players" are "played." After each match, the rating of each player should reflect the position of that player among all competitors.

Solution: Obviously, this is only an idea. There's not a set "solution" to this. Let the players be $P_1, P_2, \ldots, P_n$ with corresponding ratings $R_1, R_2, \ldots, R_n$ and volatilities $V_1, V_2, \ldots, V_n$. We will let the initial values of $R_i$ be $1000$ and the initial values of $V_i$ be $100$. Basically, the performance of player $P_i$ will be a random variable with a normal distribution with mean $R_i$ and standard deviation $V_i$. The performance distribution will be

$S_i(x) = \frac{1}{V_i \sqrt{2\pi}} e^{-\frac{(x-R_i)^2}{2V_i^2}}$.

We let $\displaystyle D_i(x) = \int_{-\infty}^x S_i(t) dt$ be the cumulative distribution.
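A quick sketch of this performance model in Python (the ratings here are just example values, and `perf_cdf` uses the error function rather than integrating the density directly):

```python
import math

def perf_pdf(x, R, V):
    """Performance density S_i(x): normal with mean R_i, std dev V_i."""
    return math.exp(-((x - R) ** 2) / (2 * V ** 2)) / (V * math.sqrt(2 * math.pi))

def perf_cdf(x, R, V):
    """Cumulative distribution D_i(x), written via the error function."""
    return 0.5 * (1 + math.erf((x - R) / (V * math.sqrt(2))))

# A brand-new player: rating 1000, volatility 100.
print(perf_cdf(1000, 1000, 100))  # probability of performing below the mean: 0.5
```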

Assume $k$ players $P_1, P_2, \ldots, P_k$ participate in a match. We will recalculate ratings according to the following steps:

(1) Find the weight of the competition, depending on the total rating of the participants. We would like it to take on values between zero and one, so try something like

$W = \left(\frac{\sum_{i=1}^k R_i}{\sum_{i=1}^n R_i}\right)^{\frac{1}{C}}$,

for a suitable constant $C > 1$. Basically, this is small when the participants account for only a small share of the total rating, and it approaches one as their combined rating approaches the total rating.
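As a sketch of step (1), with made-up numbers (4 participants drawn from a hypothetical 20-player pool, and $C = 3$ chosen arbitrarily):

```python
# Hypothetical pool: 16 non-participants at the default rating plus 4 participants.
participant_ratings = [1200, 1000, 950, 1100]
all_ratings = [1000] * 16 + participant_ratings  # 20 players total
C = 3  # tunable constant, must satisfy C > 1

# Weight of the competition, as in step (1).
W = (sum(participant_ratings) / sum(all_ratings)) ** (1 / C)
print(round(W, 3))
```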

(2) For a player $P_m$, we will calculate his expected rank based on the normal distribution. Let $P(m, j)$ be the probability that $P_m$ loses to $P_j$. It turns out to be

$\displaystyle P(m,j) = \int_{-\infty}^{\infty} S_j(x) \cdot D_m(x) dx$.

So if we sum up the probabilities that $P_m$ will lose to each player (including himself), we get the expected rank $E_m$ of $P_m$, or

$\displaystyle E_m = 0.5+\sum_{j=1}^k P(m,j)$,

where the $0.5$ is to shift the best ranking to $1$.
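Steps (2) can be sketched numerically. For two independent normal performances the integral for $P(m,j)$ has a closed form, $\Phi\!\left(\frac{R_j - R_m}{\sqrt{V_m^2 + V_j^2}}\right)$, which the code below uses; the three players are hypothetical:

```python
import math

def P_loss(Rm, Vm, Rj, Vj):
    """P(m, j): probability that P_m's performance falls below P_j's.
    Closed form of the integral of S_j(x) * D_m(x) for independent normals."""
    return 0.5 * (1 + math.erf((Rj - Rm) / math.sqrt(2 * (Vm ** 2 + Vj ** 2))))

# Hypothetical 3-player match: (R_i, V_i) pairs.
players = [(1100, 80), (1000, 100), (950, 120)]

def expected_rank(m):
    # E_m = 0.5 + sum over all j (including j = m, where P(m, m) = 0.5).
    return 0.5 + sum(P_loss(*players[m], *p) for p in players)

for m in range(len(players)):
    print(round(expected_rank(m), 3))
```

A sanity check on the formula: since $P(m,j) + P(j,m) = 1$, the expected ranks of $k$ players always sum to $k(k+1)/2$, just as actual ranks do.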

(3) The actual rank of $P_m$ is $A_m$, and we want to base rating changes on the difference $E_m-A_m$. If $I_m$ is the number of competitions $P_m$ has participated in, calculate the new $R^{\prime}_m$ as

$R^{\prime}_m = R_m+\frac{WK}{I_m+1}(E_m-A_m)$,

where $K$ is a suitable positive constant for equating differences in ranking to differences in rating.
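Step (3) in code, with every constant here a hypothetical placeholder (in particular $K = 40$ is just a guess at a reasonable scale):

```python
# Hypothetical update for one player who placed slightly below expectation.
W, K = 0.6, 40.0        # weight from step (1); K converts rank gaps to rating points
R_m, I_m = 1100.0, 4    # current rating; matches played so far
E_m, A_m = 1.37, 2      # expected rank vs. actual rank

# New rating, damped by experience via I_m + 1.
R_new = R_m + (W * K / (I_m + 1)) * (E_m - A_m)
print(round(R_new, 2))
```

Note the $I_m + 1$ denominator means veterans' ratings move less per match than newcomers'.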

(4) We now update the volatility $V_m$. If the player's performance is relatively close to the expected performance, we want $V_m$ to decrease to imply stability of the player. On the other hand, if the player performs exceptionally well or poorly, we want $V_m$ to increase to reflect inconsistency. We propose the following change:

$V^{\prime}_m = V_m+WQ_1(|E_m-A_m|-Q_2)$,

where $Q_1, Q_2$ are suitable positive constants: $Q_1$ determines the magnitude of the volatility change, and $Q_2$ is the threshold below which a performance is considered relatively consistent.
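Step (4) as a sketch, again with hypothetical constants ($Q_1 = 5$, $Q_2 = 1$):

```python
# Hypothetical volatility update: Q1 scales the change, Q2 is the threshold.
W, Q1, Q2 = 0.6, 5.0, 1.0
V_m = 100.0

def new_volatility(V, E, A):
    """V decreases when |E - A| < Q2 (consistent play), increases otherwise."""
    return V + W * Q1 * (abs(E - A) - Q2)

print(new_volatility(V_m, 1.37, 2))   # near-expected result: volatility drops
print(new_volatility(V_m, 1.37, 6))   # big surprise: volatility rises
```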

All in all, putting this rating system into working use depends a lot on experimentally tuning the constants ($C$, $K$, $Q_1$, $Q_2$). Otherwise, it seems like it would work.

--------------------

Comment: Rating systems are always controversial, and people are always trying to devise ways to game them. On a lighter note, many restrictions can be added, like a rating change cap or a volatility cap. Stabilizers for higher ratings are also a possibility (i.e., reducing the weight of each match as a player's rating grows). Any thoughts?