Let me start by saying I have a PhD in a relevant field. I know the maths, and I think the Quake Live skill rating system sucks. If the rating system worked, then, since people (at least in tier4) actively shuffle before games, you would expect close games. Instead, win rates vary wildly between players, and match outcomes are too skewed towards extreme results (10-0). I think it's a system that rewards some players and punishes others.
As a starting point, if I understand correctly, the system works as follows (for team games):
- Shuffle balances teams so that the sums of ratings on the two teams are as close to equal as possible.
After the match:
- Top 1/3 performers in points/time played gain skill rating
- Bottom 1/3 performers in points/time played lose skill rating
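The update rule above can be sketched as follows. This is only my reading of the system, not the actual Quake Live code, and the step size `delta` is a hypothetical number I made up:

```python
# Sketch of the described update rule (illustrative only, not the real
# implementation). delta is a hypothetical fixed step size.

def update_ratings(ratings, points, time_played, delta=50):
    """All arguments are dicts keyed by player name."""
    # Rank players by points per unit of play time.
    ranked = sorted(ratings, key=lambda p: points[p] / time_played[p],
                    reverse=True)
    third = len(ranked) // 3
    new = dict(ratings)
    if third == 0:
        return new          # too few players to split into thirds
    for p in ranked[:third]:    # top third gains rating
        new[p] += delta
    for p in ranked[-third:]:   # bottom third loses rating
        new[p] -= delta
    return new

print(update_ratings({"a": 1000, "b": 1000, "c": 1000},
                     {"a": 30, "b": 20, "c": 10},
                     {"a": 10, "b": 10, "c": 10}))
# → {'a': 1050, 'b': 1000, 'c': 950}
```

Notice that the match result never enters this function at all; only each player's individual stat line does.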
1) Skill rating is based on individual performance (points) instead of the game outcome (e.g. in CA).
This is never the way to go. The problem is that points probably don't correspond to actual impact on the game. Points awarded for damage in CA can't factor in how well a player plays for his team. A more camping style of player may have, on average, higher points per match than his real impact on the team's performance warrants. This means that when teams are shuffled, the team that gets the camper-style player will mostly lose. This hypothetical 'camper' archetype is just one example of why it's so important, in every skill rating system, to look only at the aggregate outcome of the game. There may be other playstyles that suffer a similar fate, and others that benefit from it.
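A toy simulation shows the camper effect. All the numbers here are made up for illustration: four players, one of whom has a points-based rating 400 higher than his true team impact. Shuffle balances the rating sums, yet his team keeps losing:

```python
import random
from itertools import combinations

# Illustrative toy model (invented numbers, not real data): ratings track
# points, but the "camper"'s points overstate his true impact by 400.
random.seed(0)
players = {                      # name: (points_based_rating, true_impact)
    "camper":  (1600, 1200),
    "honest1": (1600, 1600),
    "honest2": (1400, 1400),
    "honest3": (1400, 1400),
}

def shuffle_teams(players):
    """Pick the split that best balances summed ratings, as shuffle does."""
    names = list(players)
    best = None
    for t1 in combinations(names, len(names) // 2):
        t2 = tuple(n for n in names if n not in t1)
        diff = abs(sum(players[n][0] for n in t1) -
                   sum(players[n][0] for n in t2))
        if best is None or diff < best[0]:
            best = (diff, t1, t2)
    return best[1], best[2]

t1, t2 = shuffle_teams(players)  # perfectly "balanced" by rating
wins, trials = 0, 10_000
for _ in range(trials):
    # Outcome is driven by true impact plus per-team noise.
    s1 = sum(players[p][1] for p in t1) + random.gauss(0, 300)
    s2 = sum(players[p][1] for p in t2) + random.gauss(0, 300)
    wins += s1 > s2
print(f"camper's team win rate: {wins / trials:.2f}")  # well below 0.5
```

The rating sums are exactly equal, but the camper's team's true strength is 400 lower, so it loses the large majority of games even though shuffle considers the match perfectly balanced.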
2) Skill rating isn't based on a mathematical model.
I can't overstate how simplistic the skill rating model is. Statistical and mathematical models should be based on experimentally grounded ideas that CAN be built into implementable models. Any model that captures the main features of the data in a mathematically self-consistent way is much more likely to succeed than an ad hoc one, even if it is very simple. For example, judging by the success of Elo, one of the more important properties of a skill model is that it adjusts ratings after a game MORE when the outcome was very improbable. Just because Elo takes this single factor consistently into account, it performs relatively well. Of course, Elo isn't directly applicable to multiplayer team games; we could, e.g., treat a team game as multiple 1on1 games. Better still, by googling the subject for 3 minutes I found an article: http://www.csie.ntu.edu.tw/~cjlin/pa...ne_journal.pdf . I have no idea if its results are correct, but my point is the following:
For me at least, the most aggravating aspect of Quake Live is how lopsided the teams are (10-0 games) all the time. The reason is a half-assed rating system that isn't based on any mathematics. Please improve it.
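For reference, here is the standard Elo update I mentioned above (this is textbook Elo, not anything Quake Live does). The rating change is K * (actual - expected), so an improbable result moves ratings far more than an expected one:

```python
# Standard Elo update: the adjustment is proportional to how surprising
# the result was.

def elo_update(r_a, r_b, score_a, k=32):
    """score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    change = k * (score_a - expected_a)
    return r_a + change, r_b - change

# Favourite wins: tiny adjustment (about +3 points).
print(elo_update(1800, 1400, 1.0))
# Underdog wins: large adjustment (about +29 points).
print(elo_update(1400, 1800, 1.0))
```

One formula, one consistent principle, and it already behaves far more sensibly than rewarding whoever farmed the most points.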