ODI & Test Team Rankings : How they work and their suitability for determining tournament qualification

1. ODI & Test Team Rankings: How they work and their suitability for determining tournament qualification
David Kendix, June 2011

2. Background to the rankings
• In early 2002, the ICC decided to create an official ranking system for ODI cricket.
• I was retained by ICC Management to advise on the criteria for the system and then to build and maintain the model. It was launched in October 2002.
• A similar process led to the creation of a Test ranking system, launched in May 2003 (replacing an earlier, simpler but flawed model devised by Wisden).
• Both models have remained in place and virtually unchanged since then, later attracting sponsorship and prize money for the top-ranked teams.

3. ODI model criteria
• The ICC Executive Board decided on the criteria that the ODI model should satisfy.

4. ODI model design – key features
• Accordingly, the ODI model was built with the following features.
• After every completed ODI, each team earns a certain number of points, which depends on just two factors:
  – the result of the match (won, lost or tied)
  – the rating of the opponent
• An average-performing team will have a rating of close to 100.
• By reflecting the strength of opposition, the model removes any ‘bias’ arising from the very different mix of fixtures played by different countries.
• Because the rating is like a batting average (total points divided by matches played), there is also no bias towards teams that play more or less frequently over the rating period, subject to a minimum of eight matches being played. A minimal sketch of this structure follows below.
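
The following is a minimal sketch of that "batting average" structure in Python; the names (`TeamRecord`, `rating`) are illustrative and not from the deck:

```python
from dataclasses import dataclass

@dataclass
class TeamRecord:
    points: float   # ranking points accumulated over the rating period
    matches: int    # matches played over the same period

    @property
    def rating(self) -> float:
        # Like a batting average: total points per match. The deck
        # requires a minimum of eight matches before a team is rated.
        if self.matches < 8:
            raise ValueError("at least eight matches are needed for a rating")
        return self.points / self.matches

# Team A from the next slide: 960 points from 8 matches gives a rating of 120.
print(TeamRecord(960, 8).rating)  # 120.0
```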

5. ODI Model – a numerical example
• If you win a match, you earn 50 points more than your opponent’s rating; if you lose, you earn 50 points less than your opponent’s rating. (A slightly different rule applies if the teams are more than 40 rating points apart.)
• Team A has 960 points from 8 matches, rating = 120.
• Team B has 1000 points from 10 matches, rating = 100.
• Team A should win. If it does, it earns 150 points and B gets 70 points; A’s new rating is then 1110 / 9 = 123, B’s is 1070 / 11 = 97.
• But if Team B wins, it earns 170 points and A gets only 50 points; A’s new rating is then 1010 / 9 = 112, B’s is 1170 / 11 = 106.
• So beating a stronger team gives a large reward, but losing to a stronger team leads to only a small ratings penalty. Similarly, there is a big penalty for losing to a weaker team, but not much reward for beating a weaker team. The sketch below turns this rule into code.
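
A hedged sketch of the points rule. The over-40-point-gap branch assumes the commonly published ICC formula (the deck only says "a slightly different rule applies" there), and ties are omitted for brevity:

```python
def match_points(own: float, opp: float, won: bool) -> float:
    """Points earned by one team from a completed ODI (win/loss only)."""
    if abs(own - opp) <= 40:
        # Basic rule from the slide: opponent's rating +/- 50.
        return opp + 50 if won else opp - 50
    # Gap above 40 points: assumed ICC variant, based on the team's OWN
    # rating, so an upset moves ratings much more than an expected result.
    if won:
        return own + 90 if own < opp else own + 10
    return own - 90 if own > opp else own - 10

# Reproducing the slide's example (the gap is 20, so the basic rule applies):
a_pts, a_matches = 960, 8     # Team A, rating 120
b_pts, b_matches = 1000, 10   # Team B, rating 100

# Team A wins:
print((a_pts + match_points(120, 100, True)) / (a_matches + 1))   # 1110/9  ~ 123
print((b_pts + match_points(100, 120, False)) / (b_matches + 1))  # 1070/11 ~ 97
```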

6. ODI model design – weightings
• The ratings are based on results over the past 2-3 years.
• This period is long enough to cover a good mix of opponents and venues, but short enough to be a fair indicator of form.
• Every August, the oldest year of results is dropped from the model, at which point the ratings are based on exactly two years’ results, with new results added over the following year.
• More recent results are given a higher weighting than older results, so that the ratings properly reflect current rather than historic form. The oldest year of results carries a one-third weight, the following year two-thirds, and matches played since the previous August are fully weighted.
• So as of June 2011, the ratings include only ODIs played since August 2008, but with only matches played since August 2010 being fully weighted. A sketch of this weighting appears below.
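
One plausible reading of the weighting scheme, sketched in Python. The deck does not give the exact formula; the assumption here is that both points and matches are scaled by the year's weight before taking the points-per-match average:

```python
# Weights from the slide: oldest year one third, next year two thirds,
# matches since the previous August in full.
WEIGHTS = {"oldest": 1 / 3, "middle": 2 / 3, "recent": 1.0}

def weighted_rating(yearly):
    """yearly: list of (bucket, points, matches) tuples, e.g.
    [("oldest", 950, 9), ("middle", 1100, 10), ("recent", 1300, 11)]."""
    weighted_points = sum(WEIGHTS[b] * p for b, p, _ in yearly)
    weighted_matches = sum(WEIGHTS[b] * m for b, _, m in yearly)
    return weighted_points / weighted_matches

# Illustrative numbers, not from the deck:
print(round(weighted_rating(
    [("oldest", 950, 9), ("middle", 1100, 10), ("recent", 1300, 11)]), 1))  # 113.7
```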

7. How ODI ratings move
• How quickly a team’s rating moves up or down depends not only on its results but also on how many matches it has played. Again, it is like a batting average: the more innings it is based on, the less movement results from one further innings.
• If a team plays 20 ODIs per year, then a win against a similarly rated opponent will typically be worth 1-2 points, but beating a much higher rated team would be worth 3 points.
• A country that plays only 12 ODIs per year would typically gain 3 points for beating a similarly rated opponent, with of course a corresponding 3-point drop for losing.
• The impact of the annual update on a team depends on how much its form has changed over each of the previous three years. A consistent performer will be unaffected by the annual update, but a side that has improved year on year will benefit greatly from older results being dropped.
• What a team needs to do to climb a certain number of positions in the rankings depends on the gap in rating points between it and the teams above. As with any league table, if a few teams are closely bunched, then a ranking can move sharply from match to match.
• Most importantly, because the points for winning and losing exactly reflect the ratings of the teams on the date the match is played, there is no inbuilt rating advantage from playing particular opponents. The sketch below illustrates how the size of a rating move shrinks as the match count grows.
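
A small illustration of that inertia, using the basic +50 rule from slide 5 (the match counts are illustrative, not figures from the deck):

```python
def delta_after_win(points: float, matches: int, opp_rating: float) -> float:
    """Rating change from one win, using the basic rule (opponent's rating + 50)."""
    old_rating = points / matches
    new_rating = (points + opp_rating + 50) / (matches + 1)
    return new_rating - old_rating

# A team rated 100 beats a similarly rated opponent: the more matches
# already behind the rating, the smaller the move.
for m in (12, 20, 40):
    print(m, round(delta_after_win(100 * m, m, 100), 1))
# 12 -> +3.8, 20 -> +2.4, 40 -> +1.2
```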

8. ODI model design – developments
• The formula, weightings and update process have all remained unchanged since launch. (If it’s not broken, don’t fix it!)
• However, a mechanism was introduced to allow Associates to join the rankings, which involved beating a Full Member in an ODI, winning the majority of ODIs against other Associates and playing the minimum number of matches.
• Consequently, both Ireland and the Netherlands have joined the rankings since 2007. Kenya already had ODI status at launch, meaning the table currently ranks 13 teams.
• The table was used to determine 2009 Champions Trophy qualification.

9. Test model design – key features
• The Test model is very similar to the ODI model, but with the following variations.
• The same points are awarded for the result of a Test series as for each individual Test, so in effect the series result is like a bonus point. This allows each Test to count (no dead rubbers for ranking purposes) while still recognising the primacy of a series. A sketch of this scoring follows below.
• The last 3-4 years of results are used, with the oldest two years being weighted at 50% and the most recent 1-2 years being fully weighted. So as of June 2011, series completed since August 2007 are included, with series completed since August 2009 receiving a full weighting.
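
A hedged sketch of the series "bonus point", assuming the same win/loss formula as the ODI example. The draw scoring here (each side earning the opponent's rating) is an assumption, since the deck does not spell it out:

```python
def series_points(own: float, opp: float, results: list[str]) -> float:
    """Total points from a Test series; results are per-Test outcomes for
    this team: "win", "loss" or "draw". Ignores the over-40-point gap
    variant for brevity."""
    def points(outcome: str) -> float:
        if outcome == "win":
            return opp + 50
        if outcome == "loss":
            return opp - 50
        return opp  # draw: assumed each side earns the opponent's rating

    # Each Test is scored like an individual match, then the series
    # result is scored once more, as if it were one extra match.
    wins, losses = results.count("win"), results.count("loss")
    series = "win" if wins > losses else "loss" if losses > wins else "draw"
    return sum(points(r) for r in results) + points(series)

# A 2-1 series win over a team rated 100: three Tests plus the series bonus.
print(series_points(110, 100, ["win", "loss", "win"]))  # 150 + 50 + 150 + 150 = 500
```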

10. Using rankings for qualification
• How suitable are the current ranking models for determining qualifiers for ICC Events?
• In general, since they fairly measure the top-performing teams in both forms of the game, it seems reasonable to use them in this way.
• However, it is still worth considering whether any changes to the models are necessary, since they were not originally designed for this purpose.
• The following is a list of issues that will need careful consideration.

11. Issues for consideration
• The rankings are a snapshot of performance at a single point in time, not an assessment of overall performance over the four-year interval between tournaments.
• The rating periods (2-3 years for ODIs, 3-4 years for Tests) mean that some matches played soon after one tournament may no longer count towards the rankings by the cut-off date for the next tournament.
• Whatever ranking model is used, regulations will be required to preserve the integrity of the qualification process, so that neither Full Members nor Associates can “game the system”.
• The criterion for a minimum number of matches may need strengthening, so that a country can neither ‘sit on’ its existing rating nor qualify despite rarely playing in the final qualifying year.
