Context Based Adoption of Ranking and Indexing Measures for Cricket Team Ranks
Raja Sher Afgun Usmani1, Syed Muhammad Saqlain Shah1,*, Muhammad Sher Ramzan2, Abdullah Saad AL-Malaise AL-Ghamdi2, Anwar Ghani1, Imran Khan1 and Farrukh Saleem2
Abstract: The International Cricket Council (ICC) is the governing body that ranks all cricket playing nations. The ranking system followed by the ICC relies on the wins and defeats of the teams. The model used by the ICC to compute rankings is deficient in certain key respects: it ignores key factors such as the winning margin and the strength of the opposition. This research presents various measures of the ranking concept. The proposed methods adopt the concepts of the h-index and PageRank to provide more comprehensive ranking metrics. The proposed approaches not only rank the teams on their win/loss statistics but also take into consideration the margin of victory and the quality of the opposition. Three cricket team ranking techniques are presented, i.e., (1) Cricket Team-Index (ct-index), (2) Cricket Team Rank (CTR) and (3) Weighted Cricket Team Rank (WCTR). The proposed metrics are validated on a cricket dataset, extracted from Cricinfo, containing instances for all three formats of the game, i.e., T20 International (T20i), One Day International (ODI) and Test matches. A comparative analysis between the proposed and existing techniques is also presented for all three formats.
1 Introduction
Since the very first game in the history of sport, it has been important to keep records of the competition. Over the years, various ranking systems have been used to rank teams and players, but more comprehensive sports ranking systems have been around for nearly 80 years. During the early years, rankings were calculated on paper rather than on a machine. Rating systems use a variety of methods to rank teams, and the most widely used criterion is the power ranking. A power rating expresses the strength of a team relative to the teams competing in the same division or league.
1 Department of CS&SE, International Islamic University, H-10 Sector, Islamabad, 44000, Pakistan.
2 Department of Information System, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia.
* Corresponding Author: Syed Muhammad Saqlain Shah. Email: [email protected].
Received: 29 March 2020; Accepted: 14 July 2020.
In power rating, an analyst attempts to find a transitive relationship in a given dataset. For instance, if Team One wins against Team Two and Team Two wins against Team Three, it can be said that Team One > Team Two > Team Three. However, complications may occur when relying on a system that is completely dependent upon winning and losing. If Team Three wins a game played against Team One, the relationship in the data is intransitive (Team One > Team Two > Team Three > Team One), and if this is the only data available, the ranking may be violated. Situations like this repeatedly arise in sports and need to be handled.
The International Cricket Council (ICC) is the governing body of cricket. In the past, it managed team rankings for all cricket playing nations using an impromptu system based primarily on winning and losing. The ICC ranking process was simply a system used to regulate all international cricket matches on a regular schedule. Afterwards, a new concept was introduced and implemented: every team is assigned a certain number of points based on its opponent's performance as well as the result of the match. The ICC implemented this idea to ensure that dead rubbers still have some significance. In the past, if a team won the first three matches of a five-match series, they did not have much to play for in the final two. The series was decided, and there was no advantage in winning it 5-0 compared to 3-2. Teams could rest key players and give inexperienced players a chance to play. Now, however, a 5-0 series result earns more points and more opportunities to move up the points table.
This paper proposes the Cricket Team Index (ct-index) for cricket team ranking, an adoption of the h-index [Hirsch (2005)]. The h-index is a state-of-the-art indexing strategy used to measure the productivity and citation impact of scholars based on their most cited work, i.e., their research papers and the number of citations received in other publications. This paper maps the citations used in the h-index to the winning margin in terms of the number of wickets and runs. The higher the average citations, the higher the h-index; therefore, the higher the winning margin, the higher a team should be ranked. This paper argues that a single run cannot be worth a wicket, so the average worth of a wicket is computed using batting statistics from the preceding three years. The ct-index only considers the statistical figures of wickets and runs in terms of the winning margin, while the strength of the opponent teams is neglected. To measure the rank and strength of a team, this paper proposes the Cricket Team Rank (CTR), which is an adoption of PageRank [Page, Brin, Motwani et al. (1999)]. Intuitively, the more matches a team wins against stronger teams, the higher its rank will be. CTR observes the strength and weakness of teams while neglecting the numeric figures of runs and wickets by which a match is won. The third proposed technique is the Weighted Cricket Team Rank (WCTR), which is also a modification of PageRank like CTR, but it includes weights that take into account the figures of runs and wickets by which a match is won.
The rest of the paper is organized as follows: section two presents a literature review, section three discusses the current ranking methods and the proposed methods in more detail, and section four describes the dataset used in the experiments and reports the results. Section five provides a discussion and brief analysis of the results, while section six concludes the presented research.
2 Related work
As sport is such a finely tuned competitive endeavor, and because many millions of dollars can be connected to just one match, the task of accurate team ranking is critical. Rankings that rely upon outdated techniques are not reliable. The h-index [Hirsch (2005)] and PageRank [Farooq, Khan, Malik et al. (2016); Page, Brin, Motwani et al. (1999)] approaches are more modern and deliver more reliable results. Ranking is a practice used in almost all sports, and different methods for producing ranks have been presented in the past. Looking at batsmen's performance using a parametric control chart, Bracewell and Ruggiero documented interesting outcomes [Bracewell and Ruggiero (2009)]. Qader et al. [Qader, Zaidan, Zaidan et al. (2017)] presented a technique
for ranking football players. They used multiple criteria for decision making, i.e., 12 tests
belonging to the three categories (five fitness, three anthropometrics, and four skill tests).
As test data, twenty-four players from U17 were taken, and the results were similar to the
existing system. Applying social network analysis, Duch et al. [Duch, Waitzman and Amaral (2010)] created a method of ranking individual soccer players. Previous researchers have attempted to use PageRank to deliver a reliable ranking of various cricket teams [Mukherjee (2012)] and/or cricket players, but these attempts did not harness the power the h-index brings to ranking teams, nor did they employ a graphical or non-graphical evaluation routine to relate the values of runs and wickets. Mukherjee [Mukherjee (2012)] concluded that there is no reliable way to determine rank based only on the number of wins; the quality of a win is also important when creating a metric to analyze a team's strength. Using the PageRank algorithm, the author created a formula to better understand the strength of a team and its captain. Likewise, Borooah and Mangan [Borooah and Mangan (2010)] observe that the existing traditional ranking system has several drawbacks. A batsman ranking system that relies on the batting average alone does not take the time factor across matches into account: a batsman with consistently lower scores might fare better, at least temporarily, than a batsman who has a typically high average but suffers from a rough patch. The authors also claim that, in the current system, the runs a player scores for his team are entirely discounted and should be given value. The proposed research is an attempt to resolve these perceived flaws. Amin et al. [Amin and Sharma (2014)]
presented a cricket batsman ranking mechanism for the Indian Premier League (IPL). The
authors adopted ordered weighted averaging (OWA) parameter by using the highest score,
batting average, strike rate, number of fours, and sixes hit by the batsman. The OWA
score was subject to regression for the final ranking of the player. Pradhan et al. [Pradhan,
Paul, Maheswari et al. (2017)] argued that the h-index and its popular adaptations are good at ranking highly cited authors but not very successful at resolving ties between medium- and low-cited authors. As the majority of authors fall into the low-to-medium cited category, they proposed a methodology, the C3-index, to resolve ties between the low- and medium-cited categories and to predict the future rankings of authors early in their careers. It was shown that the proposed C3-index is more consistent and efficient than the h-index and its well-known adoptions. Citation-based metrics such as the Relative Citation Ratio (RCR) are used as alternative ranking techniques to different PageRank adoptions. Dadelo et al. [Dadelo, Turskis, Zavadskas et al. (2014)] argued that current basketball player ranking systems lack objectivity as they use
these techniques have major design flaws and proposed a heuristic scheme based on PageRank to maximize influence on social media.
• The t-index gives the same weight to winning margins in terms of runs and wickets, which is not correct: winning by seven runs is a close margin, while winning by seven wickets is a considerable margin. This needs to be addressed.
• In presenting the TeamRank (TR) technique, the authors [Daud, Muhammad, Dawood et al. (2015)] used a constant damping factor for all teams and only compared the winning ratio of a team A against B to the winning ratios of other teams against B. TR does not take into account how many matches B played against A and against the other teams. This gives misleading results: if B is a newer team that has not played many matches, it consequently has not lost many matches to other teams. Winning against such a team should not earn team A the maximum reward.
• While presenting their WTR and UWTR techniques, Daud et al. [Daud, Muhammad, Dawood et al. (2015)] likewise did not combine the strength of the opponent team with the winning margins in a proper manner.
• The ICC ranks the teams based on the strength of the teams and does not count the winning margin while ranking them.
The highest margin of win should score higher and, in fact, that is what happens when using the h-index.
The ct-index is adopted as follows: Tw is the sum of the winning margins by wickets, and Tr is the sum of the winning margins by runs.
The value of "a" can be chosen between 1 and 5; we used a=4 in the experiments, a value chosen to avoid a fractional value in the denominator of Eq. (2). The value of a wicket is assigned by accumulating the batting records of each team over three years (in this case, 2013-2015). By calculating the average runs scored per wicket lost, it is determined that a single wicket is worth 30.02 runs for ODI matches, 32.04 runs for Test matches, and 21.45 runs for T20i matches. When calculating the ct-index, a consistent value for a wicket must be substituted.
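To make this concrete, the following Python sketch (a minimal illustration, not the authors' implementation) derives a runs-per-wicket value from aggregate batting figures and combines run and wicket margins into a ct-index score, using the formula as it appears in the worked example of Section 5, i.e., ct-index(A) = sqrt((Tr + Tw × RPW)/a); the function and variable names are illustrative assumptions.

```python
import math

def runs_per_wicket(total_runs_scored, total_wickets_lost):
    """Average worth of a wicket in runs, derived from aggregate batting records."""
    return total_runs_scored / total_wickets_lost

def ct_index(run_margin_sum, wicket_margin_sum, rpw, a=4):
    """ct-index: run margins plus wicket margins converted to runs via rpw, scaled by a."""
    return math.sqrt((run_margin_sum + wicket_margin_sum * rpw) / a)

# Reproducing the worked example of Section 5 (Tr = 220, Tw = 37, RPW = 36.53, a = 4):
print(round(ct_index(220, 37, 36.53), 2))  # 19.82
```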
3.2 Measuring team ranks through Cricket Team Rank (CTR)
The CTR score of a team A is calculated as:
$$CTR(A) = d_i\left(\frac{R(U_i)}{R(O_i)}\right) + \cdots + d_n\left(\frac{R(U_n)}{R(O_n)}\right) \qquad (4)$$
where
$$R(U_i) = \frac{GL(U_i)}{TG(U_i)}, \qquad R(O_i) = \frac{GL(O_i)}{TG(O_i)}$$
CTR(A) is the Cricket Team Rank of team A, GL(Ui) is the number of games lost against A by team i, and TG(Ui) is the total number of games played between A and team i; consequently, R(Ui) is the ratio between GL(Ui) and TG(Ui). GL(Oi) is the number of games lost by team i against other opponents (excluding A) and TG(Oi) is the number of games played between team i and other opponents (excluding A), so R(Oi) is the ratio between GL(Oi) and TG(Oi). Here di is the damping factor, and its value depends on the number of matches played by the opponent team. If the number of matches played by the opponent is greater than or equal to the mean number of matches, the value of di is 1. If the number of matches played by the opponent is less than the mean, the value of di is the ratio of the number of matches played by the opponent to the mean number of matches. This handles the situation in which a win against a new team, which has not played enough matches and consequently has not lost many matches to other teams, should not be given a high weightage: the benefit is reduced because di is a fraction for new teams.
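As a rough sketch of how CTR could be computed from these quantities, the following Python fragment follows the structure of Eq. (4) together with the damping rule just described; the per-opponent record layout, field names, and the handling of degenerate ratios are illustrative assumptions rather than the authors' implementation.

```python
def damping_factor(opponent_matches, mean_matches):
    """d_i: 1 if the opponent has played at least the mean number of matches,
    otherwise the fraction of the mean it has played."""
    return 1.0 if opponent_matches >= mean_matches else opponent_matches / mean_matches

def ctr(opponents, mean_matches):
    """Cricket Team Rank of team A.

    Each entry in `opponents` describes one opponent i of team A:
      gl_u, tg_u: games lost by i to A / total games between i and A,
      gl_o, tg_o: games lost by i to other teams / total such games,
      matches:    total matches played by opponent i (drives the damping factor).
    """
    score = 0.0
    for opp in opponents:
        r_u = opp["gl_u"] / opp["tg_u"]          # R(U_i)
        r_o = opp["gl_o"] / opp["tg_o"]          # R(O_i)
        if r_o == 0:                             # opponent never lost elsewhere; case not specified in the paper
            continue
        score += damping_factor(opp["matches"], mean_matches) * (r_u / r_o)
    return score

# Illustrative call using numbers similar to the example data discussed later in Section 5:
opponents = [
    {"gl_u": 3, "tg_u": 5, "gl_o": 3, "tg_o": 5,  "matches": 8},   # a new team
    {"gl_u": 3, "tg_u": 5, "gl_o": 3, "tg_o": 25, "matches": 80},  # an established team
]
print(round(ctr(opponents, mean_matches=59.71), 2))  # ~5.13: the new team contributes ~0.13, the established one 5
```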
3.3 Measuring team ranks through Weighted Cricket Team Rank (WCTR)
Weighted Cricket Team Rank (WCTR) uses winning margins of a team in terms of
wickets and runs while calculating its rank. The WCTR is defined as:
Definition 3. Given the set of teams T = {T1, T2, ..., Tn}, the WCTR measurement ranks a team A ∈ T based on the statistics of results (win/loss), R(Ui) and R(Oi), the statistics of the margin of win/loss, M(Ui) and M(Oi), for all i, 1 ≤ i ≤ n, and a dynamic damping factor di for every opponent team Ti.
WCTR is an extended form of CTR that relies upon weightage. The weights are added by taking into account the margins, in runs/wickets, of the matches lost by the opponent teams. The proposed WCTR asserts that an opponent team Ti should have a higher impact on the ranking of team A if it loses to team A by a big margin of runs/wickets but loses to other teams by low margins, and a lower impact if it loses to team A by a small margin of wickets/runs but loses to other teams by big margins of wickets/runs. The WCTR score of a team A is calculated as:
$$WCTR(A) = d_i\left(\frac{R(U_i)}{R(O_i)} \times \frac{M(U_i)}{M(O_i)}\right) + \cdots + d_n\left(\frac{R(U_n)}{R(O_n)} \times \frac{M(U_n)}{M(O_n)}\right) \qquad (5)$$
where
$$R(U_i) = \frac{GL(U_i)}{TG(U_i)}, \quad R(O_i) = \frac{GL(O_i)}{TG(O_i)}, \quad M(U_i) = \frac{MGL(U_i)}{MTG(U_i)}, \quad M(O_i) = \frac{MGL(O_i)}{MTG(O_i)}$$
TG(Ui) is the number of total games played between team A and team i, while GL(Ui) represents the number of games lost by team i to team A; consequently, R(Ui) is the ratio between GL(Ui) and TG(Ui). GL(Oi) is the number of games lost by team i against other opponents (excluding A) and TG(Oi) is the number of games played between team i and other opponents (excluding A); therefore, R(Oi) is the ratio between GL(Oi) and TG(Oi). MGL(Ui) is the losing margin in the games lost by team i against A and MTG(Ui) is the sum of the margins in the total games played between A and the ith team; consequently, M(Ui) is the ratio between MGL(Ui) and MTG(Ui). MGL(Oi) is the losing margin in the games lost by team i against other opponents (excluding A) and MTG(Oi) is the sum of the margins in the games played between team i and other opponents (excluding A); therefore, M(Oi) is the ratio between MGL(Oi) and MTG(Oi). Here di is the damping factor; its value depends on the number of matches played by the opponent team. If the number of matches played by
the opponent is greater than or equal to the mean number of matches, the value of di is 1. If the number of matches played by the opponent is less than the mean, the value of di is the ratio of the number of matches played by the opponent to the mean number of matches.
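A sketch extending the CTR computation above with the margin ratios of Eq. (5) is shown below; as before, the per-opponent record layout and names are assumed for illustration, and the handling of degenerate ratios is not specified in the paper.

```python
def wctr(opponents, mean_matches):
    """Weighted Cricket Team Rank of team A, following Eq. (5).

    In addition to the CTR fields, each opponent entry carries:
      mgl_u, mtg_u: margin lost by i to A / total margin in games between i and A,
      mgl_o, mtg_o: margin lost by i to other teams / total margin in those games.
    """
    score = 0.0
    for opp in opponents:
        r_u = opp["gl_u"] / opp["tg_u"]      # R(U_i)
        r_o = opp["gl_o"] / opp["tg_o"]      # R(O_i)
        m_u = opp["mgl_u"] / opp["mtg_u"]    # M(U_i)
        m_o = opp["mgl_o"] / opp["mtg_o"]    # M(O_i)
        if r_o == 0 or m_o == 0:             # degenerate opponent; case not specified in the paper
            continue
        d_i = 1.0 if opp["matches"] >= mean_matches else opp["matches"] / mean_matches
        score += d_i * (r_u / r_o) * (m_u / m_o)
    return score
```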
4 Experiments
This section specifies the dataset used in this research, illustrates the results of each of the techniques investigated, and discusses the results in the context of international cricket (ODI, Test, and T20i matches). The use of a damping factor in each of the proposed techniques is also discussed, and a comparative analysis of the techniques is presented as well.
4.1 Dataset
The experiments are conducted using a dataset collected from the CricInfo website. This data corresponds to the data used in the latest rankings provided by the ICC for ODI, Test, and T20i matches (as of July 20, 2016). The batting statistics of each international match from January 2013 to December 2015 were captured and used to determine the weighted average value of a single wicket.
Tab. 4 shows the rankings produced by the different rank measuring techniques for ODI matches. Conceptually, the team rankings may be visualized as two halves. The top five of the rankings, from all three proposed techniques, are Australia, India, New Zealand, South Africa, and Sri Lanka. There are, however, differences in the results achieved by each formula. The ct-index ranks India as the number one team, as India's wins come by high margins in terms of runs and wickets. CTR ranks New Zealand higher, since New Zealand won the most matches against other high-ranking teams compared to the other member teams. WCTR ranks Australia as the top team: it took into account Australia's multiple wins against highly ranked teams, and Australia also won many matches by high margins in terms of runs and wickets. The latter half of Tab. 4 illustrates the lower-achieving teams. No matter which method is used, the same teams end up at the bottom of the list: Bangladesh, England, Pakistan, West Indies, and Zimbabwe, and Zimbabwe maintains its rank as the tenth, i.e., bottom, team. This is due to the fact that Zimbabwe won matches against lower-ranked competitors, won those by low margins of runs and wickets, and did not win many matches against stronger teams.
The ranks of the Test cricketing nations based on the Test cricket data are shown in Tab. 5, together with the scores obtained through the proposed measurements. The results show that the ranking again divides neatly into two halves, each containing five teams. The highest-ranking teams are Australia, England, India, Pakistan, and South Africa. When the ct-index, CTR, and WCTR methods are examined on the same dataset, the top positions are taken by England, South Africa, and Pakistan. Using the ct-index, the English team scores highest at 49.85; this high score is a result of England's wins by high margins. The CTR and WCTR methods rank South Africa highest: South Africa won a high number of matches, with a very strong ratio against other strong teams. The lower half of the ranking list contains the same team names no matter which evaluation technique is used: Bangladesh, New Zealand, Sri Lanka, West Indies, and Zimbabwe. For each of the three methods, Zimbabwe is the lowest-ranked team, because Zimbabwe did not win any Test match during the period covered by the data.
Tab. 6 shows the cricket team ranking results of the proposed measurements when applied to the T20i matches dataset. Unlike the ODI and Test matches, the resulting ranking cannot be divided into two clear groups. Using the ct-index, the Indian team is ranked number one; Team India won many matches with extremely high margins. For the CTR and WCTR evaluations, the English team stays at the top of the rankings: England won many matches against strong teams, and its margins against strong teams were also high. Australia lands at the bottom of the list for all three proposed methods. The reason Australia falls to the bottom is that its number of wins was lower, and the matches it did win were of lower value; the value assessment is based on the low ranking scores of the teams it defeated and the low margins of runs and wickets by which those wins were achieved.
4.3 Comparative analysis of the proposed and existing rank measuring techniques
The goal here is to compare the proposed techniques with the ICC ranking, which currently dominates the international ranking platforms, and with the ranking measures outlined by Daud et al. [Daud, Muhammad, Dawood et al. (2015)], in order to determine the most effective technique. The paper evaluates the proposed techniques (ct-index, CTR, WCTR), the techniques proposed by Daud et al. [Daud, Muhammad, Dawood et al. (2015)], and the ICC team rankings. The scores achieved by all the techniques are normalized to the range 0-1.
Figure 1: Comparison of ct-index, CTR, WCTR and ICC ODI cricket rankings
Figure 3: Comparison of ct-index, CTR, WCTR and ICC T20i rankings
4.3.2 Comparative analysis of the proposed ranking measurements with Daud et al. [Daud, Muhammad, Dawood et al. (2015)]
Daud et al. [Daud, Muhammad, Dawood et al. (2015)] proposed four different ranking measurements. Here, a comparison of the proposed methods with the three relevant ones is presented. For a fair comparison, the experiments are performed on the same dataset used by Daud et al. [Daud, Muhammad, Dawood et al. (2015)], which was collected for the period 2010 to mid-2012. The results are presented for ODI matches only.
The results in Tab. 7 clearly show an increase in the index values due to the impact of using a separate weightage for wickets instead of giving wickets and runs the same weightage. For example, India jumps from 5th under the t-index to 1st under the ct-index due to high-margin victories by runs/wickets.
Table 8: Comparative analysis of proposed (CTR) with Daud et al. [Daud, Muhammad,
Dawood et al. (2015)] (TR)
CTR TR
Rank Team Score Team Score
1 Pakistan 10.8223 Australia 0.133
2 South Africa 10.8122 India 0.121
3 India 10.1737 Pakistan 0.119
4 Australia 9.8100 Sri Lanka 0.114
5 Sri Lanka 8.8970 South Africa 0.110
6 England 8.7769 New Zealand 0.098
7 Bangladesh 6.6100 England 0.094
8 Zimbabwe 5.4226 West Indies 0.081
9 West Indies 5.1413 Zimbabwe 0.063
10 Ireland 3.5656 Bangladesh 0.061
Comparing Weighted Cricket Team Rank (WCTR) with Weighted Team Rank (WTR)
The final comparison is between the Weighted Cricket Team Rank (WCTR) and the Weighted Team Rank (WTR). WCTR is a later evolution, or extended formula, based upon CTR. The important characteristic of WCTR is that it allows teams which win by the biggest margins, and lose by the smallest margins, to receive the highest rankings. Daud et al. [Daud, Muhammad, Dawood et al. (2015)] supposed that WTR would have the same effect, but in fact WTR has the same issue as the t-index method: the same weight cannot be assigned to runs and wickets. The following comparison shows the differences obtained when employing the two methods:
Table 9: Comparative analysis of proposed (WCTR) with Daud et al. [Daud, Muhammad,
Dawood et al. (2015)] (WTR)
WCTR WTR
Rank Team Score Team Score
1 South Africa 11.9653 South Africa 0.051270
2 Pakistan 10.1000 Sri Lanka 0.051269
3 India 9.5706 Australia 0.051260
4 Australia 8.5990 India 0.051255
5 Sri Lanka 8.2380 New Zealand 0.051247
6 England 8.0268 England 0.051242
7 Bangladesh 6.3928 Pakistan 0.051204
8 Zimbabwe 5.3062 West Indies 0.051192
9 West Indies 5.1967 Bangladesh 0.050954
10 Ireland 4.4877 Zimbabwe 0.050824
The results presented in Tab. 9 clearly show the change in ranking due to the impact of using a separate weightage for wickets instead of giving wickets and runs the same weightage. For example, Sri Lanka drops from 2nd under WTR to 5th under WCTR due to low-margin victories by runs/wickets.
5 Discussion
In this section, a detailed discussion of the proposed techniques is presented. The
proposed techniques are elaborated by choosing and evaluating example data.
Table 10: Example data for presenting the relation between winning margins

Matches won while batting first (margin measured in runs):

Team  Runs scored  Wickets lost  Opponent runs  Opponent wickets lost  Winning margin (runs)  Total winning margin Tr (runs)
A     345          5             300            7                      45                     220
A     330          7             310            10                     20
A     350          9             320            10                     30
A     300          7             200            10                     100
A     270          6             245            9                      25
B     260          3             185            10                     75                     250
B     300          8             220            7                      80
B     310          7             255            6                      55
B     300          10            280            7                      20
B     280          5             250            3                      30

Matches won while batting second (margin measured in wickets):

Team  Opponent runs  Opponent wickets lost  Runs scored  Wickets lost  Winning margin (wickets)  Total winning margin Tw (wickets)
A     200            10                     201          0             10                        37
A     270            7                      271          5             5
A     230            8                      231          4             6
A     300            9                      301          3             7
A     170            10                     171          1             9
B     230            8                      231          5             5                         26
B     200            10                     201          8             2
B     330            5                      331          6             4
B     250            6                      251          1             9
B     270            6                      271          4             6
Team B would be ranked higher than team A. This is not the correct ranking, as team A has a higher margin of winning in terms of wickets than team B. The difference between the wicket margins is 11, which must be weighted much higher than the difference in the run margins, which is just 25. To find the relation between runs and wickets, we calculated the total runs scored by both teams, whether batting first or second. In the same manner, the total wickets lost by both teams across all the matches are calculated.
TR = total runs scored by both teams = 10520
TW = total wickets lost by both teams = 288
Runs per wicket (RPW) = TR/TW = 10520/288 = 36.53
The ranking through the proposed technique, ct-index, incorporating the relation between runs and wickets, is calculated as:
$$ct\text{-}index(A) = \sqrt{\frac{T_r + T_w \times RPW}{a}} = \sqrt{\frac{220 + 37 \times 36.53}{4}} = \sqrt{\frac{220 + 1351.61}{4}} = 19.82$$
winners. It may be observed from Eq. (3) that the contribution from team Ti while calculating the TR score of team A depends upon the ratio $\frac{PR(T_i)}{CT_i}$, where PR(Ti) represents the matches lost by the ith team against A and CTi is the total number of matches lost by Ti. The existing technique does not account for the number of matches team Ti has played; only the results, in the form of losses, are counted. There are many situations where this produces incorrect results. To illustrate the problems inherent in ranking without incorporating the statistics of total matches played, example data are presented in Tab. 11:
Table 11: Example data for determining the limitations of Daud et al. [Daud, Muhammad, Dawood et al. (2015)]

Team  Games lost against Team A (PR(Ti))  Games played against Team A  Total games lost (CTi)  Total games played  Contribution to ranking of A
B     3                                    5                            6                       10                  3/6 = 0.33
C     3                                    5                            6                       30                  3/6 = 0.33
As shown in Eq. (3), when calculating the rank of A through TR(A) [Daud, Muhammad, Dawood et al. (2015)], all the other teams contribute to it. The contribution made by B would be PR(B)/CTB; with the values from Tab. 11 this evaluates to 0.33. Similarly, the contribution made by team C towards the ranking of team A would be 0.33. This is not reasonable, as team C looks far stronger than team B, although both teams have the same statistics for games lost against A and total games lost against all teams. Team B lost quite frequently; for instance, it lost 3 matches to A while only 5 matches were played between them. The overall behaviour of team B is no different: it lost 6 matches out of the 10 it played in total. On the other hand, team C has much better statistics, i.e., it lost 3 matches to team A while winning 7 matches. Overall, across all teams, C played 30 matches, won 24, and lost only 6. The contribution made by both teams is the same because TR [Daud, Muhammad, Dawood et al. (2015)] does not use the overall statistics of the teams in terms of the total number of matches played against team A.
Table 12: Example data for illustrating opposition team strength

Team  GL(Ui)  TG(Ui)  GL(Oi)  TG(Oi)  R(Ui)  R(Oi)  Contribution = R(Ui)/R(Oi)
B     3       5       3       5       0.6    0.6    1
C     3       5       3       25      0.6    0.12   5

GL(Ui)/TG(Ui): games lost/played against Team A; GL(Oi)/TG(Oi): games lost/played against teams other than A; R(Ui) = GL(Ui)/TG(Ui), R(Oi) = GL(Oi)/TG(Oi).
The proposed CTR and WCTR solve the above-discussed issues by incorporating the number of matches played by the ith team against team A along with the total number of matches played by the ith team against all the teams other than A. In Eq. (4) and Eq. (5), the contribution made by team B to the ranking of team A is calculated by $\frac{R(U_i)}{R(O_i)}$, where $R(U_i) = \frac{GL(U_i)}{TG(U_i)}$ and $R(O_i) = \frac{GL(O_i)}{TG(O_i)}$.
While ranking A, the contribution by team B is 1 while it is 5 for team C (Tab. 12). The calculated contributions are logical and true, as team C is the stronger opposition and winning against a stronger opposition must be rewarded more highly than winning against a weaker one.
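The Tab. 12 figures can be checked with a couple of lines of arithmetic (a sketch with illustrative names):

```python
# Table 12: both teams lost 3 of 5 games to A; B lost 3 of 5 to other teams, C lost 3 of 25.
contribution_B = (3 / 5) / (3 / 5)    # R(U_B)/R(O_B) = 0.6/0.6  -> 1.0
contribution_C = (3 / 5) / (3 / 25)   # R(U_C)/R(O_C) = 0.6/0.12 -> 5.0
print(round(contribution_B, 2), round(contribution_C, 2))  # 1.0 5.0
```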
To calculate the damping factor di for the ith team, the following steps are performed:
i. Find the mean number of matches (mm) played by all the teams.
ii. (a) If the number of matches played by the ith team is greater than or equal to mm, then di = 1; otherwise
(b) di = (number of matches played by the opposition)/mm.
The total number of matches played by all the seven teams is 418, and the mean number of matches (mm) is 59.71. The damping factor for team C, which played 80 matches (more than mm), is 1, while the damping factor for team B, which played 8 matches, is calculated through rule ii(b), i.e., dB = 8/59.71 = 0.13. The contributions made by teams B and C in ranking team A are calculated as:
$$\text{Contribution through team B} = d_B \times \frac{R(U_B)}{R(O_B)} = 0.13 \times 1 = 0.13$$
$$\text{Contribution through team C} = d_C \times \frac{R(U_C)}{R(O_C)} = 1 \times 1 = 1$$
These two contributions for teams B and C reflect that winning against an established team weighs more than winning against an emerging one, and that the strength of the teams is determined accurately.
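These contributions can be reproduced with a short calculation using the damping rule above (values taken from the example; names illustrative):

```python
mean_matches = 418 / 7                   # mm = 59.71 for the seven-team example
d_B = min(1.0, 8 / mean_matches)         # team B played 8 matches  -> ~0.13
d_C = min(1.0, 80 / mean_matches)        # team C played 80 matches -> 1.0
# With R(U)/R(O) = 1 for both opponents, as in the example above:
print(round(d_B * 1, 2), d_C * 1)        # 0.13 1.0
```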
6 Conclusion
Adoptions of PageRank and the h-index are presented for cricket team ranking. In this regard, three ranking measurements are proposed, i.e., ct-index, CTR, and WCTR. The investigation focuses on the importance of the margin of a win and the quality of the opposition. If a match is won by a larger margin of runs and wickets, the impact is significant and affects the team's overall ranking. The use of a dynamic damping factor produces a significant difference from the use of a static damping factor. The weighting factor comes into play most prominently when two teams happen to win a similar number of games against competitors who are ranked approximately the same. Adopting the h-index and PageRank produces an accurate ranking of international cricket teams. The result is positive because the opposing team is weighted as a strong or weak opponent, and the margin of the win (in terms of runs and wickets) is taken into consideration for both the winning and the losing team.
Funding Statement: This research did not receive any specific grant from funding
agencies in the public, commercial, or not-for-profit sectors.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report
regarding the present study.
References
Amin, G. R.; Sharma, S. K. (2014): Measuring batting parameters in cricket: a two-
stage regression-OWA method. Measurement: Journal of the International Measurement
Confederation, vol. 53, pp. 56-61.
Borooah, V. K.; Mangan, J. E. (2010): The ‘Bradman class’: an exploration of some
issues in the evaluation of batsmen for test matches, 1877-2006. Journal of Quantitative
Analysis in Sports, vol. 6, no. 3, pp. 1-21.
Bracewell, P. J.; Ruggiero, K. (2009): A parametric control chart for monitoring
individual batting performances in cricket. Journal of Quantitative Analysis in Sports, vol.
5, no. 3.
Burrell, Q. L. (2007): Hirsch’s h-index: a stochastic model. Journal of Informetrics, vol.
1, no. 1, pp. 16-25.
Cerchiello, P.; Giudici, P. (2014): On a statistical h index. Scientometrics, vol. 99, no. 2,
pp. 299-312.
Dadelo, S.; Turskis, Z.; Zavadskas, E. K.; Dadeliene, R. (2014): Multi-criteria
assessment and ranking system of sport team formation based on objective-measured
values of criteria set. Expert Systems with Applications, vol. 41, no. 14, pp. 6106-6113.
Daud, A.; Muhammad, F.; Dawood, H.; Dawood, H. (2015): Ranking cricket teams.
Information Processing and Management, vol. 51, no. 2, pp. 62-73.
Duch, J.; Waitzman, J. S.; Amaral, L. A. N. (2010): Quantifying the performance of
individual players in a team activity. PLoS One, vol. 5, no. 6, e10937.
Egghe, L. (2006): Theory and practise of the g-index. Scientometrics, vol. 69, no. 1, pp.
131-152.
Ezadi, S.; Allahviranloo, T. (2018): New multi-layer method for Z-number ranking
using hyperbolic tangent function and convex combination. Intelligent Automation and
Soft Computing, vol. 24, no. 1, pp. 217-223.
Farooq, M.; Khan, H. U.; Malik, T. A.; Shah, S. M. S. (2016): A novel approach to
rank authors in an academic network. International Journal of Computer Science and
Information Security, vol. 14, no. 7, pp. 617.
Gao, S.; Wang, Z.; Chen, L. C. (2019): SI bitmap index and optimization for Membership
query. Intelligent Automation and Soft Computing, vol. 25, no. 4, pp. 683-689.
Haley, M. R. (2016): On the inauspicious incentives of the scholar-level h-index: an
economist’s take on collusive and coercive citation. Applied Economics Letters, vol. 24,
no. 2, pp. 85-89.
Haveliwala, T. H. (2002): Topic-sensitive PageRank. Proceedings of the 11th
International Conference On World Wide Web, pp. 517-526.
Hirsch, J. E. (2005): An index to quantify an individual’s scientific research output.
Proceedings of the National Academy of Sciences of the United States of America, vol.
102, no. 46, pp. 16569-16572.
Iván, G.; Grolmusz, V. (2011): When the web meets the cell: using personalized PageRank
for analyzing protein interaction networks. Bioinformatics, vol. 27, no. 3, pp. 405-407.
Manaskasemsak, B.; Rungsawang, A.; Yamana, H. (2011): Time-weighted web
authoritative ranking. Information Retrieval, vol. 14, no. 2, pp. 133-157.
Mihalcea, R.; Tarau, P. (2004): TextRank: Bringing order into texts. Proceedings of the
International Conference On Empirical Methods in Natural Language Processing, pp.
404-411.
Min, B.; Kim, J.; Choe, C.; Eom, H.; McKay, R. I. (2008): A compound framework
for sport prediction: the case study of football. Knowledge-Based Systems, vol. 21, no. 7,
pp. 551-562.
Mukherjee, S. (2012): Identifying the greatest team and captain-a complex network
approach to cricket matches. Physica A: Statistical Mechanics and Its Applications, vol.
391, no. 23, pp. 6066-6076.
Mukherjee, S. (2014): Quantifying individual performance in cricket-a network analysis
of batsmen and bowlers. Physica A: Statistical Mechanics and Its Applications, vol. 393,
pp. 624-637.
Nykl, M.; Campr, M.; Ježek, K. (2015): Author ranking based on personalized
PageRank. Journal of Informetrics, vol. 9, no. 4, pp. 777-799.
Page, L.; Brin, S.; Motwani, R.; Winograd, T. (1999): The PageRank citation ranking:
bringing order to the web. Technical Report. Stanford InfoLab.
Pérez-Rosés, H.; Sebé, F.; Ribó, J. M. (2016): Endorsement deduction and ranking in
social networks. Computer Communications, vol. 73, pp. 200-210.
Pradhan, D.; Paul, P. S.; Maheswari, U.; Nandi, S.; Chakraborty, T. (2017): C3-
index: a PageRank based multi-faceted metric for authors’ performance measurement.
Scientometrics, vol. 110, no. 1, pp. 253-273.
Qader, M. A.; Zaidan, B. B.; Zaidan, A. A.; Ali, S. K.; Kamaluddin, M. A. et al.
(2017): A methodology for football players selection problem based on multi-
measurements criteria analysis. Measurement: Journal of the International Measurement
Confederation, vol. 111, pp. 38-50.
Saqlain, S. M.; Usmani, R. S. A. (2017): Comment on “ranking cricket teams.”
Information Processing and Management, vol. 53, no. 2, pp. 450-453.
Springer, E. P. (2016): Using the h index to assess impact of DOE national laboratories.
Wang, Q.; Ren, J. D.; Davis Darryl, N.; Cheng, Y. Q. (2018): An algorithm for fast
mining top-rank-k frequent patterns based on node-list data structure. Intelligent
Automation and Soft Computing, vol. 24, no. 2, pp. 399-404.
Xiang, L.; Wu, W.; Li, X.; Yang, C. (2018): A linguistic steganography based on word
indexing compression and candidate selection. Multimedia Tools and Applications, vol.
77, no. 2, pp. 1-21.
Yeh, J. Y. (2018): Rank-order-correlation-based feature vector context transformation for
learning to rank for information retrieval. Computer Systems Science and
Engineering, vol. 33, no. 1, pp. 41-52.
Zhang, B.; Wang, Y.; Jin, Q.; Ma, J. (2015): A PageRank-inspired heuristic scheme for
influence maximization in social networks. International Journal of Web Services
Research, vol. 12, no. 4, pp. 48-62.