Evaluation of Metallurgical Recovery Factors For Diamonds Recovered From Kimberlites
April 2020
ABSTRACT
The extraction and recovery of diamonds requires that the host rock, kimberlite, is
fragmented to liberate the contained diamonds so that they can be recovered. Optimal
recovery requires trade-offs between maximising liberation, minimising diamond
breakage and loss, and containing the cost of recovery.
Effective fragmentation and recovery are not only dependent on the comminution
and recovery techniques used but are also a function of the interactions between the
diamond characteristics, the host rock properties and the technology used to crush
the kimberlite and recover the diamonds. Prior approaches have been limited by a
disregard for these relationships and how they change in response to variable
kimberlite and diamond characteristics and their impact on diamond recovery.
Incorrect recovery estimation impacts negatively on the evaluation, design and
operation of diamond mining projects.
This research develops and demonstrates methods to collect and spatially estimate
relevant orebody characteristics that impact on diamond liberation and subsequent
recovery. These characteristics are used in an integrated value chain model to
quantify the variability and uncertainty of diamond recovery. The use of this
technique is demonstrated in two case studies.
Submitted by Stephen Coward to the University of Adelaide as a thesis for the degree
of Doctor of Philosophy, December 2019.
I certify that this work contains no material which has been accepted for the award
of any other degree or diploma in my name, in any university or other tertiary
institution and, to the best of my knowledge and belief, contains no material
previously published or written by another person, except where due reference has
been made in the text. In addition, I certify that no part of this work will, in the future,
be used in a submission in my name, for any other degree or diploma in any
university or other tertiary institution without the prior approval of the University
of Adelaide and where applicable, any partner institution responsible for the joint-
award of this degree.
I give permission for the digital version of my thesis to be made available on the web,
via the University’s digital research repository, the Library Search and also through
web search engines, unless permission has been granted by the University to restrict
access for a period of time.
I acknowledge the support I have received for my research through the provision of
an Australian Government Research Training Program Scholarship.
2019-12-12
I am sincerely grateful to my family, friends and colleagues who have made this
research possible and have helped in so many ways to develop and produce this
thesis.
This research has taken place over a substantial period and has involved many
collaborators who have helped to shape the thinking behind the research and to
inform various avenues for resolving the research questions.
There are several groups of people who were particularly influential in the various
phases of this research including the following:
Wynand Kleingeld, who was the initiator of the Wells research unit, and other
members of the team including Niall Young, Johan Ferreira, Rob Pearce, Chris Prins,
Ina Dohm, and Tessa Dodkin.
Grant Nicholas, who played a material role in the formulation of the initial scope of
the integrated model.
Special mention must be made of Chris Prins' contribution and assistance with the
finalisation of the thesis.
QG Team:
1 INTRODUCTION ................................................................................................................ 22
1.1 CURRENT APPROACHES USED TO EVALUATE DIAMOND MINING PROJECTS ............................ 23
1.2 SPECIFIC CHALLENGES FACED IN EVALUATING DIAMOND-MINING PROJECTS ....................... 25
1.3 DIAMOND PROJECT EVALUATION PRACTICE ............................................................................. 31
1.4 USE OF THE METALLURGICAL RECOVERY FACTOR IN PROJECT EVALUATION ......................... 33
1.5 IMPACT OF RECOVERY PROCESS DESIGN AND OPERATION ON METALLURGICAL RECOVERY. 33
1.6 IMPACTS OF KIMBERLITE PROPERTIES ON TREATMENT PROCESS EFFICIENCY...................... 35
1.7 IMPACT OF DIAMOND PROPERTIES ON THE RECOVERY FACTOR ............................................. 37
1.8 THE SOURCES OF UNCERTAINTY IN ESTIMATING THE RECOVERY FACTOR ............................. 38
1.9 RESEARCH MODEL ....................................................................................................................... 41
1.10 THESIS STRUCTURE .................................................................................................................. 43
2 LITERATURE REVIEW..................................................................................................... 44
2.1 PHYSICAL CHARACTERISTICS OF KIMBERLITES AND DIAMONDS ............................................. 45
2.2 ROCK PROPERTY SAMPLING AND CHARACTERISTIC MEASUREMENTS .................................... 51
2.3 CHALLENGES OF SPATIALLY ESTIMATING PHYSICAL ROCK CHARACTERISTICS ...................... 57
2.4 MINERAL PROCESS MODELS AND SIMULATION PRACTICES ..................................................... 61
2.5 APPROACHES TO EVALUATING UNCERTAINTY AND VARIABILITY IN DIAMOND RECOVERY .. 67
2.6 SUMMARY OF PRACTICES AND LIMITATIONS IDENTIFIED IN THIS REVIEW ............................ 71
LIST OF TABLES
TABLE 1: SELECTION OF SEVERAL DIAMOND MINES SHOWING THE MINE GRADE RECOVERED IN
CPHT, AVERAGE DIAMOND VALUE IN $/CT AND THE AVERAGE REVENUE IN $/TONNE (PETRA
DIAMONDS, 2019; ANGLO AMERICAN, 2018; DE BEERS GROUP SERVICES, 2018) ........... 26
TABLE 2: ANALYSIS OF DIAMONDS SIZED BY SCREENING (ADAPTED AFTER FERREIRA 2013) ..... 27
TABLE 4: RELATIONSHIPS BETWEEN ORE PROPERTY AND THE IMPACT ON MINERAL PROCESSING.
........................................................................................................................................................ 63
TABLE 5: A LIST OF PREFERRED VALUATION METHODOLOGIES USED IN SOUTH AFRICA (SILVA AND
MINNITT, 2005). ........................................................................................................ 68
TABLE 6: FORMULAE FOR ESTIMATING ELASTIC MODULI OF SOLIDS USING MEASURED ACOUSTIC
VELOCITIES IN ROCK SPECIMENS. ................................................................................................. 89
TABLE 9: SUMMARY STATISTICS FOR THE UNIAXIAL COMPRESSIVE STRENGTH (UCS) AND
BRAZILIAN TENSILE STRENGTH (BTS) TEST WORK. ................................................................ 96
TABLE 10: SUMMARY STATISTICS OF DROP WEIGHT RESULTS OBTAINED FROM CORES. ................ 97
TABLE 11: SUMMARY OF DOWNHOLE GEOPHYSICAL READINGS TAKEN FROM EACH HOLE DRILLED
......................................................................................................................................................101
TABLE 12: DESCRIPTIVE STATISTICS OF THE DOWNHOLE FORMATION TESTER RESULTS. ...........102
TABLE 13: SUMMARY STATISTICS FOR DROP WEIGHT TEST DATA SHOWING SAMPLE AND
POLYGONAL STATISTICS. .............................................................................................................117
TABLE 14: TABLE AND GRAPHICS OF THE MODELLED VARIOGRAMS FOR P-WAVE VELOCITY, T10
SPECIMEN DENSITY AND T10_E1. .............................................................................123
TABLE 15: SUMMARY STATISTICS FOR DATA AND FIRST PASS ESTIMATES. ..................................124
TABLE 16: SUMMARY STATISTICS COMPARING ESTIMATED AND SIMULATED VALUES FOR P-WAVE
VELOCITY AND DROP WEIGHT RESULTS AT THE 5M BLOCK SCALE. ........................................ 129
TABLE 17: OPTIONS FOR USE OF SPATIAL DATA TO ESTIMATE PROCESS RESPONSE VARIABLES. 134
TABLE 18: SUMMARY STATISTICS FOR THE ESTIMATION AND CALCULATION OF BLOCK SCALE
PROPERTIES FOR PATHWAY 1. .................................................................................................. 135
TABLE 19: SUMMARY STATISTICS FOR THE ESTIMATION AND CALCULATION OF BLOCK SCALE
PROPERTIES FOR PATHWAY 2 ................................................................................................... 135
TABLE 20: SUMMARY STATISTICS FOR THE ESTIMATION AND CALCULATION OF BLOCK SCALE
PROPERTIES FOR PATHWAY 3. .................................................................................................. 136
TABLE 21: SUMMARY STATISTICS FOR THE ESTIMATION AND CALCULATION OF BLOCK SCALE
PROPERTIES FOR PATHWAY 4. .................................................................................................. 136
TABLE 22: SUMMARY STATISTICS FOR THE ESTIMATION AND CALCULATION OF BLOCK SCALE
PROPERTIES FOR PATHWAY 5. .................................................................................................. 136
TABLE 23: A TABULAR DEMONSTRATION OF A WHITEN SELECTION AND BREAKAGE MODEL. . 163
TABLE 25: CALCULATION OF LOCK UP WITH CONSTRAINT PLACED ON THE MAXIMUM SIZE OF
CONTAINED DIAMOND. ............................................................................................................... 176
TABLE 26: LISTING OF DIAMOND SCREEN SIEVE CLASSES AND ASSOCIATED SIEVE APERTURES,
AVERAGE STONE SIZE PER CLASS AND CRITICAL STONE SIZE PER CLASS................................ 180
TABLE 27: DESCRIPTION OF METHOD USED TO CONVERT THE SAMPLED TAILINGS DISTRIBUTION TO
A TOTAL KIMBERLITIC DISTRIBUTION. ...................................................................................... 185
TABLE 28: FITTING OF THE LOGNORMAL MODEL TO THE RECOVERED SIZE DISTRIBUTION. ....... 186
TABLE 29: PROCESS PARAMETERS REQUIRED FOR THE GRANULOMETRY MODEL. ....................... 187
TABLE 30: CALCULATION OF THE MAXIMUM LOCKED DIAMOND SIZE IN KIMBERLITE PARTICLES IN
EACH SIEVE CLASS. ...................................................................................................................... 187
TABLE 31: ALLOCATION OF THE LOCKED POTENTIAL IN EACH SIZE CLASS ACCORDING TO THE
PROBABILITIES DERIVED FROM THE RECOVERED DIAMOND SIZE DISTRIBUTION ..................188
TABLE 32: CALCULATION OF THE LOCKED CARAT POTENTIAL PER DIAMOND SIEVE CLASS. ........189
TABLE 33: CALCULATION OF THE LIBERATED AND LOCKED POTENTIAL REVENUE. .....................190
TABLE 34: SUMMARY OF THE CHARACTERISTICS OF THREE SAMPLING CAMPAIGNS AND THAT USED
TO DEFINE THE 'VIRTUAL OREBODY' (V-BOD). .........................................................196
TABLE 35: THE DESCRIPTIVE STATISTICS FOR THE V-BOD AND EACH SCENARIO FOR GRADE, DYKE
THICKNESS AND THE GEOMETRICAL VARIABILITY OF THE DYKE SURFACE (V1). ..................198
TABLE 37: SUMMARY OF MAJOR CAPITAL ITEMS PLANNED FOR EACH PHASE. ..............................208
TABLE 38: AVERAGE EXPECTED DMS YIELDS FOR EACH PHASE. ...................................................208
TABLE 39: LIST OF MODEL SETTINGS USED FOR THE FINANCIAL MODEL. ......................................210
TABLE 40: SUMMARY STATISTICS OF SPATIALLY SIMULATED GRADE, DENSITY AND DMS YIELD
VALUES. ........................................................................................................................................213
TABLE 41: SUMMARY STATISTICS OF $/TONNE DEPLETED BY LOBE BASED ON SIMULATED STONE
VALUES. ........................................................................................................................................215
TABLE 42: A LIST OF GUIDING PRINCIPLES FOR ROCK CHARACTERISTIC SAMPLING. ....................223
LIST OF FIGURES
FIGURE 3: VARIATION OF THE AVERAGE STONE SIZE ESTIMATED FROM SMALL SAMPLES. ............ 30
FIGURE 4: TYPICAL RELATIONSHIP BETWEEN DIAMOND SIZE AND CUT DIAMOND RETAIL VALUE
(ADAPTED FROM RAPPAPORT, 2009). ....................................................................................... 30
FIGURE 11: RESULTS OF A NUMBER OF ROCK TESTS ON A RANGE OF ROCK TYPES AFTER COPUR ET
AL. (2003). ................................................................................................................................... 56
FIGURE 12: SCHEMATIC OF THE PROCESS USED TO CREATE A FRAMEWORK TO QUANTIFY THE
IMPACT OF CHANGES TO SAMPLING , ESTIMATING AND SIMULATING KIMBERLITES ON PROCESS
MODELS. .......................................................................................................................................... 59
FIGURE 13: GEOLOGICAL MAP OF VENETIA K2 (AFTER BROWN 2008) SHOWING THE LOCATION OF
THE GEOMET DRILL HOLES (ORANGE CROSS). ............................................................................ 82
FIGURE 14: A VIEW OF THE LAYOUT OF THE CORE HOLES DEPICTING THE LOCATION OF THE
SUBSAMPLES. ................................................................................................................................. 82
FIGURE 15: LISTING OF SUBSAMPLES TAKEN FROM CORES THAT WERE FULLY SAMPLED. ............. 83
FIGURE 20: HISTOGRAM OF THE DENSITY OF SAMPLES SUBJECTED TO DROP WEIGHT TESTING.... 97
FIGURE 21: COMPARISON OF HISTOGRAMS OF DROP WEIGHT RESULTS FOR ALL SAMPLES TESTED AT
THREE DIFFERENT INPUT ENERGIES (TOP FIGURE IS LOWEST ENERGY; BOTTOM PANEL IS
HIGHEST ENERGY). ........................................................................................................................ 98
FIGURE 22: HISTOGRAM AND BASE MAP OF SAMPLE DENSITY FOR THE VKBR DOMAIN. .............. 99
FIGURE 23: A HISTOGRAM AND CROSS SECTION OF THE DENSITIES OF THE SAMPLES TAKEN FROM
THE VK FACIES DOMAIN ............................................................................................................... 99
FIGURE 24: COMPARATIVE HISTOGRAMS OF DROP WEIGHT RESPONSES AT THREE DIFFERENT INPUT
ENERGY LEVELS FOR THE VKBR FACIES (LHS) AND THE VK FACIES (RHS). .....................100
FIGURE 26: A PLOT SHOWING THE THREE READINGS GENERATED BY THE FORMATION HARDNESS
TOOL FOR HOLE DDH357. ........................................................................................................102
FIGURE 29: A PLOT SHOWING THE IMPACT OF THE SIZE OF THE INTERVAL USED ON THE TOTAL
VARIANCE OF THE BULK MODULUS MEASUREMENT. ............................................................... 112
FIGURE 30: A PLOT SHOWING THE LINEAR MODEL DEVELOPED BETWEEN THE FORMATION
HARDNESS TOOL READINGS AND THE UCS VALUES. ............................................................... 113
FIGURE 31: A BOX AND WHISKER PLOT SHOWING THE SAMPLE VALUE USED TO CALIBRATE THE PLS
MODELS ON THE LEFT OF THE PLOT, AND THE ESTIMATE OF UCS DOWN THE HOLE ON THE
RIGHT. THE HEIGHTS OF THE BARS INDICATE ‘GOODNESS OF FIT’ OF THE PARTIAL LEAST
SQUARES MODEL DERIVED FROM MULTIPLE VERSIONS OF THE MODEL USING DIFFERENT
COMBINATIONS OF SAMPLE AND HOLD OUT DATA. .................................................................. 114
FIGURE 32: A PLOT OF MEASURED UCS VALUES, AND MODELS FOR THE SAMPLES BASED ON THE
DOWNHOLE ACOUSTIC SIGNAL AND CUTTER PENETRATION DEPTH. ..................................... 115
FIGURE 35: AN EXPERIMENTAL AND FITTED MODEL FOR THE VARIOGRAM FOR THE UCS DATA.
...................................................................................................................................................... 119
FIGURE 36: N-S CROSS SECTION OF AREA ESTIMATED SHOWING HIGH UCS ESTIMATES IN HOTTER
COLOURS AND LOWER VALUES IN COOLER COLOURS (LHS) AND A HISTOGRAM OF THE BLOCK
UCS VALUES ESTIMATED. .......................................................................................................... 119
FIGURE 37: HISTOGRAMS OF THE ORIGINAL T10 VALUES (LEFT) AND THE TRANSFORMED VARIABLE
"MASS LESS 5MM IN G/TONNE" (RIGHT). ................................................................................ 124
FIGURE 38: 3 DIMENSIONAL PROJECTIONS OF THE KRIGING OF DROP WEIGHT VALUES (LEFT) AND
THE TRANSFORMED VARIABLE "MASS LESS 5MM IN G/TONNE" (RIGHT). .......................... 125
FIGURE 39: CROSS SECTIONS OF THE OREBODY AND HISTOGRAMS FOR THE THREE VARIABLES
ESTIMATED INDEPENDENTLY INTO A 0.7M X0.7M X 0.7M GRID AND ACCUMULATED INTO A 5M
X 5M X 5M GRID. ......................................................................................................................... 126
FIGURE 40: HISTOGRAMS SHOWING TRANSFORM OF DATA FROM RAW DATA TO GAUSSIAN
VARIABLES. .................................................................................................................................. 127
FIGURE 41: VARIOGRAM MODELS (TOP LEFT AND BOTTOM RIGHT) AND CROSS VARIOGRAM MODELS
(BOTTOM LEFT) FOR THE GAUSSIAN TRANSFORMS OF P-WAVE AND DROP WEIGHT TEST DATA.
......................................................................................................................................127
FIGURE 42: PERSPECTIVE PLOTS SHOWING A CROSS-SECTION THROUGH THE TEST AREA FOR 5M X
5M X 5M BLOCKS FOR SIMULATED AND KRIGED P-WAVE VELOCITY (UPPER IMAGES) AND DROP
WEIGHT TEST DATA(LOWER IMAGES). ......................................................................................128
FIGURE 43: SCHEMATIC OF THE OPTIONAL ROUTES TO USE POINT SCALE SAMPLE DATA TO PREDICT
THROUGHPUT. ..............................................................................................................................130
FIGURE 44: A RELATIONSHIP BETWEEN INPUT ENERGY AND DEGREE OF FRACTURE, EXPRESSED AS
PERCENTAGE PASSING 1/10TH OF ORIGINAL PARTICLE SIZE (BLUE DIAMONDS), SHOWING A
FITTED BREAKAGE FUNCTION IN BLACK. ...................................................................................131
FIGURE 45: PLOT SHOWING BROAD CORRELATION BETWEEN AVERAGE LONG RUN ENERGY
CONSUMPTION IN SEMI AUTOGENOUS GRINDING (SAG) MILLS AND AVERAGE OREBODY A*B
VALUES (DANIEL, LANE AND MCLEAN, 2010). ......................................................................132
FIGURE 46: RELATIONSHIP BETWEEN POWER CONSUMPTION AND THROUGHPUT FOR A TARGET
GRIND SIZE. ..................................................................................................................................133
FIGURE 47: HISTOGRAMS OF THE VARIABLES THAT ARE CALCULATED AND ESTIMATED IN PATHWAY
1. ...................................................................................................................................................137
FIGURE 48: HISTOGRAMS OF THE VARIABLES THAT ARE CALCULATED AND ESTIMATED IN PATHWAY
3, DATA HAS BEEN GROUPED BY PREDICTED THROUGHPUT QUARTILES. ...............................137
FIGURE 49: PLOT SHOWING THE VARIABILITY IN WEEKLY THROUGHPUT AND A TABLE WITH
SUMMARY STATISTICS FOR EACH OF THE VARIABLES CALCULATED THROUGH EACH PATHWAY .
......................................................................................................................................................138
FIGURE 50: A SCHEMATIC DEPICTION OF APPROACHES TO ESTIMATING THE RECOVERY FACTOR FOR
DIFFERENT PROJECT MATURITIES. .............................................................................................143
FIGURE 51: A PLOT OF THE SPECIFIC INPUT ENERGY AND CUMULATIVE PROBABILITY OF FAILURE
FOR A FEW SELECTED MINERALS, SIZE PARAMETER SET TO 5MM (ADAPTED AFTER KING
2001). .........................................................................................................................................158
FIGURE 52: A PLOT OF THE MEDIAN FRACTURE ENERGY FOR MINERAL PARTICLES OF DIFFERENT
SIZES. ............................................................................................................................................159
FIGURE 53: A PLOT OF ENERGY INPUT VS PRODUCT SIZE USING THE T10 APPROACH AND A ROSIN
RAMMLER BREAKAGE FUNCTION MODIFIED AFTER KING (2001). ...................................... 167
FIGURE 54: A DENSIMETRIC DISTRIBUTION PLOTTED FOR FOUR SAMPLES DERIVED FROM
KIMBERLITE, CRUSHED TO 100% PASSING 12MM AND GROUPED BY DENSITY CLASSES. ... 169
FIGURE 55: A PLOT OF THE LOG OF DIAMOND WEIGHT VS LOG OF THE NUMBER OF STONES IN EACH
CLASS PER HUNDRED TONNES PER UNIT INTERVAL. ................................................................ 171
FIGURE 56: A PLOT OF THE LOGARITHM OF DIAMOND SIZE VS STONE FREQUENCY IN STONES PER
HUNDRED TONNES PER UNIT INTERVAL. .................................................................................. 172
FIGURE 57: PLOT SHOWING THE ACTUAL AND MODELLED LOG NORMAL DIAMOND SIZE
DISTRIBUTION. ............................................................................................................................ 182
FIGURE 58: COMPARISON OF THE THICKNESS AND V1 BASE MAPS FOR THE KRIGED AND SIMULATED
OUTPUTS OF EACH SCENARIO WITH THAT OF THE V-BOD. GRADE WAS HELD CONSTANT FOR
EACH SCENARIO. .......................................................................................................................... 197
FIGURE 59: A DIAGRAM DEPICTING THE IMPLEMENTATION OF THE MINING CONSTRAINT LOGIC.
...................................................................................................................................................... 199
FIGURE 61: A VIEW OF THE THREE LOBES OF THE DEPOSIT LOOKING FROM THE WEST TO THE EAST
(SOUTH LOBE IN DARK BLUE) ADAPTED AFTER CAMPBELL (2009). ................................. 207
FIGURE 62: SCHEMATIC OF PROCESS FLOWS USED FOR PROCESS MODEL. ..................................... 209
FIGURE 63: PLOT SHOWING A SIMULATION OF THE SIZE OF 20 000 DIAMONDS DRAWN. .......... 210
FIGURE 66: SUMMARY PLOT OF CUMULATIVE DISCOUNTED CASHFLOW FOR THE AK PROJECT, P50
CASE SHOWN IN GREEN, P80 AND P20 CASES SHOWN IN RED, INDIVIDUAL CASES SHOWN IN
GREY. ............................................................................................................................ 216
FIGURE 67: GRADE SIZE PLOT SHOWING THE IMPACT OF APPLYING A STRICT SIZE CUT-OFF TO A
TOTAL CONTENT CURVE. .............................................................................................................217
FIGURE 69: A DATA TYPOLOGY ADAPTED AFTER KEENEY AND WALTERS (2008). ....................221
FIGURE 70: A LANDSCAPE FOR SAMPLE TYPE CLASSIFICATION IN TERMS OF BOTH SPATIAL
CONTINUITY AND PRIMARY RESPONSE DIMENSIONS. ...............................................................222
FIGURE 71: SCHEMATIC SHOWING THE LOCATION OF SAMPLES FROM K2 PIPE. ...........................247
FIGURE 72: A VIEW OF THE LAYOUT OF THE CORE HOLES DEPICTING THE LOCATION OF THE
SUBSAMPLES ................................................................................................................................248
FIGURE 73: LISTING OF SUBSAMPLES TAKEN FROM CORES THAT WERE FULLY SAMPLED ............249
FIGURE 74: DEPICTION OF THE SUBSAMPLES TAKEN FROM CORES THAT WERE PARTIALLY SAMPLED.
......................................................................................................................................250
FIGURE 75: A VIEW FROM THE SOUTH-WEST OF VENETIA K2 SAMPLED AREA - SHOWING
DOWNHOLE DENSITY. ..................................................................................................251
FIGURE 77: A VIEW FROM THE SOUTH-WEST OF VENETIA K2 SAMPLED AREA - UCS SAMPLE VALUES
AS SCALED SPHERES AND ESTIMATED BLOCK DENSITY IN TRANSPARENT BLOCKS. ..............252
FIGURE 78: A VIEW FROM THE SOUTH-WEST OF VENETIA K2 SAMPLED AREA - T10 SAMPLE VALUES
AS SPHERES AND ESTIMATED T10 IN TRANSPARENT BLOCKS. .................................253
LIST OF APPENDICES
APPENDIX 1 - EXPERIMENTAL DESIGN AND DATA FROM OREBODY SAMPLING .............................. 247
1 INTRODUCTION
The metallurgical recovery factor is introduced in the context of the diamond mining
industry and its impact on diamond mining project evaluation. The connection
between the unique characteristics of diamonds (their particulate nature, low
concentration and logarithmic relationship between size and value), their host
deposits (kimberlite), and the diamond recovery processes is explained.
The consequences of using global estimates for rock and diamond characteristics
are described; such estimates are based on few, spatially sparse data, and there are
vast differences between the scale of measurements made on small samples and the
scale at which these properties are estimated.
The primary response framework (Coward et al., 2009) is introduced to clarify the
taxonomy for variables that are used in this research. This conceptual framework
provides a basis for developing quantitative models of the relationships between
variables that drive uncertainty in the recovery of diamonds. These models are used
to propagate variability and uncertainty through the diamond recovery value chain.
The specific uncertainties explored include those associated with data collection,
characteristic estimation, and process response modelling.
The introduction concludes with a demonstration of the role that the metallurgical
recovery factor plays in evaluating the economic potential of diamond mines.
Evaluating the economic and technical viability of diamond mining projects requires
several inputs and assumptions. The report required by Canadian National
Instrument NI-43-101 (NI43-101, 2001) for standards of disclosure for minerals
projects comprises a detailed list of information and technical data vouched for by
a Competent (Qualified) Person. Many orebody inputs (e.g., grade, tonnage
estimates) are derived from limited, and/or spatially sparse, data and are thus
characterised by a high degree of uncertainty. One of these uncertain inputs is the
metallurgical recovery factor that is used to calculate the quantity and quality of the
diamonds that will be delivered by the project.
Current reviewed methods of estimating and using the metallurgical recovery factor
do not explicitly account for the impact that variable and uncertain kimberlite
characteristics have on the derivation and use of the factor. Assumptions of
continuity of kimberlite characteristics in large parcels of mined material have the
potential to under- or over-estimate recoveries and, in some cases, lead to material
inaccuracy in the estimate of project value (Mackey and Nesset, 2003).
The potential value of a diamond project is driven, to a large degree, by the size of
the deposit and by the in situ diamond grade and value of the resource. Translating
the estimate of the in situ grades of the orebody into an expected cash flow model
for valuation of the mining project requires valid methods to determine diamond
recovery and loss. The metallurgical recovery factor is one
of the modifying factors used in the development of the valuation model. Its value
reflects the proportion of the total population of in situ diamonds that is expected
to be recovered when the mine is operational. This expected carat recovery is then
integrated with the other estimates into a financial model to estimate a mining
project’s value.
Development of a mining project begins with the discovery of a kimberlite that has
the potential to contain diamonds. Once discovered, several sampling and
measuring processes are initiated to gather data on the diamond content and
information that is used to infer the size of the deposit. These activities include
collection of geological information based on outcrops, core and trench samples.
The uncertainty associated with the grade estimates of portions of the resource is
used to classify the resource into inferred, indicated or measured categories as per
the requirements of the prevailing codes for reporting exploration results, mineral
resources and ore reserves, e.g. JORC (JORC, 2012) and SAMREC (SAMREC, 2016).
These resources are then subjected to a planning process that is usually divided into
several phases. At each phase the number of options, or configurations, considered
is reduced and, ideally, as more information is acquired, confidence in the
estimates of the primary parameters (grade, tonnage, deposit shape) of the mineral
project is increased. Once a mine plan and a proposed process plant design has been
completed, it is possible to estimate the expected tonnage and grade that will be
produced by the project. This calculation includes assumptions about both mining
and metallurgical process efficiencies. These efficiencies are used to estimate the
reserves of the project that can be classified as either probable or proven as defined
in the relevant classification code e.g., JORC (2012) code. The declared reserves are
an essential input into the mine and process plant design and configuration.
The evaluation process is iterative in nature, and usually includes several phases of
sampling, sample processing, resource estimation, mine and process design, reserve
estimation and valuation. In each iteration, more information on the deposit is
acquired, reducing uncertainty in the geological models, grade estimates, and other
inputs used to evaluate the viability of the mining project.
At the end of each assessment stage a gating process is carried out to select the ‘best
next step’ for the project. The next steps might include divesting from the project,
halting the project, continuing the original plan, or increasing expenditure to
expedite the project delivery. These decisions are informed by both the valuation
of the project and, to some degree, the uncertainty of that valuation (Brennan &
Schwartz, 1985; Bratvold & Begg, 2002).
Although diamonds also occur in other deposit
styles, such as alluvial and fluvial, this research is focused on kimberlitic deposits.
Kimberlites are the transport mechanism that brings diamonds from the diamond
stability field in the earth’s mantle into the crust in a medium that preserves the
diamonds (Field & Scott Smith 1999).They are formed by a diverse range of natural
volcanic events. Such events give rise to deposits that exhibit a wide range of
geometries and thus require substantial geological investigation to understand the
geometry of the phases of the deposit and the presence and dispersion of diamonds
contained within each phase (Field et al. 2008).
The grades of diamond deposits are typically expressed as carats per hundred
metric tonnes (CPHT). A carat is equivalent to 0.2g and thus this measurement
equates to a weight-by-weight fraction of one part per 500 million. Table 1 gives a
few examples of grades of mines that are, or were, operating at the time of
publication.
| Mine  | Average production grade (cpht) | Average value ($/ct) | Revenue ($/tonne) | Year |
|-------|---------------------------------|----------------------|-------------------|------|
| Orapa | 81                              | 97                   | 79                | 2016 |

Table 1: Selection of several diamond mines showing the mine grade recovered in cpht, average diamond value in $/ct and the average revenue in $/tonne (Petra Diamonds, 2019; Anglo American, 2018; De Beers Group Services, 2018).
| Diamond Sieve (#) | Lower Critical Size (Ct/Stone) | Average Size (Ct/stone) | Unit Interval Factor | Cts Retained on Each Screen (Ct) | Stone Count (#) |
|---|---|---|---|---|---|
| +15 CTS | 14.8 | 17.118 | - | 0.00 | 0 |
| +23 | 8.0360 | 10.9060 | 3.7704 | 0.00 | 0 |
| +21 | 3.6910 | 4.8500 | 2.9595 | 0.00 | 0 |
| +19 | 1.9180 | 2.4800 | 3.5175 | 0.00 | 0 |
| +17 | 1.4230 | 1.5700 | 7.7134 | 0.00 | 0 |
| +15 | 1.1950 | 1.2600 | 13.1862 | 1.75 | 1 |
| +13 | 0.7030 | 0.8600 | 4.3400 | 3.22 | 4 |
| +12 | 0.5230 | 0.5610 | 7.7849 | 2.57 | 5 |
| +11 | 0.3170 | 0.3710 | 4.5989 | 0.92 | 2 |
| +9 | 0.1790 | 0.2110 | 4.0289 | 0.92 | 4 |
| +7 | 0.1170 | 0.1230 | 5.4151 | 3.38 | 27 |
| +6 | 0.0792 | 0.0896 | 5.9011 | 0.90 | 10 |
| +5 | 0.0485 | 0.0730 | 4.6952 | 0.90 | 12 |
| +3 | 0.0256 | 0.0350 | 3.6036 | 0.60 | 17 |
| +2 | 0.0138 | 0.0210 | 3.7263 | 0.43 | 20 |
| +1 | 0.0054 | 0.0140 | 2.4541 | 0.02 | 2 |
| -1 | 0.0020 | 0.0090 | 2.3182 | 19.00 | 2111 |

Table 2: Analysis of diamonds sized by screening (adapted after Ferreira, 2013).
Diamond sieves are metal plates with round holes of a specific diameter punched in
them. The Diamond Trading Company nomenclature for the sieves is given in the
first column of Table 2. Each aperture size can be related to a characteristic
diamond weight that has a 50% probability of either passing through or being
retained on the screen of this aperture; this is referred to as the critical size for that
screen aperture.
Because the spacing between successive sieve apertures varies across the sieve
sequence, the masses retained on each sieve must be factored, or standardised, by
the relative 'distance' of the interval between sieves in order to compare relative
abundance across size fractions. This factor, known as the unit interval factor, is
derived from Equation 1:
$$\text{Unit Interval} = \frac{1}{\log(C_{sz_{up}}) - \log(C_{sz_{low}})} \qquad \text{(Equation 1)}$$
where $C_{sz_{up}}$ is the critical size in ct/stone of the previous sieve used in the
sequence, and $C_{sz_{low}}$ is the critical size in ct/stone of the sieve on which the
diamonds are retained. This unit interval is used to normalise the mass or number
of stones retained per sieve class when plotting various charts of the diamond size
distribution.
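As a minimal illustration, the following Python sketch applies Equation 1 to the critical sizes of successive sieves. Base-10 logarithms are assumed; with that assumption the sketch reproduces the unit interval factors tabulated in Table 2.

```python
import math

def unit_interval(csz_up: float, csz_low: float) -> float:
    """Equation 1: 1 / [log10(Csz_up) - log10(Csz_low)]."""
    return 1.0 / (math.log10(csz_up) - math.log10(csz_low))

# Critical sizes (ct/stone) of successive sieves, taken from Table 2.
print(round(unit_interval(8.0360, 3.6910), 4))  # +21 class -> 2.9595
print(round(unit_interval(3.6910, 1.9180), 4))  # +19 class -> 3.5175
```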
The diamonds retained in any sieve class can be reported as either a mass of
diamonds or a stone count. The average size of stone retained on a sieve in the
sequence is derived by sieving and counting the stones retained on each sieve, and
then dividing the total mass of stones retained on the sieve by the stone count. For
example, the +13 class in Table 2 retains 3.22 ct over 4 stones, an average of about
0.81 ct/stone. This average can be used to generate an approximate estimate of the
number of stones retained in a sieve interval for other sieved parcels of diamonds.
The information in Table 2 is also depicted in Figure 2, which shows diamond weight
class on the horizontal axis and diamond count on the vertical axis. The plot on the
left demonstrates the relationship between size and stone abundance. The plot on
the right-hand side is calculated by taking the logarithm of the class average stone
size and applying the unit interval correction to the stone count in each class.
Figure 2: Diamond size vs stone count per weight class, shown against the class average stone size (left) and against the logarithm of the class average stone size with the unit interval correction applied (right).
The graph also shows how the actual average stone size in the parcel is
underestimated (biased) because larger stones are under-represented in small
samples. This demonstrates why large samples are required; even with very large
samples there is always a probability that the recovered diamond size distribution
will not be representative of the in situ size distribution. For this reason, models of
the relative abundance in each size fraction are often used to predict the diamond
size distribution that will be recovered by a full-scale production recovery process.
[Plot: estimated average stone size (Ct/stone) against the number of stones sampled, showing the minimum, average and maximum sample estimates relative to the actual average stone size.]
Figure 3: Variation of the average stone size estimated from small samples.
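The sampling effect shown in Figure 3 can be reproduced with a simple Monte Carlo sketch. The lognormal parameters below are illustrative assumptions, not values fitted to any deposit in this thesis; the point is that, for a heavy-tailed size distribution, the typical (median) small-sample estimate of the average stone size sits below the true mean even though the estimator is unbiased on average.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical lognormal in situ size distribution (illustrative parameters).
population = rng.lognormal(mean=-3.0, sigma=1.4, size=200_000)  # ct/stone
true_mean = population.mean()

# Most small samples miss the rare large stones, so the median estimate of
# the average stone size falls below the true mean.
for n in (25, 50, 100, 200):
    sample_means = rng.choice(population, size=(2_000, n)).mean(axis=1)
    print(f"n={n:>3}: median estimate = {np.median(sample_means):.4f}, "
          f"true mean = {true_mean:.4f}")
```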
Diamond value is positively correlated with size. This relationship (see Figure 4) can
be described using a logarithmic model for a given combination of shape, colour and
quality. An accurate model of the in situ $/ct for each diamond size class in the
population is required to estimate the in situ $/tonne.
Figure 4: Typical relationship between diamond size and cut diamond retail value (adapted
from Rappaport, 2009).
If the parcel in Table 2 were valued using a $/ct per sieve class like that depicted in
Figure 4, its 'bench value' would be 140 $/ct. If the same parcel were sampled as
described above using samples containing 100 stones, the variation in the sampled
size distribution would produce samples with bench values of between 46 $/ct and
471 $/ct.
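A bench value of this kind is simply the carat-weighted average of the $/ct per sieve class. The sketch below shows the calculation for a subset of the Table 2 parcel; the carats per class are taken from Table 2, but the $/ct figures are illustrative assumptions and are not the valuation behind the 140 $/ct quoted above.

```python
# Sieve class: (carats retained from Table 2, assumed $/ct rising with size).
parcel = {
    "+13": (3.22, 520.0),
    "+11": (0.92, 310.0),
    "+7":  (3.38, 150.0),
    "+3":  (0.60, 60.0),
    "-1":  (19.00, 8.0),
}

total_ct = sum(cts for cts, _ in parcel.values())
total_usd = sum(cts * usd_per_ct for cts, usd_per_ct in parcel.values())
print(f"Bench value: {total_usd / total_ct:.0f} $/ct over {total_ct:.2f} ct")
```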
The under-sampling, and low recovery, of large stones in small samples is a
substantial source of uncertainty in the evaluation of kimberlite deposits. Methods
that have been developed to address this are discussed further in the literature
review section and their impact is described in case study 2.
Given the cost of diamond sampling and the low probability that a randomly
selected kimberlite will have an economic diamond content, initial evaluation
activities following discovery are limited to the expenditure sufficient to generate
just enough data to decide whether the deposit is likely to contain diamonds in
quantities that will support economic extraction.
Micro diamonds are diamonds smaller than 0.5 mm that are recovered from core
through a thermo-chemical dissolution process. Typically, samples of the order of
20 to 50 kg are required. The analysis of micro diamond results is used to determine
the size-frequency relationships that exist in the lithologies within the deposit. This
size vs stone frequency relationship is modelled and used to generate an estimate of
the diamond potential of the deposit by extrapolating the relationship from micro
diamond sizes to larger diamond sizes (Rombouts, 1995).
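A common form of this extrapolation fits a straight line to the size-frequency data in log-log space and projects it into macro diamond sizes. The sketch below uses entirely hypothetical micro diamond counts to show the mechanics; it is not the model developed in this thesis.

```python
import numpy as np

# Hypothetical micro diamond results: average stone size (ct) per class and
# stones per hundred tonnes per unit interval (illustrative values only).
size_ct = np.array([1e-5, 1e-4, 1e-3, 1e-2])
freq_per_ht_ui = np.array([5.0e5, 6.3e4, 7.9e3, 1.0e3])

# Fit log10(frequency) = a + b * log10(size), then extrapolate the straight
# line to a macro diamond size, assuming the trend persists.
b, a = np.polyfit(np.log10(size_ct), np.log10(freq_per_ht_ui), 1)

macro_size_ct = 0.1
predicted = 10 ** (a + b * np.log10(macro_size_ct))
print(f"slope = {b:.2f}; predicted frequency at {macro_size_ct} ct/stone: "
      f"{predicted:.0f} stones per hundred tonnes per unit interval")
```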
Should the potential diamond content be deemed sufficient for investment purposes
then the next phase of evaluation is initiated, and usually aims to establish an
estimate of the financial value of the diamonds in the deposit. The financial value of
diamonds is influenced by several factors. These factors include diamond size, their
quality (for example number of inclusions, flaws), their colour, and their shape. The
valuation process starts with the definition of the size frequency of the diamonds.
As the number of diamonds acquired increases, the assortment of the diamonds in
the deposit begins to be revealed. The assortment refers to the shape, colour and
quality of the diamonds in a deposit. To achieve reliable estimates of diamond
values typically requires approximately three thousand to five thousand carats for
assessment (Rombouts, 1995). Figure 5 is based on the sample depicted in Table 2.
Even with the time, cost and logistical complexity associated with the collection and
treatment of large samples, it is possible, with sufficient core and large diameter
drilling, to estimate the in situ macro diamond grade with reasonable confidence. It
has been demonstrated that diamond grades are spatially correlated variables that
can be estimated using spatial estimation techniques such as kriging (Kleingeld,
1996; Rombouts, 1995).
As the project matures, and more information on the project is acquired, it becomes
possible to improve the methods used to derive the recovery factor and increase the
spatial resolution of the recovery factor, both into smaller areas of the deposit
and/or to shorter periods of production that are associated with the mining and
treatment of specific areas of the deposit. The form of the factor can also be adapted
to express the expected recovery per diamond size fraction. The recovery factor
does not depend on a single variable tied directly to a regionalised variable, but is
the result of the complex interaction of the:
The recovery process aims, as far as is possible, to preserve the in situ diamond size
distribution between the predetermined upper and lower size limits of the process.
To do this, the ore is crushed in stages, with several cycles of diamond removal from
the crushed products. The process flow sheet can be considered as having three
primary objectives. To achieve these efficiently, several different size streams are
generated by screening, and each size stream is then subjected to slightly different
processes tailored to the size distribution being treated.
The selection of design parameters for the treatment of a given kimberlite deposit
is based on a combination of deposit scale, assumptions about the kimberlite
characteristics and long run performance of the selected unit processes. During the
early stages of the project the recoverable diamond size envelope is determined,
which defines the top and bottom cut-off size of diamond that will be recovered. As
the project progresses, several trade-off studies are undertaken with the aim of
optimising the balance between increased recovery and increased capital and
operating cost of selected treatment processes.
If the host rock is crushed very finely, a larger number of diamonds will be released,
but there will also be an increased probability of diamond damage and/or breakage.
The range of recovered diamond size and the sequence of crushing and separation
that is used are very important considerations in the design and management of the
operation. The configuration of these processes also has a material impact on the
proportion and size distribution of diamonds that are liberated, recovered and lost
through lock up and damage.
Blasting, the first phase of comminution, is used to extract the rock and reduce its
size for delivery to the recovery plant. Blasting changes the properties of the rock to
some extent and causes some diamond damage; studies have, however, shown this
damage to be negligible beyond five blast hole diameters (Wilmott, 2004).
In-pit and primary crushing usually produce a product with a top size that is in the
range of 250 to 125mm. This reduces the feed to a manageable size for entry to the
plant. The next stage of comminution reduces the material to a nominal cut size of
32mm usually through a combination of cone and impact crushers.
The material is then washed, and fines are removed in preparation for dense media
separation. In the Dense Media Separation (DMS) process the diamond particles,
having a density of 3.5 g/cm3, are separated from the particles of kimberlite, most
of which are less dense than diamond. Several machines are used to do this, but by
far the most common is the hydrocyclone. The crushed ore is mixed into a
suspension of water and ferrosilicon (the dense media). This mixture is pumped into
the hydrocyclone; the dense particles pass to the spigot, or underflow, whilst the less
dense particles migrate to the vortex finder and hence into the overflow.
The tailings from this process may contain diamonds in two forms:
1 - Liberated 'free' diamonds that have been misclassified and report to the floats
because of inefficiencies in the separation process; and
2 - Diamonds that remain locked in kimberlite and have floated out of the
separation process because the combined density of the composite particle,
consisting of diamond and kimberlite, is less than the effective cut-point of the
dense media separation process.
This stream may be re-crushed to liberate these so-called ‘locked’ diamonds and to
recover any errant ‘free’ diamonds.
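The float-loss mechanism in the second case follows directly from the density of the composite particle. A minimal sketch, assuming typical densities of about 3.52 g/cm3 for diamond and 2.6 g/cm3 for kimberlite and an illustrative DMS cut-point of 3.1 g/cm3:

```python
def composite_density(m_diamond_g: float, m_kimberlite_g: float,
                      rho_diamond: float = 3.52,
                      rho_kimberlite: float = 2.6) -> float:
    """Bulk density (g/cm3) of a diamond-kimberlite composite particle."""
    volume = m_diamond_g / rho_diamond + m_kimberlite_g / rho_kimberlite
    return (m_diamond_g + m_kimberlite_g) / volume

# A 1 ct (0.2 g) diamond with 0.3 g of adhering kimberlite:
rho = composite_density(0.2, 0.3)
print(f"Composite density: {rho:.2f} g/cm3")  # ~2.90, floats at a 3.1 cut-point
```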
The undersize material and coarse oversize material from the process are usually
disposed of in so-called 'slimes dams' or 'processed kimberlite dumps'. Due to
changing market conditions and improving technology, these dumps may, over
time, become economic to exploit.
The impact that rock properties have on the process depends on the interaction of
the rock properties with the process. This interaction is controlled by several
variables, which are discussed below.
Kimberlites have been described, from a processing point of view, as relatively soft,
clay-rich rocks; however, owing to the process of emplacement they may contain
differing amounts of several rock types that are mixed into the kimberlite (Boychuk
et al., 2012). This mixing, and the contents of the resulting mixtures, means that
kimberlites comprise a wide range of rock types with numerous and variable
physical characteristics.
These variable characteristics directly affect comminution, for example by limiting
throughput and reducing the average size of the particles produced. Excessive clay
content can cause the particles to become sticky, cause agglomeration in the
crushing chamber, and reduce throughput by clogging crushers and feed and
discharge chutes.
Fine-grained fresh or hypabyssal kimberlites exhibit high rock strengths (e.g.,
Cullinan Mine's hypabyssal kimberlite has been measured to have a uniaxial
compressive strength of 150 MPa). At this strength, the energy required to crush the
material can exceed the installed power of the comminution circuit. This can lead
either to a reduction in the overall grind of the material (a coarser product) or, for a
given grind, require a reduction in the plant throughput.
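The throughput consequence can be illustrated with Bond's third law of comminution, a standard relationship used here purely as an example; the work index, size and power figures below are assumptions, not measured Cullinan values.

```python
import math

def bond_specific_energy(wi_kwh_t: float, f80_um: float, p80_um: float) -> float:
    """Bond's law: W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), sizes in microns."""
    return 10.0 * wi_kwh_t * (1.0 / math.sqrt(p80_um) - 1.0 / math.sqrt(f80_um))

# Crushing from an F80 of 150 mm to a P80 of 32 mm with an assumed work index.
w = bond_specific_energy(wi_kwh_t=18.0, f80_um=150_000, p80_um=32_000)
installed_mw = 3.0  # assumed installed comminution power
print(f"Specific energy: {w:.2f} kWh/t; "
      f"maximum throughput: {installed_mw * 1000 / w:.0f} t/h")
```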
Hardness variability can be difficult to deal with as the mass balance in the
comminution circuit can vary to an extent where one part of the circuit is
overloaded, which reduces overall throughput. Although this can be dealt with by
increasing stockpile sizes, this is not always a feasible option, and during overload
periods these reduced throughputs may not be detected if they are of short duration.
The primary variable that controls separation is the apparent density distribution
of the rock for a given rock size distribution. Species denser than the effective cut-
point will sink along with the diamonds, and species that are less dense will float
and either be discarded or re-crushed to release more diamonds. To measure the
proportion of material above the density cut-point for a given lithology, samples are
crushed to a nominal size and then put through a sequence of density separations
in a static bath containing fluids of differing density. This distribution of density is
used to predict the mass flows across the process. It is also used to predict the
number of diamonds that are expected to be recovered by re-crushing the floats
from the first-pass DMS process. Depending on the mineralogy of the kimberlite,
there may be sizes of clasts that, when liberated, make a substantial difference to
the density distribution. An example is the modal sizes of garnets, which are dense
and report to recovery.
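A minimal sketch of how a measured densimetric distribution translates into a predicted sink (concentrate) yield, assuming an illustrative cut-point and a pro rata split of the class that straddles it:

```python
# Hypothetical densimetric (sink-float) distribution for a crushed kimberlite
# sample: (lower density, upper density, mass fraction of feed). Illustrative.
density_classes = [
    (2.0, 2.6, 0.55),
    (2.6, 2.9, 0.30),
    (2.9, 3.1, 0.10),
    (3.1, 3.4, 0.04),
    (3.4, 5.0, 0.01),
]

cut_point = 3.1  # assumed effective DMS cut density (g/cm3)

# Material denser than the cut-point reports to sinks with the diamonds; a
# class straddling the cut-point is split pro rata as a crude approximation.
sinks = 0.0
for lo, hi, frac in density_classes:
    if lo >= cut_point:
        sinks += frac
    elif hi > cut_point:
        sinks += frac * (hi - cut_point) / (hi - lo)

print(f"Predicted sink yield: {sinks:.1%} of feed mass")  # 5.0% here
```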
Clay and fines content of the kimberlite may compromise the DMS operation by
causing turbulence in the cyclone. Surges of high-density or incorrectly sized
material may overload the cyclone and reduce the rate of settling within it. Flat
particles generated from brittle rock types also hinder the efficiency of the
separation process (Plitt, 1976).
Final recovery processes use several diamond properties to reduce the mass of the
concentrate from the DMS and upgrade the final product that is dispatched from the
mine to diamond trading companies. The most common of these processes is the
use of X-ray fluorescence to separate the diamonds from the concentrate. Kimberlite
can, however, contain several minerals that fluoresce in a similar way, which can
compromise recovery efficiency. Magnetic separation is also used to remove
magnetite and other magnetic and paramagnetic particles from the recovered
concentrates. One of the final processes used on mine sites is hand sorting, where
trained pickers are used to separate gangue from a rich diamond concentrate. Prior
to valuation recovered diamonds are washed in various acids to remove any
coatings that may impair downstream diamond grading and classification.
The shape of the size distribution of diamonds displays some variation between
deposits but can in most cases be modelled with a log-normal distribution
(Kleingeld et al., 1996). The crushing and screening processes are normally designed
to recover the portion of this size distribution that yields the highest proportion of
the revenue. Diamonds above the top cut-size are crushed, and those below the
bottom cut-off size are discarded as tailings. Premier Mine has one of the largest top
cut-sizes at 65 mm; more commonly this is set to 25 to 32 mm. As most of the
screening is carried out on screens with square or rectangular openings, there is
always some degree of misclassification. This misclassification impacts both the
mass flow of material and the shape of the recovered diamond distribution.
The selection of these cut sizes for a given mine is based on the average size
distribution across the deposit. This is integrated with the revenue distribution to
arrive at an expected $/tonne that will be recovered. This value, less the cost of
treating the material, will determine the contribution that will be derived from each
tonne of ore. This value is used to design the ultimate pit and the sequence and
schedule for mining, which highlights how the estimation of the metallurgical
recovery factor impacts strategic decisions in mine design and evaluation.
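The integration of the size distribution with the revenue distribution reduces to a sum over the size classes inside the recoverable envelope. The sketch below illustrates this with assumed grades, values and cut sizes; none of the figures are from a real deposit.

```python
# (nominal diamond size mm, grade contribution cpht, value $/ct) - assumed.
size_classes = [
    (1.0, 30.0, 10.0),
    (2.0, 20.0, 40.0),
    (8.0, 15.0, 150.0),
    (20.0, 10.0, 400.0),
    (45.0, 2.0, 1500.0),
]

bottom_cut_mm, top_cut_mm = 1.5, 32.0  # recoverable diamond size envelope

usd_per_tonne = sum(
    (grade / 100.0) * value          # cpht -> ct/tonne, times $/ct
    for size, grade, value in size_classes
    if bottom_cut_mm <= size <= top_cut_mm
)
print(f"Expected recovered revenue: {usd_per_tonne:.2f} $/tonne")  # 70.50
```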
Diamonds are formed in the deep mantle and, during the emplacement process, may
be subjected to several cycles of heating, cooling and mechanical stress. These
stresses may build up to the point where the diamonds will be damaged in the
comminution process. Little quantitative work has been undertaken to estimate the
proportion of diamonds that are exceptionally stressed in situ, although, as the
diamond cutting and polishing industry has shown, stresses in recovered diamonds
can impact the yield that can be obtained when cutting and polishing the stones
(Rombouts, 1995).
Several shape and size distribution classification systems have been developed for
diamond populations (Caers and Rombouts, 1996). Their cubic crystal form is a
face-centred cubic lattice but there are many crystal habits to which diamonds
conform. The shape distribution has an impact on the way in which diamonds
liberate from kimberlite, the way in which they settle in the DMS process and the
trajectories exhibited in both magnetic and X-Ray processes. Surface features also
impact on the way in which diamonds liberate. Incompletely liberated diamonds,
sometimes referred to as 'comets', may float out of the dense media separation
process as the adhering kimberlite is less dense than the diamond so that the
aggregate particle density is less than that of the dense medium. These composite
particles may also be lost in the X-ray recovery section as the kimberlite may shield
the diamond from the incident X-rays and prevent the diamond from luminescing
sufficiently to be detected. The fine ferrosilicon media used in the DMS may adhere
or enter cracks in the diamonds, and as this media is magnetic, diamonds
contaminated in this way will be lost in magnetic separation units that are used in
final diamond recovery plants.
The approach does not deal explicitly with the expected variation of the recovery
factor. It can be demonstrated that there is a squared relationship between the
uncertainty in the estimate of the recovery factor and the uncertainty in the
recoverable grade (Lantuéjoul, 1990): if the standard deviation of the recovery
factor is halved, the variance of the recoverable grade is quartered. Under-
estimating the magnitude of the uncertainty in recovery can prevent projects from
reaching their full potential and, in extreme cases, lead to cessation of operations
shortly after start-up. Conversely, by improving the protocols and methodology
used to estimate this factor, and so reducing its uncertainty, it is possible to
substantially increase the value that can be attributed to reserves (Benicelli et al.,
2000).
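The squared relationship follows from elementary variance propagation, assuming the recoverable grade is the product of the recovery factor and a fixed in situ grade:

$$\mathrm{Var}(G_r) = G^2\,\mathrm{Var}(r) \quad\Longrightarrow\quad \sigma_{G_r} = G\,\sigma_r$$

so halving $\sigma_r$ halves $\sigma_{G_r}$ and quarters $\mathrm{Var}(G_r)$, which is the squared relationship referred to above.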
Acquiring data of rock characteristics with enough spatial coverage and support to
generate unbiased estimates is problematic. Even if sufficient data are obtained
from a high number of samples with large supports to generate unbiased rock
property estimates, the use of these estimates to predict process response at
production scale requires adaptation of existing methods of modelling process
performance. One approach to address scale-up explicitly, termed ‘Comminution
Economic Evaluation Tool’ (CEET), developed by MinnovEX (now owned by SGS), is
a system developed for short-term optimisation. CEET addresses the scale-up issue
by adapting the parameters of the prediction model so that the outputs of the model
match actual production outputs (Ameluxen et al., 2001). This approach provides a
good way to modify the inputs into the model by conditioning the models with data
once the plant and mine are running, but it is not suitable for prospective operations,
and has limited forecasting power.
Current practice in De Beers makes use of a combination of core samples and bulk
sampling. This includes the measurement and testing of a range of rock
characteristics, as described in the literature review. Diamond liberation and lock-
up is predicted based on a combination of rock size reduction measured during bulk
sampling and process models based on average kimberlite characteristics. This
approach has several shortcomings, including its reliance on data derived from core
samples that have been used to model and predict the in situ diamond distribution.
Although work has been done to improve the methods used for sampling, ranging
from drilling techniques to improved laboratory practices, the underlying problem
of integrating the data acquired from sampling and estimation of rock properties
with the process design over the life of the mine has not yet been addressed.
A sampling strategy that collects and integrates both destructive and non-
destructive data from core sampling was designed and executed. The data from this
sampling experiment are used to estimate and simulate the physical rock
characteristics into an orebody model.
[Schematic of the research areas: a Kimberlite Property Model feeding, in sequence, a Mining Model, a Process Model and Output Analysis.]
This orebody model is depleted and treated using simulated mining and
metallurgical processes to evaluate the impact that the spatially varying rock
properties have on the treatment plant processes. A quantitative diamond recovery
model that responds to the rock characteristic variability and includes uncertainty
was developed. The process model is used to simulate the operation and produces
data that can be used to estimate both the expected diamond recovery and its
associated uncertainty.
The relationship between the sources of the variances in the integrated model can
be formulated as in Equation 2:
where F, G and H are functions derived either from sampling and modelling or from
observed relationships between the rock characteristics and the specific process
being considered. They must be calibrated for each specific facies of the kimberlite
bodies and for the specific unit processes.
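A skeletal Python illustration of how such an integrated model chains these functions together is given below. All class names, response functions and coefficients are hypothetical placeholders, not the implementation developed in this research.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    grade_cpht: float  # estimated or simulated in situ grade
    ucs_mpa: float     # rock strength driving the comminution response
    density: float     # driving the DMS yield

def mining_model(blocks: List[Block]) -> List[Block]:
    """F: select and sequence blocks (pass-through placeholder here)."""
    return blocks

def process_model(block: Block) -> float:
    """G/H: recovered carats per tonne; recovery falls as UCS rises (assumed)."""
    recovery_factor = max(0.5, 0.95 - 0.002 * block.ucs_mpa)
    return (block.grade_cpht / 100.0) * recovery_factor

blocks = [Block(60.0, 80.0, 2.6), Block(55.0, 150.0, 2.8)]
recovered = [process_model(b) for b in mining_model(blocks)]
print([f"{r:.3f} ct/t" for r in recovered])  # ['0.474 ct/t', '0.358 ct/t']
```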
The building of the integrated value chain model required several objectives to be
met; these are addressed in the chapters that follow.
The literature review (Chapter 2) that follows addresses the nature of kimberlite,
sampling theory, and the sampling methods developed in this research to augment
limited destructive data with geophysical data. A review of the spatial estimation
and simulation techniques used to generate models of the physical characteristics
of the orebodies being mined is followed by a review of mining and metallurgical
process simulation techniques. The literature review concludes with a statement of
the research problem (Chapter 3).
The first area of investigative research (Chapter 4) covers data acquisition and
describes the design and execution of a rock property sampling experiment that was
conducted to generate data required for this research project.
The data gathered are used to explore techniques for using non-destructive data to
augment the costly destructive data and to estimate spatial kimberlite rock
characteristics (Chapter 5).
The integrated model has been developed and applied to several projects during
this research process. Two case studies are presented to demonstrate the
implications of model outcomes on real diamond projects (Chapters 7 and 8).
A brief discussion follows to describe how the outcomes of this research can be
applied in an industrial framework (Chapter 9). The thesis concludes in Chapter 9
with several recommendations for further research.
The research was conducted in the context of this background to improve the
estimation of rock and diamond characteristics, model the complex interaction of
these with the recovery processes, and predict process responses at the scale of
diamond operations. The findings of this research provide an important milestone
in the development of methods to improve the derivation and use of the
metallurgical recovery factor. This will result in improved evaluation, design and
operation of diamond mining projects.
2 LITERATURE REVIEW
The review begins with the description of the current understanding of kimberlite
emplacement processes and the consequent composition of kimberlitic diamond
deposits that these give rise to. It is shown that kimberlites exhibit a wide range of
rock characteristics and that this heterogeneity must be considered when sampling
for, and estimating, the characteristics required to derive metallurgical recovery
factors.
The current state of the art in sample acquisition, sample measurement tools, and
quality assurance and control techniques is reviewed. Non-linear relationships
between primary rock characteristics and process response variables are explored,
as is the impact that so-called "non-additivity" has on spatial estimation, and up-
scaling (from laboratory test to block-scale response predictions) of the estimated
or predicted variables. If incorrectly addressed, this important aspect of recovery
factor estimation can lead to significant undetectable bias in estimates. Using the
correct samples, measurement tools and sampling strategies can assist by
determining the nature of these relationships that can then be accounted for in the
subsequent estimation or simulation of the spatial characteristics of interest.
The review demonstrates that current practical approaches used to propagate rock
characteristic variability through the mining value chain have been hampered by
both the limited availability of suitable software tools and limited processing
capacity.
review describes the evolution of optimisation models used in mining projects. It
demonstrates how a single optimised solution may not result in robust project
design if there is any orebody or processing uncertainty. The review concludes with
a description of a conceptual framework for an Integrated Evaluation Model. This
model aims to provide more robust estimates of the metallurgical recovery factor
using an integrated mining project value chain simulation.
The benefits of this approach are described as are the philosophical and practical
challenges presented by adopting a systems approach to a complex stochastic
problem.
Kimberlites are pipe-like bodies that carry diamonds, formed deep in the earth’s
mantle, to the surface of the earth’s crust (Field, Stiefenhofer et al., 2008).
The research is focussed on one mineral in one geological setting. However, when
considered in the spectrum of rocks, kimberlite is an extreme end-member. There
are numerous processes that give rise to these bodies both during and after
emplacement. Not only do kimberlites exhibit a diverse range of geometries, the
post emplacement processes, such as weathering and alteration, add further
complication to the rock and mineral assemblage. This makes the design and
implementation of sampling campaigns capable of achieving representative results
particularly challenging.
• Identification of the factors that govern rock strength and how these are
dealt with in other rock types. This includes measurements of rock texture,
fracture frequency and measures of weathering and alteration in similar rock
types.
• A comparison and contrast of geological classification systems, their
objectives and the implications this has for both sampling for rock property
and estimating rock properties at a larger scale.
• Understanding the implications of the great differences in the scale of
measurement and the scale of estimation. This includes an assessment of the
challenges that these large differences present for change of support
modelling especially when the models must deal with non-additive, highly
skewed data.
• The scale on which this research requires estimates of rock characteristics
and the implication for mineral processing and diamond recovery modelling
and simulation.
• Identifying and modelling relationships between qualitative and quantitative
measurements of rock characteristics
The concept of petrologic clans was introduced by Reginald Daly in 1914 (Mitchell,
1996) and is predicated on the assumption that it is possible to classify rock types
into several suites by interpreting the composition of the magmas that led to the
rock formation. To a large degree, the original descriptive schemes have been
adapted by several authors. These include Skinner and Clement (1979) who
developed a system based on the modal mineralogy of the groundmass, based on
the belief that the ubiquitous presence of olivine was insufficient grounds for
classification. Since then several schemes have been proposed and these are
reviewed in more detail in this section.
The challenges faced by any generic description system include the requirement to
describe the following characteristics across a diverse range of scales and
emplacement settings:
[Figure 7 residue: a log-scale plot of magma viscosity (Pa·s, from 1 to 10^11) against temperature (500–1750 °C), in which viscosity decreases from rhyolites through dacites, andesites and basalts to kimberlites and komatiites.]
Figure 7: Melt viscosity as a function of temperature at 1 bar for natural melts spanning the
compositional range rhyolite to komatiite. All compositions are volatile free. The
temperature range is illustrative of typical eruption temperatures for each composition –
adapted from Spera (2000) and Sparks et al. (2009).
The shape and size of the deposit is related to the nature of the rocks into which the
pipes have intruded. A schematic cross section of a typical kimberlite pipe geometry
is presented in Figure 8. At least three types of kimberlite pipes have been identified:
• Deep (up to 2 km), steep-sided pipes which comprise three distinctive zones
(crater, diatreme, root)
• Shallow pipes (<500 m) which comprise only a crater zone and are in-filled
exclusively with volcaniclastic kimberlite, mainly pyroclastic material; and
• Small (<600-700m deep), steep-sided pipes filled predominantly with re-
sedimented material and less common pyroclastic kimberlite or, in a few
instances, with hypabyssal kimberlite (Field and Smith, 1999).
Even though most of the deposits seen today have been eroded to an extent where
the products of the emplacement volcanism are not present, some of these products,
such as fine ash and pyroclastic surge deposits, can be found within the body of the
pipes.
Kimberlites also occur as sills and dykes, and similar kimberlite pipes can contain
highly variable grades (e.g. Marsfontein, South Africa). However, even if one only
considers the pipe-like bodies, the range of rock types that can be identified is
substantial. A breakdown of the proportional contents that might be found in a
kimberlite deposit is shown in Figure 9.
[Figure 9 residue: a table-style schematic breaking down a ‘typical’ deposit by deposit type (e.g. tuff ring), zone component (country rock, macrocrysts), rock description (granite fragments, olivine xenocrysts) and mineralogical/modal analysis (fresh and altered K-feldspar, quartz, biotite; serpentine, calcite, smectite), with percentage contributions at each level.]
Figure 9: A schematic representation of the materials that could be found in a ‘typical’ kimberlite
pipe, adapted after (Field and Smith, 1999).
Work by Sparks (2006) on several of the southern African pipes has suggested
that the rock types in the diatreme can be classified into four basic types:
Component / Examples:
• Juvenile components derived from the magma: lapilli, phenocrysts and groundmass minerals
• Minerals, xenocrysts: xenocrysts related to the breakup of mantle nodules or basement crystalline rocks
• A mega-cryst suite: crysts including olivine, ilmenite and garnet
• Xenoliths of deep origin: peridotite and eclogite
• Country rock accidental lithics: can be correlated with the geology of the basement and host country rock
Table 3: A description of the components of volcaniclastic kimberlites.
Layered volcaniclastic rocks can be distinguished from the massive type by visual
inspection of the large-scale rock fabric. There are many sub-facies in this category
reflecting the wide variety of scales and kinds of layering. Within these layers the
gradation of size, nature of clasts, and degree of sorting varies widely. These
textures are best preserved in kimberlites where the crater facies are preserved but
are also evident in deep parts of narrow pipes. Some of these varieties can be
ascribed to primary pyroclastic processes and some to redistribution of primary
constituents from reworking by normal sedimentary agents such as water, wind and
gravity. Terms such as pyroclastic or re-sedimented kimberlite may then become
applicable if the origin is not in doubt (Sparks, 2006).
Marginal wall rock breccias are the third prominent rock type. They range from
breccias composed entirely of wall rock to breccias with variable amounts of
kimberlite matrix of the massive volcaniclastic kimberlite (MVK) type. The latter
vary from net veining of in situ wall rock to clast-supported breccias to
matrix-supported varieties.
The fourth type, hypabyssal kimberlites (HK), are those that have not been involved
in an eruptive process, and hence appear to be uniform in texture and coherent
(Skinner and Marsh, 2003). The country rock lithic clasts are usually strongly
altered due to the reaction between the fresh kimberlite magma and the rock into
which it intrudes. These rocks can be found in all parts of the pipe and are
characterised by abundant macro crysts and groundmass containing typical
kimberlitic minerals (e.g. monticellite, spinel, perovskite, calcite and serpentine).
Understanding the makeup of each of these types of rocks is important both from a
sampling perspective, and to understand relationships between rock type and
physical properties.
Estimates of the physical characteristics of rocks within a kimberlite pipe are often
based on samples of the rock that are many orders of magnitude smaller than the
pipe itself. There are several processes used to obtain samples of the kimberlite,
including coring, chip sampling and bulk samples acquired from trenches or shafts.
The perceived cost of sample acquisition, especially early in the life of projects
when capital is rationed, is one of the main constraints on sample size, sample
count and the spatial distribution of samples within the kimberlite body.
These constraints often compromise or limit the representativity of the samples
acquired. Technical challenges of acquiring the rock mass required for testing have
also been documented (Hoek, 1997). The challenge is proportional to the variability
and scale of the orebody, and because kimberlite orebodies are complex,
proportional and representative sampling is challenging.
The objective of taking samples is to measure a portion of the total population and
acquire data that can be used to generate unbiased estimates of identified
characteristics of values at un-sampled locations. Sampling optimisation is the
process of identifying the set of tools and processes that will enable one to estimate
with least bias, imprecision, uncertainty and at lowest cost, the values, and
properties at un-sampled locations. This suggests that any value-optimising
business will try to balance the cost of sampling with the value of the information
obtained. The difficulty has always been to value the potential additional
information that will be acquired by the sampling before the sampling takes place.
Sampling technologies and practices reviewed are focused on the method of
sampling and measuring the characteristics of the in situ kimberlite that impact on
process performance in such a way as to facilitate the spatial estimation of the
characteristics that have been measured on the samples.
This research has found, for kimberlites, that it is possible to sample the primary
properties of the rock, estimate and simulate these at block scale and then to use
these estimates to predict the operational scale process response.
It is important to understand and quantify the impacts that the nature of the variable
measured has on the selection and execution of geostatistical estimation methods. The
'strange' behaviour of several response variables suggests that it is important to
validate the assumptions that underpin the geostatistical estimation or simulation
method used. This includes, for instance, being able to validate the relationship
between change of support and variable additivity (Carrasco et al., 2008). The
implication is that the design of sampling and testing programs for physical
rock characteristics should include the collection of information that will allow the
testing of assumptions that are important to identify, select and validate the
estimation method. This is expanded on in section 2.3.
• Scale - the scale of the sample and the scale of the estimate may be very
different. Although the sample data are considered to be "Point" values they
are in reality some statistic (e.g. an average) of a phenomenon collected or
measured over some support. This support (the size shape and orientation
of the sample) and the way the test integrates the measure over the support
will influence the characteristics of the data produced (means and
variances). When making predictions, either at sample support scale or at far
larger scales, the combination of the support attributes of the sample and of the
estimated volume needs to be carefully considered. Change of support
calculations require a model of the relationships between distributions at
different supports.
These challenges reinforce the notion that the preferred approach is to spatially
estimate or simulate so-called "primary" additive variables wherever possible
(Coward, 2008). Many rock "response" variables are not additive; this suggests
that their variability and scale relationships are not simple, and hence their
spatial estimation and simulation will require rigorous demonstration that these
variables do indeed meet the requisite assumptions of the estimation and
simulation techniques used.
A taxonomy of sources of error is central to the concepts that underpin the theory
of sampling. These are then used to determine a way to relate the mass of the sample
to the expected error in the estimate. The Discrete Selection Model formalised and
published in 1975 has recently been updated (Gy, 2004) and the relationship
between sources of error can now be depicted as in Figure 10.
Figure 10: A hierarchical depiction of the relationship of sources of error arising from
sampling, Adapted after Gy (2004).
The dominant error is taken to be the fundamental error, although the others should
be minimised as far as possible. The primary cost driver in processing samples is
likely to be their size; this approach to sample design therefore aims to minimise
sample size without increasing the fundamental error to such an extent that the
information obtained is of negligible use in estimation. To this end Gy proposed a
formula for deriving the Fundamental Sampling Error (FSE). A simplified version is
presented in Equation 3:

$$\sigma^2_{FSE} = \frac{C\,d^3}{M_s} \qquad \text{(Equation 3)}$$

where $C$ is a material-specific sampling constant, $d$ is the nominal size of the
largest particles in the lot, and $M_s$ is the mass of the sample.
Errors will arise from many sources; hence effort should be focussed on identifying
and eliminating sources of error when sampling. Where the errors cannot be
eliminated, the estimate of the magnitude of the relative errors must be considered
in any consequent data analysis that is carried out.
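To make Equation 3 concrete, the following minimal sketch evaluates the fundamental sampling error for a given sample mass and inverts the relation to find the mass needed for a target error. The values used for the sampling constant C and the top size d are purely illustrative placeholders, not values derived in this research:

```python
# Minimal sketch of Gy's simplified fundamental sampling error (FSE) relation:
#   var(FSE) = C * d^3 / Ms
# and its inversion to find the sample mass for a target relative error.
# The constant C and top size d below are illustrative placeholders only.

def fse_relative_std(C: float, d_cm: float, mass_g: float) -> float:
    """Relative standard deviation of the FSE for a sample of mass_g grams."""
    variance = C * d_cm**3 / mass_g
    return variance**0.5

def mass_for_target_fse(C: float, d_cm: float, target_rel_std: float) -> float:
    """Sample mass (g) required so the FSE does not exceed target_rel_std."""
    return C * d_cm**3 / target_rel_std**2

if __name__ == "__main__":
    C = 0.1      # hypothetical sampling constant (g/cm^3)
    d = 2.5      # hypothetical nominal top particle size (cm)
    print(f"FSE at 5 kg: {fse_relative_std(C, d, 5000):.1%}")
    print(f"Mass for 10% FSE: {mass_for_target_fse(C, d, 0.10)/1000:.2f} kg")
```

In practice C must itself be derived from the mineralogy and liberation characteristics of the material being sampled; the point of the sketch is only the mass-error trade-off.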
The properties selected and identified in section 1.4 include those that can be used
to model and predict kimberlite comminution, diamond liberation and effectiveness
of separation processes. They can be broadly described as either primary or
response variables as defined by Coward (2009).
Work carried out by Copur, Bilgin et al. (2003) on a range of rocks yielded the
data depicted in Figure 11. Although the rocks tested were not kimberlites, the
work demonstrates how it is possible to collect different types of data to explore
the existence of a relationship between primary variables (e.g., various textural
measures, measures of rock content) and response variables. The nature of these
relationships is clear for some rock types and not for others. This suggests that
for some rock types the variance of rock responses within the 'rock type domain'
may well be greater than the variance of rock characteristics between 'rock type
domains'.
[Figure 11 residue: the chart compares, across eleven rock types, the dynamic elasticity modulus (Edyn), Schmidt hammer rebound value (Shrv), density, Brazilian tensile strength (BTS), acoustic P-wave velocity (Vp) and Cerchar abrasivity index (CAI).]
Figure 11: Results of a number of rock tests on a range of rock types after Copur et al.
(2003).
The use of downhole measures and laboratory scale tests relevant to this research,
such as acoustic velocity, is described more fully in section 4.2. The outcomes of
early stage sampling are used in desktop studies to justify additional exploration
of the target.
These data can be used for process design, inform the geological model and
improve the models of the relationships between primary and response variables.
The data are, however, usually restricted in spatial coverage as the bulk samples
are likely to be selected from single accessible locations in each major kimberlite
domain.
Data obtained from measuring the physical characteristics of samples can be used
to estimate the characteristics of the un-sampled portions of the orebody. These
variables, unlike grades, may exhibit various behaviours that make their estimation
at block scale somewhat challenging.
There are several ways to generate spatial models of variables at different scales;
the methods considered here include global averaging, polygonal estimates, kriging
and spatial simulation.
Global averages
In the simplest approach, the mean of the sample data is used as the estimate at all
unsampled locations, so only a single global value can be estimated. However, if the
rock is very homogenous and the sampling has been carried out in a very regular
grid, then this may be an appropriate approach. Although this approach is
computationally efficient it does not consider the spatial location of samples, does
not account for the differences between the scale of the sample and the scale of the
estimate, and does not provide a means to assess the quality of the estimate made.
Polygonal estimates
This approach utilises the generation of a volume that the sample is deemed to
represent. The region of interest is divided into shapes that fill the volume, in a way
that ensures that each volume has at least one sample in it.
This approach may be appropriate when there are sufficient samples regularly
located throughout the region of interest. It is computationally efficient and does
not require any modelling of functions.
Kriging
Kriging is a term used for a family of estimators named by Matheron (1963) as a
tribute to the pioneering work of Danie Krige. Ordinary kriging is a linear
estimation algorithm for a regionalised variable that satisfies the intrinsic
hypothesis.
The benefits of this approach are that the estimate of the variable of interest at
unsampled locations is unbiased, and that the error variance of the estimate is
calculated as part of the estimation procedure.
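For readers unfamiliar with the mechanics, the following is a minimal sketch of ordinary kriging at a single unsampled location, assuming an isotropic spherical variogram; the variogram parameters, sample coordinates and values are hypothetical, and production implementations add search neighbourhoods, anisotropy and validation:

```python
import numpy as np

def spherical_variogram(h, nugget=0.1, sill=1.0, rng=50.0):
    """Spherical variogram model; h is distance (same units as rng)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0.0, 0.0, g))

def ordinary_krige(coords, values, target):
    """Ordinary kriging estimate and variance at one target location."""
    n = len(values)
    # Build the (n+1)x(n+1) OK system: variograms between samples, plus a
    # Lagrange multiplier row/column enforcing that the weights sum to 1.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_variogram(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ values
    variance = w @ b  # kriging variance includes the Lagrange term
    return estimate, variance

coords = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
values = np.array([2.1, 2.4, 1.9, 2.6])  # e.g. a rock property per sample
est, var = ordinary_krige(coords, values, np.array([2.5, 2.5]))
print(f"estimate {est:.2f}, kriging variance {var:.3f}")
```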
Spatial simulation
The evaluation of the metallurgical recovery factor requires an understanding of
both the average, expected, and variable nature of the rock characteristics on the
recovery factor. One approach to overcome some of the limitations of block scale
estimates is to generate a sufficient number of equally plausible realisations of the
orebody characteristics and then to use these as inputs to a process simulation. The
set of simulations represents the uncertainty in the ore characteristics and each
individual realisation can be used to explore the impact of block to block variability
on process performance.
These rich spatial models can be created using one of several spatial simulation
techniques (Dowd, 1994). Spatial simulations aim to reproduce the histogram of the
data and the spatial co-variance of the data.
One of the realisations of the simulation set (based on real data) is selected at
random and deemed to be the "real deposit". This model can be sampled using
different sample grids to generate data that represent the results of sample
experiments that have different geometric layouts and sizes.
These 'virtual samples' from the 'virtual orebody' can be used to generate new
estimated orebody models and to condition a new set of spatial simulations of the
kimberlite deposit.
[Figure 12 residue: the flow diagram shows real sample data producing an estimated orebody model and a set of simulated orebody models; one realisation is selected as the ‘virtual orebody’, which is then re-sampled, re-estimated and re-simulated, and each resulting model is passed through the mine and process model.]
Figure 12: Schematic of the process used to create a framework to quantify the impact of
changes to sampling, estimating and simulating kimberlites on process models.
The ‘virtual orebody’ (i.e. the one selected realisation taken from a simulation set
based on real sampling data) can be taken to represent the so-called 'perfect
knowledge case' against which various comparisons can be made (e.g., the impact of
sample spacing, simulation method used, crushing process model, etc.). The method
has a few limitations and is computationally demanding but allows for a myriad of
possible comparisons. The framework or platform is used to compare several
aspects of the methods and techniques presented in this research, including a
comparison between estimates of expected recovery based on limited information
and that which would have been realised had the ‘virtual orebody’ been treated.
The techniques that can be used to generate spatial simulations include turning
bands, sequential indicator and sequential Gaussian simulation. Each has its own
benefits and drawbacks; a brief description of 'turning bands', which has been used
in this research, follows.
Turning bands simulation was developed in the early 1970s (Dowd, 2004). This
method of simulation uses a one-dimensional random function that is set up along
lines that have an even spatial density. The simulations from these lines are then
projected out onto a grid that is set up around the lines. Some of the limitations of
this approach include the generation of artefacts if insufficient lines are used. The
method deals with anisotropy indirectly, usually through grid expansion and
shrinking, and the method requires a separate kriging step to condition the
simulation.
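The following minimal sketch illustrates the family of spectral, turning-bands-style constructions: many one-dimensional random cosine 'bands', each constant along lines perpendicular to a random frequency vector, are summed to approximate a Gaussian random field. It is an unconditional toy example with an assumed Gaussian covariance; the conditioning kriging step noted above is omitted:

```python
import numpy as np

def spectral_gaussian_field(nx, ny, n_bands=500, corr_len=15.0, seed=0):
    """Approximate Gaussian random field via a spectral, turning-bands-style
    construction: a sum of random cosine 'bands', each constant along lines
    perpendicular to its random frequency vector.

    Targets a Gaussian covariance C(h) = exp(-|h|^2 / (2 * corr_len^2)),
    whose spectral density is Gaussian with std 1/corr_len per axis.
    """
    rng = np.random.default_rng(seed)
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    field = np.zeros((nx, ny))
    for _ in range(n_bands):
        omega = rng.normal(0.0, 1.0 / corr_len, size=2)  # random frequency
        phase = rng.uniform(0.0, 2.0 * np.pi)            # random phase
        field += np.cos(omega[0] * xs + omega[1] * ys + phase)
    # Rescale to unit variance (each cosine term has variance 1/2).
    return field * np.sqrt(2.0 / n_bands)

z = spectral_gaussian_field(100, 100)
print(z.mean(), z.std())  # ~0 and ~1 for a standard Gaussian field
```

Too few bands produce the banding artefacts noted above; increasing n_bands suppresses them at the cost of computation.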
Implications
The method used to generate spatial estimates and simulations of rock
characteristics has consequences for the use of these models in value chain
simulation. Estimates derived using linear methods such as ordinary kriging are
likely to underestimate the variability of orebody variables such as grade that will
be observed during operation. Simulated realisations of variables of interest at block
scale will better reflect the variability of the characteristics of the blocks as they are
processed. It is however important to consider that the scale of measuring rock
characteristics (typically the size of core) is many orders of magnitude smaller than
the size at which the process will respond to the change in the specific
characteristics.
The approach described in this thesis suggests that matching the support of the
sample, the location and spread of samples in the pipe to be evaluated and the scale
of the spatial estimate is a very important aspect that will determine the effective
use of this important input into value chain modelling.
Mineral processing operations, such as crushing and dense media separation, can be
modelled using several mathematical techniques. King (2000) provides a taxonomy
of modelling systems together with a classification of their benefits and
shortcomings. There is an emerging use of so-called “expert systems”, integrated
with dynamic simulation, for a) calibrating process simulation models and b)
improving operational decision making.
The processes used to recover diamonds from kimberlite can be divided into mining
and treatment processes. There are several levels of process modelling and
simulation, from 'black box simulations', that merely transform an input to an
output through a simplistic mathematical function, to complex systems, that address
the interaction of individual particles and elements in very small-time increments.
The primary objective of the integrated evaluation model is to create a value chain,
comprising linked models, that can be used to simulate the operation of the mine
and treatment plant system. The mining model translates the estimate of in situ rock
characteristics into a sequential ore stream. The ore stream characteristics are used
to drive process models which are used to calculate the plant throughput and
efficiency and derive an estimate for diamond production based on the
characteristics of the incoming ore stream.
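A minimal sketch of such a linked chain is shown below; the functional forms and coefficients are hypothetical placeholders for the calibrated rock-process relationships discussed earlier, not the models developed in this research:

```python
import numpy as np

# A toy linked value chain: blocks -> ore stream -> throughput -> recovery.
# All functional forms and coefficients are hypothetical placeholders.

rng = np.random.default_rng(1)
n_blocks = 500
grade = rng.lognormal(mean=-1.0, sigma=0.5, size=n_blocks)   # carats/tonne
hardness = rng.normal(loc=35.0, scale=8.0, size=n_blocks)    # strength index
tonnes = np.full(n_blocks, 5000.0)                           # tonnes per block

# Throughput model: harder ore crushes slower (illustrative linear form).
throughput_tph = np.clip(600.0 - 6.0 * (hardness - 35.0), 200.0, 800.0)

# Recovery model: harder ore liberates less well (illustrative logistic form).
recovery = 0.95 / (1.0 + np.exp(0.15 * (hardness - 50.0)))

hours = tonnes / throughput_tph
carats = tonnes * grade * recovery

print(f"total treatment hours: {hours.sum():,.0f}")
print(f"recovered carats:      {carats.sum():,.0f}")
print(f"effective recovery:    {carats.sum() / (tonnes * grade).sum():.1%}")
```

Block-to-block variability in the input stream propagates directly into throughput and recovery, which is the behaviour a single 'average rock' calculation cannot reproduce.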
The design, or planning, of an open pit mine has three primary components:
Several algorithms have been developed to optimise the process of mine design,
including the Lerchs-Grossmann approach developed from graph theory (Lerchs and
Grossmann, 1965). The aim of the optimiser is to achieve the maximum present value
by offsetting estimates of cost to mine the block against the value recovered from
the block. The estimate of block value is usually derived by multiplying the
contained mineral in the block by the price that will be obtained for the mineral
minus the cost of extracting, processing and delivering the valuable content of the
block. The associated present value can be obtained by developing and applying a
feasible production schedule.
Underground methods used to mine kimberlites include block caving and room and
pillar methods. These methods are subject to many geometric and ground condition
constraints. The processes used to derive the geometry and sequence of the
operation are beyond the scope of this research.
For the models considered here it is assumed that an optimal mine plan has been
designed for the simulated deposit based on the kriged estimate of recovered value.
Re-running the mine design for each realisation would not only be time consuming
but is not appropriate given that the simulations are designed to maximise the
variance at block scale. There is however ongoing work to incorporate uncertainty
in the derivation and optimisation of the ultimate pit shell, block sequence and
schedule (Dowd and Dare-Bryan, 2004; Lamghari and Dimitrakopoulos, 2016).
Mining is the first stage of the process where ore breakage occurs. The processes
used include a combination of primary drilling and blasting. An approach aimed at
optimising selectivity during the drill and blast cycle has particular relevance to this
research (Dowd and Dare-Bryan, 2004). The method is centred on the creation of
an in situ model that represents “perfect knowledge at all scales” through
simulation. This simulation is then sampled on a user-defined grid to provide the
limited information that would normally be available for planning. This information
can then be used to design the blast and muck pile clearing pattern. It is then
possible to evaluate how well the plan is likely to perform by comparing what was
selected and trammed to the plant given the mixing caused by blasting, against the
expectation derived from simply using the estimated block characteristics of the
planned mining blocks.
Although the approach has been designed to optimise selectivity, the simulation
approach used to create estimates at relatively small scales, far smaller than the
smallest selective mining unit (SMU), demonstrates that rock properties can be
simulated into blocks of an appropriate scale. The key limitation is the minimum
size of the block that can be simulated in the blast process. The minimum
dimensions are driven by the resolution of sampling, and the processing power to
simulate this process.
Modelling comminution
The modelling of comminution has received much attention by several authors
(Agricola, 1556; Bond, 1943; Whiten, 1972; Napier-Munn et al., 1999). To model
comminution requires a generalised formulation for the mechanisms that drive the
reduction in the size of rocks. When rocks are crushed the mechanisms of size
reduction include:
• Attrition;
• Chipping;
• Impact fracture of rock particles; and
• Breakage of contained mineral particles.
Attrition rates are ore specific. Techniques developed by the Julius Kruttschnitt
Mineral Research Centre show that these rates can be determined by treating a
sample for 10 minutes in a tumbling mill and screening the progeny (Napier-Munn
et al., 1999).
An attrition breakage function can be obtained from assessing the slope of the graph
depicting the cumulative size distribution of the finer fractions. The change in mass
(m) in a size fraction per unit time is expressed in Equation 4 :
$$\frac{dm}{dt} = -\frac{3k\,m}{d_f} \qquad \text{(Equation 4)}$$

where $m$ is the mass in the size fraction, $d_f$ is the representative particle size
of fraction $f$, and $k$ is a material-specific constant.
Integration of this equation with respect to time will determine the total
comminution, or size reduction that will be achieved for a given unit process.
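For a single size fraction, treating $d_f$ and $k$ as constant over the residence time, the integration yields an explicit exponential decay (a worked step added here for clarity):

$$m(t) = m_0 \, e^{-3kt/d_f}$$

where $m_0$ is the initial mass in the fraction; the mass remaining thus decays exponentially with residence time $t$.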
Comminution modelling and its use in the value chain model is described more fully
in section 6.3
The model was developed by Kleingeld (1976) in an attempt to estimate the locked
diamond content of cemented gravels that were discarded by coastal operations in
southern Namibia.
The purpose of the model is to produce a reliable estimate of the diamonds that
might plausibly be contained in the discarded rock stream based on the operating
parameters of the plant and the size distribution of the recovered diamonds. Each
size class of ore particles (for example the -10mm +8mm size fraction) has a
potential to float out of the dense media during the separation process and report
to waste. The particle class that is discarded can carry with it a size of diamond that
can be determined through the density relationship between diamond, the rock that
it is enclosed in, and the effective density cut point at which the separation is made.
The inputs to the model include:
• the total grind achieved, estimated from ore stream size distributions;
• the number of ore particles in each size fraction;
• the average ore density in each size distribution; and
• the recovered diamond distribution.
The model is applied during the sample phase to estimate the liberation during bulk
sampling. Although the estimate of locked diamonds is not added to the modelling
phase of the total content distribution, it is used as a qualitative guide to the
difference in liberation that is likely to be achieved between the sampled grade and
the expected production facility.
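The density relationship described above can be sketched as follows, assuming spherical particles, the same shape factor for particle and diamond, and illustrative densities; this is a simplified reading of the described logic rather than the Kleingeld model itself:

```python
# Sketch of the density-cut logic for locked diamonds, assuming spherical
# particles and illustrative densities; not the actual Kleingeld model.

RHO_ROCK = 2.7   # kimberlite particle density, t/m^3 (illustrative)
RHO_DIA = 3.52   # diamond density, t/m^3
RHO_CUT = 3.1    # effective DMS density cut point, t/m^3 (illustrative)

def max_locked_diamond_mm(particle_mm: float) -> float:
    """Largest (spherical-equivalent) locked diamond a floating particle of
    the given size can carry while its composite density stays below the cut.

    Composite density: rho = rho_rock * (1 - v) + rho_dia * v, where v is
    the diamond volume fraction; the particle floats (reports to waste)
    while rho < RHO_CUT.
    """
    v_max = (RHO_CUT - RHO_ROCK) / (RHO_DIA - RHO_ROCK)
    # Diameters scale with the cube root of volume for equal shapes.
    return particle_mm * v_max ** (1.0 / 3.0)

for size in (8.0, 10.0, 16.0):
    print(f"{size:4.0f} mm particle can float a locked diamond up to "
          f"{max_locked_diamond_mm(size):.1f} mm")
```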
Currently it is not common practice to determine how the range of rock properties
will vary in the short term and how this will impact on the efficiency and throughput
of the process. In most cases, the global average rock properties are used to estimate
the expected ore grind size distribution and diamond liberation.
The research reported in this thesis seeks to demonstrate that the ‘Diamond
Liberation Granulometry Model’ can be adapted and used effectively to integrate
total diamond content models with spatial models of kimberlite properties to
evaluate diamond recovery.
Published accounts of estimated diamond recovery factors often only report a single
value for expected grade recovery (Farrow, 2015). A single point estimate of
expected, or most likely, recovery of the contained diamond content does not give
any indication of how uncertain or variable the diamond production will be over the
life of the operation.
One important source of uncertainty arises from the design of sampling campaigns
to define reserves. These campaigns are often constrained by the capital allocated
to sampling the orebody. The optimal selection of the size, number and geometric
location of samples to achieve a specified degree of confidence in the orebody grade
cannot be determined with any degree of accuracy prior to executing the sampling
campaign. There are some generic 'rules of thumb' that can be applied to the
classification of resources and reserves (Parker, 2000) based on the expected
variation in production that can be inferred from the quality of a completed resource
estimate. The metrics used in classification include search neighbourhood size,
drill hole spacing and/or kriging variance (Silva and Boisvert, 2014). It is however
difficult to predict these prior to executing the sampling campaign and performing
the estimate. Some inference can be made from early geological modelling or from
other kimberlites with similar architectures to predict the resulting uncertainty in
estimates of in situ volume, density, tonnage and grade, but it is far more difficult to
predict the recovered values for these variables.
An alternative approach to sampling campaign design would begin with defining the
tolerable financial uncertainty for a project and then use these limits to define the
required degree of confidence in the variation in expected recoverable reserves. To
do this however requires estimates for several technical factors and a methodology
that can be used to calculate how the in situ values, which are spatial in nature, will
be transformed into revenues over the life of the project.
Such a methodology is developed in this research and is demonstrated in chapters 7
and 8. This work shows how it is also possible to derive
confidence limits for the project value based on predicted variability in project
cashflow.
There are, however, moves within the industry to create guidelines for this process,
including by the International Valuation Standards Committee as well as several
other bodies around the world. Although SAMVAL (2016) is a guideline and is not
prescriptive on the methods used, it suggests that the three bases for valuation
should be income, sales comparison and cost. A brief description of the
assumptions and limitations of methods used to value mining projects is presented
below. The review also demonstrates how to incorporate the metallurgical recovery
factor into financial models and hence express the impact of the range and
uncertainty of the recovery factor in financial terms.
$$\mathrm{DCF} = \sum_{i} \frac{\mathrm{cashflow}_i}{(1+r)^{i}} \qquad \text{(Equation 5)}$$

where $\mathrm{cashflow}_i$ is the net cash flow in period $i$ and $r$ is the discount rate.
The model described above can be, and often is, used in a deterministic fashion to
assess the relationship between an input variable and the value of the project; this
is known as sensitivity analysis. Sensitivity analysis may be a suitable technique for
assessing relationships between input variables and output values for relatively
simple valuation models.
As valuation models include more variables to improve the 'reality' of the model,
independent sensitivity analysis is increasingly likely to incorrectly identify the
variables that are going to have the largest impact on project value. To mitigate
this risk, it is possible to build in correlations between the input values (e.g., as
interest rates rise, so do wages) and perform a 'correlated sensitivity analysis'.
The 'sensitivity centric' approach to assessing project risk often gives a better
indication of the structure of the model itself rather than providing a valid way of
quantifying the relationships that exist between the orebody uncertainty and
variability, project configuration and project outcomes. The main shortcoming in
this approach is that the ranking does not account for the probability distribution of
the range for each of the input variables.
A slightly more complex technique is to use input distributions, that may or may not
be correlated, to generate the range of input variables used in the project model. The
parameters used for the distributions are selected in a qualitative manner using
expert opinion (Vose, 2002, Davis, 1995) as discussed further in section 9. In this
approach the type of distributions for important variables, and the values of their
parameters, are elicited from a panel of experts that can be weighted in several
ways (Aspinall, 2018). The limitations of this approach include the biases that arise
during the process of eliciting expert opinion and the potential biases that arise from
underweighting of extreme events and overweighting of recent events (Kahneman,
2011). The use and application of these approaches are reviewed in more detail in
chapter 9.
The analysis is carried out by drawing values for the input variables from the
calibrated distributions, and rerunning the model, in the order of thousands of
times. The distributions of outcomes can then be analysed to provide a far richer
understanding of the variable relationships that exist in the project. This approach
is sometimes referred to as 'Monte Carlo analysis'.
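A minimal sketch of this style of analysis is given below: correlated input distributions drive repeated evaluations of the discounted cash flow of Equation 5, and the spread of outcomes is summarised as percentiles. The distributions, correlation value and cash flow structure are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_years, r = 10_000, 10, 0.10   # trials, project life, discount rate

# Correlated inputs: price and unit cost per carat (illustrative values).
mean = [100.0, 60.0]                      # $/carat price, $/carat cost
cov = [[15.0**2, 0.5 * 15.0 * 8.0],
       [0.5 * 15.0 * 8.0, 8.0**2]]        # correlation of 0.5
price, unit_cost = rng.multivariate_normal(mean, cov, size=n_trials).T

recovery = rng.beta(8.0, 2.0, size=n_trials)  # metallurgical recovery factor
carats_contained = 50_000.0                   # contained carats per year

# Constant annual cash flow per trial, discounted over the life (Equation 5).
annual_cashflow = carats_contained * (recovery * price - unit_cost)
discount_sum = sum(1.0 / (1.0 + r) ** i for i in range(1, n_years + 1))
dcf = annual_cashflow * discount_sum

p10, p50, p90 = np.percentile(dcf, [10, 50, 90])
print(f"DCF P10 ${p10/1e6:.1f}M, P50 ${p50/1e6:.1f}M, P90 ${p90/1e6:.1f}M")
```

Note that the recovery factor enters every trial; widening its distribution widens the spread of project value directly, which is the point the thesis develops.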
There are cases when two or more variables vary simultaneously, or are
correlated in a complex relationship over time, for instance the relationships that
exist over time between the grade of the material mined and the cost of processing
that material. Prediction of the financial performance of complex systems requires
models of appropriate sophistication (Dowd, 2015). The 'integrated evaluation
model' techniques described in this thesis have been developed to address the
complexities in such cases. It has been demonstrated that for mining projects this
method is likely to outperform sensitivity analysis (Nicholas et al., 2007). A more
recent evolution of the approach termed "Scenario Based Project Evaluation"
(Coward, et al., 2013) incorporates a more sophisticated approach to deal with
future operating environments as suggested by Bradfield et al.(2005).
Kimberlites are formed through several high energy processes over a long
geological timeframe (Field, 2009). This gives rise to deposits with a variety of
geometries and the domains within the deposits vary greatly in shape and size. They
contain many diverse minerals and exhibit a wide range of physical characteristics.
Mineralisation and mineral assemblages mean that the variables of interest can
have very different spatial covariances and that these can change over very small
ranges.
This research describes how the variability and uncertainty of the estimates of
mineral resource contents and physical characteristics can be used to improve the
estimate of uncertainty in production output and express this uncertainty in
financial metrics.
The literature review clearly shows that the recovery factor has a material bearing
on the value of diamond projects. Furthermore, technology now exists that will
enable sufficient cost-effective samples to be taken from orebodies to enable
estimation and simulation of physical characteristics of the kimberlite that will drive
mining and diamond extraction process efficiency. It is possible with some
additional work to link these directly to mining and metallurgical process models
and to use these to determine the range and variability in the metallurgical recovery
factor. These data can be used in appropriately sophisticated financial models to
express the impact that the range of metallurgical recovery will have on mining
projects in financial terms.
3 PROBLEM STATEMENT
3.1 Introduction
The factors that influence the expected value of a kimberlite diamond mine include
the proportion of the total contained diamond population that will be recovered, the
diamond sales price and the cost of mining and recovering the diamonds. Diamond
recovery and loss is a function of the effectiveness of the mining and metallurgical
processes. This research addresses important shortcomings of current methods
used to derive and quantify the range and uncertainty of metallurgical recovery.
The challenge of quantifying the range and uncertainty of diamond recovery impacts
on the evaluation of mining projects from early discovery phases through to
operating mines and into project closure. Errors in predicting the recovery of
diamonds arise from the complex process required to acquire kimberlite orebody
information, the low concentration of the mineralisation and the difficulty of
sampling and estimating the physical characteristics of the kimberlite that impact
on process rate, process efficiency and cost.
Due to the range of rock types contained in these orebodies, the granular nature of
the mineralisation and its very low concentration, sampling of the grade and size
distribution of in situ macro-diamonds (diamonds >0.5mm) is time consuming and
costly (Ferreira, 2013). The accuracy of grade sampling that is carried out is subject
to biases that can arise during sample acquisition and treatment.
Bulk sample plants crush and separate diamonds using similar techniques to full
size production scale diamond processing plants. The impact of the rock
characteristics on the variability of recovery efficiency at sample scale is different to
that at full production scale. The relationships between scales are non-linear,
which makes them difficult to model and to adapt across scales.
Output from constrained simulations that have been constructed and executed to
determine the ‘single best design’ cannot be used for risk mitigation evaluation,
over and above the limited sensitivity analyses that have in some cases been used to
assess the impact of uncertainty (Nicholas, 2007).
In parallel with this work, a modelling and simulation framework has been
developed to demonstrate how multivariate estimates and spatial simulations of the
physical characteristics of an orebody can be used in a linked-up value chain model
to derive a metallurgical recovery factor and create confidence limits for the range
and variability in the metallurgical recovery factor over time.
4 DATA ACQUISITION
4.1 Introduction
The methodology used in this thesis to derive the metallurgical recovery requires a
spatial estimate of country rock and kimberlite characteristics that impact on the
diamond recovery process. The estimation of these characteristics is not trivial and
requires the collection and testing of samples to derive appropriate spatial data of
these characteristics. This chapter describes the design and execution of an
experiment to gather data that was used to evaluate options for generating spatial
estimates and simulations of the required kimberlite characteristics.
• The sample is not the estimate - A further shortcoming of the use of ‘average’
ore characteristics to determine recovery factors for a geological domain is
that the average of the results of treating the 'average composite' samples,
which are far smaller than the domain for which the predictions are being
made, does not constitute an estimate of the 'average recovery' that can be
expected for that domain, i.e. 𝐹(𝐸(𝑥)) ≠ 𝐸(𝐹(𝑥)). Without sufficient samples,
the magnitude of this difference cannot be quantified.
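The inequality 𝐹(𝐸(𝑥)) ≠ 𝐸(𝐹(𝑥)) is easy to demonstrate numerically. The sketch below applies a hypothetical non-linear recovery response to a skewed rock property; applying the response to the average property is not the same as averaging the block-by-block responses:

```python
import numpy as np

# Demonstrates F(E[x]) != E[F(x)] for a non-linear response F.
# F is a hypothetical 'recovery vs hardness' curve, purely illustrative;
# any non-linear calibrated response behaves the same way.

rng = np.random.default_rng(7)
hardness = rng.lognormal(mean=3.5, sigma=0.4, size=100_000)  # skewed property

def F(x):
    """Illustrative non-linear process response to a rock property."""
    return 95.0 - 0.02 * x**1.8  # recovery (%) falls off non-linearly

resp_of_mean = F(hardness.mean())   # F(E[x]): response of the 'average' sample
mean_of_resp = F(hardness).mean()   # E[F(x)]: average block-by-block response

print(f"F(E[x]) = {resp_of_mean:.2f}%")
print(f"E[F(x)] = {mean_of_resp:.2f}%")  # differs whenever F is non-linear
```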
In the absence of project constraints (e.g., time, access, cost) sufficient samples of
appropriate size, i.e. geostatistical support, would be acquired from all domains in
the orebody so that both the average and variability of the characteristic variables
can be spatially modelled with a known degree of uncertainty. In addition to
collecting enough samples, the sampling programme will also consider the
relationship between bench-scale testing and operational-scale performance, and
the implications for the minimum mass of samples required for metallurgical tests.
(Coward et al., 2009)
Mineral projects are constrained by both time and cost, which in turn limits the
scope of sampling and testing programmes (e.g., limited access) that are designed to
characterise the orebody. This often leads to a trade-off between the number and
size of samples that are collected, measured and/or processed. Some tests and
measures require less sample mass, and for these it is possible to acquire large
numbers of samples e.g. point load tests (masses of several grams). The very small
support used to acquire data from the sample can compromise the measurement's
validity (i.e. the measurement does not correctly reflect the true value of the total
sample) and impact on overall sample representivity. It is important to be able to
quantify the impact of different numbers, sizes and types of samples on the scale-up
from sample support to production support. In cases where a large sample is
collected, e.g., 100 tonne pit samples, the number of samples is limited, and so
spatial coverage is usually insufficient for estimation. Thus, it is often the case that
the support, spatial location and number of sample points are insufficient to
characterise the spatial behaviour of the rock property variables, rendering
reliable spatial estimation difficult, if not impossible.
This chapter describes the design and execution of a sampling campaign that aimed
to address many of the limitations of traditional sampling approaches. Several
different properties of kimberlite were measured using a range of methods and
tools on cores derived from holes that were relatively closely spaced (~5m).
It should be noted that this work was carried out in collaboration with Dr. Matthew
Field, who assisted with the design and management of the sampling campaign. The
work was generously funded by De Beers, who also allowed on-site staff at Venetia
Mine to supervise core drilling, core collection and logistics. The author co-
ordinated destructive tests on core samples at several laboratories and undertook
several site and laboratory visits during the course of the sampling campaign. Rob
Pierce assisted with project management and compilation of the geological
database.
The sampling design used in this experiment aimed to test the assumption that,
given sufficient geological controls (i.e. within a geologically homogenous grade
‘domain’), it would be possible to sample kimberlite cores and augment data
acquired from destructive rock testing with a range of geophysical measurements
to facilitate geostatistical characterisation and hence estimation of rock
characteristics. This required a data set to be generated from samples taken from a
real deposit that was geologically well defined, with closely spaced (i.e. less than 5m
apart) samples. Spatial intensity, or spatial coverage, can be defined as some
function of the average distance between the samples. Sufficiency criteria can range
from qualitative rules of thumb (Parker, 1978) for the mineralisation type to a
quantitatively derived measure such as having observations that are within half the
range of the semi-variogram. As no such data set was readily available for the
variables that were to be sampled in this kimberlite, this sampling experiment was
designed and executed based on the minimum feasible distance that could be
achieved within the physical constraints of the drilling equipment and the most
likely variogram range that would be expected for the variables considered.
• Design a layout of samples and core holes to acquire and test core from
several closely spaced drill holes of 50 m in length;
• Define how each portion of core is to be sampled and describe protocols for
core selection, preparation and treatment;
This chapter describes the sampling layout, gives an overview of the geology of the
site selected, and describes the tests and tools used to generate the data. The
independent variables include all the primary properties of the rocks and the
dependent variables are the response variables. Process variables are those used to
control the properties of the processes used to carry out the tests.
A programme was carried out on a block of in situ ore (50m x 50m x 50m) located
at Venetia Mine in South Africa. Nineteen core holes were planned to be drilled and
each core was subject to several phases of description and testing according to a
pre-determined procedure.
The data set described in this chapter has been used in subsequent chapters to
demonstrate the proposed methodology to determine the impact that the variable
rock properties can have on diamond recovery.
This exploratory data analysis begins with an evaluation of the correlations between
the measures.
In the next chapter, the spatial behaviour and correlations established between
these measurements are reviewed and their usefulness in generating spatial
estimates of the in situ the rock characteristics is assessed.
The main aspects of the specific sampling campaign that was carried out included:
Orientated core holes were drilled at 5 m centres from the centre hole in a cross
pattern to a depth of approximately 50 m from the hole collar. This cross pattern has
been established as a reasonably robust methodology for establishing the short-
distance component of variograms (Chilès et al., 1999). The reason for the close
spacing of these holes is that among the most important aspects of the spatial
relationships are the nugget effect (i.e. the random component) and the range over
which individual measurements are correlated. If the expected range is 50 m, then it
is essential that some data points are located between 0 and 50 m.
The location of the cluster of holes was determined largely by the availability of
suitable areas in the open pit that would allow for access and extraction of the cores.
Mapping of the K2 open pit by Brown (2008) provided good geological control, and
the pit was made available for experimental drilling from November to December
2004. The cluster of holes straddled the faulted contact between “fine-grained,
homogenous volcaniclastic kimberlite” occupying the eastern part of the pipe, and
the more varied lithologies lying to the west of the faulted boundary. This approach
ensured that the data set would cover two distinct domains (Figure 13).
Figure 13: Geological map of Venetia K2 (after Brown 2008) showing the location of the
Geomet drill holes (orange cross).
Two categories of drill holes were defined: the first comprises holes where all the
material was sampled, and the second holes where only portions were sampled.
Figure 14: A view of the layout of the core holes depicting the location of the subsamples.
The sampling layout of the cores extracted is depicted in Figure 14. White
intersections were not sampled but retained; all other coloured intersections were
sampled. Details of the subsamples are shown in Figure 15. It was important to
assign a sample type to each of the lengths of core prior to recovery of the core to
minimise selection bias. This is of particular importance when selecting cores for
destructive testing. Post-drilling selection methods can result in only competent
cores being selected for test work, which introduces large biases in the resulting
data.
Figure 15: Listing of subsamples taken from cores that were fully sampled.
The layout ensured that the fully sampled holes would provide a duplicate set of
samples for each mining bench, whilst the sub-sampled holes would provide one
sample set per bench.
Ideally, all the data would be collocated at the same point and all the data would be
available at all points, and this data could then be termed ‘isotopic.’ As several of the
sampling tests destroy the core it is not possible to generate truly isotopic data. The
destructive tests were thus ‘bracketed’ with petrographic samples to test for short
scale changes in geological characteristics. The petrographic data were used during
analysis of the data to identify the bench sets where the assumption of geological
continuity within the bench would be tenuous.
Each fully sampled interval yielded:
• three small petrographic samples (Pet) spread through the interval to enable
the geology of the section to be described;
• two microdiamond samples (Mida) to evaluate the diamond grade of the
sampled interval;
• a geometallurgical sample (Geomet) that would be used to determine the
physical rock characteristics; and
• a geotechnical sample (Geotech) that was used for destructive geotechnical
testing.
The core that was extracted was logged and subjected to a range of non-destructive
descriptions, analyses and destructive tests; the downhole geophysical
measurements are described below.
The downhole geophysical data were collected to evaluate the potential for building
relationships between several continuous measures and the discontinuous
destructive tests that were carried out on the core. These relationships can be used
to enhance the estimates of destructive test responses at unsampled locations. The
nature of this data also lends itself to assessment of the representativity of the sub-
samples that were taken from the core. As an example, the density distribution of
the sub-samples can be readily compared to that of the density measures derived
from the geophysical tools.
As these geophysical data are continuous, they can to some extent be used to assess
the impact of support on the derived measurements. The true support of the tools
not only differs from tool to tool but will also vary between rock types that respond
differently to the signals being used.
The downhole data were gathered using several tools, each of these tools and their
rock-tool responses are briefly described below:
Downhole Calliper
This tool measures the size of the hole using three arms that trail behind the tool
and rub up against the walls of the drilled-out cavity. The deflection of the arms is
measured and the circumference of the hole at that location is calculated assuming
that the hole is circular. Variations in the size of the hole are related to different
properties of the lithology that has been drilled through and to some extent the
drilling parameters. Using the calliper data, it is possible to calculate the volume of
the hole that was drilled. Comparing this calipered volume against the measured
recovered core volume makes it possible to calculate core recovery. Core recovery
is correlated with both rock quality and drill rig operating parameters.
Resistivity
The direct contact resistivity tools work by means of several point contacts with the
wall rock that are used to pass a current through the wall of the hole. The rock
resistivity is determined from the relationship between the voltage that is applied
across the contacts and the current flow that is realised. Other tools use coils
energised with high frequency alternating currents to induce a current in the wall
rock; this induced current is inversely proportional to the resistivity of the rock
being tested. Resistivity is deduced from the amplitude and relative phase of the
secondary magnetic fields that are sensed by measuring coils. Direct contact tools
provide high resolution logs but do not provide the penetration of the wall rock that
can be achieved with inductive tools.
Natural gamma
The natural gamma tool uses a sodium iodide scintillation detector to measure the
natural gamma ray radiation emitted by the formation that it is passing through. The
signal is processed into five distinct peaks that correspond to the radiation energy
from the three most common elements that contribute to naturally occurring
radiation: potassium, thorium and uranium. The count rates in each of the five
segments of the energy spectrum are used to determine the relative proportion of
these components in the rock. In strata where relatively low counts are obtained,
logging speed can be reduced to ensure that the number of emissions measured
remains statistically valid. The data can be expressed as total gamma ray, a
uranium-free gamma ray, and the concentrations of uranium, potassium and thorium.
Density
There are several tools that measure density down the hole. In this case the
instrument (or sonde) consisted of a gamma ray source contained in the downhole
probe. The gamma rays are scattered by the wall rocks, and the intensity of the
returning gamma rays is a function of the electron density of the rock, which in
turn is related to the bulk density of the rock. The back scatter is detected at two
distances from the source; the further the detector is from the source, the greater
the support of the density measurement. Short-spaced readings may in some cases
be biased toward the density of the drilling fluid if the hole surface is coated with
drilling mud.
Hardness
This tool has been developed to measure the hardness of the wall rocks of boreholes.
The unit consists of a calliper that measures the original hole diameter and has a
cutter wheel that runs along the wall of the hole. The deflection of the cutter into
the wall is recorded at small intervals and reported as a penetration reading in mm.
Acoustic velocity
The velocity and attenuation of acoustic signals in rocks is related to the nature of
the material that is carrying the signal including its density, porosity, the fracture
frequency and orientation, as well as the clast size distribution. The strength and
frequency of the input signal can be adapted to improve the quality of the data
obtained.
There are several ways in which acoustic velocities in rock can be measured. Most
rely on transmitting an acoustic signal through a specimen of rock and measuring
the speed and attenuation of the signal of both compression waves and shear waves.
An alternative approach for extracted core is to create a right cylinder of the rock
and then input a signal of varying amplitude and frequency until the resonant
frequency of the specimen is obtained.
If it is assumed that the rocks being measured are homogenous, linear elastic and
propagate acoustically induced waves at the same velocity in all directions
(isotropic), it is possible to create a mathematical relationship between measured
shear wave and compression wave velocities and several of the physical moduli.
This approach is particularly useful for characterising the physical characteristics of
the rocks considered, and so the relationships between acoustic velocities and
material characteristics are described here in more detail.
The moduli considered here are:
1. Young’s modulus;
2. the Bulk modulus; and
3. the Shear modulus.
Young’s modulus is a measure of the elastic stiffness of a material, named after the
18th-century British scientist Thomas Young. It describes how a solid deforms under
load within its elastic limit and is defined as the ratio of imposed pressure to relative
deformation (strain) measured in a direction parallel to the direction of the
imposed pressure. Strain is a measure of deformation normalised to the sample
length or area (Equation 6). Young’s modulus can be derived from measuring the
slope of the stress-strain curve near the origin, or in the interval from the onset of
the deforming pressure to a point before plastic deformation occurs. It is in this
range that the deformation of the solid is directly proportional to the imposed force.
This behaviour is described by Hooke’s law, named after the 17th-century British
physicist Robert Hooke.
e = ∆L / L      Equation 6

where ∆L is the change in the length of the sample and L is its original length. Young’s modulus is then the ratio of the applied stress to this strain (Equation 7):

E = (F/A) / (∆L/L)      Equation 7

where F is the applied force and A is the cross-sectional area over which it acts.

The bulk modulus describes a material’s resistance to a uniform change in pressure, and is defined as the ratio of a change in applied pressure to the relative change in volume it produces (Equation 8):

K = −V (∆P/∆V)      Equation 8

where ∆V is the change in the volume observed for the given change in applied pressure ∆P. The inverse of the bulk modulus gives the compressibility of the sample.
The shear modulus, sometimes referred to as the modulus of rigidity, is a measure of the ability of a sample of material to resist deformation in a direction tangential to the main axis of the sample (Equation 9). It is reported in Pa and is measured by applying a bending force to a sample and measuring the deflection as the bending force is increased.
G = (F/A) / (∆x/L)      Equation 9

where F is the tangential force applied over area A, ∆x is the transverse displacement and L is the length of the sample.
It is possible to estimate the above moduli using measures of the acoustic velocity in a solid. Alternative equations that can be used to estimate these quantities are given in Table 6. These equations have specific measurement requirements and are based on the assumption that the rocks being tested are linear elastic, homogenous and isotropic (Momayez et al., 1995).
E = ρVp²(1 + ν)(1 − 2ν)/(1 − ν)
G = ρVs²
G = E/(2(1 + ν))
E = 9GK/(G + 3K)
E = 2G(1 + ν)
Table 6: Formulae for estimating elastic moduli of solids using measured acoustic
velocities in rock specimens.
Where:
Vp = compression wave velocity
Vs = shear wave velocity
ρ = density
ν = Poisson's ratio
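As an illustration of these relationships, the short sketch below computes the dynamic moduli from compression wave velocity, shear wave velocity and density under the stated assumptions of homogeneity, linear elasticity and isotropy. The input values are illustrative only and are not measurements from this test programme.

```python
import numpy as np

def dynamic_moduli(vp, vs, rho):
    """Estimate dynamic elastic moduli from P- and S-wave velocities.

    Assumes the rock is homogeneous, linear elastic and isotropic.
    vp, vs in m/s; rho in kg/m^3. Returns moduli in Pa.
    """
    vp2, vs2 = vp ** 2, vs ** 2
    nu = (vp2 - 2.0 * vs2) / (2.0 * (vp2 - vs2))   # Poisson's ratio
    g = rho * vs2                                   # shear modulus
    e = 2.0 * g * (1.0 + nu)                        # Young's modulus
    k = rho * (vp2 - (4.0 / 3.0) * vs2)             # bulk modulus
    return e, g, k, nu

# Illustrative kimberlite-like specimen (hypothetical values)
E, G, K, nu = dynamic_moduli(vp=4500.0, vs=2600.0, rho=2560.0)
print(f"E = {E/1e9:.1f} GPa, G = {G/1e9:.1f} GPa, "
      f"K = {K/1e9:.1f} GPa, nu = {nu:.2f}")
```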
The work showed that the ratio of compression wave velocity to shear wave velocity was lower for samples taken from the pit than for core samples. Further analysis indicated that this is a function of both weathering and, potentially, micro-cracks introduced during the mining process. The author did, however, conclude that acoustic signal measurement could be used to discriminate between waste rock species and different lithotypes in the Venetia deposit.
Once all the non-destructive testing and core description was completed, the
process of sampling the core to obtain further information commenced.
Petrological samples: These samples were removed according to the procedure shown above and were submitted to the De Beers Geoscience Centre in Johannesburg for sample preparation according to the scheme depicted in Figure 16.
Geometallurgical samples:
These samples of 1 metre of core were sawn to length, dried and packed in
cellophane and dispatched from site to SGS laboratories in Johannesburg.
Figure 17: Preparation and analysis of geometallurgical samples (drop weight tester and screening for the t10 index, abrasion (ta) analysis for the ta index, and micro-diamond recovery).
The method used for rock breakage testing follows the procedure described by Morrell (Morrell, 2003) and is termed the SMC test. This test was developed by the JKMRC in conjunction with several industry partners. The method has recently been refined by Dr S. Morrell to utilise a smaller support of core (Morrell, 2003). Essentially, the method aims to impart a fixed energy into several fragments sawn from the core; the resultant degree of fragmentation is then measured by sieving the progeny of the breakage test. It is then possible to relate the energy input to the proportion of material that has been crushed to less than one tenth of the size of the original fragments. This is the so-called t10 and gives a measure of the sample's resistance to crushing.
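The t10 itself is read off the progeny size distribution by interpolation. A minimal sketch of that interpolation is given below; the sieve sizes, passing percentages and the 40 mm initial fragment size are hypothetical values, not results from this test programme.

```python
import numpy as np

def t10_from_sieving(sieve_sizes_mm, pct_passing, initial_size_mm):
    """Interpolate the percentage passing 1/10th of the initial
    fragment size from a sieved progeny size distribution.

    Sizes are interpolated on a log scale, which is conventional
    for particle size distributions.
    """
    target = initial_size_mm / 10.0
    order = np.argsort(sieve_sizes_mm)
    sizes = np.asarray(sieve_sizes_mm, dtype=float)[order]
    passing = np.asarray(pct_passing, dtype=float)[order]
    return float(np.interp(np.log10(target), np.log10(sizes), passing))

# Hypothetical cumulative passing curve for a broken 40 mm fragment
sizes = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 31.5]
passing = [8.0, 14.0, 22.0, 33.0, 52.0, 78.0, 97.0]
print(f"t10 = {t10_from_sieving(sizes, passing, 40.0):.1f} %")
```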
At the laboratory in Johannesburg the cores were cut into quarters and then each
quarter section was cut into 40mm lengths to form triangular sectors of quarter core
for testing. Each of the triangles was then subjected to drop weight testing. The
product of the controlled energy fracture was collected and sized. These data were
then used to determine the energy breakage relationship for each sample (Figure
17).
Geotechnical samples: Half a metre of core was sawn using kerosene as the cutting fluid. The section of core was then cut into lengths of approximately 120 mm for uniaxial compressive strength testing. The remainder of the core was cut into discs of 40 mm core length for Brazilian tensile strength testing, as described in the literature review section. This process is depicted in Figure 18.
The samples were extracted from Venetia, a kimberlite mine located in northern South Africa. Seventeen cores were extracted from the K2 pipe in the form of a cross, with the holes located 5 m apart. The samples were collected from the core according to the scheme described above.
The plan view layout of the core location is shown in Figure 19.
The samples were collected in the middle of 2005 and subjected to destructive testing shortly afterwards. Rocklabs, a testing laboratory located in Pretoria, carried out the UCS and BTS testing, and SGS laboratories in Johannesburg carried out the drop weight tests.
Cores were drilled out, collected and scanned using a visual spectrum core scanner.
The holes were subject to a suite of geophysical measures. The scanned core was
then marked according to the original sub-sampling design, regardless of the ‘suitability’ of the selected interval; this was done to minimise selection bias. The sub-sampled lengths of core were also photographed to ensure that, where anomalies were detected later in the analysis, the photographic record would provide a means of further investigation.
The core was then cut into subsamples that were dispatched to several laboratories.
As the data were returned, they were compiled into an Access database. Substantial time was spent ensuring the correct location of the data using several methods. This is important because, to test associations between measures (e.g. downhole geophysics and UCS), the data locations must be recorded at high resolution.
The data collected are described using descriptive statistics that include some
measures of spatial dispersion and expected relationships in the data set. This
section provides data for this thesis and thus needs to be detailed in a way that
supports further analysis and identifies shortcomings in the data collection process
that might in future be improved by adapting the data acquisition protocol.
The data comprise several types of tests, collected from several different laboratories, and have been stored in an Access database titled Venetia K2.mdb. The numbers of samples collected during this programme are given in Table 7.
The aims of this section are to:
• describe the data that have been collected and identify any limitations in the data collection process;
• prepare the data, removing outliers (e.g., the impact of casing on geophysical readings);
• evaluate the relationship between geological characteristics and test responses;
• compare and contrast the results from the different destructive testing methods; and
• identify and recommend improvements that can be made to the reliability of the results of this type of testing.
The data types collected include:
• petrological logging;
• microdiamond sampling;
• downhole and core geophysical measurements; and
• destructive test measures.
A total of 43 petrological samples were acquired. The core sections were cut into slabs and polished, and the surfaces of the slabs were photographed at high resolution and the images analysed. The variables generated during this phase included the grain size distribution and mineral abundance; the summary statistics of these data are displayed in Table 8.
The petrological data show the average proportion of area of each sample that is
made up of typical so-called ground mass minerals (matrix), olivine and lithics. The
olivine gives an indication of the proportion of deep mantle minerals (including
diamonds) that are in the sample, the matrix gives an indication of the state of the
kimberlite during emplacement, and the lithics give some indication of the
proportion of dilution in the kimberlite sample. For these data, although the matrix makes up about half of the area sampled, in some specimens this value drops to zero as the waste contamination rises to 100%. In this sample set, olivine constitutes on average 10% of the area, ranging from 0 to 17% across the samples.
Brazilian tensile strength (BTS): The specification for this test is given by the American Society for Testing and Materials (1988). It can be described as a compression-to-failure test in which a core specimen, cut into a disc of predefined dimensions, is placed under load until failure. During the compression of the disc, the deformation of the sample translates the compressive force into a tensile force that leads to tensile failure.
Uniaxial compressive strength (UCS): This is also defined as an ASTM test. In this case a section of core is cut to a right cylinder of predefined dimensions and loaded until failure. As the core is loaded, its deformation can be measured by means of strain gauges, and several other derivatives of rock strength can be calculated.
Statistic          UCS specimen       UCS strength   BTS specimen       BTS strength
                   density (g/cm3)    (MPa)          density (g/cm3)    (MPa)
Minimum            2.38               14.96          2.32               1.3
Median             2.52               31.11          2.52               5.1
Mean               2.56               44.66          2.56               6.2
Maximum            2.94               169.36         2.90               35.5
Std dev            0.13               39.49          0.14               4.0
Coeff. variation   0.05               0.88           0.05               0.7
Count              137                137            135                136
Table 9: Summary statistics for the Uniaxial Compressive strength (UCS) and Brazilian
Tensile Strength (BTS) test work.
A total of 130 core samples of 63 mm diameter were submitted for testing, of which 117 results were used for analysis. The method is described in Napier-Munn et al. (1999) and requires several specimens of known size to be crushed with a known energy input.
The ‘no data’ responses were a result of not being able to cut the core to the required size, suggesting that these samples either were mishandled or had weathered to an extent where cutting was impractical. The infrared scans of these cores were reviewed and identified high proportions of clay in the following samples: 01, 02, 09, 20, 2, 26, 36, 44, 66, 68, 69, 96, 111, and 115.
The t10 data were gathered at three different energies and included the density and size distribution of the product material after it was subjected to drop weight testing. A total of 117 results were obtained; summary statistics are given in Table 10.
In Table 10 it is noted that the coefficient of variation of the t10 increases with
decreasing energy. This large change in the range of the variability of the response
variable may be the result of a few outliers in the data set.
It appears that at an energy input of 0.25 kWh/t, at which only 10% of the material has been reduced to one-tenth of its original size, the differences in the pre-existing fractures of the rock specimens used for drop weight measurements have more of an impact than the intrinsic rock fabric. Histograms of these data are presented below.
Figure 20: Histogram of the density of samples subjected to drop weight testing.
The t10 data range from 4% to 41% passing one-tenth of the original size across the three energy levels. Some samples that were identified as waste rock gave results that lie in the tails of the distribution; these have been labelled as waste rock.
It is, however, known that there is a relationship between the lithology from which the sample comes, the density of the sample and the resulting susceptibility to fragmentation. The histograms in Figure 21 show the spread of these data, in which the increasing spread of responses is also evident (the top panel in this figure is at a lower energy than the lowest panel). This suggests that at higher energy input levels the range of resulting fracture is higher and the histogram is less skewed. This gives an important indication for the design of future experiments, suggesting that appropriate energy levels are required to fully characterise the rock's energy-breakage relationship.
Figure 21: Comparison of histograms of drop weight results for all samples tested at three
different input energies (top figure is lowest energy; bottom panel is highest energy).
The samples were taken from two different rock types, a breccia domain which
includes many country rock fragments, and a more coherent volcaniclastic domain.
The data can thus be grouped using rock type as the criterion. Figure 22 and Figure 23 present a comparison of the locations and histograms of density for the VKBR and VK domains respectively.
Figure 22: Histogram and base map of sample density for the VKBR domain.
From these figures it is apparent that the VK domain has a smaller variation of
density and so we would expect to see a less variable measure of rock fracture for a
given energy input as depicted in Figure 24.
Figure 23: A histogram and cross section of the densities of the samples taken from the VK facies domain.
Figure 24: Comparative histograms of drop weight responses at three different input energy levels
for the VKBR facies (LHS) and the VK facies (RHS).
This change in the location and dispersion of the t10 variable with differing energy input suggests that modelling this variable in a spatial context will not be trivial. Several approaches to dealing with the skewness and instability of the histogram will require testing if this variable is to be used.
The downhole data were acquired by running several tools down and up the drilled
holes at a controlled rate. All data were then provided to the project and
summarised in a database.
Table 11 below contains summary statistics of the results for all holes that were
surveyed using the tools provided.
Table 11: Summary of downhole geophysical readings taken from each hole drilled.
A brief analysis of these data indicates that there is an impact from the steel casing that was used around the collar. This can be seen in the histograms for density, which show several observations in the 3.5 g/cm3 range (Figure 25). The main implication is that the observations from the cased portion of holes can be used to check the calibration of the density measures, and that it is important to discard this information from any further rock property interpretation.
The analysis of the data produced in this way aims to determine the following
relationships:
• Penetration and lithological and petrological classification;
• Penetration and destructive tests; and
• Penetration and other geophysical measures.
The tool was deployed down five holes: DDH 357, 358, 359, 365 and 366. The basic descriptive statistics of the downhole formation data are contained in Table 12.
A plot of the three readings for hole DDH357 is shown in Figure 26; a histogram and downhole variogram of the cutter penetration (the difference in position between the calliper and cutting wheel) is provided in Figure 27. The ranges of the data clearly demonstrate that the unit is indeed responding to the different lithology types, with higher variances in penetration seen in breccias. It is also interesting to note that the points of extreme deflection are associated with break-out from the drilled cavity. This information can be used to quality control the downhole calipering of the holes.
[Chart: hole diameter (mm, left axis, 90–100) and difference in cutter deflection (mm, right axis, 0–5) plotted against depth (10–70 m).]
Figure 26: A plot showing the three readings generated by the formation hardness tool for
hole DDH357.
As can be seen from the downhole semi-variogram (Figure 27), the range of the raw semi-variogram is of the order of 15 m, justifying the spacing of holes at approximately 7 m.
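For readers wishing to reproduce this kind of analysis, the sketch below computes a classical experimental downhole semi-variogram; the penetration log is synthetic and the lag parameters are illustrative rather than those used in the study.

```python
import numpy as np

def downhole_semivariogram(depths, values, lag, n_lags, tol=None):
    """Classical experimental semi-variogram for downhole readings.

    gamma(h) is the mean of 0.5 * (z_i - z_j)^2 over all pairs of
    readings separated by approximately h.
    """
    tol = lag / 2.0 if tol is None else tol
    depths = np.asarray(depths, float)
    values = np.asarray(values, float)
    h = np.abs(depths[:, None] - depths[None, :])          # pair separations
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2    # half sq. diffs
    gamma = []
    for k in range(1, n_lags + 1):
        mask = np.abs(h - k * lag) <= tol
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.arange(1, n_lags + 1) * lag, np.array(gamma)

# Illustrative use on synthetic 1 m spaced cutter penetration readings
rng = np.random.default_rng(0)
depths = np.arange(0.0, 60.0, 1.0)
pen = 2.0 + 0.5 * np.sin(depths / 5.0) + 0.2 * rng.standard_normal(depths.size)
lags, gamma = downhole_semivariogram(depths, pen, lag=1.0, n_lags=20)
print(np.round(gamma[:5], 3))
```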
The design and execution of this programme required substantial support from the
mine and its operational staff for which the researcher is very grateful. Ensuring that
the right equipment and access could be provided to the site required substantial
planning and coordination. It is essential in the execution of this type of study that frequent and ongoing communication is maintained.
Several of the test units, although housed in an air-conditioned container, had occasional problems with operating temperature and dust. As this campaign aimed to obtain many different types of closely spaced data from the samples generated, the programme took several months to complete. The study does, however, demonstrate that it is possible to obtain collocated data of different types for the evaluation of kimberlitic deposits.
The petrographic data that have been gathered provide a basis for segmenting the
geology of the test area based on the composition of the rock mass and the
abundance of different types of rock fragments. The analysis shows that one of the
limitations in the data collection was the misinterpretation of the location of the
contact between the breccia and coherent volcaniclastic kimberlite. This led to the
coherent volcaniclastic kimberlite being under-represented in the dataset.
The destructive data that have been gathered are co-located with multiple geophysical measures and hence provide a basis for investigating the potential to use the geophysical data to augment the destructive test data and improve the estimation of the physical characteristics of the kimberlite in the test area.
A data set was generated for this research from samples taken from, and measurements made on, two distinctly different rock types: a highly variable kimberlite breccia and a more spatially continuous volcaniclastic lithology. The data
have been quality controlled and then captured and compiled into a substantial and
accessible database to facilitate further analysis.
The next chapter expands on the statistics and reviews the associations that exist between the measures made, and demonstrates how these can be used to generate estimates of rock properties at un-sampled locations. The use of the data gathered here in the value chain model, to predict the expected range of the metallurgical recovery factor, is then explored.
The approach that has been demonstrated here is applicable in the exploration
phase of new deposits and is also of significant use in assisting the sampling
programme design when extension of the resource and reserves of existing
operations is required.
5 MODELLING PHYSICAL CHARACTERISTICS OF OREBODIES
5.1 Introduction
A model of the in situ ore and waste-rock properties that impact on the mining and
recovery of diamonds is required as an input into the value chain model that is used
to estimate the expected overall diamond recovery.
The properties that are important can be usefully classified into two types of variables:
1. Primary (in situ) variables: these are mostly characteristics that can be directly measured, for example grade, density and grain size; and
2. Response variables: these describe how the rock behaves in a mining or treatment process (for example throughput) and arise from the interaction between the primary variables and the process applied.
Current approaches reviewed to date that are used to model rock characteristics are
limited in that they aim to produce BLUE (Best Linear Unbiased Estimates) of the
characteristics. These properties of the estimate can be useful for several purposes
e.g., when the estimate is being used in a long-term design optimisation process but
may have some limitations when used in a value chain model as the estimates
produced are smoother than the real in situ variability. The methodology demonstrated here suggests the use of both estimates and simulations of the rock characteristics. The key steps in the approach include:
• Identifying and selecting the characteristics of the ore and waste rocks that
have a material impact on the process rate and efficiency;
• Support and scale corrections for laboratory-scale test results to process scale
response;
The process of building spatial models is centred on data that can be acquired in many ways; one useful distinction is that between ‘direct’ and ‘indirect’ data (Dowd and Pardo-Igúzquiza, 2006). ‘Direct data’ are data acquired from a sample taken from a known location, by subjecting the sample to a direct physical test or measurement.
[Flow chart: beginning by identifying the variables to be modelled and augmenting them with metadata; variables that are not additive are transformed, and where coverage is low or extremes are important, conditional simulation is used before the model is prepared.]
The term ‘indirect data’, on the other hand, refers to data acquired from a tool that measures the rock's response to a geophysical input; the geophysical response is then used to infer, through some form of mathematical model, the property that the rock would have exhibited had it been exposed to direct testing and measurement. The value of integrating these two data types is that the relationship between the direct and indirect data can be used to improve the estimate of the unknown quantity at un-sampled locations.
Importantly, direct data are usually sparse and need to be expanded in coverage and support. Indirect data, by contrast, are far more exhaustive, can be adapted to provide larger support, and usually need some form of compression or summarisation.
In this chapter, methods to improve the quality of the rock characteristic estimates
using relationships between the direct, indirect, non-destructive and destructive
data are explored. These improvements include the reduction of the error of the
estimate of rock characteristics derived from destructive tests at un-sampled
locations and the identification of potential sampling bias that might arise from
sample selection.
The approach adopted can broadly be separated into two phases: an exploratory data analysis phase, in which relationships between the variables of interest are evaluated, followed by a second phase that evaluates different methods for spatially estimating values into the block model.
This chapter begins with a brief review of the data that have been collected from the
experiment. These data are then used to a) improve the estimates of the rock
characteristics at point scale ‘down the hole’ using a few multivariate techniques
and b) to review the implications of different pathways that are available to
generate block scale estimates of response variables.
Previous work on the integration of different sources of data has included several methods that use spatial data to generate property estimates at block model scale (Dowd 1997; Dowd and Pardo-Igúzquiza, 2006). These can be grouped into two types:
1) those used to improve the estimates of rock properties at the point or sample scale, using multivariate relationships between direct and indirect data; and
2) those used to understand and exploit the spatial nature of the data to generate spatial estimates of variables at a mining block scale. These methods also use spatial covariance models to integrate direct data and sensed data.
The methods used included ordinary kriging, standard linear regression, simple kriging with estimation of a local mean, kriging with external drift, co-kriging and Bayesian integration. The outcome of this study indicated that if less than 10% of the data are direct data, and the correlation between direct and indirect data is less than 0.3, then geostatistical methods are better, as they give lower mean squared errors than linear regression. Co-kriging performs best when the correlation coefficient is above 0.2; above 0.3, kriging with external drift and/or local means perform best (Dowd and Pardo-Igúzquiza, 2006).
Spatial variables assume values that change as some function of the support and locations at which they are measured. Most spatial estimation methods create some form of weighted average of the measured values of a variable over a specific volume (e.g., the grade of a drill core is the average grade over a cylinder of rock). The variance of a spatially correlated variable usually decreases as the support on which the measurement is taken increases. There are several methods available for using point data acquired at known locations to estimate the value of a spatial variable at an un-sampled location, and these are briefly reviewed below.
The simplest of these is the polygonal estimate, where the values measured in a sample are used to inform a volume around the sample. In this approach no averaging is carried out, but neither is a specific change-of-support calculation, so the estimate will have a variance that is a function of the sample layout, and hence of the volume estimated, and it is likely that estimates generated in this way will have a lower variance than the real values.
When using a kriging approach for estimation, the spatial variability of the data is
characterised in the form of a spatial covariance model or semi-variogram. Reliable
estimation of the parameters of the spatial covariance model requires sufficient data
that are representative of the domain within the orebody that is to be estimated.
Domains are variable-specific and selecting and defining domains of stationarity for
response variables requires consideration of both statistical and geological aspects
of the variable to be modelled.
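To make the mechanics concrete, the sketch below solves a minimal ordinary kriging system for a single target location, assuming one spherical variogram structure. The sill, range (the 27 m echoes the range fitted later for the UCS data) and coordinates are placeholders rather than the parameters actually used in this study.

```python
import numpy as np

def spherical_cov(h, sill=1.0, rng=27.0, nugget=0.0):
    """Covariance from a spherical variogram: C(h) = (nugget + sill) - gamma(h)."""
    h = np.asarray(h, float)
    gamma = np.where(
        h < rng,
        nugget * (h > 0) + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
        nugget + sill,
    )
    return (nugget + sill) - gamma

def ordinary_kriging(sample_xyz, sample_vals, target_xyz):
    """Minimal ordinary kriging at one location.

    Solves [C 1; 1' 0][w; mu] = [c0; 1] and returns the weighted estimate.
    """
    d = np.linalg.norm(sample_xyz[:, None, :] - sample_xyz[None, :, :], axis=2)
    n = len(sample_vals)
    lhs = np.zeros((n + 1, n + 1))
    lhs[:n, :n] = spherical_cov(d)
    lhs[:n, n] = lhs[n, :n] = 1.0
    c0 = spherical_cov(np.linalg.norm(sample_xyz - target_xyz, axis=1))
    w = np.linalg.solve(lhs, np.append(c0, 1.0))[:n]
    return float(w @ sample_vals)

# Illustrative use with three hypothetical UCS samples (MPa)
pts = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
vals = np.array([30.0, 35.0, 28.0])
print(ordinary_kriging(pts, vals, np.array([2.0, 2.0, 0.0])))
```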
Rock characteristic variables may require sampling on various sample volumes and
over a range of distances to provide an adequate quantification of spatial variability.
Linear geostatistical estimation methods (e.g. ordinary kriging) require variables to be additive. If the variables are non-additive the process is not directly applicable, but it may still be used if a data transform can render the data sufficiently additive, or if the variable does not show too much non-linear behaviour over the range in which it is observed.
In some cases, however, it is the extreme values that are of interest in the orebody, for example extreme hardness that will damage crushing equipment. In these cases geostatistical simulation methods are required. These can be used to honour the data at the sampled locations and to reproduce the variability at a scale suitable for the value chain model. Individually, the simulations present a ‘truer’ reflection of the variability in the ore characteristics, and when used as a set they can give an indication of the range of values for the characteristic of interest. Methods described in the literature review are demonstrated here.
The variables selected for modelling are useful for predicting process performance
and diamond recovery. These broadly cover variables that will influence the
fracturing of the rock in a process, and the efficiency of the separation of diamonds
from their host rocks.
There are several destructive tests that can be used to measure the strength of rock
specimens. In this work the focus has been on obtaining data that quantify the
nature of the rock strength in tension and compression, and the fracture that results
from energy input. The tests considered include uniaxial compressive strength
(UCS), the Brazilian tensile strength (BTS) and t10.
Some existing test standards have a well-defined support, although this may not be appropriate for all rock textures (ASTM, 1998), because the scale at which the tests are carried out may not be sufficient to measure responses that are valid at the scale of the required estimates. This is arguably true of rock strength measurement in coarse-grained rocks or in breccia lithotypes. To overcome this, it is suggested that several tests be conducted over a range of sizes larger and smaller than those laid down in the standards in use.
Several of the rock characterising tests (Bond Work Index, Drop weight test, UCS,
BTS) have been developed to mimic at a small scale the processes that the rock will
undergo during mining and treatment. The underlying philosophy has been that if
one can obtain a representative sample it is possible to carry out small-scale tests
with sufficient rigour and control to enable their results to be scaled up using some
form of factorisation.
Downhole geophysical readings create a valuable data set as the readings are
continuous and data acquired from different tools can be used in concert to improve
the estimate of destructive results at un-sampled locations. The downhole
geophysical wireline log readings are taken approximately every 10mm. At this
scale, the tools are potentially responding to several very small features, and the
readings may to some extent be smeared.
[Chart: average variance in readings against metres of reading averaged (0–6 m), showing the average variance within samples, the variance between samples, and the total variance.]
Figure 29: A plot showing the impact of the size of the interval used on the total variance
of the bulk modulus measurement.
To improve the relationship between the wireline geophysical signals and the
destructive tests the geophysical signals need to be accumulated. The data were
accumulated over increasing intervals and the within and between interval variance
calculated. The interval over which the sum of these variances is minimised is
deemed to be the appropriate length for averaging the geophysical signal. In the case
of acoustic data this was found to be about 70cm of readings (Figure 29).
This has the impact of reducing the signal variance and improving the correlations
between the wireline geophysical log and the results of the destructive tests. It is
strongly recommended that in the search for appropriate proxies this approach to
upscaling be used to evaluate associations between destructive and non-destructive
rock measurements.
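A sketch of this upscaling search is given below, assuming readings every 10 mm and candidate intervals up to 5 m. The synthetic log and the exact variance statistics are illustrative and may differ from those used to produce Figure 29.

```python
import numpy as np

def variance_vs_interval(values, reading_spacing_m, max_interval_m):
    """For each candidate averaging interval, split the log into
    consecutive intervals and compute the average within-interval
    variance and the between-interval variance of the interval means.
    The interval minimising their sum is taken as the averaging length."""
    values = np.asarray(values, float)
    results = []
    for n in range(2, int(max_interval_m / reading_spacing_m) + 1):
        n_int = len(values) // n
        chunks = values[: n_int * n].reshape(n_int, n)
        within = chunks.var(axis=1, ddof=1).mean()
        between = chunks.mean(axis=1).var(ddof=1)
        results.append((n * reading_spacing_m, within, between, within + between))
    return results

# Illustrative: a synthetic log with readings every 10 mm
rng = np.random.default_rng(1)
log = np.repeat(rng.normal(5.0, 1.0, 80), 50) + 0.3 * rng.standard_normal(4000)
best = min(variance_vs_interval(log, 0.01, 5.0), key=lambda r: r[3])
print(f"chosen averaging interval: {best[0]:.2f} m")
```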
The correlations between all measured data were calculated. Some of the higher
correlations were observed between acoustic velocity and various measures of rock
hardness. Indeed, it is possible to translate acoustic velocities into various estimates
of destructive rock properties using the formulae discussed in Momayez et al.
(2004).
[Chart: UCS specimen strength (MPa) against cutter deflection (mm) for individual VK and VKBr samples, with a fitted linear model.]
Figure 30: A plot showing the linear model developed between the formation hardness tool
readings and the UCS values.
Even though the relationship is weak, it is possible to use these data in several ways, such as co-kriging, to improve the estimate of the destructive variable (UCS) using the downhole response (cutter deflection).
Figure 31: A box and whisker plot showing the sample value used to calibrate the PLS
models on the left of the plot, and the estimate of UCS down the hole on the right. The
heights of the bars indicate ‘goodness of fit’ of the partial least squares model derived from
multiple versions of the model using different combinations of sample and hold out data.
The sampling experiment gathered abundant geophysical data that are correlated, albeit weakly, with the spatially sparse destructive data. In this case a trial was made using a PLS model to estimate the UCS of core along its length. To do this, a calibration set of samples is required to define the relationship, which can then be used to estimate the values at un-sampled locations. In Figure 31, generated using Camo’s Unscrambler software, the sample values are plotted next to an estimate of the values down hole 358. The very high bar on the left shows that there is one sample (Venucs 2016) for which the relationship between the model and the actual value is very weak and does not correspond with the model parameters.
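The PLS workflow described above can be approximated with scikit-learn rather than the Unscrambler software actually used; the sketch below is illustrative only, with synthetic geophysical predictors standing in for the real logs.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: upscaled geophysical signals (e.g. density, gamma, P-wave velocity);
# y: UCS from the co-located destructive tests. All values synthetic.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))
y = 30 + X @ np.array([4.0, -2.0, 6.0, 0.5, 1.0]) + rng.normal(0, 3, 40)

pls = PLSRegression(n_components=2)
# Cross-validated predictions give a sense of model stability,
# analogous to the repeated hold-out runs behind Figure 31.
y_cv = cross_val_predict(pls, X, y, cv=10)
pls.fit(X, y)
print("CV RMSE:", float(np.sqrt(np.mean((np.ravel(y_cv) - y) ** 2))))
```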
There is also a way to incorporate directly the results of the acoustic velocity data
into the downhole estimates. These can be plotted together to provide a means of
visually reviewing the models as shown in Figure 32.
[Chart: measured UCS values (MPa, left axis) for samples VENUCS2046 to VENUCS2055 and cutter penetration (mm, right axis) plotted against metres down hole (0–70 m).]
Figure 32: A plot of measured UCS values, and models for the samples based on the
downhole acoustic signal and cutter penetration depth.
In Figure 33 the UCS values down the hole for the samples recovered from the VK facies have been calculated using a PCA model of P-wave acoustic velocity measurements, S-wave acoustic velocity measurements and measured densities. The boxes show the range of values that would be calculated by redoing the PCA multiple times, leaving out approximately 10% of the data in each draw; this gives a sense of the uncertainty, or potential error, in the PCA model.
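The repeated leave-out procedure can be sketched as below; the PCA-plus-regression pipeline and the synthetic inputs are assumptions standing in for the actual model and data used to produce Figure 33.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import ShuffleSplit

# X columns: P-wave velocity, S-wave velocity, log density (synthetic)
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))
y = 25 + X @ np.array([5.0, 3.0, 2.0]) + rng.normal(0, 2, 50)

model = make_pipeline(PCA(n_components=1), LinearRegression())
# Refit repeatedly, each time holding out ~10% of the samples, and
# collect the spread of predictions at each location as an
# uncertainty band on the PCA model.
preds = []
for train, _ in ShuffleSplit(n_splits=100, test_size=0.1, random_state=0).split(X):
    preds.append(model.fit(X[train], y[train]).predict(X))
band = np.percentile(preds, [5, 95], axis=0)  # per-location 5th/95th percentiles
print(band[:, :3].round(2))
```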
The second way in which these methods can be evaluated is to generate estimates
and then evaluate the predictions based on the actual properties that are
encountered when this part of the orebody is mined. The challenge in this approach
is that the estimated ore will not be the only source of rock to be treated. There will,
however, be an opportunity to carry out in-pit sampling as the mine deepens and exposes the estimated volumes.
Figure 33: PCA model of UCS downhole based on a Principal Component Analysis (PCA) model using P-wave velocity, S-wave velocity and log density, using one principal component (upper panel) and two principal components (lower panel).
Polygonal
For a highly heterogeneous domain in which the underlying spatial structure cannot be discerned, it is reasonable to assign each sample attribute value as the best estimate of the attribute of a volume, of specified shape and size, centred on the sample. In this case downhole models would be used to generate estimates for 70 cm core samples (as indicated by the prior work on upscaling). These can then be used to estimate the values into polygons measuring 0.7 m x 0.7 m x 0.7 m. In this way, the sample statistics and the block statistics would be the same (which, of course, is impossible). The sample and estimate statistics obtained using this method on the experimental area are contained in Table 13.
Statistic   Sample values   Polygonal estimate             Polygonal estimate values averaged
                            (0.7 m x 0.7 m x 0.7 m grid)   into a 5 m x 5 m x 5 m block model
Table 13: Summary Statistics for drop weight test data showing sample and polygonal
statistics.
The underlying assumption with this approach is that the samples and modelled values are representative of the areas between holes. When drilling is not closely spaced, this assumption may become very tenuous. The impact of the sample layout on the estimate generated using this approach is demonstrated in Figure 34. Note how samples located far from other samples influence a far larger area than samples that are close together.
Figure 34: A 3d representation of a polygonal estimate of drop weight values in the 0.7m
x0.7m x 0.7m grid.
The data were imported into Isatis, a geostatistical software package, and raw semi-
variograms were calculated, modelled and interpreted. Initial analysis has detected
ranges in the order of 6m to 15 m. This short range is thought to be due to the nature
of the orebody sampled and the small specimen size.
Increasing the support by ‘compositing down the hole’ has not decreased the overall variance, as the samples are not contiguous along the core. Further experimental work would be required to determine the impact on both the distribution of destructive rock property values and their ranges. One approach might be to acquire larger diameter core and thereby increase the sample support. This would, however, increase the cost and time taken for sampling.
The experimental and fitted semi-variograms for the UCS data are displayed in
Figure 35.
Figure 35: An experimental and fitted model for the variogram for the UCS data.
The fitted semi-variogram model comprises a nugget effect and two spherical
structures with ranges of 25m and 27m. The kriging was carried out with an
isotropic search radius of 27m and used a minimum of 5 samples and a maximum
of 10. A cross-section of the area estimated is depicted in Figure 36, where hotter
colours represent higher values and cooler colours represent lower values.
Figure 36: N-S cross section of area estimated showing high UCS estimates in hotter colours
and lower values in cooler colours (LHS) and a histogram of the block UCS values estimated.
The statistics of the estimates, as shown on the histogram, indicate that the mean of the block UCS is slightly lower than the mean of the samples, but that the standard deviation of the block UCS has dropped from 27 to 6.5. It is also evident that the estimate in this specific case is influenced by a few very high values.
Fortunately, the data were collected with very well documented geological
descriptions and it was possible to determine that these results had been influenced
most strongly by their waste rock content. This suggests that when sampling for
rock characteristics, especially in breccia kimberlite facies, there may be a
requirement to carry out some form of indicator kriging to deal with the highly
skewed nature of the so-called ‘contaminating country rock fragments.’
Alternatively, a separate estimate of the proportion of dilution of each major type
might be used in generating the spatial model.
Carrasco, Chiles and Séguret (2008) present a mechanism to test the consequences of treating a non-additive variable as additive. In this model a comparison is made between the 'illegitimate average of ratios' and the 'legitimate ratio of averages', as presented in Equation 10.
R = Zr / Zh      Equation 10
where:
Zh is the in situ sampled t10, which can be treated as the in situ ratio, or proportion, and can be (incorrectly) estimated by kriging from samples into blocks;
Zr is the size distribution that results from crushing at the treatment plant, derived by correctly converting t10 into the mass proportion in a block of a given size; and
R is the response variable, the ratio of the treated and mined t10, representing the process response in tonnes passing an undersize of 2 mm.
In this way it is possible to calculate R in two different ways and then determine the impact of the underlying bias that arises from averaging a non-additive variable. The bias is negligible if:
• Zh is constant, or Zr is constant; or
• the ranges of the values of Zr and Zh, and the change in the ratio of Zr to Zh, are very small.
In the case considered here the block size is assumed to be constant, so it is likely that the impact of the bias from incorrectly using a linear geostatistical technique would be almost immaterial. It is possible to krige the ratio of two variables (Z1(x) and Z2(x)) by applying weights wi to the samples, where data for both variables are available at all sample locations (Equation 11).
Z1*(x)/Z2*(x) = Σ (i=1..n) wi · Z1(xi)/Z2(xi)      Equation 11
This approach is valid provided that:
• the same variogram is used for the estimation of Z1*(x) and Z2*(x); and
• the same set of kriging weights wi is applied to both variables.
Spatial simulation, on the other hand, will produce models of block values that have a higher variability than kriged models, which may be useful for assessing the impact of rock property variability on process performance. The assumptions about the data properties for spatial simulation are more onerous and require sufficient sampling to enable modelling of the histogram of the data and testing of geological domains to ensure that assumptions of stationarity hold.
The approaches trialled here include:
• independent kriging of the t10 values based only on the t10 data; and
• sample support scale modelling of the t10 at locations in each hole that have not been sampled.
The suggested methodology begins with upscaling all the collected data to 70 cm composites. In this way the support of the destructive data (t10 and t10 sample density) and the geophysical data (P-wave slowness) is equalised. Variograms were modelled and the estimation parameters set (Table 14). Estimates were run with two different search neighbourhoods, the second far larger than the first, as the first pass estimation did not populate all blocks.
Table 14: Table and graphics of the modelled variograms for P-wave velocity, specimen density and drop weight data.
Table 15 gives the summary statistics for the variables selected for estimation and the statistics of the estimates generated into blocks of 0.7 m x 0.7 m x 0.7 m.
In all cases the averages of the raw variables and the estimates differ by less than 3%; as expected, the coefficient of variation of the kriged estimates is lower than that of the raw data. The numbers of blocks populated for 't10_density' and 't10_e1' are far fewer than for P-wave. This is a result of the range used for the P-wave variable being far greater than that of the other two variables.
Figure 37: Histograms of the original t10 values (left) and the transformed variable "mass
less 5mm in g/tonne" (Right).
The t10 transformation used is displayed in Equation 12. This transform converts the proportion of the sample passing one tenth of the original rock size into a 'grade'-like variable, which expresses the rock's amenability to crushing as the grams of rock that would be below a certain size for a given energy input.
Where:
t10 is the percentage by mass passing 1/10th of a particle's original size for a given energy input;
In this case the data comprise 116 values at sample scale. The average of the sampled t10 values is 33.29, which is the average of the ratios. To calculate the ratio of the averages, the average mass less than 5 mm was divided by the average mass of the samples, giving a value of 33.262. The difference between the average of the ratios and the ratio of the averages is 0.107% of the average of the original t10 values.
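A small numerical sketch of this check is given below; the synthetic sample masses and t10 values merely echo the 116-sample case and do not reproduce the reported figures.

```python
import numpy as np

# Illustrative check of the 'average of ratios' vs the
# 'ratio of averages' (after Carrasco, Chiles and Seguret, 2008).
rng = np.random.default_rng(4)
mass = rng.normal(5_000.0, 150.0, 116)       # sample mass, g (synthetic)
t10 = rng.normal(33.3, 4.0, 116)             # % passing, per sample (synthetic)
mass_under = t10 / 100.0 * mass              # mass below the cut size, g

avg_of_ratios = t10.mean()                   # the 'illegitimate' average
ratio_of_avgs = 100.0 * mass_under.mean() / mass.mean()   # the 'legitimate' ratio
bias_pct = 100.0 * (avg_of_ratios - ratio_of_avgs) / avg_of_ratios
print(f"{avg_of_ratios:.3f} vs {ratio_of_avgs:.3f} ({bias_pct:.3f}% bias)")
```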
Ordinary kriging was carried out on both variables and the sample density to
populate the test block grid (as described in section 4.4). The block models showing
the variables estimated are shown in Figure 38. The average of the block estimates
for t10 was 33.54, and the average of the mass less 5mm was 8.44 g/tonne.
Figure 38: Three-dimensional projections of the kriging of drop weight values (left) and the transformed variable "mass less 5 mm in g/tonne" (right).
The result of dividing the average undersize by the average mass of the block (the
ratio of the averages) is 32.13, which is a difference of just over 5%. The magnitude
of the difference is sensitive to the change in the density estimate which in turn
affects the variation of the mass in each block.
This data set shows a range of values that is relatively narrowly dispersed - with a
coefficient of variation for the t10 samples of 12% and density of 3%. At block scale
the coefficients of variation reduce to ~5% for t10 and a density coefficient of the
blocks of ~1.2%. This does not prove that the t10 variable is additive but gives some
indication that for this specific set of data the risk of treating t10 as an additive
variable may result in a 5% bias in the estimation.
Figure 39 shows cross-sections and histograms of the estimates for each variable when they are upscaled to mining block sizes of 5 m x 5 m x 5 m. It is noticeable that there is now a considerably higher degree of smoothing.
Figure 39: Cross sections of the orebody and histograms for the three variables estimated
independently into a 0.7m x0.7m x 0.7m grid and accumulated into a 5m x 5m x 5m grid.
The same data were then used to generate a turning bands simulation of the same
area of the deposit. To do this the data variables were converted to Gaussian
variables using a Gaussian anamorphosis. The impact of the transformation is
depicted in Figure 40.
Figure 40: Histograms showing transform of data from raw data to Gaussian variables.
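A simple empirical normal-score transform can stand in for the Hermite-polynomial Gaussian anamorphosis typically used in packages such as Isatis; the sketch below is illustrative, with synthetic skewed data, and is not the transformation actually applied here.

```python
import numpy as np
from scipy import stats

def normal_score(values):
    """Empirical normal-score transform: map values to Gaussian
    quantiles via their ranks, and return a back-transform built
    by interpolation between the paired quantiles."""
    values = np.asarray(values, float)
    ranks = stats.rankdata(values)                       # 1..n, ties averaged
    gauss = stats.norm.ppf(ranks / (len(values) + 1.0))  # Gaussian equivalents
    order = np.argsort(values)

    def back(g):
        # interpolate from Gaussian space back to raw data space
        return np.interp(g, gauss[order], values[order])

    return gauss, back

# Synthetic, skewed t10-like data
raw = np.random.default_rng(5).lognormal(3.4, 0.15, 500)
g, back = normal_score(raw)
recovered = back(g)   # back-transform reproduces the original values
print(np.allclose(recovered, raw))
```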
These Gaussian data were used to develop the semi-variogram models depicted in
Figure 41. The three primary directions all seem to have similar ranges, with the
structure of the P-wave variograms being far more stable than those of the
variograms for the t10 data.
Figure 41: Variogram models (top left and bottom right) and cross-variogram models (bottom left) for the Gaussian transforms of P-wave and drop weight test data.
These inputs were used to generate 50 spatial realisations of the variables at block scale. The simulated values were back-transformed, and the simulations were validated by comparing the simulated histograms with those of the input data.
Perspective views of cross-sections of these variables comparing the texture of the
simulated values vs the kriged values are shown in Figure 42.
Figure 42: Perspective plots showing a cross-section through the test area for 5m x 5m x 5m
blocks for simulated and kriged p-wave velocity (upper images) and drop weight test
data(lower images).
The statistics for the block are given in Table 16 and show the differences in
averages, std. deviation and extreme values. As expected, the estimates have a lower
variance than the simulated values.
In this specific case, the differences between the simulated and estimated maximum
and minimum values are relatively small. This is to some degree a function of the
relatively closely spaced holes (5m) whereas in normal production models the drill
spacing will be far larger and hence the difference in variance between the
estimated and simulated values will be greater.
Table 16: Summary statistics comparing estimated and simulated values for P-wave velocity
and drop weight results at the 5m block scale.
If it is assumed that the operating strategy for a crushing and grinding circuit is to achieve a given target grind size and allow throughput to vary, it is possible to predict throughput in each period for a given input rock hardness. Converting the measured in situ variable, t10, to a prediction of throughput requires several steps, including:
• modelling an energy-breakage relationship and capturing the "A" and "b" parameters of the fitted function;
• using the product of the "A" and "b" parameters to predict the energy consumption; and
• converting the predicted energy consumption to a throughput for the target grind size.
The alternative pathways through these steps are depicted in Figure 43.
Figure 43: Schematic of the optional routes to use point scale sample data to predict
throughput.
The mechanics of each of the calculation steps are briefly described below:
The procedure for sample testing requires that several fragments of rock are subjected to a range of controlled, known input energies, and the sizes of the resultant broken particles are measured and expressed as the percentage of material passing one tenth of the fragments’ original size (the t10) for the given energy input. The relationship between increasing energy input, expressed in kWh/t, and the increasing percentage of material passing one-tenth of its original size can be plotted, and a so-called breakage function fitted through these points. Figure 44 shows a plot for the case where fragments of the sample have been exposed to three different energy levels, which have produced three different levels of fracture.
[Chart: t10 (%) against specific comminution energy (kWh/t) for sample VENODS1008, showing measured points and the predicted (fitted) curve.]
Figure 44: A Relationship between input energy and degree of fracture, expressed as
percentage passing 1/10th of original particle size (blue diamonds), showing a fitted
breakage function in black.
The function that can be fitted through the data is described by Equation 13:

t10 = A (1 − e^(−b·Ecs))      Equation 13

where Ecs is the specific comminution energy in kWh/t. In this case the A value is 44 and the b value is 1.06, giving an A*b value of 47.65.
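Fitting the breakage function to the three measured points can be sketched with a standard least-squares routine. The energies and t10 values below are illustrative (loosely echoing the average t10_e1 to t10_e3 values reported later) and are not the data for sample VENODS1008.

```python
import numpy as np
from scipy.optimize import curve_fit

def breakage(ecs, a, b):
    """Breakage function of the form t10 = A * (1 - exp(-b * Ecs))."""
    return a * (1.0 - np.exp(-b * ecs))

# Illustrative drop weight results: three input energies (kWh/t)
# and the measured t10 (%) for one sample.
ecs = np.array([0.25, 0.60, 1.00])
t10 = np.array([10.9, 20.9, 33.4])

(a, b), _ = curve_fit(breakage, ecs, t10, p0=(50.0, 1.0))
print(f"A = {a:.1f}, b = {b:.2f}, A*b = {a * b:.1f}")
```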
Several authors (e.g., Bye et al., 2011) have used the product of the parameters fitted to the breakage function (the A*b value) to relate the t10 value directly to milling energy consumption. Although in reality the achieved throughput is a function of a host of other variables, it has been demonstrated that there is some relationship, and hence A*b could be a good variable for predicting throughput for a target grind size. This raises the issue of the impact of selecting a 'controlling' or 'target operational response variable' for estimation and allowing others to fluctuate in response to operation with the objective of meeting the targeted response.
This behaviour arises from the interaction of the ore characteristics with an operational strategy, not strictly from the properties of the ore being treated alone, and hence could be misleading if estimated into the block model without careful and precise documentation of the assumptions and models used.
Ideally, in the value chain modelling, all response variables should be determined 'at run time'; e.g., the value chain model constraints would include a policy that the process targets a grind size of P80 of 75 µm and allows the feed rate to fluctuate. A separate run of the value chain model might alternatively target a 1000 t/h feed rate and let the grind size oscillate in full response to the input rock properties. In both cases the relationship between the primary variables and response variables will be very different, so the primary variables should be stored in the block model and the response and operating strategy should be calculated in the value chain model.
Figure 45: Plot showing broad correlation between average long run energy consumption
in Semi Autogenous Grinding (SAG) mills and average orebody A*b values (Daniel, Lane and
McLean, 2010).
[Chart: predicted throughput against estimated power consumption in the block (kWh/t, 0–20).]
Figure 46: Relationship between power consumption and throughput for a target grind
size.
The mine plan was simplified to a layer-by-layer plan to assign each block a mining sequence; the blocks mined can then be assigned a time to crush based on their t10, which can be accumulated into daily, weekly and monthly throughput.
Several pathways are possible to convert the sample data into block estimates of
throughput assuming that the process will be constrained to deliver a fixed size
distribution. These options are briefly described in Table 17.
Pathway 1
  Point data derivation: break the sample, sieve and determine the percentage passing 1/10th of the original particle size at three energy levels.
  Pre-estimation calculation: none; the three t10 values are treated as point data.
  Spatial estimation: spatially estimate (ordinary kriging) the three t10 values.
  Post-estimation calculation: fit the breakage function to the block t10 values, derive the A and b values, calculate A*b, then calculate energy consumption and throughput in the block.
  Block scale prediction: throughput.

Pathway 2
  Point data derivation: as for pathway 1.
  Pre-estimation calculation: fit the function to the three points at each sample location and derive the A and b values.
  Spatial estimation: estimate the A and b values.
  Post-estimation calculation: multiply the block A and b values to obtain A*b, then calculate energy consumption and throughput in the block.
  Block scale prediction: throughput.

Pathway 3
  Point data derivation: as for pathway 1.
  Pre-estimation calculation: fit the function to the three points at each sample location, derive the A and b values and calculate the A*b product.
  Spatial estimation: spatially estimate and/or simulate A*b.
  Post-estimation calculation: calculate energy consumption and throughput in the block.
  Block scale prediction: throughput.

Pathway 4
  Point data derivation: as for pathway 1.
  Pre-estimation calculation: fit the function to the three points at each sample location, derive the A and b values, calculate A*b and calculate the energy consumption.
  Spatial estimation: estimate energy consumption.
  Post-estimation calculation: calculate throughput in the block.
  Block scale prediction: throughput.

Pathway 5
  Point data derivation: as for pathway 1.
  Pre-estimation calculation: fit the function to the three points at each sample location, derive the A and b values, put A*b through the transfer function and calculate throughput at the point.
  Spatial estimation: spatially estimate and/or simulate the throughput variable.
  Post-estimation calculation: none.
  Block scale prediction: throughput.
  Comments: as throughput is a rate variable it is subject to constraints and is limited to some maximum and minimum; this approach may estimate infeasible throughput values.

Pathway 6
  Point data derivation: as for pathway 1; where there are no samples down the core, the geophysical relation to t10 is used to augment the core values.
  Pre-estimation calculation: use the correlation between t10s and the geophysical response to calculate t10 values at un-sampled locations down the core.
  Spatial estimation: co-simulate or estimate the three t10 values.
  Post-estimation calculation: in each block use the three t10s to fit the curve, derive the A and b values, multiply them to obtain A*b, then use A*b through the function to predict throughput.
  Block scale prediction: throughput.
  Comments: the t10s are a proportion by mass of the block, thus the variable is additive.

Table 17: Options for use of spatial data to estimate process response variables.
Pathway one
In this pathway, the individual laboratory results from crushing each sample are converted to three t10 values treated as point data. The three t10 values are spatially estimated using ordinary kriging.
The resulting values of t10 in each block were analysed and a least squares curve was
fitted. The A*b value for each of these curves is used deterministically to calculate
power consumption and throughput. Summary statistics for the block estimates are
shown in Table 18.
Statistic        t10_e1   t10_e2   t10_e3   Model A   Model b   A*b     Power         Predicted throughput   Time to treat block   Days
                                                                        consumption   (proportion of max     (portion of day)      treated
                                                                                      possible)
Minimum          7.58     16.63    26.71    35.45     0.06      34.98   5.46          0.65                   0.31                  0.31
Average          10.92    20.92    33.39    60.05     0.92      51.17   7.50          0.96                   0.33                  266.58
Maximum          13.79    24.01    39.48    562.93    1.75      65.84   11.65         1.00                   0.48                  522.60
Std. deviation   1.15     1.35     2.34     23.70     0.24      5.35    0.99          0.07                   0.03                  150.54
Coeff. var. %    10.53    6.47     7.00     39.47     25.75     10.45   13.20         6.80                   7.94                  56.47
Table 18: Summary statistics for the estimation and calculation of block scale properties
for pathway 1.
Pathway two
In this pathway, the t10 values at each point are plotted as described in Figure 44 and the A and b parameters fitted. The estimation process estimates the A and b values. Once the estimation is complete, the A and b values in each block are multiplied together to generate the A*b product, which is used to calculate the resulting energy consumption and throughput (Table 19).
Statistic        t10_e1   t10_e2   t10_e3   Model A   Model b   A*b      Power         Predicted throughput   Time to treat block   Days
                                                                         consumption   (proportion of max     (portion of day)      treated
                                                                                       possible)
Minimum          7.58     16.63    26.71    44.53     0.43      42.64    2.64          0.83                   0.31                  0.31
Average          10.92    20.92    33.39    65.28     0.96      60.51    6.26          0.99                   0.32                  254.37
Maximum          13.79    24.01    39.48    250.46    2.05      120.73   9.19          1.00                   0.38                  505.09
Std. deviation   1.15     1.35     2.34     20.20     0.18      10.95    1.16          0.03                   0.01                  145.46
Coeff. var. %    10.53    6.47     7.00     30.94     18.51     18.10    18.55         2.78                   3.08                  57.18
Table 19: Summary statistics for the estimation and calculation of block scale properties
for pathway 2
Pathway three
In this pathway, the t10 data are used to derive a breakage function, the A and b
values are multiplied, and this product is estimated. The statistics are summarised
in Table 20.
Statistic        t10_e1   t10_e2   t10_e3   Model A   Model b   A*b     Power         Predicted throughput   Time to treat block   Days
                                                                        consumption   (proportion of max     (portion of day)      treated
                                                                                      possible)
Minimum          7.58     16.63    26.71    44.53     0.43      38.37   5.48          0.73                   0.31                  0.31
Average          10.92    20.92    33.39    65.28     0.96      51.93   7.36          0.97                   0.32                  263.96
Maximum          13.79    24.01    39.48    250.46    2.05      65.57   10.43         1.00                   0.43                  519.13
Std. deviation   1.15     1.35     2.34     20.20     0.18      5.37    0.95          0.06                   0.02                  149.94
Coeff. var. %    10.53    6.47     7.00     30.94     18.51     10.34   12.84         5.96                   6.68                  56.81
Table 20: Summary statistics for the estimation and calculation of block scale properties for
pathway 3.
Pathway four
This pathway generates estimates of power consumption values that are then
converted to throughput by calculation. The summary statistics for the outputs of
this process are given in Table 21.
Statistic        t10_e1   t10_e2   t10_e3   Model A   Model b   A*b     Power         Predicted throughput   Time to treat block   Days
                                                                        consumption   (proportion of max     (portion of day)      treated
                                                                                      possible)
Minimum          7.58     16.63    26.71    44.53     0.43      38.37   5.52          0.57                   0.31                  0.31
Average          10.92    20.92    33.39    65.28     0.96      51.93   7.57          0.95                   0.33                  269.98
Maximum          13.79    24.01    39.48    250.46    2.05      65.57   13.34         1.00                   0.55                  527.07
Std. deviation   1.15     1.35     2.34     20.20     0.18      5.37    1.04          0.07                   0.03                  152.22
Coeff. var. %    10.53    6.47     7.00     30.94     18.51     10.34   13.69         7.32                   8.45                  56.38
Table 21: Summary statistics for the estimation and calculation of block scale properties
for pathway 4.
Pathway five
In this pathway, the t10 data are used to derive A*b values and these are used to
calculate the energy consumption and the throughput. The point scale calculated
throughput values are then used to generate estimates of throughput at block scale.
The statistics of the output are given in Table 22.
Statistic        t10_e1   t10_e2   t10_e3   Model A   Model b   A*b     Power         Predicted throughput   Time to treat block   Days
                                                                        consumption   (proportion of max     (portion of day)      treated
                                                                                      possible)
Minimum          7.58     16.63    26.71    44.53     0.43      38.37   5.52          0.71                   0.31                  0.34
Average          10.92    20.92    33.39    65.28     0.96      51.93   7.57          0.93                   0.34                  274.86
Maximum          13.79    24.01    39.48    250.46    2.05      65.57   13.34         1.00                   0.44                  538.14
Std. deviation   1.15     1.35     2.34     20.20     0.18      5.37    1.04          0.06                   0.02                  155.66
Coeff. var. %    10.53    6.47     7.00     30.94     18.51     10.34   13.69         6.08                   6.53                  56.63
Table 22: Summary statistics for the estimation and calculation of block scale properties
for pathway 5.
Although pathways 5 and 6 have been given in Table 17 above, the outputs of these pathways will be discussed in Chapter 6, as they require a dynamic process model for their evaluation.
The distribution data for each of the variables calculated and/or estimated are
shown in Figure 47.
Figure 47: Histograms of the variables that are calculated and estimated in pathway 1.
In these histograms, the data have been grouped by the resulting throughput factor.
This figure clearly shows the relationship between the variables used to derive the
throughput indicator. These distributions can be contrasted with the histograms for
pathway 3 depicted in Figure 48.
Figure 48: Histograms of the variables that are calculated and estimated in pathway 3; data have been grouped by predicted throughput quartiles.
In pathway 3 the estimation of the A*b variable results in a distribution that is less dispersed than that for pathway 1, ultimately producing a far higher estimate of the throughput factor.
[Chart: weekly throughput (approximately 4500 to 6100) over 50 weeks for each estimation pathway: t10 estimated; A and b values estimated; A*b estimated; power consumption estimated; throughput estimated.]
Figure 49: Plot showing the variability in weekly throughput and a table with summary
statistics for each of the variables calculated through each pathway.
Figure 28 depicts a pathway to generate a spatial model that depends on the sample data available, the variable being considered, and the intended use of the estimate and orebody model.
Geophysical tools can yield data that, although weakly correlated with destructive test results, achieve far higher spatial coverage at far lower cost than destructive tests. Using these additional data has the potential to improve the estimates of spatial rock characteristics.
The effective use of geophysical tools, however, requires careful attention to the accurate location of the geophysical data and careful, repeated on-site calibration of the sondes used. Even a small deviation of the order of 0.5 m can destroy the correlation required to make these data useful. As demonstrated here, the acquisition of geophysical data from both the core and the hole from which it is extracted can be used to correct core location data and assist in determining the relationship between the minerals and geology of the rock that drive the measured geophysical response.
The collection of destructive data from small rock specimens requires that specific attention be paid to the execution of the tests themselves. As it is impossible to crush the same rock twice, the quality control measures are centred on the repeatability of the testing procedure. Indications are that the introduction of geophysical measurements of core just before it is destructively tested would improve laboratory quality control. This would also help to develop understanding of the relationship, at core scale, between the geophysical responses and the measured destructive response.
Geophysical measurements can also be used to derive the elastic moduli of the rock. These in turn have been shown to be correlated with the destructive t10 data.
The massive differences in scale between laboratory test work and operations require that existing approaches to up-scaling be carefully evaluated. The impact of the size of specimens used in destructive rock property tests on data dispersion and variogram range requires further investigation. Larger specimens may reduce the error in block scale estimates of rock properties.
In the next chapter, the use of spatial rock characteristic estimates in process
simulations will be explored.
6.1 Introduction
The chapter begins with a review of open pit mine models and a discussion of their limits and constraints. A method to model and simulate the process of ore extraction is provided, followed by a description of the outputs of the mining model and its use in the unit process models in the treatment plant simulation.
Three increasingly complex diamond recovery models can be used to estimate the metallurgical recovery factor. The approaches are depicted in Figure 50 and are described below.
The micro-macro diamond model approach is suited to early stage projects where
little is known about the deposit. In such a model only a few macro diamond samples
are available, but the relatively large number of micro diamonds (of the order of hundreds of stones per kilogram) that occur in small samples of diamond bearing ore can be
used to model the in situ diamond distribution (Chapman and Boxer, 2003; Caers
and Rombouts, 1996). This modelled in situ size distribution can be used to predict
the abundance of coarse diamonds and in some cases the expected macro diamond
recovery. This requires the use of relatively broad assumptions about mining and
treatment efficiencies to predict the expected diamond recovery at a specified top
and bottom size cut off.
The granulometry based liberation and lock-up model is suited to projects that have
a well-conceived overall process design and treatment flowsheet, but where there
is limited information on unit process efficiency and the impact of mining methods
or the ore properties on process rate, efficiency and cost. Several parameters of this
model are usually inferred from other operations. The ‘granulometry liberation loss
model’ relates the size distribution of the recovered diamonds to the discarded
kimberlite particles to estimate the ‘locked’ diamonds; i.e., those that are not
sufficiently freed from the host ore and are unrecovered or lost. The model provides
sufficient insight to begin trade-off studies between different flowsheet options.
A more detailed approach that uses calibrated population balance models can only
be taken when sufficient inputs are available to calibrate accurately the unit process
models. The population balance models can be adapted to include several unit
processes that are all impacted by the variable and uncertain properties of the
treated rock.
In most diamond operations a mass balance model that has several unit processes,
including at least a comminution circuit, a dense media separation circuit and a final
recovery plant, is required. With this model it is possible to estimate how the
changes in the feed characteristics impact on the recovered diamond grade and size
distribution.
[Figure: micro diamond model with global geometry for pre-feasibility and option evaluation; granulometry model for feasibility and operation; process simulation.]
Figure 50: A schematic depiction of approaches to estimating the recovery factor for
different project maturities.
The design and selection of a mining process determines the sequence in which the ore arrives at the process plant and the extraction methods that are used to fracture the ore and transport it either directly to the plant, to a stockpile, or to a dump as waste. Design and optimisation cover the ultimate pit, the mining sequence, the methods used to plan the schedule and sequence of blocks to be mined, and models of the impact that drilling, blasting and haulage have on the physical rock characteristics (Dowd, 1976; Lane, 1998).
These usually aim first to determine an ultimate pit shell which contains the ore that is profitable given the economic constraints of the project (Lerchs and Grossmann, 1965) and the expected contents and value of each mining block. This ultimate maximum size
excavation is then ‘optimised’ in terms of the sequence of push-backs or cuts to
excavate the ultimate pit. The schedule is then derived by determining the fleet size
and targeted treatment tonnage over relatively long periods. The result of the mine
design work is to assign a sequence to each of the selective mining units (SMU) in
the block model. This is discussed in more detail later in this chapter.
The initial models considered, which have been developed in the mining industry, require several assumptions for their predictions to be valid.
Several liberation models have been developed; according to Napier-Munn et al. (1999), two primary approaches can be distinguished.
The first approach is to evaluate the rock texture and then to develop predictive
models based on a mathematical description of the texture to relate the change in
the size of the rock to the change in liberation. These models include work by Gaudin
(1939), Weigel and Li (1967) and King (Beniscelli et al., 2000). They provide an estimate of the probability that fractures will intersect a diamond for a given stone density and fracture size.
The second approach is to gather feed and progeny data from a crushing process
and analyse the degree to which valuable minerals have been liberated. Weedon
(1992) carried out several tests on crushed rock particles and reviewed the degree
to which the mineral of interest had been liberated. These data were then used to
establish characteristic curves that described the relationship between final rock
size and the expected liberation. Morell et al. (Box and Draper, 1987) demonstrated
that the liberation was largely independent of the process used to achieve
comminution. As a result, liberation could be predicted by a measure of the size
distribution of the contained mineral and the size distribution for the crushed ore.
Using this finding they could model and predict the liberation of sphalerite using a
three stream, two process population balance model. The three streams that were
modelled were the gangue, the locked valuable component and liberated valuable
component. The gangue and the liberated valuable component are processed
through conventional population balance model crushers (Box and Draper, 1987).
The locked valuable component stream is also processed through a conventional
crusher model and its products, locked and liberated mineral streams and gangue,
are added to the other products emerging from the other crusher models. Gay
(2002) has also produced similar models that reconstruct the parent composition
from the progeny that is observed in both valuable and discard streams.
This model can be used to determine the quantity and size distribution of diamonds
that will be recovered from each estimated unit of ore in each period of mining.
As diamond projects move from prefeasibility to the feasibility study stage, more sampling of the orebody is carried out to provide additional information on the diamond grade, size distribution and value, as well as the spatial nature of waste and kimberlite characteristics. This information typically includes several types of data.
The unit process models for comminution and DMS separation are data dependent and require the fitting of model parameters that are consistent with the observed data.
The fitting process is iterative and is assumed to be valid when the errors between
the observed data and the model are minimised. In brownfield situations this can be
time consuming but is generally achievable with well-considered and executed
process surveys. In greenfield situations it is necessary to rely on several diverse
sources of data to estimate process model parameters and to create a feasible
process simulation (i.e. one that converges to a stable solution.).
The primary focus of this research is to model the impact of variable and uncertain
rock strength on comminution, liberation and recovery of diamonds. There is a tacit
assumption that the equipment is well maintained and effectively operated. It is
however possible that operational personnel can at times have an influence on
process performance by adjusting process parameters (e.g., crusher gap setting).
Changes to the operating parameters of crushers, screens and dense media
separation units can either increase or decrease recoveries but will increase the
variability of the process output (Deming, 1986). These impacts have not been explicitly considered in this research work, though some of them may contribute to the overall variance of the data that are obtained from operating mines; this is an area for further research.
Usually, in the case of open pit mining of kimberlite pipes, all the kimberlite material is mined and so the mined excavation extends to the boundary of the kimberlite. The decision to leave sub-economic material in the pit is usually constrained by the low structural strength of kimberlite and the need to ensure access to payable kimberlite. In some cases where the pit is large enough (e.g. Williamson mine) areas of sub-economic kimberlite can be mined around, although this is not common. In mines that contain relatively small un-payable material volumes (e.g. Ellendale) a variety of ore/waste selection and rejection (grade control) practices are used to determine the ideal destination for mined material, i.e. waste, sub-grade or plant feed. Given the difficulty and cost of sampling for grade, these practices are often based on a combination of the assigned grade in the resource/reserve model, a visual assessment of the kimberlite as it is exposed, and the ease with which the location of the pipe contact with waste rock can be defined.
In the ‘Integrated Evaluation Model’ a method to replicate the block selection and destination assignment has been developed. Each block in the grade model is assigned a planned sequence number derived from the long-term mine plan, which is in turn produced using conventional pit optimisation algorithms. The resulting schedule is derived from the integrated model, which responds to the interaction of the orebody properties and the constraints of the mining method.
The mining process will impact on the rock strength properties, requiring some modifier to the primary properties that have been spatially estimated or simulated into the block model. The relationship between blasting parameters and diamond distribution has been investigated by Guest (1997), who found that within four blast hole diameters most of the contained diamonds would be destroyed.
To extract material from the mined pit, many operations use a drilling and blasting method. Increasing the efficiency and reducing the cost of this operation has been the focus of considerable research (Wilmott, 2004; McGee, 1995). The blasting will produce a range of rock sizes, and the impact that this has on the process includes the total power required as well as the mass flow in the primary section of the comminution circuit. The impact on total grind will, however, primarily be related to the total energy used in the blast, commonly referred to as the powder factor and measured in kg of explosives used per tonne of ore mined.
It is also important to consider the residence time of the ore post blasting in the pit or on stockpiles, where it is exposed to the atmosphere and will degrade to some extent. In Kimberley in the late 1800s this process of natural weathering was used to limit the energy required to crush the kimberlite. At some sites the total clay content and the clay species can be used to build a relationship between the proportion of the feed that has been exposed to rain and the overall comminution achieved.
There is invariably some form of stockpiling on diamond mines, ranging from small surge capacity bins (e.g. Snap Lake) to large +100,000 tonne stacker reclaiming systems (e.g. Premier Mine). The impacts of blending and weathering need to be correctly considered in any model of the operation, as this process can, if correctly operated, reduce the variability of grade and other important ore characteristics such as waste rock content, UCS etc. (Robinson, 2004; Everett, 2001). The extremes can be modelled relatively simplistically: either block characteristics are not adapted at all, which implies no blending, or, for each characteristic of interest, the entire load in the stockpile is averaged at each time increment in the simulation. More realistic approaches include creating sub-parcels of several blocks that are deemed to be co-located, with the degree of mixing controlled by some function of both residence time and the sequence of loading and withdrawal.
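A minimal sketch of these three mixing regimes is given below. It assumes a single 'grade' characteristic and a fixed-size mixing window as a crude residence-time proxy; the block structure, names and values are illustrative only and are not taken from an operating model.

# A sketch of the three stockpile-mixing regimes discussed above. The block
# structure, 'grade' key and window size are illustrative assumptions only.
from collections import deque

def no_blending(blocks):
    """Extreme 1: blocks pass through unchanged (no mixing at all)."""
    return list(blocks)

def full_blending(blocks, key="grade"):
    """Extreme 2: the whole stockpile load is averaged at each increment."""
    mean = sum(b[key] for b in blocks) / len(blocks)
    return [{**b, key: mean} for b in blocks]

def windowed_blending(blocks, window=3, key="grade"):
    """Compromise: mix each block with its co-located neighbours."""
    out, buffer = [], deque(maxlen=window)
    for block in blocks:
        buffer.append(block[key])
        out.append({**block, key: sum(buffer) / len(buffer)})
    return out

blocks = [{"grade": g} for g in (1.2, 0.4, 2.0, 0.9, 1.5)]
print([round(b["grade"], 2) for b in windowed_blending(blocks)])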
Integration and adaption of population balance process models with an ore stream
that has multivariate characteristics is required to estimate the recovery factor.
Particular attention is given to the derivation of the parameters of the unit process
models, and how these are influenced by rock properties, and how they can be
perturbed realistically to quantify the range and variability of the process response
to changing rock properties.
Population balance models work by creating several ‘bins’ or ‘intervals’ for each rock characteristic (e.g. size intervals, density intervals, grade intervals) that are carried through the flowsheet; the masses entering and leaving each interval at each stage of the flowsheet are required to balance. The balancing is achieved through a set of iterative calculations and mathematical models that are required to converge to some pre-set error tolerance. Using this approach for diamond processing it is possible to model the ore size distribution and the ore density distribution, as well as the diamond size distribution, and to carry these through the flowsheet.
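The sketch below illustrates the binning idea: an ore stream held as a mass matrix over size and density classes, with the balance checked across a deliberately trivial unit process. The bin edges and split fraction are illustrative assumptions, not calibrated values.

# A sketch of a binned ore stream: mass held per (size class, density class),
# with a mass-balance check across a trivial splitting unit. Bin edges and the
# fixed split fraction are illustrative assumptions, not calibrated values.
import numpy as np

size_edges = [32.0, 16.0, 8.0, 4.0, 1.0]        # mm, descending class bounds
density_edges = [2.5, 2.7, 2.9, 3.1]             # g/cm3, ascending class bounds

rng = np.random.default_rng(0)
# feed[i, j] = tonnes in size class i and density class j
feed = rng.uniform(10.0, 100.0, size=(len(size_edges) - 1, len(density_edges) - 1))

def split_unit(stream, frac_to_product=0.6):
    """A trivial unit: a fixed fraction of every bin reports to product."""
    product = frac_to_product * stream
    reject = stream - product
    return product, reject

product, reject = split_unit(feed)
# The balance closes when each bin's inflow equals its summed outflows.
assert np.allclose(feed, product + reject)
print(f"feed {feed.sum():.1f} t -> product {product.sum():.1f} t + reject {reject.sum():.1f} t")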
A survey of De Beers’ operations provides an insight into the ranges associated with
the parameters that can be input into unit processes. The rock properties used
include the expected head feed size distribution, the strength of the rock expressed
by an energy size relationship using sampled t10 and a fitted breakage function, and
a distribution of density per size class.
The external coordinates and their changes impact on the internal coordinates. This
relationship can be modelled in several ways and sets up a formalised framework
for developing and testing simulation of mineral processing flow sheets.
$\psi(x)\,dx$ is the number fraction of particles per unit volume of phase space;
$\mathcal{R}(x)$ is the rate at which particles at coordinate position $x$ are destroyed; this rate is specified as mass per unit volume of phase space per unit time;
$W_{in}$ is the mass rate at which solid particles enter the system;
$W_{out}$ is the mass rate at which solid particles leave the system;
$a(x;x')$ is the distribution density function for particles produced by attrition and wear of particles at $x'$.
$$\frac{\partial}{\partial t}\int_{R_c} N\psi(x)\,dx = -\int_{S_c} N\psi(x)\,u\cdot n\,d\sigma - D + B - Q + A \qquad \text{Equation 14}$$
where $n$ is the ‘outward pointing’ normal vector to the phase space surface $S_c$ at point $x$.
The destruction, birth, material removal and arrival terms are described below.
$$D = \int_{R_c} \frac{\mathcal{R}(\psi(x),x,F[\psi(x)])}{\bar{m}(x)}\,dx \qquad \text{Equation 15}$$
Where D is the number of particles broken per unit time in the control volume 𝑅𝑐 .
The rate of breakage is a function of the number distribution of both the particles in
the control volume and of the entire distribution function.
$$B = \int_{R_c}\frac{1}{\bar{m}(x)}\int_{R'(x)} \mathcal{R}(\psi(x'),x',F[\psi(x)])\,b(x;x')\,dx'\,dx - \int_{R_c}\frac{N}{\bar{m}(x)}\int_{R''(x)} \psi(x')\,\mu(x')\,F[\psi(x)]\,a(x;x')\,dx'\,dx \qquad \text{Equation 16}$$
In Equation 16, $R'(x)$ and $R''(x)$ are the regions of phase space from which particles that have changed their size, either by breakage or attrition, can enter the phase volume $dx$ around the point $x$.
$$A = W_{in}\int_{R_c}\left(\frac{N}{M}\right)\psi_{in}(x)\,dx \qquad \text{Equation 17}$$

$$Q = \sum_{j} W_{out\,j}\int_{R_c}\left(\frac{N}{M}\right)\psi_{out\,j}(x)\,dx \qquad \text{Equation 18}$$
The above equations can be integrated over the surface of the reference region. King (2001), however, suggests that the application of the divergence theorem, which equates the integral over the phase space volume $R_c$ with that over the enclosing surface $S_c$, leads to a more tractable form,
which, when expanded, gives the working equation for the steady state operation
as:
$$N\int_{R_c} \nabla\psi(x)\,dx + D + B = -Q + A$$

$$N\int_{R_c} \nabla\psi(x)\,dx + \int_{R_c}\frac{\mathcal{R}(\psi(x),x,F[\psi(x)])}{\bar{m}(x)}\,dx + \int_{R_c}\frac{1}{\bar{m}(x)}\int_{R'(x)} \mathcal{R}(\psi(x'),x',F[\psi(x)])\,b(x;x')\,dx'\,dx - \int_{R_c}\frac{N}{\bar{m}(x)}\int_{R''(x)} \psi(x')\,\mu(x')\,F[\psi(x)]\,a(x;x')\,dx'\,dx = -\sum_{j} W_{out\,j}\int_{R_c}\left(\frac{N}{M}\right)\psi_{out\,j}(x)\,dx + W_{in}\int_{R_c}\left(\frac{N}{M}\right)\psi_{in}(x)\,dx \qquad \text{Equation 20}$$
In the preceding section the fundamental population balance model has been
reduced to an appropriate discrete form with the region 𝑅𝑐 representing an
appropriate particle class, and the material entering and leaving each size class is
calibrated using data acquired from existing processes.
The unit processes that have a material impact on estimating the release, recovery
and loss of diamonds in the processes of comminution, separation and recovery
have been adapted for use in the integrated evaluation model.
There are three main processes in the diamond process flow sheet: size reduction
achieved in several steps, a density separation and final diamond recovery using a
combination of x-ray and magnetic recovery. Models used to simulate or emulate
these processes are described briefly below.
Crushing models
There are several types of crushers that are used in the diamond industry, including
jaw, gyratory, short-head and high-pressure roll crushers. In each part of the
flowsheet different modes of crushing are used. The key to understanding the
overall liberation is however the sequence of size distributions that are achieved.
Measuring and describing the relationship between energy input and the degree of fracture produced is not trivial and poses several challenges. How these problems are addressed will, by and large, dictate the limits and usefulness of any subsequent process modelling framework.
The earliest references reviewed show that initial work was aimed at developing a relationship between energy input and fracture achieved. The difficulty in describing these relationships lies in the complex non-linear relationship between the energy input into the crushing device and the measurement and quantification of ‘breakage’ and ‘fracture’ as the reduction ratio.
Rittinger:
$$W = K_1\left[1 - \frac{1}{R}\right]\cdot\frac{1}{a} \qquad \text{Equation 22}$$
Kick:
$$W = K_2\left[\frac{\log R}{\log 2}\right] \qquad \text{Equation 23}$$
Bond:
$$W = K_3\left[1 - \left(\frac{1}{R}\right)^{1/2}\right]\cdot\frac{1}{a^{1/2}} \qquad \text{Equation 24}$$
Holmes:
$$W = K_4\left[1 - \left(\frac{1}{R}\right)^{r}\right]\cdot\frac{1}{a^{r}} \qquad \text{Equation 25}$$
Where:
$R$ is the particle size reduction ratio, usually expressed as the median size of the feed to the crusher divided by the median size of the product;
$a$ is a product grind size, usually represented by the side dimension of the square aperture through which 80% of the product would pass after crushing.
Rittinger’s theory suggested that the energy necessary to reduce particle size is proportional to the increase in specific surface area. It focuses on modelling the rupture of chemical and physical bonds in the material.
In 1880 Kick suggested an energy-fracture relationship. It was translated from the original by Stadler to suggest that “the energy required for producing analogous changes in configuration in geometrically similar bodies of equal technological state varies as the volumes of these bodies.” Kick’s law centred on the energy required to deform the particle to its elastic limit. The ‘laws’ of Kick, Bond and Rittinger remained in use until 1957, when Holmes came to the forefront of the discussion.
Holmes’ model begins with the consideration of the failure of a cube of rock of
dimension D. Using Hooke’s law and three assumptions Holmes demonstrated that
by expansion of multiple failure events it can be shown that the energy required to
reduce a unit weight of cubes of side D can be formulated as in Equation 26:
$$E_w = \frac{3k\,D^{-r}\,(R^{r} - 1)}{\rho\,(2^{r} - 1)} \qquad \text{Equation 26}$$
Where:
𝑟 is the variable parameter of the rock type considered referred to by Holmes as the
“Kicks law deviation exponent.”
𝑘 is another parameter of the model that is used to describe the failure boundary
conditions.
$$Y = 80\,(X/a)^{m} \qquad \text{Equation 27}$$
Where $Y$ is the cumulative percentage passing size $X$, $a$ is the size at which 80% passes, and $m$ is a distribution parameter.
The fourth assumption implies that some consideration must be given to the proportion of energy that is used to propagate fractures versus the energy that is input into the device. This should take the form of an efficiency factor $\eta$, which will differ for different sizes of broken material.
$$W = K\left(\frac{R^{r} - 1}{R^{r}}\right)\cdot\frac{1}{a^{r}} \qquad \text{Equation 28}$$
Where:
$R = F/a$ is the reduction ratio; $K$ is a parameter which varies to some extent with $a$; and $a$ is the size at which 80% of the product passes.
Holmes then suggests that the parameters in the model should be considered as
‘engineering measures’ rather than laws. He also suggests that the constants used in
the models are as much a function of the rock properties as the machine properties.
This substantiates the requirement for a framework to formalise the relationships
between primary, response and process properties of any comminution system. The
main shortcomings of direct explicit rock fracture models include the inability to
directly access, measure and hence model the internal so-called ‘co-ordinate shifts,’
and that these internal shifts are the result of an interdependent, potentially non-
linear function of the material properties, the machine properties and operating
characteristics of the comminution device. A further challenge to the explicit
modelling framework, as discussed by King (2001), is that the energy will propagate into the progeny in several ways, and differently into different fractions of the progeny as it evolves through the comminution device.
Repeating breakage events using a controlled apparatus that loads the particle in
some way to failure of single particles of a single mineral of a selected size can
however give some indication of the overall relationship between input energy,
probability of failure at differing energy levels and final progeny distribution (this
of course has to assume that the nature of flaws in the parent particles are randomly
distributed and is very similar between the particles tested, and that sufficient
particles of a given size can be tested to demonstrate the ‘average’ energy fracture
response). Tavares and King (1998) conducted such tests on taconite, quartz, sphalerite and galena particles and demonstrated that the cumulative distribution of energies required to cause fracture can be represented using a log normal distribution; they represented this relationship as per Equation 29.
$$P(E; d_p) = G\left(\frac{\ln(E/E_{50})}{\sigma_E}\right) \qquad \text{Equation 29}$$
With G being the cumulative Gaussian distribution and 𝐸50 being the median
fracture energy which can be said to vary with size as per the relationship described
in Equation 30:
$$E_{50} = E_{\infty}\left(1 + \frac{d_0}{d_p - d_{p\,min}}\right)^{\varphi} \qquad \text{Equation 30}$$
The values of $d_0$ and $\varphi$ are material specific and have been measured for several common minerals. $d_{p\,min}$ is the size below which particles absorb energy but do not fracture. $E_{\infty}$ is the median fracture energy for large particles; in this study ‘large’ was deemed to be above 1 cm. Using the models provided by King (2001) it is possible to plot the relationship between the energy input and the probability of failure, shown in Figure 51.
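The sketch below evaluates Equations 29 and 30 for a single particle size. All parameter values are illustrative stand-ins, not the measured mineral data plotted in Figure 51.

# A sketch of Equations 29 and 30: probability that a particle of size dp
# fractures at specific input energy E. All parameter values are illustrative.
from math import erf, log, sqrt

def median_fracture_energy(dp_mm, e_inf=50.0, d0=1.0, phi=1.5, dp_min=0.0):
    """Equation 30: median fracture energy E50 as a function of particle size."""
    return e_inf * (1.0 + d0 / (dp_mm - dp_min)) ** phi

def fracture_probability(e_jkg, dp_mm, sigma_e=0.6):
    """Equation 29: P(E; dp) = G(ln(E/E50)/sigma_E), with G the Gaussian CDF."""
    z = log(e_jkg / median_fracture_energy(dp_mm)) / sigma_e
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for e in (25.0, 50.0, 100.0, 200.0):             # specific energies in J/kg
    print(f"E = {e:6.1f} J/kg -> P(fracture | 5 mm) = {fracture_probability(e, 5.0):.3f}")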
[Figure: cumulative probability of failure (%) vs log specific particle fracture energy (J/kg) for sphalerite, quartz and taconite.]
Figure 51: A plot of the specific input energy and cumulative probability of failure for a few
selected minerals, size parameter set to 5mm (Adapted after King 2001).
The impact of the size selected also exhibits an exponential form, as shown in the plot in Figure 52. This clearly demonstrates that the measurement and estimation of the energy required to fracture smaller particles is not trivial; as the material is ground ever finer, the distribution of particle fracture toughness will also expand dramatically.
[Figure: median fracture energy (J/kg) vs parent particle size (mm) for quartz and taconite.]
Figure 52: A plot of the median fracture energy for mineral particles of different sizes.
Given the distributions shown in Figure 51 and Figure 52 it is evident that even for
a mineral with a defined composition the range of energy required to achieve the
same degree of breakage varies substantially. This suggests that it would be limiting
to consider a single value for rock toughness per lithology when modelling the
operation of process plant equipment. Rather it is proposed that a variable that
describes the range of 'resistance to breakage' or so-called 'toughness', should be
spatially estimated into the block model. This would describe the rock properties in
a block to be mined and become a variable that can be used as an input into the
process model. This requires a review of approaches to population balance
modelling to determine how the variability and uncertainty in rock toughness can
be realistically incorporated into these models.
The general equation, according to King (2001), for the population balance model
can be written for comminution machines as follows:
$$N\frac{d}{dx}\big(u(x)\psi(x)\big) + \frac{\mathcal{R}(\psi(x),x,F[\psi(x)])}{\beta x^{3}} + \frac{1}{\beta x^{3}}\int_{R'(x)} \mathcal{R}(\psi(x'),x',F[\psi(x)])\,b(x;x')\,dx' - \frac{N}{\beta x^{3}}\int_{R''(x)} \psi(x')\,\mu(x')\,\frac{d\beta x'^{3}}{dx'}\,a(x;x')\,dx' = -\sum_{j} W_{out\,j}\int_{R_c}\left(\frac{N}{M}\right)\psi_{out\,j}(x)\,dx + W_{in}\int_{R_c}\left(\frac{N}{M}\right)\psi_{in}(x)\,dx \qquad \text{Equation 31}$$
$$\bar{m}(x) = \beta x^{3} \qquad \text{Equation 32}$$
A commonly used model for abrasion rates is to assume that the rate is proportional to the surface area. It is then possible to relate the change in the proportion of material of size $d_p$ per unit time to the area of the particle (Equation 33).
$$\frac{\pi}{6}\,\frac{dx^{3}}{dt} = -\frac{k'\,\pi x^{2}}{2} \qquad \text{Equation 33}$$
This suggests that the rate at which particles move in phase space is considered
constant and can be given by (Equation 34):
$$u(x) = \frac{dx}{dt} = -k' \qquad \text{Equation 34}$$
More generally, the rate can be allowed to depend on size:
$$\frac{dx}{dt} = -k(x) = -k\,x^{\Delta} \qquad \text{Equation 35}$$
If $\Delta$ is a constant varying between 0 and 1, then the rate of change of mass with respect to time is given by:
$$\frac{dm}{dt} = -k\,\frac{\pi\rho_s\,x^{2+\Delta}}{2} \qquad \text{Equation 36}$$
$$\psi(x) = \frac{M}{N}\,\frac{p(x)}{\beta x^{3}} \qquad \text{Equation 37}$$
And so, the population balance equation can be converted to the discrete form of Equation 38.
There are several unit process models in use in the industry that are based on the principles described above; these include models by Whiten (1972) and Andersen (1988).
$$P_i = f_i + \tau\sum_{j=1}^{n} b_{ij}\,C_j\,P_j - \tau\,S_i\,P_i \qquad \text{Equation 39}$$
$b_{ij}$ is the fraction of material reporting to size class $i$ when material in size class $j$ is comminuted.
And thus, the proportion within each class can be determined using Equation 41:
The formulae are, however, not in closed form and must be solved iteratively across all sizes and for each class within each size fraction. The factors that are used to define the parameters for the breakage rate and selection function are often derived through experimentation. Often a statistic, commonly the mean, of values derived from an assessment of the comminution performance of several batch tests on small samples is used in the calibration. This approach requires an assumption that the samples are representative of the material that is to be treated in production, and that the change in scale from laboratory machines to full scale operational machines will not result in a substantial bias.
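The sketch below solves a balance of the Equation 39 type by fixed-point iteration, in the spirit of the Whiten crusher model: an internal stream x = f + BCx is iterated to convergence and the product taken as the unselected fraction. The classification and breakage matrices are illustrative, not fitted to any device.

# A sketch of an Equation 39-style crusher balance solved by fixed-point
# iteration: x = f + B C x, product p = (I - C) x. B (breakage) and C
# (classification/selection) below are illustrative, not calibrated.
import numpy as np

f = np.array([40.0, 30.0, 20.0, 8.0, 2.0])      # feed tonnes, coarse -> fine

# Selection for breakage: coarse classes always break, fine ones never.
C = np.diag([1.0, 0.7, 0.3, 0.0, 0.0])

# Breakage matrix: column j is the progeny split of broken class-j material.
B = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0],
    [0.4, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.5, 0.0, 0.0, 0.0],
    [0.2, 0.3, 0.6, 0.0, 0.0],
    [0.1, 0.2, 0.4, 0.0, 0.0],
])                                               # breaking columns sum to 1

x = f.copy()
for _ in range(100):                             # iterate to convergence
    x_next = f + B @ C @ x
    if np.max(np.abs(x_next - x)) < 1e-9:
        break
    x = x_next

p = (np.eye(len(f)) - C) @ x                     # product = unselected material
print("product per class:", np.round(p, 2))
print("mass closes:", np.isclose(p.sum(), f.sum()))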
To use this function in the ‘Integrated Evaluation Model’ the breakage and selection
functions need to be fitted to each comminution device in the process flowsheet and
must also be calibrated for material domains in the kimberlite pipe.
An adjustment factor (T) is derived for each size fraction to determine the percentage of each size fraction that will be selected for breakage. The blue block in this table is
the breakage matrix or breakage function. It describes the post-crushing
distribution of material selected and crushed from each size fraction. The
distributions in the breakage matrix can be multiplied by the proportional fraction
of each size that is selected for breakage to determine the combined size distribution
of material that is selected and broken. This distribution can be added to the
material that was not selected for breakage and passed through the unit unscathed.
In this way the crushed distribution can be determined, shown in the last row of the
table.
The calibration of this model requires several inputs to determine the functions that
underpin selection and breakage. As the material or the unit operating parameters
change, these functions will require recalibration. The model does not have a direct
link to material properties but does provide a large degree of flexibility to model a
range of comminution devices. The derivation of the different breakage functions
for each size has been the focus of a large body of work. More recent work carried
out at the JKMRC has described a form of energy breakage modelling that is based
on measurements made on single particle fracture tests (Napier-Munn et al., 1999).
The framework for this is expanded below, as it provides a way to relate material
properties to expected size distribution of the rock and hence to the diamonds that
will be liberated.
One useful form of the comminution model is based on the idea that the progeny of
breakage events can be based on a mixture of two separate populations and that
each cumulative population can be modelled in the form depicted in Equation 42.
$$B(x;y) \propto \left(\frac{x}{y}\right)^{n} \qquad \text{Equation 42}$$
where $x/y$ is the ratio of the size of the progeny to the size of the original particle.
$$B(x;y) = K\left(\frac{x}{y}\right)^{n_1} + (1-K)\left(\frac{x}{y}\right)^{n_2} \qquad \text{Equation 43}$$
The data gathered from drop weight tests in which a single particle is subject to a
known input energy can be used to determine the size distribution of fine and coarse
particles. These can be plotted on a log-log plot of the size of progeny vs the breakage function value. King (2001) suggests that if $d_r$ is a representative size for a distribution, say the size below which 80 per cent of the particles fall, then the change in breakage energy depends inversely on the initial particle size.
$$f(d_r) = -K\,d_r^{-n} \qquad \text{Equation 44}$$
Where $K$ is a constant. Integrating $f(d_r) = -K\,d_r^{-n}$ with respect to $d_r$ gives:
$$E = -\frac{K}{1-n}\,d_r^{1-n} + C, \qquad n \neq 1$$
Setting the energy to zero at the feed representative size, $0 = -\frac{K}{1-n}\,d_{r1}^{1-n} + C$, eliminates the constant of integration so that:
$$E = \frac{K}{1-n}\left(\frac{1}{d_{r1}^{\,n-1}} - \frac{1}{d_{r2}^{\,n-1}}\right) \qquad \text{Equation 45}$$
where $d_{r1}$ and $d_{r2}$ are the representative sizes of the feed and product respectively.
This form can be used to show how the equation changes for values used by Kick
where n=1, Bond where n = 1.5, and Rittinger where n=2 (Daniel, Lane and McLean,
2010). Single impact tests can be used to determine the progeny distribution that is
achieved for a given energy input. The data produced can then be plotted as the
cumulative percentage passing a given size on a relative size scale.
The distribution of the progeny can be characterised by the percentage passing one-
tenth of its original size (𝑡10 ). This is another form of the representative size 𝑑𝑟
referred to above. In the prior work this size was defined as the 𝑃80 . It is suggested
that given 𝑡10 it is possible to define a relationship that will describe the other points
on the distribution ($t_n$). The value for $t_{10}$ is determined experimentally and is derived from Equation 46:
$$t_{10} = A\left(1 - e^{-b\,E_{cs}}\right) \qquad \text{Equation 46}$$
Where $A$ and $b$ are the fitted impact breakage parameters of the material and $E_{cs}$ is the specific comminution energy.
Once the value of 𝑡10 has been derived then the rest of the expected progeny size
distribution for a specific energy input to a specific material can be determined. This
is often done with truncated distribution functions such as the Rosin-Rammler
described in Equation 47.
$$t_n = 1 - (1 - t_{10})^{\left(\frac{10-1}{n-1}\right)^{\alpha}} \qquad \text{Equation 47}$$
Where $\alpha$ is a fitted shape parameter and $t_n$ is the percentage passing $1/n$ of the original particle size.
It is possible to plot the expected size distribution for a range of progeny sizes using
the above relationships. A worked example from King (2001) is depicted in Figure
53.
[Figure: cumulative percentage passing vs relative particle size $d/d_p'$ for apatite at a range of input energies.]
Figure 53: A plot of energy input vs product size using the t10 approach and a Rosin Rammler
breakage function modified after King (2001).
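The sketch below chains Equation 46 into the tn family of Equation 47 to generate points on an expected progeny distribution. A, b, alpha and the 1 kWh/t energy input are illustrative stand-ins for fitted values.

# A sketch chaining Equation 46 (t10 from specific energy) into Equation 47
# (the tn family). A, b, alpha and the 1 kWh/t energy are illustrative.
from math import exp

def t10_from_energy(ecs_kwh_t, A=65.0, b=0.95):
    """Equation 46: percent passing one tenth of the parent size."""
    return A * (1.0 - exp(-b * ecs_kwh_t))

def tn(t10_pct, n, alpha=1.0):
    """Equation 47: percent passing 1/n of the parent size, given t10."""
    frac = 1.0 - (1.0 - t10_pct / 100.0) ** (((10.0 - 1.0) / (n - 1.0)) ** alpha)
    return 100.0 * frac

t10 = t10_from_energy(1.0)
for n in (2, 4, 10, 25, 50, 75):
    print(f"t{n:<2d} = {tn(t10, n):6.2f} % passing size/{n}")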
Screen models
These models are used to determine the split of material through a screening device
and have an impact on the ultimate size distribution of material that either leaves
the circuit or that is fed to the next crushing or comminution unit process.
There are several versions of the model, but all are based on determining the
probability of particles of a given size being presented to an aperture and passing
the given screen aperture. The model calibration varies depending on the shape of
the aperture, the loading of the screen and the screen deck inclination.
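As a concrete, textbook-style illustration of the probability-of-passage idea, the sketch below compounds a single-presentation geometric passage probability over a number of presentations. The geometric form, wire diameter and presentation count are assumptions, not the calibrated screen model used in this work.

# A textbook-style sketch of a probability-of-passage screen model: a particle
# of size d has a geometric chance of clearing an aperture per presentation,
# compounded over several presentations. All parameters are assumptions.
def passage_probability(d_mm, aperture_mm, wire_mm=1.0, presentations=20):
    if d_mm >= aperture_mm:
        return 0.0                        # oversize can never pass
    # chance the particle lands wholly within the open area on one presentation
    p_single = ((aperture_mm - d_mm) / (aperture_mm + wire_mm)) ** 2
    return 1.0 - (1.0 - p_single) ** presentations

for d in (1.0, 4.0, 7.0, 7.9):
    print(f"d = {d:3.1f} mm -> P(pass 8 mm deck) = {passage_probability(d, 8.0):.3f}")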
DMS Models
$$E_p = \frac{D_{75} - D_{25}}{D_{50}} \qquad \text{Equation 49}$$
Where:
D75 is the density at which 75% of the material reports to the sink fraction.
D50 is the density at which 50% of the material reports to the sink fraction.
D25 is the density at which 25% of the material reports to the sink fraction.
$$P_s = \frac{E_p}{1 - e^{K(D_c - D_{50})}} \qquad \text{Equation 50}$$
Where:
Estimating the proportion per density class per size fraction for a given kimberlite
size distribution is not trivial. The laboratory process for densiometric
determination typically consumes about 70kg of core that is crushed in a jaw
crusher to produce a given size distribution, and then the product is screened into
several size fractions. Each size fraction is then split into density classes using
liquids of ascending density. The products from each sink float split are weighed and
documented. This produces data that can be represented in a table such as in Figure
54.
[Figure: densimetric distribution (percent) by density class for Sample 1 to Sample 4.]
Figure 54: A Densimetric distribution plotted for four samples derived from kimberlite,
crushed to 100% passing 12mm and grouped by density classes.
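A partition-curve sketch for the dense media separation step is given below. It uses a logistic form parameterised by the cut density D50 and the Ecart probable Ep, a common modelling choice consistent in spirit with Equations 49 and 50, though not necessarily the exact parameterisation calibrated in this work; the D50 and Ep values are illustrative.

# A sketch of a DMS partition curve: probability of reporting to sinks as a
# logistic function of particle density, parameterised by D50 and Ep (cf.
# Equations 49 and 50). The logistic form is a common modelling choice, not
# necessarily the exact calibrated model; D50 and Ep values are illustrative.
from math import exp, log

def sink_probability(rho, d50=3.0, ep=0.05):
    # ln(3)/Ep scales the curve so P(d50 - Ep) = 0.25 and P(d50 + Ep) = 0.75
    k = log(3.0) / ep
    return 1.0 / (1.0 + exp(-k * (rho - d50)))

for rho in (2.7, 2.9, 2.95, 3.0, 3.05, 3.52):    # kimberlite ... diamond, g/cm3
    print(f"rho = {rho:4.2f} g/cm3 -> P(sink) = {sink_probability(rho):.3f}")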
• It is, however, evident that as the size distribution changes, the distribution of density within each size class will change. The adaption of the density distribution for a changing size distribution requires a function that relates the change in size distribution to the change in density distributions in each size class. If one were to consider five density classes and 13 size classes one would end up with 65 ‘density by size’ classes, so material breaking out of one of the larger size classes will produce progeny that has some undefined distribution across the density classes of the finer sizes.
This approach can to some extent be informed by the texture of the kimberlite rock
type and will also provide a quantitative approach to classification of textures that
result in distinctly different response variables.
$$y_i = a x^{2} + b x + c \qquad \text{Equation 51}$$
This curve model can then be extrapolated into the larger diamond size classes. This
is commonly referred to as a total content curve (see Figure 55 below).
[Figure: number of stones vs weight in carats on log-log axes.]
Figure 55: A plot of the log of diamond weight vs log of the number of stones in each class
per hundred tonnes per unit interval.
The total stone content can be derived by integrating this function across all size
fractions (Equation 52). This can be used to determine the grade between an upper
and lower cut off size by converting the stones frequency to a mass.
$$\text{total contained stones} = \int_{d_i}^{d_n} y\,dx \qquad \text{Equation 52}$$
Process plants however usually operate in the size range of 1mm to 32mm although
some plants operate beyond this size range (Technical and Financial Report, 2001).
To adapt this curve to estimate recoverable grade there is a requirement to trim the
upper and lower ends of the curve. This is done by fitting a third-order polynomial
to the total content curve at both the top and bottom end of the planned recovery
size envelope. The shape and sharpness of the top and bottom size cut-off curves is
usually determined by the modeller and is based on experience of curves achieved
at several existing operations.
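The sketch below carries out the Equation 52 integration numerically for an illustrative quadratic total content curve (cf. Equation 51), trimmed between a bottom and top cut-off. The coefficients and cut-offs are invented for illustration, not fitted to any deposit.

# A sketch of Equation 52: numerically integrating an illustrative quadratic
# total content curve (cf. Equation 51) between bottom and top size cut-offs.
import numpy as np

def trapezoid(f, x):
    """Simple trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

a, b, c = -0.15, -1.1, 1.8                   # invented curve coefficients
x = np.linspace(np.log10(0.01), np.log10(10.0), 400)   # log10 carats, cut-offs
y = a * x**2 + b * x + c                     # log10 stones / 100 t / unit interval

stones_per_ui = 10.0 ** y
stones = trapezoid(stones_per_ui, x)                    # stones per 100 t
carats = trapezoid(stones_per_ui * 10.0 ** x, x)        # weight each stone class
print(f"{stones:,.0f} stones/100 t and {carats:,.1f} cts/100 t between cut-offs")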
[Figure: grade ln(stones per hundred tonnes per unit interval) vs size ln(carats) for a 5 tonne sample.]
Figure 56: A plot of the logarithm of diamond size vs stone frequency in stones per hundred
tonnes per unit interval.
This trimming can then be used to determine the resource grade and size distribution that is reported at a bottom size cut-off of 1mm. In cases where economics suggest that the lower cut-off should be higher than 1mm, the curve is adjusted to the recommended cut-off in a similar way to that shown in Figure 56. In the absence of operating plant data the trimming of the ends of the curves is also subjective in this method; small increments in the larger sizes have a relatively low impact on the carat recovery but can have a substantial impact on the revenue recovery. The benefits of having an underlying diamond size frequency model when sampling for grade are described in Coward and Ferreira (2004).
Once the in situ total content has been estimated and relatively strict cut-off sizes
applied, the process of diamond recovery and loss can be modelled in more detail.
As diamonds are particulate in nature, having a large range of size distributions, one
of the ways in which the estimated diamond recovery can be modelled is to relate
the comminution of rock to the expected proportion of diamonds that will be
released or liberated from the kimberlite.
The model was originally conceived and applied to operating facilities on the west
coast of Namibia with the aim of trying to calculate the loss of diamonds that arises
from the inability to crush all the material to lower than the smallest contained
diamond (Kleingeld, 1982). Diamonds lost in this way are deemed to be "locked".
The model is centred on an assumption that it is possible to build a relationship
between the recovered size distribution of diamonds and the grind size of the ore
achieved to estimate the diamonds that are still not liberated, and hence discarded
in the tailings. The model was subsequently applied by Ferreira to a number of
kimberlitic deposits. During this time a collaboration with Lantuéjoul(1998) saw
several improvements in the formulation of the model. This author worked with
Ferreira from 1998 to 2008 to improve various aspects of sampling, data collection
and its use in implementing the model at several of De Beer's operating mines.
Ferreira (2013) gives a brief description of the model, however a more detailed
description is presented here due to its importance in this research. The model can
then be used in a variety of ways with total content curves to estimate the expected
recovered diamond distribution.
Model description
The model begins with an assumption that the diamonds that have been recovered
represent only a portion of the total population of diamonds. The total population
of diamonds is deemed to be the sum of the diamonds recovered and the diamonds
that are still ‘locked’ in the discarded tailings. The model assumes that there is a
relationship between the proportion of recovered diamonds within a size class and
the proportion of the discarded kimberlite in an equivalent size class. An additional assumption of the model is that the diamonds are not damaged in the rock crushing process.
A very simple conception of the model can be applied to a process that only includes
crushing of the rock and handpicking out the diamonds. Assume that prior to
crushing all the rock particles were in one size class, say 5mm to 4mm and that the
deposit only contains diamonds in the 5mm to 4mm fraction. If we do not crush the ore any further and carry out a hand sorting, i.e. there are no ore particles less than the 5-4mm size class, it can be reasonably assumed there is 0% liberation. If,
however the grind achieved meant that there was still 50% of the material in this
fraction, and the other 50% was less than 4mm it would be possible that only half of
the diamonds were liberated. By extension if the grind achieved ensured that there
was no material larger than 4mm, and none of the diamonds were damaged in the
kimberlite size reduction process, then all the diamonds would be liberated.
The range of sizes of diamonds recovered and the range of size of ore particles
(grind) produced can be obtained by sampling and screening the discarded ore, and
screening and recording the mass size distribution of the diamonds recovered. The
relationship between the size distribution of the recovered and the kimberlite
discarded can be used to develop an estimate of the diamonds that are still
contained in the discarded kimberlite stream.
There are several sampling and data assumptions required by the model:
• It is possible to define the relationship between diamond size classes and ore
size classes;
• It is possible to sample for, and reconstitute, the total grind of the diamond bearing ore;
• The diamond concentration is relatively low and hence the crushed size
distribution of the rock is independent of the size distribution of the
diamonds;
• Diamonds are randomly distributed in the ore blocks; and
• There is no relationship between the diamonds size and its location in
broken particles.
Several ore sizes are tabulated in Table 24. From the recovery process the total grind
of the ore fed to the plant is recorded. This size distribution is reflected in the third
column of this table. The recovered diamonds could be sieved, and the size
distribution represented in the fourth column as the percentage of stones per class.
By summing the classes from the smallest to the largest it is possible to express the
proportion of the distribution that could still be “locked” within each ore size class.
It is then possible to calculate the proportion of carats that can be locked in each size
fraction by multiplying the proportion of the ore that is in each size class and the
proportion of the diamond size distribution that could be contained in the ore
particle. As depicted in Table 24 the second class (10mm) of ore particles make up
20% of the material discarded. 15% of the total diamonds recovered lie in this class,
and 85% of the diamonds recovered are below 10mm. This suggests that 85% of the
diamond population can be locked, and by multiplying the ore proportion by the
proportion of the diamond distribution that can be locked we calculate that the lock
up in this size class can be 17 cts. By summing the locked content estimated for each
size fraction we see that for every 100cts recovered 52.75 cts could be locked. This
implies that the liberation efficiency can be estimated to be:
$$\%\,Liberation = \frac{100}{100 + locked\ estimated}\times 100 \qquad \text{Equation 53}$$
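The sketch below reproduces the Table 24 style of calculation with invented distributions: for each discarded ore size class, the lockable proportion of the recovered diamond distribution is the part at or below that class, and Equation 53 converts the summed lock-up to a liberation percentage.

# A sketch of the Table 24 lock-up logic and Equation 53. The ore and diamond
# size distributions below are invented for illustration only.
ore_pct     = [10, 20, 25, 20, 15, 10]   # % of discarded kimberlite, coarse->fine
diamond_pct = [5, 15, 20, 25, 20, 15]    # % of recovered carats, same classes

locked_cts = 0.0
for i, ore_p in enumerate(ore_pct):
    # diamonds at this size class or finer could hide inside a class-i particle
    lockable_pct = sum(diamond_pct[i:])
    locked_cts += ore_p / 100.0 * lockable_pct   # cts per 100 cts recovered

liberation = 100.0 / (100.0 + locked_cts) * 100.0    # Equation 53
print(f"locked {locked_cts:.2f} cts per 100 cts recovered -> liberation {liberation:.1f} %")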
The process, however, also includes a float and sink separation process, and for a kimberlite particle to sink and be recovered in the final process, the density of the particle must exceed that of the effective cut point in the dense media separation process. This means that it is possible to calculate the maximum size of diamond that can be floated out of the dense media separation process given the ore density, diamond density and the density cut point in the plant. The change in the lock up calculation produced by limiting the maximum size of diamond that can be recovered to below the ore size is depicted in Table 25.
Table 25: Calculation of lock up with constraint placed on the maximum size of contained
diamond.
In this adapted calculation, the class 2 (10mm) ore particles can only lock up diamonds up to 8mm. Only 50% of the diamonds lie below this size limit and can potentially be locked. The same calculations are carried out. The estimated locked content drops to 21cts for every 100cts recovered, equating to a liberation of 83%.
To generate a model requires a realistic method for determining the limiting mass
of diamond that can be contained in a particle and for that particle to be recovered
by the process. One approach to derive this limit is to consider the physical nature
of the process used to separate diamonds from kimberlite post the crushing stage.
In this process the density differences between diamonds (specific gravity of 3.52
g/cm3) and most kimberlites (specific gravity of 2.7 g/cm3) is exploited to generate
a diamond rich concentrate. The maximum volume of diamond contained in a
particle of kimberlite that could be floated out of the dense media separation
process is a function of the density of media that is used to separate the diamonds
from the kimberlite, the density of the kimberlite, the density of the diamond and
the relative volume of the diamond and kimberlite in the particle. A critical particle
can be defined as one where the forces of buoyancy and the sinking force are equal.
The mass of displaced medium will equal the mass of the particle, and the mass of
the particle is the combined mass of the contained diamond and the enclosing
kimberlite.
Model assumptions
Model formulation
The general formulation of the liberation model will be described in this section and is broadly based on the work of Kleingeld (1982) and Ferreira (2013) and unpublished work by Lantuéjoul (1998).
The notation used is described in King (2000). The particle size distribution function
𝑃(𝑑𝑝 ) is the mass fraction of that portion of the size fraction that consists of particles
of a size less than or equal to dp where dp is the size of the particle. This function has
the properties:
P (0) =0
P (∞) =1
P(dp) increases monotonically from 0 to 1 as dp increases from 0 to ∞
$$p(x) = \frac{dP(x)}{dx} \qquad \text{Equation 54}$$
And likewise, the discrete density distribution can be related to its density function
in a similar way (Equation 55):
$$p_i = \int_{D_i}^{D_{i-1}} p(x)\,dx \qquad \text{Equation 55}$$
The usefulness of this approach for describing particle size distributions is that it
facilitates the assumption that it is possible to work with the size class rather than
the individual particles. To do this requires the definition of the ‘average’ particle in
a size class, and the allocation of several empirical distribution functions to the
particle populations.
$$P_i(d_p) = \int_{D_i}^{D_{i-1}} dP(d_p) = P(D_{i-1}) - P(D_i) = \Delta P_i \qquad \text{Equation 56}$$
Pi is the mass fraction of the particle population that consists of particles between
size Di and Di-1.
∆𝑑𝑝 = 𝐷𝑖−1 − 𝐷𝑖 is known as the size class width, with the upper and lower-class
size boundaries given by Di and Di-1 respectively. The representative size in this class
is required so that the ‘average’ characteristic in this class can be used in modelling
the behaviour of all particles within this class. One method proposed by King (2001)
is to develop the number density distribution function (as opposed to a mass
distribution considered above). The number distribution function for any characteristic can be defined as $\Phi(x)$, which is the fraction by number of particles in the population having size equal to $x$ or less. The number density function can be defined as in Equation 57:
$$\varphi(x) = \frac{d\Phi(x)}{dx} \qquad \text{Equation 57}$$
Here the upper-case letters represent the class boundaries. The number
distribution facilitates the calculation of the average properties of the particles
either in the total population or in each size interval. This can be calculated as
follows in Equation 59:
$$\bar{x}_N = \frac{1}{N_T}\sum_{j=1}^{N_T} x^{(j)} \qquad \text{Equation 59}$$
Here 𝑥(𝑗) is the value of the characteristic for particle j and 𝑁𝑇 is the total number of
particles in the population. Grouping the particles, in this case diamonds, into
classes of particles having the same or similar characteristics allows the summation
of the characteristics of the properties of each class. It is then possible to calculate
the average property x in the whole population using Equation 60 as follows:
$$\bar{x}_N = \frac{1}{N_T}\sum_{i=1}^{N} n^{(i)}\,x^{(i)} \qquad \text{Equation 60}$$
In this case N is the number of classes or groups that have been formed, 𝑛(𝑖) is the
count of the number of particles in each group i and 𝑥(𝑖) is the characteristic defining
each group. If the groups have different sizes or densities for example it is possible
to weight this calculation by mass per group (Equation 61):
$$\bar{x}_N = \frac{1}{M_T}\sum_{i=1}^{N} m^{(i)}\,x^{(i)} \qquad \text{Equation 61}$$
$$\bar{d}_{p_i}^{3} = \frac{1}{\varphi_i(d_p)}\int_{D_i}^{D_{i-1}} d_p^{3}\,\varphi(d_p)\,dd_p \qquad \text{Equation 62}$$
Where 𝜑(𝑑𝑝 ) is the number distribution density function, and 𝜑𝑖 (𝑑𝑝 ) is the number
fraction of the population in a size class.
For most ore streams, the sieving is conducted with a variety of screen sequences.
This process provides data where the size class width is not constant and thus a
factor needs to be used to convert the stone count frequencies or masses observed
in each class by the relative width (a measure of ‘distance’ in size space) of the
classes. For the diamond size distribution this factor, known as the unit interval, is
derived by taking the difference of the logarithms of the critical carat mass between
the upper and lower-class boundaries (Equation 63)
$$U_i = \frac{1}{\log(CritSz_i) - \log(CritSz_{i+1})} \qquad \text{Equation 63}$$
where 𝐶𝑟𝑖𝑡𝑆𝑧𝑖 is the critical size in cts/stone of the screen on which the diamonds
have been retained, and;
𝐶𝑟𝑖𝑡𝑆𝑧𝑖+1 is the critical size in cts/stone of the screen above the retaining screen.
The critical carat mass for a given screen size is defined as that size of diamond that
will have a 50% probability of being passed or retained on a given sieve. These
factors are usually derived empirically for each diamond deposit or groups of
similar deposits. Likewise, the average representative size of the diamonds retained
on each screen size is also derived empirically. Table 26 gives the critical sizes and
average stone masses for several diamond deposits for a standard set of diamond
sized sieves.
Table 26: Listing of Diamond screen sieve classes and associated sieve apertures, average
stone size per class and critical stone size per class.
In diamond process modelling the diamond particles are considered to have a log
normal distribution (Rombouts, 1995). This distribution can be defined as per
Equation 64:
$$P(D) = G\left(\frac{\ln(D/D_{50})}{\sigma}\right) \qquad \text{Equation 64}$$
$$G(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-t^{2}/2}\,dt = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right] \qquad \text{Equation 65}$$
and where:
$$\sigma = \frac{1}{2}\left(\ln D_{84} - \ln D_{16}\right) \qquad \text{Equation 66}$$
𝐷50 is the particle size at which P (𝐷50 ) = 0.5, this is called the median size.
The log normal distribution has theoretical significance in that it is the distribution that results when a particle is crushed through several fracture events (Kolmogorov, 1941). Its usefulness partly lies in the fact that its properties suitably reflect the diamond mass size distribution and it can be modelled using two parameters (Aitchison and Brown, 1957). In the case of the granulometry model a two-part fit is used. This requirement arises most likely from the incomplete liberation of finer stones in the distribution, and/or the loss and destruction of diamonds from the coarser side of the distribution (Ferreira, 2013).
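The sketch below estimates the two lognormal parameters of Equations 64 to 66 from quantiles of a synthetic stone population; a production fit would use the sieved size distribution and, as noted above, a two-part model.

# A sketch of Equations 64-66: estimating D50 and sigma from quantiles and
# evaluating the lognormal size model. The stone sample here is synthetic.
from math import erf, log, sqrt
import numpy as np

rng = np.random.default_rng(1)
stone_cts = np.exp(rng.normal(loc=np.log(0.25), scale=0.8, size=5000))

d16, d50, d84 = np.percentile(stone_cts, [16, 50, 84])
sigma = 0.5 * (np.log(d84) - np.log(d16))            # Equation 66

def p_less_than(d, d50, sigma):
    """Equations 64-65: cumulative fraction of stones smaller than d carats."""
    return 0.5 * (1.0 + erf(log(d / d50) / (sigma * sqrt(2.0))))

print(f"D50 = {d50:.3f} ct, sigma = {sigma:.3f}, P(<1 ct) = {p_less_than(1.0, d50, sigma):.3f}")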
The graphic in Figure 57 shows the fitting of the curve to the recovered diamond
distribution.
[Figure: cumulative % frequency less than Z for the actual and modelled distributions.]
Figure 57: Plot showing the actual and modelled log normal diamond size distribution.
If all the particles of kimberlite remain larger than the largest diamond, then each of
the kimberlite particles can feasibly contain a diamond of any size from the full
range of recovered diamond sizes. Once there are kimberlite particles that are
smaller than the largest diamond, the kimberlite particles can only feasibly contain
diamonds from a selected, or trimmed range of the recovered diamond size
distribution. Hence as the kimberlite ore is crushed, and the liberated diamonds
recovered, then the estimated probability of diamonds being locked in single size
fraction of discarded kimberlite is given by:
$$p_L(C_i) = \left[\frac{M_{k_i}}{\sum_{i=1}^{n} M_{k_i}}\right]\times\sum_{j=1}^{i}\left[\frac{M_{d_j}}{\sum_{j=1}^{n} M_{d_j}}\right] \qquad \text{Equation 67}$$
Where $M_{d_i}$ is the mass of diamonds recovered in size class $i$ and $M_{k_i}$ is the mass of kimberlite discarded in size class $i$.
Then by rearrangement for every 100cts recovered the liberated carats can be
expressed as in Equation 69 :
$$\%\,Liberation = \frac{100}{100 + locked\ estimated}\times 100 \qquad \text{Equation 69}$$
The maximum diamond size that can possibly be found in a kimberlite particle that
has been discarded is, however, not only a function of the ore and diamond particle
size but must also have floated out of the dense media separation process. For this
to have happened the apparent weight of the particle must have been less than the
force of buoyancy to which the particle was exposed. A critical particle can be
defined as a particle that neither sinks nor floats, and hence the force of buoyancy
(Fb) is equivalent to the sinking force (Fs). By equating these two forces it is possible
to derive the volume and hence maximum size of diamond that can be locked in a
tailings particle for a given set of input parameters. (Equation 70)
$$F_b = F_s$$
$$V_P \times \rho_M = V_k \times \rho_k + V_d \times \rho_d \qquad \text{Equation 70}$$
$$V_d = \frac{V_P\,(\rho_M - \rho_k)}{\rho_d - \rho_k} \qquad \text{Equation 71}$$
Where:
$V_d$: maximum volume of diamond that can be contained in the particle and still float;
$V_P$: volume of the particle;
$V_k$: volume of kimberlite in the particle;
$\rho_M$: the density of the cut-point that the particle is exposed to in the dense media separation process;
$\rho_k$: the density of the kimberlite; and
$\rho_d$: the density of the diamond.
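The sketch below applies Equation 71 to spherical particles and converts the resulting volume to carats. The densities for diamond and kimberlite are those quoted in the text, while the medium cut-point and the spherical shape are illustrative assumptions.

# A sketch of Equations 70-71: the largest diamond a floated particle can hide.
# Diamond (3.52) and kimberlite (2.7 g/cm3) densities are from the text; the
# medium cut-point and spherical particle shape are illustrative assumptions.
from math import pi

def max_locked_diamond_cts(particle_mm, rho_m=3.0, rho_k=2.7, rho_d=3.52):
    v_p = pi / 6.0 * (particle_mm / 10.0) ** 3       # sphere volume in cm3
    v_d = v_p * (rho_m - rho_k) / (rho_d - rho_k)    # Equation 71
    return v_d * rho_d * 5.0                         # grams -> carats (5 ct/g)

for d_mm in (4, 8, 12, 16):
    print(f"{d_mm:2d} mm particle -> max locked diamond = {max_locked_diamond_cts(d_mm):6.2f} ct")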
Thus it is likely for most operations that the formulation in Equation 67 will be optimistic, as it assumes that a diamond of the same size as the particle can be locked in the discarded rock. As shown in Equation 71, the probability of being locked can be reduced to the cumulative proportion of the diamond size distribution that lies below the maximum size of diamond (Dvmax) that can be locked in a kimberlite particle of size class i. To find this class the maximum volume is converted to a mass of diamond, and the class into which a diamond of this mass would be classified is chosen as ‘Max i’. This leads to the probability of lock-up in class i being reformulated as in Equation 72.
$$p_L(C_i) = \left[\frac{M_{k_i}}{\sum_{i=1}^{n} M_{k_i}}\right]\times\sum_{j=1}^{Max\,i}\left[\frac{M_{d_j}}{\sum_{j=1}^{n} M_{d_j}}\right] \qquad \text{Equation 72}$$
This model provides the maximum size of diamond that can potentially be locked in each size of particle. With knowledge of the size distribution of diamonds recovered, it is possible to calculate the possible locked distribution in each ore particle size class and, by summing across all ore sizes, to construct an estimate of the discarded diamond size distribution. This in turn can be used to determine the maximum potential value of the discarded stream of kimberlite particles, by multiplying the $/ct value for each size of diamond by the estimate of locked diamond potential (see Table 33).
In this section the application of the granulometry model to the Venetia diamond
size frequency and a sample of two weeks production will be described.
The first part of the process is to derive the degree to which the kimberlite was crushed: the total grind, or resulting size distribution, of all the processed kimberlite is required. This is achieved by sampling at various points in the process and determining the relative mass flows in each stream to reconstruct the size distribution.
Columns of Table 27: (1) sieve size (mm); (2) mass retained (kg); (3) % retained in class; (4) cumulative % passing; (5) % kimberlite in class; (8) mass kimberlite (kg); (9) mass per size fed to DMS (tonnes); (10) mass kimberlite (tonnes); (11) total kimberlite % per class.
Table 27: Description of method used to convert the sampled tailings distribution to a total
kimberlitic distribution.
In the first column of Table 27 the aperture of the sieves used to size the sampled material is recorded. The mass retained and the % kimberlite in each size class are given in columns 2 and 5 respectively. This information is used to calculate the mass of kimberlite in each size class, which is shown in column 8. The size distribution is converted to tonnes of kimberlite in each size class that will have been processed in the period being considered by multiplying the percentage in each class by the dense media feed tonnes for the period (6000 in this example). As there is material below the bottom screen size (i.e. −1 mm) that will not have been sampled in the discard material, the size distribution must be adjusted to include this material. In this example two thousand tonnes of undersize material (also commonly referred to as slimes) was discarded in the underflow. It is assumed that this material is all kimberlite, which is a reasonable assumption given that most waste rock components are usually harder than the kimberlite. This mass is added into the size distribution in the second last row of column 10. This tonnage size distribution can then be converted to a cumulative % retained size distribution of the processed kimberlite for use in the model. To map the ore sizes to diamond sizes a linear interpolation is used.
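The arithmetic of Table 27 is simple enough to sketch in a few lines of code. The following Python fragment is illustrative only: the sieve sizes, masses and kimberlite percentages are placeholder values, not the sampled data; only the DMS feed tonnage (6000) and slimes tonnage (2000) echo the worked example in the text.

```python
# Sketch of the Table 27 logic: converting a sampled tailings size
# distribution into the total kimberlite size distribution processed.
# All class data below are illustrative placeholders, not plant data.

sieve_mm =         [8.0, 4.0, 2.0, 1.0]    # sieve apertures (column 1)
mass_retained_kg = [12.0, 18.0, 9.0, 6.0]  # sampled mass per class (column 2)
pct_kimberlite =   [60.0, 70.0, 80.0, 90.0]  # % kimberlite per class (column 5)

dms_feed_tonnes = 6000.0   # DMS feed for the period (as in the text)
slimes_tonnes = 2000.0     # -1 mm undersize, assumed all kimberlite

# Column 8: mass of kimberlite in each sampled size class
kimb_kg = [m * p / 100.0 for m, p in zip(mass_retained_kg, pct_kimberlite)]

# Column 9: scale class proportions to the DMS feed tonnage
total_kg = sum(kimb_kg)
kimb_tonnes = [dms_feed_tonnes * k / total_kg for k in kimb_kg]

# Column 10: append the -1 mm slimes as an extra (bottom) class
sizes = sieve_mm + [0.01]
tonnes = kimb_tonnes + [slimes_tonnes]

# Column 11: cumulative % retained of all processed kimberlite
grand_total = sum(tonnes)
cum = 0.0
for size, t in zip(sizes, tonnes):
    cum += 100.0 * t / grand_total
    print(f"{size:6.2f} mm  {t:8.1f} t  cum retained {cum:6.2f} %")
```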
The diamonds recovered over the period are sieved and the size distribution is
modelled using a two-part log normal model.
The columns of Table 28 are:
1. Diamond sieve
2. Critical size (cts/stone)
3. Log critical size (log cts/stone)
4. Actual % passing
5–7. Fitting of the size distribution: Gauss1 (%), Gauss2 (%) and combined fit (%)
8. Cumulative % retained on the model
9. Squared error (%)
Table 28: Fitting of the lognormal model to the recovered size distribution.
Table 28 shows the resulting fit and the squared error in each of the size classes.
The graph of the actual and fitted distribution is shown in Figure 57. Although some
classes show some deviation between the actual and the model fit, in general the
form of the model and actual recovery correspond reasonably well.
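A two-part lognormal model of this kind can be fitted by least squares as a weighted mixture of two Gaussian distributions in log-carat space. The sketch below, using illustrative data rather than the Venetia distribution, shows one plausible way to reproduce the Gauss1/Gauss2/Fit columns of Table 28 with scipy; the parameter names, starting values and bounds are assumptions.

```python
# A minimal sketch of fitting a two-component lognormal model to a
# recovered diamond size distribution (cf. Table 28). Sieve sizes and
# percentages here are illustrative, not the Venetia data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

log_size = np.log10([0.05, 0.1, 0.3, 0.7, 1.4, 3.7])   # log cts/stone
actual_pct_passing = np.array([20., 38., 62., 80., 92., 99.])

def two_part_lognormal(x, w, mu1, s1, mu2, s2):
    """Weighted mixture of two Gaussian CDFs in log-carat space (%)."""
    return 100.0 * (w * norm.cdf(x, mu1, s1) + (1 - w) * norm.cdf(x, mu2, s2))

p0 = [0.5, -1.0, 0.5, 0.2, 0.5]   # initial guess: weight, mu1, s1, mu2, s2
params, _ = curve_fit(two_part_lognormal, log_size, actual_pct_passing,
                      p0=p0, bounds=([0, -3, 0.01, -3, 0.01],
                                     [1,  2, 2.0,   2, 2.0]))

fitted = two_part_lognormal(log_size, *params)
sq_err = (fitted - actual_pct_passing) ** 2    # per-class squared error
print("weight, mu1, s1, mu2, s2 =", np.round(params, 3))
print("squared errors:", np.round(sq_err, 2))
```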
The parameters of the process used to separate the kimberlite are also recorded and are given in Table 29.
For each composite particle size, Equation 60 can be used to calculate the maximum
size of diamond that can be locked in a ‘critical’ class particle. Once this has been
done for each size class considered it is possible to calculate what proportion of the
diamond distribution can be locked in this particle.
This diamond mass will most likely have a distribution that is equivalent to the
recovered distribution truncated by the maximum locked diamond particle size. The
result of these calculations is shown in Table 30.
The columns of Table 30, in their original groupings, are:
Sieve Parameters: diamond sieve name; square mesh (mm)
Kimberlitic Breakdown: average size (cts/stone); critical size (cts/stone)
Kimberlite Distribution: cumulative tonnes (%); tonnes in class (%)
Particle Size: log square mesh (log mm); spherical volume (cm³); factored volume ×0.9 (cm³)
Maximum Locked Diamond: volume (cm³); size (cts); size (mm)
32 0.00 1.51 17.16 15.44 2.45 43.04 18.58
200+ 30.05 213.31 199.80 0.00 0.00 1.48 14.21 12.79 2.03 35.64 17.45
150+ 27.05 160.74 149.80 0.00 0.00 1.43 10.36 9.33 1.48 26.00 15.71
100+ 23.33 107.86 99.80 0.00 0.00 1.37 6.65 5.98 0.95 16.68 13.55
75+ 21.01 81.25 74.80 0.00 0.00 1.32 4.86 4.37 0.69 12.18 12.20
60+ 19.36 65.21 59.80 0.00 0.00 1.29 3.80 3.42 0.54 9.53 11.24
45+ 17.43 49.1 44.80 0.00 0.00 1.24 2.77 2.50 0.40 6.96 10.12
30+ 15.02 32.9 29.80 0.00 0.00 1.18 1.77 1.60 0.25 4.45 8.72
25+ 14.07 27.58 24.80 0.00 0.00 1.15 1.46 1.31 0.21 3.66 8.17
20+ 12.95 22.02 19.80 0.00 0.00 1.11 1.14 1.02 0.16 2.85 7.52
15+ 11.64 16.54 14.80 0.05 0.05 1.07 0.83 0.74 0.12 2.07 6.76
+23 9.28 8.04 7.94 0.34 0.30 0.97 0.42 0.38 0.06 1.05 5.39
+21 7.09 3.69 3.79 2.85 2.51 0.85 0.19 0.17 0.03 0.47 4.12
+19 5.56 1.92 1.95 6.81 3.96 0.75 0.09 0.08 0.01 0.23 3.23
+17 4.93 1.42 1.40 12.66 5.85 0.69 0.06 0.06 0.01 0.16 2.86
+15 4.62 1.20 1.17 15.54 2.88 0.66 0.05 0.05 0.01 0.13 2.68
+13 3.85 0.70 0.71 22.94 7.39 0.59 0.03 0.03 0.00 0.07 2.24
+12 3.42 0.52 0.51 27.62 4.68 0.53 0.02 0.02 0.00 0.05 1.99
+11 2.86 0.32 0.31 34.70 7.08 0.46 0.01 0.01 0.00 0.03 1.66
+9 2.35 0.18 0.18 44.95 10.25 0.37 0.01 0.01 0.00 0.02 1.36
+7 2.00 0.12 0.12 48.05 3.11 0.30 0.00 0.00 0.00 0.01 1.16
+6 1.72 0.08 0.08 54.14 6.09 0.24 0.00 0.00 0.00 0.01 1.00
+5 1.47 0.05 0.05 59.58 5.44 0.17 0.00 0.00 0.00 0.00 0.85
+3 1.15 0.03 0.03 66.89 7.30 0.06 0.00 0.00 0.00 0.00 0.67
+2 1.03 0.02 0.19 69.66 2.78 0.01 0.00 0.00 0.00 0.00 0.60
+1 0.82 0.01 0.01 75.70 6.04 -0.09 0.00 0.00 0.00 0.00 0.48
-1 0.01 0.00 0.00 100.00 24.30 -2.00 0.00 0.00 0.00 0.00 0.01
100.00
Table 30: Calculation of the maximum locked diamond size in kimberlite particles in each
sieve class.
While particles are assumed to have a spherical shape, their volume, expressed in cm³, is based on the square mesh size, d_i, specified in mm; the division by 1000 converts mm³ to cm³ (Equation 73).
\[
V_i = \frac{4}{3}\,\pi\,\left(\frac{d_i}{2}\right)^{3} \times \frac{1}{1000} \qquad \text{Equation 73}
\]
This spherical volume must be reduced to account for particle shapes that are less voluminous than a sphere. In the model presented a factor of 0.9 is used. This value can be calibrated for various ore types and crushing methods by direct measurement of particle volumes in each sieve class. Once the representative volume of each sieve class has been calculated it is possible, using the parameters given in Table 29 and Equation 71, to calculate the maximum volume (in cm³) and mass of diamond that can be locked up in this particle size class.
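The chain of calculations behind Table 30 (Equation 73, the 0.9 shape factor, Equation 71 and the conversion to carats at 0.2 g per carat) can be sketched as follows. The densities are hypothetical stand-ins for the Table 29 parameters, so the outputs only approximate the tabulated values.

```python
# Sketch of the per-class locked-diamond calculation (Equations 73 and 71).
# RHO_M and RHO_K are assumed values standing in for Table 29.
import math

RHO_M = 2.75        # DMS cut-point density, g/cm3 (assumed)
RHO_K = 2.60        # kimberlite density, g/cm3 (assumed)
RHO_D = 3.52        # diamond density, g/cm3
CTS_PER_GRAM = 5.0  # 1 carat = 0.2 g
SHAPE_FACTOR = 0.9  # calibrated sphere-volume factor

def max_locked_diamond(square_mesh_mm):
    # Equation 73: spherical volume in cm3 from the square mesh size in mm
    v_sphere = (4.0 / 3.0) * math.pi * (square_mesh_mm / 2.0) ** 3 / 1000.0
    v_particle = SHAPE_FACTOR * v_sphere
    # Equation 71: largest diamond volume that still lets the particle float
    v_diamond = v_particle * (RHO_M - RHO_K) / (RHO_D - RHO_K)
    carats = v_diamond * RHO_D * CTS_PER_GRAM   # cm3 * g/cm3 * ct/g
    return v_sphere, v_particle, v_diamond, carats

for mesh in (30.05, 12.95, 2.35):   # a few sieve classes from Table 30
    vs, vp, vd, cts = max_locked_diamond(mesh)
    print(f"{mesh:6.2f} mm: sphere {vs:6.2f} cm3, "
          f"factored {vp:6.2f} cm3, diamond {vd:5.2f} cm3 = {cts:6.2f} cts")
```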
It is now possible to work out the proportion of the size distribution of diamonds that can be locked in each size class. This calculation is demonstrated in Table 32. The parameters used to fit a lognormal distribution to the recovered diamond size distribution (Table 28) are used to determine this percentage given the maximum locked diamond size. This percentage of the distribution represents a number of carats locked for every 100 carats recovered and will most likely occur in the same size distribution as that of the recovered diamonds. The single caveat is that if the locked diamond size for a given class is smaller than the lower critical size of that sieve class, the locked carat potential is moved down to the next smallest size class. This calculation is shown in Table 31.
The columns of Table 31 are: diamond sieve name; square mesh (mm); the locked potential allocated to each diamond size class (+19 down to −1) using the recovered diamond distribution; and the reallocated carats per class (cts).
32
+19 5.56 0.13 0.13
+17 4.93 0.09 0.06 0.16
+15 4.62 0.07 0.05 0.02 0.14
+13 3.85 0.28 0.19 0.09 0.15 0.71
+12 3.42 0.21 0.14 0.06 0.11 0.06 0.59
+11 2.86 0.43 0.30 0.13 0.24 0.12 0.10 1.32
+9 2.35 0.62 0.44 0.20 0.34 0.17 0.14 0.11 2.02
+7 2.00 0.59 0.41 0.18 0.33 0.16 0.14 0.10 0.00 1.91
+6 1.72 0.53 0.37 0.17 0.30 0.15 0.12 0.09 0.00 0.00 1.74
+5 1.47 0.66 0.46 0.21 0.37 0.18 0.15 0.12 0.00 0.00 0.00 2.16
+3 1.15 0.48 0.34 0.15 0.27 0.13 0.11 0.08 0.00 0.00 0.00 0.03 1.59
+2 1.03 0.13 0.09 0.04 0.07 0.04 0.03 0.02 0.00 0.00 0.00 0.01 0.01 0.44
+1 0.82 0.10 0.07 0.03 0.06 0.03 0.02 0.02 0.00 0.00 0.00 0.01 0.01 0.00 0.35
-1 0.01 0.08 0.06 0.03 0.04 0.02 0.02 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.27
Total locked per class 4.40 2.99 1.31 2.28 1.06 0.84 0.56 0.00 0.00 0.00 0.04 0.02 0.00 0.01 13.52
Table 31: Allocation of the locked potential in each size class according to the probabilities
derived from the recovered diamond size distribution.
The re-assigned locked potential is shown in Table 32 in the column titled ‘Re-
distributed per class cts’.
Table 32: Calculation of the locked carat potential per diamond sieve class.
In the example presented here approximately 13.5 carats are locked for every 100 carats recovered. The liberation derived for this period, calculated as per Equation 69, results in an estimated carat liberation of 88%. Knowing the recovered size distribution and the locked potential facilitates the calculation of the locked and liberated revenue. This is essentially a weighting of the distribution by the revenue obtained for each size of diamond. This calculation is summarised in Table 33.
The columns of Table 33 are: diamond sieve name; square mesh (mm); recovered carats per class (cts); redistributed locked carats (cts); estimated total contained carats (cts); total contained per class (%); cumulative % retained; revenue ($/ct); revenue per class, free ($/100 cts); revenue per class, locked ($/100 cts); and cumulative % of revenue, free (%) and locked (%).
31.5
200+ 30.05 0.00 0.00 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00
150+ 27.05 0.00 0.00 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00
100+ 23.33 0.00 0.00 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00
75+ 21.01 0.01 0.00 0.01 0.01 0.01 100.00 1.01 0.00 0.06 0.00
60+ 19.36 0.03 0.00 0.03 0.02 0.03 100.00 2.65 0.00 0.20 0.00
45+ 17.43 0.04 0.00 0.04 0.04 0.07 100.00 4.03 0.00 0.43 0.00
30+ 15.02 0.12 0.00 0.12 0.11 0.18 100.00 12.41 0.00 1.12 0.00
25+ 14.07 0.08 0.00 0.08 0.07 0.25 100.00 7.76 0.00 1.55 0.00
20+ 12.95 0.12 0.00 0.12 0.11 0.35 100.00 12.31 0.00 2.24 0.00
15+ 11.64 0.21 0.00 0.21 0.19 0.54 100.00 21.40 0.00 3.44 0.00
+23 9.28 0.67 0.00 0.67 0.59 1.13 75.00 50.43 0.00 6.25 0.00
+21 7.09 1.34 0.00 1.34 1.18 2.31 70.00 93.53 0.00 11.47 0.00
+19 5.56 2.77 0.13 2.89 2.55 4.86 65.00 179.94 8.14 21.51 0.45
+17 4.93 2.03 0.16 2.18 1.92 6.78 40.00 81.02 6.23 26.03 0.80
+15 4.62 1.48 0.14 1.62 1.43 8.21 35.00 51.96 4.73 28.93 1.07
+13 3.85 6.13 0.71 6.84 6.03 14.24 30.00 183.94 21.35 39.19 2.26
+12 3.42 4.57 0.59 5.16 4.54 18.78 26.00 118.78 15.26 45.82 3.11
+11 2.86 9.51 1.32 10.83 9.54 28.32 15.00 142.59 19.82 53.77 4.21
+9 2.35 13.77 2.02 15.79 13.91 42.23 12.00 165.23 24.27 62.99 5.57
+7 2.00 13.01 1.91 14.92 13.14 55.37 11.00 143.10 21.02 70.97 6.74
+6 1.72 11.82 1.74 13.55 11.94 67.31 10.00 118.19 17.36 77.57 7.71
+5 1.47 14.69 2.16 16.85 14.84 82.15 8.00 117.53 17.26 84.13 8.67
+3 1.15 10.62 1.59 12.20 10.75 92.90 7.00 74.32 11.10 88.27 9.29
+2 1.03 2.93 0.44 3.37 2.97 95.87 6.00 17.55 2.67 89.25 9.44
+1 0.82 2.30 0.35 2.65 2.33 98.21 5.00 11.49 1.75 89.89 9.54
-1 0.01 1.76 0.27 2.04 1.79 100.00 5.00 8.81 1.37 90.39 9.61
TOTAL 100.00 13.52 113.52 100.00 1420.00 1620.00 172.33
\[
\%\,\mathrm{Revenue\ Liberation} = \frac{100}{100 + \mathrm{locked\ revenue}} \times 100 \qquad \text{Equation 74}
\]
In this case the revenue liberation would be estimated to be in the vicinity of 90%, suggesting that an additional 11% of current revenue could be recovered if the crushing circuit were modified to release all of the contained diamonds.
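The two liberation measures can be reproduced directly from the tabulated locked potential. The sketch below applies Equations 69 and 74, using the 13.52 locked carats per 100 recovered from Table 31 and three illustrative price classes from Table 33; treating the locked revenue as being expressed per 100 units of free revenue is an assumption consistent with the worked figures.

```python
# Sketch of Equations 69 and 74 using the worked figures in the text.
# The three price classes are a subset of Table 33, so the revenue
# liberation printed here illustrates the mechanics rather than
# reproducing the full ~90% result.

def liberation_pct(locked_per_100_recovered):
    # Equations 69/74: recovered as a percentage of recovered plus locked
    return 100.0 / (100.0 + locked_per_100_recovered) * 100.0

locked_carats = 13.52          # cts locked per 100 cts recovered (Table 31)
print(f"carat liberation: {liberation_pct(locked_carats):.1f} %")   # ~88 %

# Revenue liberation weights each sieve class by its $/ct (cf. Table 33)
recovered_cts = {"+19": 2.77, "+17": 2.03, "+15": 1.48}   # cts per class
locked_cts    = {"+19": 0.13, "+17": 0.16, "+15": 0.14}
usd_per_ct    = {"+19": 65.0, "+17": 40.0, "+15": 35.0}

free_rev   = sum(recovered_cts[c] * usd_per_ct[c] for c in recovered_cts)
locked_rev = sum(locked_cts[c] * usd_per_ct[c] for c in locked_cts)
locked_per_100 = 100.0 * locked_rev / free_rev   # locked revenue per 100 free
print(f"revenue liberation: {liberation_pct(locked_per_100):.1f} %")
```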
Using this approach to determine liberation allows for the identification of which
size fractions are most important in terms of locking up carats and which will have
the largest impact on the variation in liberated revenue. In this way it is possible to
carry out trade-off studies to determine steps to find the best balance between
process cost and revenue.
It is possible, using a platform such as Microsoft Excel, to combine the unit process
models to create a model of a diamond liberation and dense media separation plant.
An example is provided in the appendix and has been created using the add-in package “Limn”, developed and kindly sponsored for this research by Dave Wiseman. Using this model, it is possible to determine the total grind, estimate DMS
separation efficiency and estimate the recovery and loss of diamonds that will be
achieved for each given set of rock properties when the plant is in a stable state.
The output of this model can be stored in a matrix that can be used in an integrated
way to interact with the block model containing spatially varying rock properties.
In cases where this model is shown to be a robust estimator of the diamonds that are liberated and locked for a given size distribution of ore particles, the required process simulation consists of a comminution model that produces an estimate of the final grind size distribution. If the comminution model is integrated with a spatial model of the rock properties that govern the final grind, a methodology can be devised that links ore properties to diamond recovery.
There are several limits to both the granulometry model and the process simulation model and their use in integrated models. These include:
• stress testing the process responses when the material changes and the process responds in different ways; and
• the time that the process takes to reach steady state and its incorporation into process models.
Each of these difficulties has implications for the reliability of the estimation of the recovery factor; these are explained in some detail in this section.
6.8 Conclusion
It is possible to estimate the expected total grind that will be achieved for a given process flowsheet and ore type characteristic, given sufficient data. Ideally, for existing processes, experiments can be designed to define relationships between operating parameters, ore types and the resulting diamond recovery. For new deposits without this information it is possible to generate some understanding of the range of expected recoveries and to determine which processes are likely to constrain recovery.
In the subsequent chapters the methods developed here are integrated with
orebody sampling, simulation and estimation approaches suggested in the prior
chapters to develop an integrated value chain model that simulates the interaction
of variable ore characteristics with the mining and processing models to produce
period by period production outputs.
7.1 Introduction
This case study demonstrates an integrated value chain model that comprises
multiple orebody realisations with mining and treatment process models. The value
chain model is used to simulate the mine operation and generate daily production
summaries. This output is used to quantify the impact that the rock characteristics
have on the range and variability of the metallurgical recovery factor.
The case study was published as “Integrated Mine Evaluation – Implications for
Mine Management” in the proceedings of the AusIMM Mine Managers Conference
(Nicholas et al., 2007). The author of this thesis presented this paper at this
conference held in Melbourne, Australia in 2007, and was responsible for
developing the orebody models, the VBA coding, execution of mining and process
simulations and providing links into the financial model. The work was completed
in collaboration with Grant Nicholas, Kurt Petersen, Alain Galli and Margaret
Armstrong.
In this case study the in situ diamond grade and revenue were relatively well constrained. However, the orebody geometry, and the impact that its complexity would have on mining rate and recovery, were not.
The case study demonstrates how it is possible to acquire data from the orebody to
spatially simulate the vertical displacement, the thickness of the dyke and the grade
using conventional geostatistical techniques. These characteristics of the orebody
are used in a mining simulation to generate a variable mining rate and feed
characteristics for the process plant. The process plant model was configured to
respond to the feed ore characteristics and predict throughput, comminution and
diamond recovery.
Using the novel approach developed during this research it was possible to model
and simulate the key aspects of this project including:
• Simulating mining that adheres to the planned sequence of mining with the
rate of mining each block being determined by the interaction of the
morphology of the dyke and mining equipment used;
The simulated production outputs were used as inputs into a financial model to calculate the value of each of the alternatives considered, defined by the Net Present Value. The benefit of using a “Master Synthetic Orebody model”, or so-called “V-Bod” model, to represent reality is that it is possible to compare, and then track, the differences between processing the Master model (simulated reality at small scale) and processing the resource models estimated from samples taken from the Master model.
Project Background
The orebody is a kimberlite dyke that dips at 15 degrees towards the north east and
is on average 2.8m thick. Most of the kimberlite dyke is hosted within an Archean
multiphase suite of intrusive granitoids, with a minor portion of the kimberlite dyke
emplaced within overlying metavolcanics and metasediments of the Archean
greenstone belts.
The geometry of the dyke is variable. On the regional scale (100’s of metres) the Snap Lake dyke appears to be a ‘simple’ continuous, gently dipping sheet, although three areas of offset have been identified by surface seismic imaging (McBean et al., 2001). At a more local scale (10–100 m), orientation changes, splits and large offsets occur.
This variability in the geometry of the deposit has several implications for grade estimation, mine design and plant operation.
The mine plan was based on kriged estimates of the grades, volumes and other variables and is assumed, on average, to provide an unbiased prediction of the annual production. In the derivation of the plan, no allowance was made for the impact of short-scale variability (in both the spatial and temporal sense) on the rate and efficiency (ore recovery, ore loss, diamond recovery) of the mining and treatment processes.
Sampling data used in any evaluation play a fundamental part in producing estimates that aim to reflect an unknowable reality. Although the inclusion of more samples reduces the uncertainty associated with both the mean and variance of resource estimates, it does not alter the true "natural" variability within the deposit. It does, however, alter our quantification of it, which can only be estimated from the results of the samples.
The limitations of designing a sampling campaign for multiple variables have been
discussed by Kleingeld and Nicholas (2004). In this case study three orebody
variables were considered in the evaluation model:
Synthetic core drilling was used to delineate geological variability on three different grid densities: 50 m by 50 m, 25 m by 25 m and 10 m by 10 m, creating scenarios one, two and three. These were designed to sample within the expected variogram
ranges of 75m for thickness and 50m for v1. Sampling campaigns set beyond the
range would return the variable’s average value and would not detect the short-
scale variability. A 50m by 50m drilling grid was used to sample for grade, using
large diameter drilling (LDD). Grade was not deemed to have any significant
variability between scenarios and therefore, a single sampling campaign was
deemed sufficient.
Table 34 describes the design of the simulated sampling campaigns on the virtual orebody (V-bod); sampling occurred at point support and the simulation grid nodes were 4 m by 4 m in dimension.
Table 34: Summary of the characteristics of the three sampling campaigns and of that used to define the ‘Virtual Orebody’ (V-bod).
The graphic base maps of the V-bod and each sampling campaign are shown in
Figure 58 (warmer colours represent higher values while darker colours are low
values).
Figure 58: Comparison of the thickness and v1 base maps for the kriged and simulated
outputs of each scenario with that of the V-bod. Grade was held constant for each scenario.
Table 35: The descriptive statistics for the V-bod and each scenario for grade, dyke thickness
and the geometrical variability of the dyke surface (v1).
The block model data for each realisation (the kriged model is one realisation, and
the 25 simulated realisations constituted the remainder) were exported from the
Isatis software into a text file containing the information associated with each block
in individual rows. These files can then be processed individually through the
system. The model is built in an Excel VBA framework, and consists of several
modules that represent the functions for mining, stockpiling, crushing and dense
media separation processes.
The mine simulation presented an opportunity to evaluate the impact of the dyke
geometry on mining selectivity and rate.
The proposed mining method utilised a mixed fleet of trackless vehicles to mine in
a stope and pillar configuration. The designed mine plan targeted an average
extraction rate of 75 percent of in situ kimberlite at an average daily delivery of 3150 tonnes of kimberlite with minimal dilution. Each mining panel, approximately 250 m by 250 m, was mined in a sequence that required the establishment of rim tunnels, then stope tunnels and finally excavations to facilitate stope slashing or drifting.
The model operated at the scale of the smallest mining unit (SMU), with each block having planar dimensions of 4 m by 4 m, but with variable heights. The rate, and hence time, to mine each SMU was determined at run time as a function of the machine used to carry out the mining and the height that was mined. The mined height (and hence mined volume, tonnage, dilution and grade) was determined by the tunnel type; a set of mining height constraints for each tunnel type was selected as follows:
• stope blocks minimum 1.0 m (height), for small machines and 1.5m (height) for
large machines.
Figure 59: A diagram depicting the implementation of the mining constraint logic.
Processing parameters
The system has several constraints that can be modified. A set of constraints that is
used in processing was referred to in this case as a scenario. Note that in later
models, and publications, this term has been specifically reserved for future
operating contexts.
When processing the orebody models through the value chain the variability of the
orebody geometry impacts on the mining rate and the relative proportion of
kimberlite and waste rock mined. This mix of kimberlite and waste contained in the
material processed impacts on the plant throughput rate, the degree of
comminution and the recovery efficiency of the DMS.
To determine the relationship between the dilution and process efficiency, data were acquired from the sample plant that had been treating samples from the deposit. A process simulation model was developed using Limn software in collaboration with Dr. K. Petersen (Petersen, 2005). The simulation represented the planned full-scale process plant, using the sample plant data to calibrate both expected size reduction and recovery. The model was run for several different combinations of granitoid waste, metavolcanic waste and kimberlite. The resulting data were used to investigate the relationship between the proportion of dilution and comminution and DMS performance. It was observed that the comminution and DMS responses could be simplified: the range of responses for differing proportions of waste and kimberlite versus diamond recovery was plotted and a least squares linear model was fitted to the simulation outputs.
[Figure: % Diamond Liberation (50–95%) plotted against % Kimberlite in Headfeed (30–100%).]
Once the relationship was established it was possible to use the standard error of the regression line to determine the uncertainty in the parameters of the fit and use this to seed a simulation of the values for these parameters (Figure 60). The parameters, in this case M and C for the regression line, can be randomly drawn from a calibrated normal distribution to generate a set that can be used in the value chain model. Simulated parameters that produce a line that falls outside the limits found at other similar operations are rejected from the set of feasible parameters. In this way it is possible to introduce a range of uncertainty that is constrained by prior observation and gives a realistic model of the variability in recovery that the changing ore properties will drive.
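A minimal sketch of this rejection-sampling scheme is shown below. The fitted slope and intercept, their standard errors, and the plausibility limits are all assumed values; in practice they would come from the least squares fit and from benchmarks at comparable operations.

```python
# Sketch of seeding the value chain model with uncertain regression
# parameters: slope M and intercept C for % liberation vs % kimberlite
# in head feed are drawn from normal distributions centred on the fit,
# and implausible lines are rejected. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(42)

M_FIT, M_SE = 0.45, 0.04    # fitted slope and its standard error (assumed)
C_FIT, C_SE = 45.0, 3.0     # fitted intercept and its standard error

def plausible(m, c):
    """Reject lines outside limits seen at comparable operations (assumed)."""
    lib_at_50 = m * 50 + c     # predicted liberation at 50 % kimberlite
    lib_at_100 = m * 100 + c   # predicted liberation at pure kimberlite
    return 50.0 <= lib_at_50 <= 80.0 and 75.0 <= lib_at_100 <= 98.0

accepted = []
while len(accepted) < 25:      # one parameter pair per orebody realisation
    m = rng.normal(M_FIT, M_SE)
    c = rng.normal(C_FIT, C_SE)
    if plausible(m, c):
        accepted.append((m, c))

print(accepted[:3])
```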
The use of these simplified functions facilitates simulation of the entire life of mine at a block scale, reflecting the interactions of the ore characteristics with the process. This would not currently be possible in realistic timeframes using a full population balance model.
The model can be used to demonstrate how changes in dyke footwall location will impact on both development and mining rates. Where changes in footwall location resulted in mining roadway slopes too steep to be safely used by the mining equipment, the mining teams would have to retreat and redevelop ramps and roadways. The additional activity would reduce production output. The model also shows how it is possible to evaluate the impact of this constraint in areas of high dyke variability, to which small machines with less tolerance for steep roadways are typically allocated. This unexpected result suggests that more cover drilling may be required to inform roadway design and machine allocation planning.
The model can be used to show how the impact of reduced mining rates can, to some extent, be managed by additional advance development to create the flexibility to mine ore from several operating areas simultaneously. It can be shown that adopting this strategy will increase the dilution of ore fed to the process plant while setting up panels to mine, but the average dilution in the later years of operation can be brought in line with initial expectations.
Establishing a relationship between the rock mix fed to the plant, the comminution, liberation and hence recovery, required a combination of sample plant processing and process modelling to predict the nature of the relationship. The assumption that the impact of diluting waste on comminution observed at sample plant scale (samples comprising ~80 tonnes each) will also be observed at production scale appears reasonable, but will have to be verified during commissioning of the full-scale plant. This relationship does, however, highlight that evaluation of the impact of dilution without a block-by-block evaluation model (as opposed, for instance, to a sensitivity analysis in the financial model) will most likely underestimate the impact that dilution will have on the operation.
This model has assumed that the 3000-tonne storage used to buffer surges between the mine and the plant is a fixed maximum. The model can be used to demonstrate how increasing this capacity constraint will have a material impact on the value of the operation. The flexibility that additional surge capacity will provide can be valued, and hence the capital allocated to this part of the operation can be justified. If a smoothed model of the orebody, or a smoothed feed rate, were assumed, the value of storage would be materially underestimated.
The value chain modelling approach can accommodate not only unsystematic risk
(risks associated with the specifics of the envisaged project) but can also be used
simultaneously to evaluate the impact of systematic risk, i.e. risks that are
independent of the project configuration, such as exchange rate and diamond sales
price. In this holistic approach it is possible to contrast and compare the relative
merits of technical risk mitigation and mitigating project risk through various
financial engineering measures.
8.1 Introduction
This case study demonstrates an integrated value chain model that comprises
multiple orebody realisations with several alternative mining and treatment
process options. The value chain model is used to simulate the mine operation and
generate daily production summaries. This output is used to quantify the impact
that the rock characteristics have on the range and variability of the metallurgical
recovery factor.
The main source of uncertainty in this project was the impact that hard and dense
kimberlite breccias would have on diamond recovery and process performance. The
hardness of the kimberlite was expected to compromise liberation, and the zones of
higher density were expected to compromise the DMS operation. The yields
expected in some parts of the orebody were higher than average, and if this
coincided with poor comminution and poor separation the throughput would be
severely limited by the capacity in the final recovery process. In addition to the
orebody grade, diamond assortment data had also been collected. These data were
used to fit a model to the revenue distribution within each size class that results
from the mix of diamond colour, clarity and shape within each size class. This
assortment model was used to quantify the influence of the range of the diamond
selling price on project cashflows.
The integrated evaluation model can test the impact of uncertainty related to the
orebody, the engineering design decisions (e.g. plant capacity) and future outcomes
(diamond selling price) and then combine the outputs in several different ways. To
clarify the use and operation of the model the terminology used is given in Table 36.
Term (count)                 Meaning
Realisations (1, 2, …, 25)   Spatial realisations of the orebody
Mine Plans (1, 2)            Several mine plans
Iterations (1–5)             Alternative versions of process performance
Scenarios (1–3)              Versions of future outcomes
Table 36: Schematic depiction of terminology used to describe aspects of the Integrated
Evaluation Model.
Alternative – this term refers to the combination of inputs, processes and outputs that are considered in the model. At this level it is possible to compare and contrast very different approaches to the operating strategy for a mining project.
Realisation – refers to one image of the orebody that honours the sampling data. Typically, all realisations are processed through the model at a block-by-block scale.
Mine plans – typically one to five mine plans may be developed, based on different assumptions including ground conditions, capacity, fleet size, angle of repose, etc.
Project background
The orebody is a kimberlite pipe located in Botswana. The tri-lobate pipe-like orebody has intruded through the basement granites and is variably contaminated by diluting basalt. This dilution is variable, with maximum measured values of 45% at 900 m amsl in the north lobe and 25% in core retrieved from the south lobe at 800 m amsl.
A number of mitigating steps were taken during the design phase to provide
sufficient flexibility in the process plant to cope with anticipated comminution and
yield challenges. Some of these design features included:
• Stockpile capacity ahead of the process plant, as well as a 150t surge bin
ahead of the recovery plant which would allow mining to change zones
should high yield be experienced;
• Optimisation of cyclone configuration to allow for higher spigot loadings
without compromising efficiency;
• Inclusion of capability to adapt the HPGR crushing circuit to cope with harder
material;
• Elimination of recycle of final concentrate by including grease belts and mills
in the final recovery flowsheet; and
• Creation of flexibility in the layout of the final recovery to allow for inclusion
of additional units as technology and needs change - e.g. inclusion of rare
earth drum separators to deal with increased DMS yield.
Each of these risk mitigation features was designed to deal with the advent of one of the risk factors, and almost all the existing models were based on annual averages. This meant that the existing project evaluation methods were unable to quantify how well the project would respond to a combined, synchronous 'onslaught' of all the modelled uncertainty.
After consultation with the project team an integrated model was developed to
quantify the impact of various sources of uncertainty on project performance. The
next section describes the components of this model.
The capital for this project was limited and hence the project was based on a two-
phase operation, with the first phase of low throughput lasting approximately three
years, followed by a substantial expansion to increase throughput in phase two. This
would allow the project to acquire more information on the grade of the orebody
and to use the initial commissioning to gain greater insights into rock characteristics
and their impact on liberation and DMS separation. This approach to capital
rationing required a model that would be able to be adapted over time to reflect the
planned increase in mining rate and plant throughput.
The second phase targeted the southern lobe and was designed to extend down to
390m. This phase was based on annual surfaces that used tag blocks that were to be
mined in any period. A simplified sequence was derived based on reasonable
Figure 61: A view of the three lobes of the deposit looking from the west to the east (South
Lobe in Dark Blue) adapted after Campbell (2009).
Unit                  Phase 1   Phase 2
Secondary Crushing       1         2
Fines DMS                1         3
Coarse DMS               1         2
Reconcentration DMS      1         2
Pneumatic drier          1         2
X-ray machines           4         8
Table 37: Summary of major capital items planned for each phase.
The design criteria for the sizing of these units were based on extensive studies of core samples that were treated to determine the yield from dense media separation that could be expected from various depths in the orebody (Table 38).
0-70m 4% 2%
Although these average yields were problematic, the design team felt that it would be uneconomic to design the plant in the early phase to cope with this high level of yield. It was also important to consider that this yield was based on small samples, and the project required an understanding of the sequence of high yielding blocks to evaluate the effect that stockpiling and blending might have on ameliorating the negative impacts of designing for high instantaneous yield.
For every block in the mine plan, the grade and average stone size was used to
calculate the number of stones in each block. The mass of each stone was then
determined by drawing at random from a modelled cumulative stone size
distribution. In each mass class a cumulative distribution of the revenue in that size
class was used to draw a value at random for each stone. In this way the value for
the diamonds in each block could be cumulated to derive the realised value that
could be expressed as either the $/ct for diamonds in each block, or the $/tonne of
that block (Figure 63).
[Figure: simulated stone size (cts/stn, left axis, 0–4) and average $/ct (right axis, 0–150) plotted against stone number (0–20 000).]
Figure 63: Plot showing a simulation of the size of 20 000 diamonds drawn.
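The stone-by-stone draw described above can be sketched as follows. The lognormal size model, the price–size relationship and all parameter values are illustrative assumptions, not the project's assortment model.

```python
# Sketch of the block valuation draw: stone count from grade and mean
# stone size, stone masses from a lognormal size distribution, and a
# $/ct drawn per stone. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def value_block(tonnes, grade_cpht, mean_stone_cts, sigma_log=0.9):
    carats = tonnes * grade_cpht / 100.0             # cpht = cts per 100 t
    n_stones = max(1, int(round(carats / mean_stone_cts)))
    # lognormal stone masses, rescaled to honour the block carats
    masses = rng.lognormal(np.log(mean_stone_cts), sigma_log, n_stones)
    masses *= carats / masses.sum()
    # assumed price model: $/ct rises with stone size, with scatter
    usd_per_ct = 60.0 * masses ** 0.4 * rng.lognormal(0.0, 0.3, n_stones)
    revenue = float((masses * usd_per_ct).sum())
    return revenue / carats, revenue / tonnes        # block $/ct, $/tonne

print(value_block(tonnes=10_000, grade_cpht=25.0, mean_stone_cts=0.15))
```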
Costs                 Unit              Base    Annual escalation factor
Ore Mining Cost       US$/tonne          3.02   1.1
Waste Mining Cost     US$/tonne waste    2.97   1.1
Ore Processing Cost   US$/tonne          8.62   1
Ore Treatment Cost    US$/tonne         27.32   1
Carat Recovery Cost   US$/ct           188.69   1
Table 39: List of model settings used for the financial model.
Figure 64: Schematic of the Integrated Evaluation Model Architecture (adapted after
Burbeck, 1992) .
In this architecture the interaction of the user with the application is separated from
the control of the model and access to the data by a so-called "View" object. The
"controller" object interprets user instructions that are sent from the view to the
controller. These instructions are then routed to the "Model" object. This model object is the heart of the simulation and is capable of sending multiple structured requests to the database, as well as sending outputs to the database. When the model has completed an instruction, it notifies the "controller" object. Depending on the logic embedded in the controller, it may then re-manipulate the "model", send an update to the view, or do both.
The benefits of this architecture include the following:
• The view object can be readily changed and adapted as the application evolves and acquires additional functionality. It is also possible to obscure functionality from the user until it is ready to be included;
• Ability to separate the development of the individual components, so that
each component can be developed and tested separately. At the point where
the model is to be integrated, the logic that controls the interaction between
the components can be designed, tested and implemented. This sequence
reduces the complexity of detecting and eliminating errors from the system;
and
• Development of a parallel processing architecture: in the IEM case, each combination of orebody and value chain can be separated and run simultaneously by the "model object", which is far quicker than running each of these sequentially.
The "view" was developed in Microsoft Excel as it provided a familiar and readily
customisable user interface. The "controller" was written in Visual Basic for
Applications in Excel. Several commercial applications were used to support the
sub-models that underpin the IEM framework e.g. Isatis for spatial simulation,
Statistica for generation of stone distributions, Datamine for mine plan sequencing.
Microsoft SQL Server Express was used as a database for storing input data, processing parameters and result data.
The orebody models can be quite large and exist in a multitude of formats in the
software that is used to generate them. They were thus exported from their source
software and stored as flat text files. These files were in turn read into the database
with a table for each realisation. Likewise, the mine plans were contained in
separate text files, with each block having been assigned an ideal year for extraction,
which was brought into the database. Lists of blocks to be processed in each year
were generated by executing a dynamic query that was triggered by the process
model. The process model used the simulated block properties to determine the
treatment time and the liberation that would be achieved on a block-by-block basis.
In each year the treatment ceased when the allocated hours available in a year had
been consumed. The production outputs were written back to the database and the
next year's list of available blocks were retrieved. The production consumption data
and diamonds produced were carried forward to the financial model to determine
the costs and revenue in each year.
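The yearly scheduling loop lends itself to a compact sketch. The table and column names, the hardness-based treatment rate and the capacity figure below are assumptions standing in for the actual IEM database schema and process model.

```python
# Sketch of the yearly loop: blocks for the year are fetched, processed
# until the year's treatment hours are consumed, and results written back.
import sqlite3   # stand-in for the SQL Server Express database in the IEM

def run_year(conn, year, hours_available, base_rate_tph=400.0):
    cur = conn.execute(
        "SELECT block_id, tonnes, hardness FROM mine_plan "
        "WHERE year = ? ORDER BY sequence", (year,))
    hours_used = 0.0
    for block_id, tonnes, hardness in cur.fetchall():
        rate = base_rate_tph / hardness        # harder ore treats slower
        hours = tonnes / rate
        if hours_used + hours > hours_available:
            break                              # year's capacity consumed
        hours_used += hours
        conn.execute(
            "INSERT INTO production (year, block_id, tonnes) VALUES (?,?,?)",
            (year, block_id, tonnes))
    conn.commit()
    return hours_used

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mine_plan (block_id INT, tonnes REAL, "
             "hardness REAL, year INT, sequence INT)")
conn.execute("CREATE TABLE production (year INT, block_id INT, tonnes REAL)")
conn.execute("INSERT INTO mine_plan VALUES (1, 50000, 1.2, 2010, 1)")
print(run_year(conn, 2010, hours_available=6000))
```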
The use of an integrated system meant that post initial configuration and testing, a
standard i7 processor desktop computer was able to process a set of five
alternatives (each alternative consisting of 25 realisations, four iterations and three
scenarios) in just over eight hours.
In this case study the block models contained more than just grade information.
Each block also contained variables that could be used to determine the
comminution, liberation and DMS efficiency and the results of spatial simulations of
diamond size distribution and value within each size class. The project also required
differing treatment rates to be achieved over the life of the operation.
The aim of this case study was specifically to identify processing challenges and how
these could be addressed through pre-emptive risk mitigation that would not
compromise the capital constraints placed on the project.
The summary statistics of the simulated variables in the mined blocks are presented
in Table 40.
The yield value has the highest coefficient of variation followed by the grade and
then the density. This suggests that the uncertainty in the yield is higher than that
for grade. The apparent low range in density values is partly a result of inserting a
background value of 2.90 for blocks to be estimated that were further away from
samples than the range of the variogram.
Throughput
The throughput model planned for a mined capacity of 600 kt per quarter and a process plant (mill) capacity of 525 kt per quarter. A summary of the tonnage treated is given in .
In the initial period the ramp up for mining and plant increases markedly from year
one to year two. The plant capacity is satisfied from the middle of the project
onwards. During the ramp-up period however the cash flows derived from
production fall short of the plan. This is primarily an impact of the slower rates
achieved due to the processing of harder ore.
Table 41: Summary Statistics of $/tonne depleted by lobe based on simulated stone
values.
The highest $/tonne value, on a mining block scale, was 87.83 $/tonne in the North
lobe which also showed the highest coefficient of variation. This was to a large
degree driven by the coarser stone size distribution in this lobe and the contribution
of larger stones in the assortment.
Project Valuation
The initial modelling identified that using a 10% discounted cashflow model the Net
Present Value(NPV) ranged from BWP -262 million (P10) to BWP +1 million(P90)
with a P50 NPV of BWP -156 ( Figure 66) .
The model identified that the cashflows in the plan for 2010 and 2012 had the
highest uncertainty, and that the project team needed to review mining and process
flexibility in these periods. A shortfall of the order of 10% of tonnes treated in this
period of the project would translate to close to a 10% change in project value.
Figure 66: Summary plot of cumulative discounted cashflow for the Ak project; P50 case shown in green, P80 and P20 cases shown in red, individual cases shown in grey.
The traditional approaches to deriving project value using single smoothed orebody
models are likely to under-estimate the impact of orebody variability on project
value. The integrated modelling approach facilitates a far richer exploration of the
impact that orebody uncertainty will have on project performance. The same
potentially biased estimation of recovery factors can also result from the use of
single averaged assumptions about process performance.
Figure 67: Grade size plot showing the impact of applying a strict size cut-off to a total
content curve.
In this particular case a combination of bulk sample and microdiamond results was used. This approach, applied to this project, gives a total content carat recovery ranging from 0.84 at a 1 mm bottom cut-off size down to 0.51 at 2 mm. Using the average grade of the resource and its average distribution, this would translate to a revenue recovery range of just over 3 $/tonne. This clearly underestimates the variability that is likely to be experienced during the life of the operation, as shown using the integrated model.
Using this approach it is possible to define the recovery factor in several alternative forms, working back from the bulk sample recovery factor (Equation 75) to the intended main treatment plant recovery factor (Equation 76). The ratio of the intended plant recovery factor to the bulk sample plant recovery factor is then used to determine the resource to reserve factor that is used to account for metallurgical efficiency (Equation 77).
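Equations 75 to 77 are not reproduced in this extract; a plausible formalisation, consistent with the description above but using assumed symbols, is:

\[
R_{bulk} = \frac{\text{carats recovered by the bulk sample plant}}{\text{carats contained in the bulk sample}} \qquad \text{Equation 75}
\]
\[
R_{plant} = \frac{\text{carats expected from the main treatment plant}}{\text{carats contained in the plant feed}} \qquad \text{Equation 76}
\]
\[
f_{met} = \frac{R_{plant}}{R_{bulk}} \qquad \text{Equation 77}
\]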
Using this approach with average bulk sample plant recovery as the input, the range
of the metallurgical (resource to reserve) recovery factor was calculated to range
from 0.97 to 1.08. When using the integrated evaluation model approach, it became
apparent that the variability due to processing constraints was far higher. The
variability produced by the IEM correctly reflects the combined impact of the
variable grade of the ore being treated and the variable efficiency of the process that
results from the changing characteristics of the rock treated. This approach also
allows a much finer temporal scale evaluation of variability in throughput, total
carat production and hence project value. Using the IEM approach, it is also possible
to assign a localised ore recovery factor and in doing so identify where in the
orebody the recovery is likely to vary most, and why it will vary.
The rock characteristics developed here were mainly content variables; however, their impact on the liberation of dense species within the ore also affects throughput and recovery quality.
The use of a lognormal stone characteristic simulation allowed the model to incorporate the uncertainty of diamond assortment in the expected recovered revenue. Even with a small number of parcels of stones it is possible to demonstrate a range of expected revenue that extends several orders of magnitude beyond that achieved with a simple static cut-off analysis.
This case study has demonstrated that it is important to consider variable small-
scale (mining block scale) impacts of rock properties, rather than the average
impact of average kimberlite characteristics, to determine correctly the range of
metallurgical recovery that will be experienced over the life of the operation. This
requires multiple images of the orebody to be processed at a block scale through a dynamic integrated value chain model. This method is feasible and relatively straightforward to implement.
The final chapter concludes with a summary of the developments that this research
has provided in the derivation and use of metallurgical recovery factors for project
evaluation.
9 DISCUSSION
9.1 Introduction
This thesis describes current practices used to derive and evaluate metallurgical recovery factors for kimberlitic diamond mines. The recovery factors are used to account for the discrepancy between the in situ values of the variables that drive project value (diamond stone concentration (stones/ht), diamond grade (cts/ht), diamond size (cts/stone), diamond value ($/ct) and ore value ($/tonne)) and the expected 'recovered value' of these variables. The literature reviewed shows that predictions of recovery based on assumptions of average rock characteristics, acquired from a few spatially dispersed samples, can be biased.
The application and benefits of each of these improvements are briefly described
below.
The building of spatial models of orebody characteristics that can be used to derive
recovery factors requires data with sufficient spatial intensity and appropriate
support to characterise their spatial behaviour. Historically, sampling for rock
characteristic variables that can be used to predict recovery has often been limited
to a few large 'representative' samples that do not capture the range or the spatial
variability of the characteristics of interest.
There is a requirement for a taxonomy to classify the numerous tests and data sources used to quantify the variables used in recovery factor evaluation. An ideal taxonomic framework will guide the selection of samples and the tests to which they are subjected, so that the data sets generated can be used effectively. The suggested taxonomy provides a useful framework for the design and implementation of the approaches described here. Variables used in recovery factor evaluation range from diamond content variables to characteristics that describe the primary properties and responses of treated rocks to some energy input (Figure 68). The use of two such approaches (Coward et al., 2003; Keeney and Walters, 2008) is described to demonstrate how they can be used to balance the rationalisation of sampling with the need for sufficient data of appropriate support to facilitate spatial estimation and simulation of the characteristics of interest.
Figure 68: A schematic representing the primary response framework for geometallurgical
variables.
A taxonomy developed by Keeney and Walters (2008) suggests there are four levels of data (Figure 69), with level one being geologically focussed and spatially representative, and level four data being typically acquired from large composite samples that have a metallurgical focus. Level 1, 2 and 3 data are derived from drill core, and in some cases from large diameter drilling (LDD) that produces rock chips. On operating mines it is possible to gather additional level 1–3 data from in situ samples taken from the pit, or from post-production blasting and during mineral processing.
Figure 69: A data typology adapted after Keeney and Walters (2008).
Figure 70: A landscape for sample type classification in terms of both spatial continuity and
primary response dimensions.
In Figure 70, a landscape for classification of tests is presented - the scales on the X
and Y axes are broken down into two regions - giving four quadrants into which
each sample type can be placed.
Spatial coverage – The data used for geometallurgical modelling should be spatially distributed and on an appropriate support to permit reasonable estimation of the chosen variables, including domaining and selection of estimation techniques.
Primary response variable classification – Rock characteristic variables can be usefully classified as either primary attributes, which are closely associated with in situ rock characteristics, or response variables, which are closely related to the response of the process to rock characteristics. The relationship between primary and response variables is also a function of the process used to generate the response.
Sample rationalisation – Sample selection and analysis are rationalised to meet the demands of being both adequately representative of geology and of processing responses. For example, fewer samples will be required for early exploration of a kimberlite to inform the decision to continue sampling than the number of samples required to develop a model that can be used for resource classification.
Co-location – As far as practically possible, samples chosen for analysis should be co-located in space and at a scale that is appropriate and relevant to the model that is being built and the intended use of that model. Isotopic data, i.e. having as many of the same measures on samples at all sampled locations, have value in their potential to be used for the development of proxy measures.
Calibration and new technology – Where response variable proxies are being proposed to predict primary rock characteristic variables, calibration of the relationship between the proxy variable and primary characteristics must be undertaken at an appropriate scale. This requirement is applicable to both existing and new technology.
Business case – An appropriate business case should be made for any sampling campaign with respect to the tactical and/or strategic model that is to be designed, created and used to support decision making.
Generic vs specific – Generic guidelines are considered useful for sampling problem framing and sample-to-model planning and development at a strategic level. It is important that site-, domain- and variable-specific sampling experiments are designed, given the heterogeneity of kimberlitic deposits.
Objectives and measurements – The objectives of each sampling campaign should be clearly articulated, and measurement criteria defined to assess the success or failure of each campaign.
Principles of correct sampling – Correct sampling practices should lead to unbiased results. Design and execution of any sampling experiment must consider good sampling science (Gy, 2004) and be designed to minimise sampling bias.
Building on existing data – Where possible, new sampling campaigns must allow existing data to remain valid so that the database grows organically and no repeat sampling is required; e.g., consistent use of a set grind size for slime content analysis.
Table 42: A list of guiding principles for rock characteristic sampling.
These guiding principles provide a useful framework to plan and execute a sampling
programme that will produce a data set of sufficient quality for the purposes of
spatially estimating rock characteristics that can be used to evaluate metallurgical
recovery factors.
Prior work on increasing spatial coverage of rock characteristic data (Dowd, 1997)
has shown that it is possible to infer rock characteristics at unsampled locations
using a combination of direct and indirect data. This research has extended this
concept and presented an experiment to demonstrate how a similar approach can
be used in Kimberlitic rocks. The experiment has shown how it is possible to acquire
data sets from a combination of in-hole measurements, measurements on core and
regional geophysical measurements. These data make it possible to improve the
qualitative understanding of geological domain boundaries and create a framework
for finding proxy measurements for difficult and costly destructive rock tests.
The collection of the data requires attention to be paid to descriptions of the core and careful co-location of the measurements made down the hole and on the core. It was found in this specific data set (i.e. a southern African volcaniclastic kimberlite and kimberlite breccia) that averaging to approximately 70 cm increased correlations between various measurements. This result suggests that the scale of the geophysical response and errors in location are minimised at this level of support for the rock types tested here. In applications to other deposits a similar approach will be required to improve the correlations between the response measures made. The methodology of finding the minimum point of the summed within-sample variance and between-sample variance can be applied to several variable pairs and could produce different ideal support sizes.
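The support-optimisation step can be illustrated with synthetic data: two co-located noisy measurements of the same underlying signal are composited over increasing support lengths and the correlation between them is tracked. This sketch only shows the noise-averaging side of the trade-off; in the thesis data the optimum (about 70 cm) emerges where the summed within- and between-sample variance is minimised.

```python
# Sketch of the support-size search: composite two co-located noisy logs
# over increasing lengths and track their correlation. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_raw = 1000                                      # 10 cm raw intervals
signal = np.cumsum(rng.normal(0, 0.1, n_raw))     # underlying rock property
log_a = signal + rng.normal(0, 0.8, n_raw)        # down-hole probe
log_b = signal + rng.normal(0, 0.8, n_raw)        # measurement on core

def composite(x, n):
    """Average consecutive samples into composites of n raw intervals."""
    m = x.size // n
    return x[:m * n].reshape(m, n).mean(axis=1)

for support_cm in (10, 30, 70, 150, 300):
    n = support_cm // 10
    r = np.corrcoef(composite(log_a, n), composite(log_b, n))[0, 1]
    print(f"support {support_cm:4d} cm: correlation {r:.3f}")
```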
Work described here has identified that scale-up is not complex where variables exhibit linear relationships and can be considered to have additive behaviour. The approaches required for variables that exhibit non-linear relationships, of which there are many in this field (e.g., bond work index, abrasion index), are likely to be more considered, and potentially far more complex (Carrasco, 2014). A pathway for developing suitable approaches for spatial modelling of a variety of rock characteristic variables is discussed in chapter 6. This approach suggests that the parameters used to generate the spatial model are not independent of the use for which the model is intended. For example, when generating a kriged grade estimate, the modeller may choose a set of kriging parameters better suited to providing either a global or a local estimate.
The linking of populated multivariate models of the orebody with process models
that respond at an appropriate scale opens several avenues to derive and evaluate
the Metallurgical Recovery Factor for Diamonds recovered from Kimberlitic
Deposits.
In early stage projects where there is little information on the orebody, and still
significant flexibility in design criteria selection, it is still useful to have a
quantitative model that links the orebody characteristics to the expected recovered
diamond population. Although the expected value for total recovered diamond
mass, size distribution and revenue distribution are important, it is perhaps more
important to be able to develop and apply a methodology to quantify the ranges of
these values.
The published literature has demonstrated that it is common practice to use the
average rock properties either at a global scale or at a domain scale to predict the
fracture and process rates of rocks through a given process (Farrow, 2019). This
limited approach is, in some cases, justified by the limited availability of rock
characteristic data and the inability of traditional approaches to incorporate this
information effectively in the derivation of recovery factors.
System theory (Boulding, 1956) suggests that dynamic effects that arise from the interaction of constraints with variability will not be correctly reflected in models that are based on long-term averages. The same effect is referred to in the theory of constraints (Goldratt, 1990): the throughput of a system of several interdependent processes will not equal the throughput calculated from their average rates. This is also known as the 'flaw of averages' (Savage, 2008), and although it seems that these biasing effects would be explicitly addressed in mining project evaluations, it is evident that several projects are evaluated on data that are summarised on an annual scale for inclusion in financial models. The biases that arise from this shortcoming are difficult to detect and may be very complex. It is for these reasons that a highly granular, interdependent system model of the mining and treatment process should be used to evaluate the expected range and variability of performance of the diamond recovery processes.
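The 'flaw of averages' in a constrained system is easy to demonstrate numerically: the sketch below, with arbitrary rates, shows that the realised throughput of two linked steps with variable daily rates falls short of the throughput computed from their averages.

```python
# A small illustration of the 'flaw of averages': the long-run throughput
# of two linked process steps with variable rates is below the throughput
# computed from their average rates. All rates are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
mine_rate = rng.normal(3150, 600, n).clip(min=0)   # t/day delivered
plant_cap = rng.normal(3150, 400, n).clip(min=0)   # t/day treatable

realised = np.minimum(mine_rate, plant_cap)        # constrained system
print("throughput from averages:", min(mine_rate.mean(), plant_cap.mean()))
print("average realised throughput:", realised.mean())   # noticeably lower
```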
9.5 Conclusion
This research aimed to investigate ways to identify and measure the rock
characteristics that have an impact on the metallurgical recovery factor. A case
study has shown it is possible to sample for several response characteristics and
then estimate these effectively using a so-called proxy framework to augment
response property data with non-destructive measurements in the hole and on core.
These data also facilitate the generation of several spatial simulations of these
characteristics. The development of a way to process these multivariate models of
the orebody at a fine scale (block scale resolution) enables the assessment of the
impact of these rock characteristics on diamond recovery on widely varying time
scales. The output data can be analysed in many varied ways to quantify the mean
and the range of recovery factors for diamonds recovered from kimberlitic deposits.
Data processing speed has increased exponentially for several decades (Moore,
1965), although effective access to this increasing processing power has, to some
extent, been limited. Recent developments in the arbitraging of rates for excess
server capacity, via various service providers including Amazon Web Services and
Oracle, suggest that processing large and previously unwieldy data sets is becoming
affordable to mid-tier mining companies. This suggests that the cost of providing
platforms for processing large multivariate orebody models will reduce and that
their use in the mining industry will become more common. These developments
mean that the methodology presented here is not only valid but can now be
practically implemented at a cost of a similar scale to those traditionally associated
with resource estimation and project evaluation.
A large non-technical challenge to the adoption of this methodology has been the
ability to convey the outputs of the range analysis in a way that is embraced by
senior stakeholders in the mining industry. Although the concept of issuing
"guidance ranges" for critical performance targets is becoming more widely adopted
(e.g. Newmont annual report, 2015), its use in corporate decision-making requires
more work, although it is gaining acceptability. Cyclicity, decreasing margins, lower
average grades and increasing competition for fewer resources are hallmarks of the
mining industry. The development of methods that can validly assess, and respond
to, the impacts of uncertainty and variability on project value will confer a
competitive advantage on the companies that embrace them. The methodologies
presented here, when used as part of a strategy for improved decision-making, will
add substantial value to the mining industry.
10 SUMMARY AND CONCLUSIONS
10.1 Introduction
The specific limitations of traditional recovery factor evaluation that have been
addressed in this thesis are those associated with the difficulty of incorporating
variable and uncertain rock properties in the derivation of the metallurgical
recovery factors.
Metallurgical recovery factors are required to account for the difference between
the estimated in situ diamond content and expected diamond recovery. These
factors are used to modify the stone concentration, the mass concentration and
diamond size frequency so that the expected recovered $/tonne can be predicted.
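As a purely illustrative sketch (all concentrations, recovery factors and prices below are hypothetical and are not results from this research), the role of these factors in predicting recovered revenue per tonne can be expressed as a size-class-weighted sum:

```python
# Hypothetical in situ size frequency data, by size class.
mean_stone_ct = [0.1, 0.5, 1.0, 2.0]          # mean stone mass per class (carats)
stones_per_100t = [120.0, 40.0, 10.0, 2.0]    # in situ stone concentration
recovery = [0.55, 0.80, 0.92, 0.95]           # metallurgical recovery factor per class
usd_per_ct = [80.0, 250.0, 900.0, 2500.0]     # average diamond price per class

# Recovered revenue per tonne = sum over classes of
#   (recovered stones per tonne) x (mean stone mass) x (price per carat).
usd_per_tonne = sum(
    n / 100.0 * r * m * p
    for n, r, m, p in zip(stones_per_100t, recovery, mean_stone_ct, usd_per_ct)
)
print(f"Expected recovered value: {usd_per_tonne:.2f} $/t")
```

Because recovery differs by size class, the recovered size frequency distribution, and hence the $/tonne, shifts relative to the in situ distribution rather than scaling uniformly.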
Failure to explicitly include the impacts of rock properties in traditional project
valuation can result in sub-optimal mine design, incorrect processing configuration
(design, operating strategy) and potentially biased production and cashflow
forecasts. Not only are there vast differences between the scale of measurement and
the scale of estimation of these properties but the data acquired from tests and
measurements on small samples may not directly correlate with the rock-process
relationship that will eventuate at full-scale operation. The consequences of
potential biases are amplified when global estimates of rock and diamond
characteristics, based on few and spatially sparse data, are used for planning and
designing the entire life of the project.
The third area of research was focused on a means to evaluate the range and
uncertainty of the derived forecasts for metallurgical recovery. This required the
development of an integrated value chain model that could be used to translate the
technical uncertainty into financial metrics. This approach allows for comparisons
to be made between various project risk mitigation strategies.
The primary response framework (Coward et al., 2009) was developed to clarify the
taxonomy for variables that are used in this research. This conceptual framework
provides a basis for developing quantitative models of the relationships between
variables that drive uncertainty in the recovery of diamonds. The benefit of this
framework is that it provides guidance in the use, and limits to some extent potential
misuses, of methods for generating spatial models of rock characteristics.
Kimberlites comprise a complex suite of rock types that exhibit a wide range of
physical characteristics. These arise from a combination of differences in
composition, texture and weathering or alteration state. Accounting for these three
aspects of the rock requires suitable adjustments in the acquisition of samples and
in the testing of the characteristics of those samples. This is especially true for
coarse-textured breccias, which require a larger sample support. To mitigate some
of the risks associated with the constraints on the number and spatial coverage of
physical samples, it is possible to use more cost-effective, and potentially less
accurate, proxy measurements to enhance the estimation of values at unsampled
locations (e.g. using acoustic velocity to augment various measures of rock strength
that are used to predict fracture).
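A minimal sketch of such a proxy calibration is given below (synthetic data; the linear relation between P-wave velocity and UCS is assumed for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse destructive tests: locations where both UCS and acoustic velocity exist.
vp_tested = rng.uniform(3500.0, 6000.0, 30)                    # P-wave velocity, m/s
ucs_tested = 0.05 * vp_tested - 120.0 + rng.normal(0, 15, 30)  # UCS, MPa (synthetic)

# Calibrate the proxy relationship and quantify its residual scatter.
slope, intercept = np.polyfit(vp_tested, ucs_tested, 1)
residual_sd = np.std(ucs_tested - (slope * vp_tested + intercept), ddof=2)

# Infill strength at locations where only the cheap proxy was logged.
vp_logged = np.array([4200.0, 5100.0, 5800.0])
ucs_predicted = slope * vp_logged + intercept
print(ucs_predicted, f"+/- {residual_sd:.1f} MPa (1 s.d.)")
```

In practice the calibration would be domain-specific, and the residual scatter would be carried forward into the spatial model rather than discarded.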
Spatial estimation and simulation both produce spatial models but have differing
characteristics that are useful in different circumstances. Both approaches require
attention to spatial coverage, support and scale-up, and provide different benefits
to the evaluation process.
When sufficient data have been acquired it is possible to derive parameters for the
spatial estimation and spatial simulation of the rock characteristics of interest. Best
linear unbiased estimation methods such as ordinary kriging provide models that
are unbiased but potentially smoother than the reality that will be encountered
when mining. Conditional spatial simulation methods provide a means to replicate
the in situ variability of the characteristics of interest. As a suite, the realisations can
be used to determine the uncertainty of these characteristics. This is important
because there are often constraints in the mining and processing of these rocks that
respond to extreme values rather than to the average characteristics.
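The contrast between the two methods can be seen in a minimal one-dimensional sketch (synthetic standardised data, an assumed exponential covariance, and simple kriging for brevity):

```python
import numpy as np

rng = np.random.default_rng(7)
grid = np.arange(0.0, 100.0, 1.0)                  # 1 m grid
obs_x = np.array([5.0, 25.0, 50.0, 75.0, 95.0])    # sparse sample locations
obs_z = np.array([1.2, -0.4, 0.8, -1.1, 0.3])      # standardised sample values
cov = lambda h: np.exp(-np.abs(h) / 15.0)          # exponential model, unit sill

# Simple kriging: unbiased but smooth.
C_dd = cov(obs_x[:, None] - obs_x[None, :])
C_gd = cov(grid[:, None] - obs_x[None, :])
weights = C_gd @ np.linalg.inv(C_dd)
kriged = weights @ obs_z

# One conditional realisation: unconditional field corrected by kriging.
C_gg = cov(grid[:, None] - grid[None, :])
L = np.linalg.cholesky(C_gg + 1e-9 * np.eye(grid.size))
uncond = L @ rng.standard_normal(grid.size)
uncond_obs = np.interp(obs_x, grid, uncond)
realisation = uncond + weights @ (obs_z - uncond_obs)

print(f"variance of kriged field: {kriged.var():.2f}")      # well below the sill
print(f"variance of realisation:  {realisation.var():.2f}") # close to the sill of 1.0
```

Repeating the last step over many realisations yields the suite from which uncertainty, and the frequency of extreme values, can be quantified.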
Conventional unit process models have traditionally been constrained by both the
speed of processing and the limited availability of valid spatial models of
characteristics that could be used as inputs for dynamic process simulation. The
evolution of a methodology for using spatially estimated (and spatially simulated)
rock characteristics to simulate total comminution, and to link total comminution
to diamond liberation and diamond recovery, provides a quantitative way to
forecast the mean and expected range of metallurgical recovery for a given process
flowsheet.
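As an illustration of this block-by-block linkage (parameter values are hypothetical), a JK-style breakage relation of the form t10 = A(1 - exp(-b * Ecs)) (Napier-Munn et al., 1999) can be driven directly by spatially estimated A and b values:

```python
import math

# Spatially estimated breakage parameters per block (hypothetical values).
blocks = {
    "block_001": {"A": 55.0, "b": 0.60},
    "block_002": {"A": 48.0, "b": 0.85},
    "block_003": {"A": 62.0, "b": 0.45},
}
ecs = 1.2  # specific comminution energy applied by the circuit, kWh/t

for name, p in blocks.items():
    t10 = p["A"] * (1.0 - math.exp(-p["b"] * ecs))
    # A finer product (higher t10) implies greater liberation, which feeds
    # the downstream diamond liberation and recovery calculations.
    print(f"{name}: t10 = {t10:.1f} %")
```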
A linked value chain model is an appropriate way of evaluating the impact of rock
and diamond variability on the expected value and range of the metallurgical
recovery factor. The benefits of an integrated value chain model include, as noted
above, the ability to translate technical uncertainty into financial metrics and to
compare project risk mitigation strategies.
The fitting of functions to data imposes the modeller's will, not only on the
parameters that control the location and shape of the function used, but also in the
selection of the form of the equation used to represent a real-world phenomenon.
Some of the functions used in mineral processing models have been validated by
comparing their predictions, on average, with the averages of observations of
real-world process plant responses (e.g. exponential comminution models). The
approach developed and demonstrated in this thesis uses the errors, or residuals,
of model fitting to generate a feasible range for the parameters of the models used.
The approach adopted here, in which the standard error is used to induce a measure
of uncertainty for both linear and curvilinear equations, has specific merit because
it can be readily used in the value chain models.
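A minimal sketch of this idea follows, using synthetic data, a curvilinear model fitted with scipy, and the residual standard error to induce a feasible range rather than a single mean prediction:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Synthetic observations of a saturating (curvilinear) response.
model = lambda x, a, b: a * (1.0 - np.exp(-b * x))
x = np.linspace(0.2, 3.0, 25)
y = model(x, 50.0, 0.9) + rng.normal(0.0, 2.0, x.size)

# Fit the curve and measure the residual standard error.
(a_hat, b_hat), _ = curve_fit(model, x, y, p0=(40.0, 1.0))
resid_se = np.std(y - model(x, a_hat, b_hat), ddof=2)

# Each value chain run draws a plausible response, not the mean curve.
draws = model(1.5, a_hat, b_hat) + rng.normal(0.0, resid_se, 10_000)
print(f"mean {draws.mean():.1f}; 5th-95th percentile: "
      f"{np.percentile(draws, 5):.1f} to {np.percentile(draws, 95):.1f}")
```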
Several of the findings of this research have resulted in suggestions for either
deeper investigation or additional experimentation to validate and expand specific
findings.
In tandem with this research, the author has been responsible for adapting the
methodology developed here to quantify recovery in various operations and some
of these models have been reported in the literature. This has resulted in the
identification of areas which required additional work to enable value chain models
to be applied to projects for other mined commodities including iron ore, uranium
and gold projects.
The evolution of access to less costly computing power continues to enable a far
greater range of more complex unit process models. This includes access to online
cloud infrastructure that provides multiple configurable servers for limited periods
at very low cost. The value chain approach lends itself to parallel computing: a single
orebody, single mine plan, single process configuration and associated financial
model constitutes a single yet complete value chain model, and so it is possible to
distribute each value chain model to disparate servers for processing. Work is
continuing in this area and is likely to lead to a reduction of several orders of
magnitude in processing time.
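Because each value chain realisation is self-contained, the distribution step can be sketched with nothing more than a process pool (the run_value_chain stand-in below is hypothetical; a real run would evaluate a full orebody-mine-plant-financial chain):

```python
import random
from concurrent.futures import ProcessPoolExecutor

def run_value_chain(seed: int) -> float:
    """Hypothetical stand-in for one complete value chain evaluation (NPV, $M)."""
    rng = random.Random(seed)
    return 100.0 + rng.gauss(0.0, 25.0)

if __name__ == "__main__":
    # Each realisation is independent, so the map distributes cleanly
    # across local cores or, with the same structure, remote servers.
    with ProcessPoolExecutor() as pool:
        npvs = list(pool.map(run_value_chain, range(100)))
    print(f"mean NPV {sum(npvs) / len(npvs):.1f} $M over {len(npvs)} realisations")
```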
Several of the non-destructive geophysical tools were identified as having weak yet
discernible correlations with the destructive measures. As these tools improve and
their signal-to-noise ratios increase, the use of this additional, relatively cheap, data
will increase. This will require specific calibration of the tools to the environments
and the mineralogy of the rocks in which they are to be used.
Value chain model validation can be improved by using this approach on existing
operations in a predictive way. Ensuring that a value chain model that includes
orebody characteristics and a suitable process model is kept up to date can provide
several benefits to these operations, including the ability to create a production
performance forecast and to identify periods in which the metallurgical recovery
factor will be compromised because of variable rock characteristics. This approach
will allow mine decision-makers to take pre-emptive action to mitigate recovery
risk and to harness opportunities for increasing recovery and process performance.
11 REFERENCES
Agricola, G., 1556. De Re Metallica. 1950 ed., New York: Dover Publications.
Anglo American, 2018. Anglo American Ore Reserves and Mineral Resources Report
2018., p.53.
Amelunxen, P. 2003. The application of the SAG Power Index to orebody hardness
characterization for the design and optimization of comminution circuits. McGill
University, Montreal, Canada.
American Society for Testing and Materials,1988: Standard Method for Splitting
Tensile Strength of Intact Rock Core Specimens D 3967-86. 1988 Annual Book of
ASTM Standards, Vol. 04.08, Soil and Rock; Building Stones; Geotextiles, 471–475.
American Society for Testing and Materials 1998: D7012 - 14e1 Standard Test
Methods for Compressive Strength and Elastic Moduli of Intact Rock Core
Specimens under Varying States of Stress and Temperatures.
Appleyard, G.R. ,2001. An Overview and Outline, in Mineral resource and Ore
Reserve estimation- The AusIMM guide to good practice’, in Edwards, A. . (ed.).
Melbourne: The Australasian Institute of Mining and Metallurgy, pp. 3–12.
Ashley, K.J. and Callow, M.I. 2000. Variability: Exercises in Geometallurgy [online].
Available from: <http://e-j.com/ar/mining_ore_variability_exercises/> [Accessed:
10 March 2004].
Bagnell, W., Bedell, P., Bertrand, V., Brummer, R., Farrow, D., Gagnon, C.L.G., Gormely,
L., Magnan, M. and St-Onge, J. 2013. NI 43-101 Technical Report for The Renard
Diamond Project, Québec, Canada, Stornoway Diamond Corporation, Québec,
Canada.
Bearman, R.A., Pine, R.J., and Wills, B.A. 1989. Use of Fracture Toughness Testing in
Characterizing the Comminution Potential of Rock, Proceedings of MMIJ/IMM Joint
Symposium, Kyoto pp161-170
Beniscelli, J., Carrasco, P., Dowd, P.A., Ferguson, G. and Tulcanaza, E. 2000. Estimation
of Resources and Conversion to Reserves - Protocols for the Assessment, Reduction
and Management of Risk, paper presented to Mass Min 2000, Brisbane, Australia.
Bojcevski, D., Vink, L., Johnson, N.W., Landmark, V., Johnston, M., Mackenzie, J. and
Young, M. F. 1998. Metallurgical characterisation of George Fisher Ore Textures and
Implications for Ore Processing, paper presented to Mine to Mill conference,
Brisbane Australia.
Boulding, K.E., 1956. General Systems Theory - The Skeleton of Science. Management
Science, 2(3), pp.197-208
Boychuk, K.G., Garcia, D. H., Sharp, A.W., Vincent, J.D. and Yeomans, T. J., 2012. NI 43-
101 Technical Report on the Pitarrilla Project, Durango State, Mexico, Silver
Standard Resources.
Bradfield, R., Wright, G., Burt, G., Cairns, G., Van Der Heijden, K., 2005. The origins
and evolution of scenario techniques in long range business planning. Futures, 37(8),
pp.795–812.
Bratvold, B. and Begg, S. 2002. Would You Know a Good Decision if You Saw One,
paper presented to SPE Annual Technical Conference, San Antonio, Texas.
Brennan, M.J. and Schwartz, E.S. 1985. Evaluating Natural Resource Investments,
Journal of Business, 58(2).
Brown, R., Tait, M., Field, M., Sparks, R.S.J. 2008. Geology of a complex kimberlite
pipe (K2 pipe, Venetia Mine, South Africa): Insights into conduit processes during
explosive ultrabasic eruptions. Bulletin of Volcanology, 71, pp.95–112.
Bye, A.R. 2011. Case Studies Demonstrating Value from Geometallurgy Initiatives. In
Geomet 2011. pp. 5–7.
Carrasco, P., Chiles, J. P., Séguret, S.A. 2008. Additivity, Metallurgical Recovery, and
Grade, in Geostats 2008: VIII International Geostatistics Congress (eds: J Ortiz and X
Emery), Santiago, December, pp 465-476.
Copur, H., Billgin, N., Tuncdemir, H. and Balci, C. 2003. A set of indices based on
indentation tests for assessment of rock cutting performance and rock properties,
South African Institute of Mining and Metallurgy.
Cornah, A., Vann, J., & Driver, I. 2013. Comparison of three geostatistical approaches
to quantify the impact of drill spacing on resource confidence for a coal seam (with
a case example from Moranbah North, Queensland, Australia). International Journal
of Coal Geology, 112, 114-124. DOI: 10.1016/j.coal.2012.11.00
Coward, S.J., Vann, J., Dunham. S., Stewart, M. 2009. The Primary-Response
Framework for Geometallurgical Variables. In Seventh International Mining Geology
Conference. pp. 109–113.
Daniel, M., Lane, G. and McLean, E. 2010. Efficiency, economics, energy and
emissions – Emerging criteria for comminution circuit decision making., paper
presented to XXV International Mineral Processing Congress (IMPC) Brisbane, Qld,
Australia, 6 - 10 September 2010.
Davis, G.A. 1995. An Investigation of the Under Pricing Inherent in DCF Valuation
Techniques, paper presented to SME Annual Meeting, Denver, Colorado, 6-9th
March 1995.
De Beers Group Services, 2017. De Beers Annual Finance Seminar, London UK.
Dowd, P.A. and Dare-Bryan, P.C., 2004. Planning, Designing and Optimising
Production Using Geostatistical Simulation. Proceedings of the International
Symposium on Orebody Modelling and Strategic Mine Planning, AusIMM, Melbourne.
Esbensen, K. 2002. Multivariate Data Analysis - In Practice, 5th ed, 598 p (Camo
Process AS: Oslo).
Farrow, D.J. 2015. 2015 Mineral Resource Update for the Renard Diamond Project -
Ni 43-101 Technical Report, Vancouver.
Field, M. and Scott Smith, B.H. 1999, ‘Contrasting geology and near-surface
emplacement of kimberlite pipes in Southern Africa and Canada’, The J. B. Dawson
Volume, pp. 214–237.
Fullagar, P.K. and Fallon, G.N. 1997. Orebody Delineation and Rock Mass
Characterisation, paper presented to Explor97, Toronto, Canada, 14-18 September
1997.
Garman, M.B. and Kohlhagen, S.W. 1983. Foreign currency option values,
International Money Finance, Vol 2: p231-237.
Goldratt, E.M. 1990. Theory of Constraints. Great Barrington: North River Press, pp.1–
159.
Griffith, A.A. 1921. The Phenomenon of Rupture and Flow in Solids, Philosophical
Transactions of the Royal Physical Society, Vol. 221, pp. 163-198.
ISRM, 1972. Suggested Methods for Determining the Uniaxial Compressive Strength
and Deformability of Rock Materials.
ISRM, 1994. Suggested Methods for Determining Mode 1 Fracture Toughness Using
Cracked Chevron Notched Brazilian Disc.
Johnson, D., Meikle, K., Pilotto, D. and Lone, K. 2014. NI 43-101 Technical Report for
the Gahcho Kué Project 2014 Feasibility Study Mountain Province Diamonds Inc.,
Montréal.
Journel, A.G. and Huijbregts, C. 1978. Mining Geostatistics, 600 p (Academic Press:
London).
Kahneman, D., 2011. Thinking fast, thinking slow 1st ed., New York: Farrar, Straus and
Giroux.
Kleingeld, W.J. 1976. Reconciliation of Ore Reserves in the Number One Diamond
Mining Area.
Kleingeld, W. J., Lantuéjoul, C., Prins, C. F. and Thurston, M.L. 1996. The Conditional
Simulation of a Cox Process with Application to Diamond Deposits and Discrete
Particles, paper presented to Geostatistics Congress, Wollongong, Australia.
Kleingeld, W.J. and Nicholas, G.D. 2004. Diamond Resources and Reserves -
Technical Uncertainties affecting their Estimation, Classification and Evaluation.
Proceedings of the International Symposium on Orebody Modelling and Strategic
Mine Planning. Pub AusIMM (Melbourne). ISBN 1 920806 22 9;
Krige, D.G., Guarascio, M. and Camisani-Calzolari, F.A. 1988. Early South African
Geostatistical Techniques in Today's Perspective. In Armstrong, M. (ed.),
Geostatistics: Proceedings of the Third International Geostatistics Congress.
Avignon, p. 1027.
Lane, K.F. 1988. The economic definition of ore: cut-off grades in theory and practice,
London: Mining Journal Books.
Lerchs, H. and Grossman, I.F., 1965. Optimum Design of Open Pit Mines,
Transactions CIM, 68(633): pp17-24.
Lilford, E.V. & Minnitt, R.C.A. 2005. A comparative study of valuation methodologies
for mineral developments. Southern African Institute of Mining and Metallurgy,
pp.p29-41.
Lynn, M., Nowicki, T., Valenta, M., Robinson, B., Gallagher, M., Bolton, R. and Sexton,
J. 2014. Karowe Diamond Mine, Botswana, NI 43-101 Independent Technical Report
Lucara Diamond Corp., Vancouver, BC, Canada.
Mackey, P.J. and Nesset, J.E. 2003. The impact of commissioning and start-up
performance on a mining/metallurgical project, paper presented to Proceedings of
the 35th Annual Meeting of the Canadian Mineral Processors.
Matheron, G. 1973. The Intrinsic Random Functions and their Application, Advances
in Applied Probability, 5: pp 439-468.
McKee, D.J., Chitombo, G.P., & Morrell, S. 1995. The relationship between
fragmentation in mining and comminution circuit throughput. Minerals
Engineering, 8(11), 1265–1274. https://doi.org/10.1016/0892-6875(95)00094-7
McBean, D., Kirkley, M. and Revering, C. 2001. Structural controls on the morphology
of the Snap Lake Dyke. 8th International Kimberlite Conference.
Momayez, M., Sadri, A. & Hassani, F.P. 1995. MSR: A technique for determining the
mechanical properties of rocks. In 35th US Rock Mechanics Symposium. Lake Tahoe,
Nevada, pp. 843–848.
Morrell, S., Dunne, R. C. and Finch, W. 1993. The liberation of a grinding circuit
treating gold bearing ore, paper presented to XVIII Int. Min Proc Cong, Sydney.
Morrell, S. 2003. Predicting the Specific Energy of Autogenous and Semi-Autogenous
Mills from Small Diameter Drill Core Samples. Minerals Engineering, 17: pp 447-451.
Napier-Munn, T., Morrell, S., Morrison, R. and Kojovic, T. 1999. Mineral Comminution
Circuits, Their Operation and Optimisation, 2nd ed, 413 p (JKMRC, University of
Queensland: Indooroopilly).
Nicholas, G., Coward, S., Armstrong, M. and Galli, A. 2006. Integrated Mine Evaluation
- Implications for Mine Management. In: AUSIMM. ed. International Mine
Management Conference. Melbourne. Australia.
Nicholas, G.D., Coward, S. J., Rendall, M., and Thurston, M.L. 2007. Decision-Making
Using an Integrated Evaluation Model Versus Sensitivity Analysis and Monte Carlo
Simulation. In Canadian Institute of Mining and Metallurgy International conference.
Montreal, Canada.
Nicholas, G., Coward, S. & Ferreira, J. 2008. Financial Risk Assessment Using
Conditional Simulations in an Integrated Evaluation Model. In the Eighth
International Geostatistics Congress. Santiago, Chile, p. 10.
Ozturk, C.N.E. and Bilgin, N. 2004. The assessment of rock cuttability, and physical
and mechanical rock properties from a texture coefficient. Journal of the South
African Institute of Mining and Metallurgy.
Petersen, K., 2005. Development of a Flowsheet Model for the Snap Lake Mine.
Unpublished Report.
Petra Diamonds, 2019. Petra Diamonds Limited Annual Report 2019, London UK.
Richmond, A.J. 2003. Multi-Scale Ore Texture Modelling for Mining Applications,
University of Brisbane, Brisbane.
Rittinger, Peter Ritter von. 1867. Lehrbuch der Aufbereitungskunde, Berlin: Verlag
von Ernst & Korn.
Royle, A.G. 1986. Alluvial Sampling Formula and Recent Advances in Alluvial Deposit
Evaluation, Transactions of The Institution of Mining and Metallurgy: pp B179-
B182.
Samis, M., Laughton, D., and Davis, G.A. 2006. Valuing uncertain asset cash flows
when there are no options: A real options approach, Resources Policy, 30: p285-298.
SAMREC, 2016. The South African Code for the Reporting of Exploration Results,
Mineral Resources, and Mineral Reserves (the SAMREC code).
Silva, D.S.F. & Boisvert, J.B. 2014. Mineral resource classification: a comparison of
new and existing techniques. Journal of the Southern African Institute of Mining and
Metallurgy, 114, pp.265–273. Available at:
http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S2225-
62532014000300017&nrm=iso.
Sothcott, J., Hennah, S.J., McCann, C., Black, S. and Stevenson, I. 2005. Measurement
of the Broadband Acoustic Properties of Kimberlite: A Feasibility Study, De Beers
Consolidated Mines South Africa, Johannesburg.
South African Institute for Mining and Metallurgy, 2002. The SAMVAL code, Draft
Standards and guidelines for valuation of mineral projects, properties and assets in
the mining industry of South Africa.
Sparks, R.S.J., Brooker, R., Field, M. 2009. The nature of erupting kimberlite melts.
Lithos, 112, pp.429–438.
Taggart, A.F. 1964. Handbook of Mineral Dressing, 8th ed (John Wiley and Sons:
London).
Tavares, L. & King, R.P. 1998. Single-particle fracture under impact loading.
International Journal of Mineral Processing, 54, pp.1–28.
De Beers Consolidated Mines Limited, 2001. Technical and Financial Report, 149 p.
Vose, D. 2002. Risk Analysis, a Quantitative Guide (John Wiley and Sons Ltd:
London).
Whiten, W.J. 1972. Simulation and Model Building for Mineral Processing, PhD
dissertation - University of Queensland, Australia.
12 APPENDICES
The appendices provided here give the interested reader more technical
substantiation of aspects of metallurgical recovery estimation that are addressed by
this research.
This appendix provides a brief depiction of the layout, the types of samples collected
and the raw data that were generated from the orebody sampling experiment. In
Figure 71 a surface schematic is shown depicting the area in which the sample holes
were located.
Two categories of drill holes were defined: the first comprises holes from which all
the material was sampled, and the second holes from which only portions were
sampled. In Figure 72 the layout of the holes is shown, with the different coloured
banding representing the type and location of samples; the white areas of core were
not sampled. The full details of the sampling are contained in Figure 73 and in
Figure 74.
Figure 72: A view of the layout of the core holes depicting the location of the subsamples.
Figure 73: Listing of subsamples taken from cores that were fully sampled.
(Figure 73 content: a depth-by-depth listing, from 7.5 m to 50.0 m, of retained
subsamples with identifiers of the form GM-2E/<depth>/Retain.)
Figure 74: Depiction of the subsamples taken from cores that were partially sampled.
The data gathered during the orebody sampling experiment were used in several
ways to generate spatial estimates of the primary and response characteristics of
the orebody. This Appendix presents images of the data obtained from the Venetia
K2 orebody.
Figure 75: A view from the south-west of the Venetia K2 sampled area - showing
downhole density.
Figure 76: A view from the south-west of the Venetia K2 sampled area - showing
downhole density and P-wave velocity.
Figure 77: A view from the south-west of the Venetia K2 sampled area - UCS sample
values as scaled spheres and estimated block density in transparent blocks.
Figure 78: A view from the south-west of the Venetia K2 sampled area - t10 sample
values as spheres and estimated t10 in transparent blocks.