Final E.geophysics 12345
1.1 Introduction
Geophysical methods can be broadly classified according to whether the field source for which a
subsurface response is measured is natural or artificial. Natural field methods utilize fields
associated with the Earth, such as the gravitational, magnetic and electrical fields. The most
common geophysical methods that detect variations in natural fields are the gravity, magnetic and
self-potential methods. These methods are also called passive methods. The advantages of
natural (passive) sources are:
They do not require the deployment of an artificial source to generate the field.
They provide information on Earth properties from significantly greater depths.
They are logistically simpler to carry out than artificial source methods.
Artificial source methods require energy to be generated artificially and put into the ground. The
generated energy is transmitted into the subsurface and the returning signal is recorded. The most
common geophysical methods that use an artificial source are seismic reflection, seismic refraction,
DC resistivity and induced polarization. The main advantages of artificial source methods are:
1) Gravity Method
measures variations in density within the Earth
the value of “g” is measured using a sensitive instrument, the gravimeter
the common unit of “g” for geophysical purposes is the “milligal” (mGal); 1 Gal = 1 cm/s² and 1 mGal = 10⁻³ Gal
A small variation in the density of subsurface rocks therefore gives rise to a measurable
gravity anomaly at the surface.
2) Magnetic Method
The method searches for changes in the magnetic susceptibility (K = I/H, the degree to which a
body becomes magnetized) of rocks in the subsurface.
Minerals like magnetite, ilmenite and chromite show large K values, and different rock units
exhibit different magnetic susceptibilities.
The natural field of the Earth is used as the magnetizing force, and records are made with
magnetometers.
The common unit in geophysical surveys is the “gamma”, which is equal to one nanotesla (nT).
3) Electrical Methods
The measured parameter is the variation in resistivity of subsurface materials, determined either
vertically or laterally depending on the field survey.
4) Seismic Methods
measure the acoustic impedance (ρV) of the layers of the subsurface
signals are picked up by geophones and recorded by seismometers
two methods
Seismic reflection technique
Seismic refraction
Mapping structural discontinuities like fractures (faults and joints) and fissures
that may be zones of localized mineralization and alteration
5. For geothermal exploration
To map the high-conductivity rock-water-heat interaction zones.
6. Location of buried conductive pipes, cables, land mines, and archaeological structures
7. Environmental problems
Waste disposal site characterization, for example mapping structural
discontinuities like fractures (faults and joints) that may act as conduits for
leakage.
Chapter Two
Gravity method
The gravity method is a passive geophysical exploration method based on the small natural
variations in the Earth’s gravitational field caused by differences in the density of near-surface
rocks. The relevant geologic parameter in a gravity survey is not density itself, but density contrast.
A gravimeter measures variations in gravitational acceleration, and the method rests on two
well-known laws of physics:
1. The Law of Universal Gravitation:

F = −G (m1 m2 / r²)

where G is the gravitational constant, G = 6.67 × 10⁻¹¹ N·m²/kg².
2. Newton’s second law of motion;
a = F / m, a – acceleration
Because of its nearly spherical shape, the Earth’s mass can be treated as if it were all concentrated
at a point at its center, and any object of mass m0 resting on the Earth’s surface is attracted toward
that center by a force (its weight):

F = −G (me m0 / Re²)

where me is the mass of the Earth and Re its average radius.
If the object is lifted a short distance and allowed to fall, the acceleration due to gravity is

g = F / m0 = G me / Re²
Absolute gravity is the true (actual) gravitational acceleration (g). Determining the acceleration
due to gravity in absolute terms requires very careful experimental procedures and is normally
only undertaken under laboratory conditions. Two methods of measurement are used, namely the
falling-body and pendulum methods. However, it is the more easily measured relative variations
in gravity that are of interest and value in exploration.
Relative gravity is the difference in gravitational acceleration (g) at one station (g1) compared
with another (g2). In gravity exploration it is not normally necessary to determine the absolute
value of gravity; only relative gravity is measured.
How do we measure gravity? Gravitational acceleration can be measured in several ways:
falling-body measurements, pendulum measurements and mass-on-spring measurements.
In a falling-body measurement, one drops an object and computes the acceleration the body
undergoes directly, by carefully measuring distance and time as the body falls.
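As a simple illustration of the falling-body idea (the drop distance and time below are made-up values, not measurements from the text), the relation d = ½gt² can be inverted for g:

# Minimal sketch of a falling-body gravity estimate (illustrative numbers only).
# An object dropped from rest falls d = 0.5 * g * t**2, so g = 2 * d / t**2.

drop_distance_m = 1.000      # hypothetical measured drop distance (m)
fall_time_s = 0.4515         # hypothetical measured fall time (s)

g_estimate = 2.0 * drop_distance_m / fall_time_s ** 2
print(f"estimated g = {g_estimate:.3f} m/s^2")   # about 9.81 m/s^2 for these numbers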
Pendulum measurements
In this type of measurement, the gravitational acceleration is estimated by measuring the period
of oscillation of a pendulum. The period of oscillation is the time required for the pendulum to
complete one cycle of its motion, and it can be determined by measuring the time required for the
pendulum to reoccupy a given position. It can be shown that the period of oscillation of the
pendulum, T, is proportional to one over the square root of the gravitational acceleration, g. The
constant of proportionality, k, depends on the physical characteristics of the pendulum, such as its
length and the distribution of mass about the pendulum's pivot point. In the example shown
below, the period of oscillation of the pendulum is approximately two seconds.
Like the falling-body experiment described previously, it seems as though it should be easy to
determine the gravitational acceleration by measuring the period of oscillation. Unfortunately, k
cannot be determined accurately enough. We could, however, measure the period of oscillation of
a given pendulum at two different locations. Although we cannot estimate k accurately enough to
determine the gravitational acceleration at either of these locations, because we have used the
same pendulum at the two locations we can estimate the variation in gravitational acceleration
between the two locations quite accurately without knowing k.
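A short sketch of that argument, with made-up periods: since T = k/√g with the same pendulum (same k) at both sites, the unknown constant cancels in the ratio g2/g1 = (T1/T2)².

# Minimal sketch of relative gravity from pendulum periods (illustrative numbers only).
# With the same pendulum at both stations, T = k / sqrt(g) gives g2 / g1 = (T1 / T2)**2,
# so k never needs to be known.

T1 = 2.0000   # hypothetical period at station 1 (s)
T2 = 2.0005   # hypothetical period at station 2 (s)

ratio = (T1 / T2) ** 2
print(f"g2/g1 = {ratio:.6f}")   # slightly less than 1: gravity is lower at station 2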
Gravimeters are sophisticated spring balances from which a constant mass is suspended. The
weight of the mass is the product of the mass and the acceleration due to gravity. The greater the
weight acting on the spring, the more the spring is stretched. The amount of extension of the
spring (∂l) is proportional to the extending force, i.e. to the excess weight of the mass produced by
a change in gravity (∂g). The constant of proportionality is the elastic spring constant k. This
relationship is known as Hooke’s law. By measuring the extension of the spring (∂l), differences
in gravity can be determined.
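A minimal numerical sketch of that relation (the spring constant, mass and extension below are invented for illustration): from Hooke's law m·∂g = k·∂l, so ∂g = k·∂l/m.

# Minimal sketch of the spring-gravimeter relation (illustrative numbers only).
# Hooke's law: the extra weight m * dg stretches the spring by dl, with m * dg = k * dl,
# so the change in gravity is dg = k * dl / m.

k_spring = 5.0        # hypothetical spring constant (N/m)
mass_kg = 0.05        # hypothetical suspended mass (kg)
dl_m = 1.0e-9         # hypothetical measured change in spring extension (m)

dg = k_spring * dl_m / mass_kg                         # in m/s^2
print(f"dg = {dg:.2e} m/s^2 = {dg / 1e-5:.3f} mGal")   # 1 mGal = 1e-5 m/s^2

The tiny numbers involved illustrate why the instrument must resolve extremely small extensions of the spring.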
Figure. Gravimeter and extension of spring (∂l) due to additional gravitational pull (∂g).
The most common type of gravimeter used in exploration surveys is based on a simple mass-
spring system. Gravimeter measures the variations in the earth's gravitational field. The
variations in gravity are due to lateral changes in the density of the subsurface rocks in the
vicinity of the measuring point. Because the density variations, and hence the gravity variations,
are very small, gravimeters have to be very sensitive.
The most commonly used gravimeters do not measure the absolute gravitational acceleration but
relative differences in acceleration. There are several gravimeter manufacturers, and the
accuracies of their meters can vary greatly. The common gravimeters on the market are the
Worden, the Scintrex and the LaCoste-Romberg gravimeters.
2.3 Factors that Affect the Gravitational Acceleration
The complicating factors that affect the gravitational acceleration can be subdivided into two
categories: temporal variations and spatial variations.
1. Temporal Based Variations - These are changes in the observed acceleration that are time
dependent.
Just as the gravitational attraction of the sun and the moon distorts the shape of
the ocean surface, it also distorts the shape of the earth. Because rocks yield to
external forces much less readily than water, the amount the earth distorts under
these external forces is far less than the amount the oceans distort. The size of the
ocean tides, the name given to the distortion of the ocean caused by the sun and
moon, is measured in terms of meters. The size of the solid earth tide, the name
given to the distortion of the earth caused by the sun and moon, is measured in
terms of centimeters. This distortion of the solid earth produces measurable
changes in the gravitational acceleration because as the shape of the earth
changes, the distance of the gravimeter to the center of the earth changes (recall
that gravitational acceleration is proportional to one over distance squared).
2. Spatial Based Variations - These are changes in the observed acceleration that are space
dependent. That is, these change the gravitational acceleration from place to place.
i. Latitude Variation - Changes in the observed acceleration caused by the
ellipsoidal shape and the rotation of the earth.
Shape: The earth is ellipsoidal in shape, widest at the equator and flattened at the poles. Although
the difference between the earth radii measured at the poles and at the equator is only 22 km (a
change in earth radius of only about 0.3%), this, in conjunction with the earth's rotation, produces
a measurable change in the gravitational acceleration with latitude.
Rotation: In addition to shape, the fact that the earth is rotating also causes a change in the
gravitational acceleration with latitude. Because the earth rotates on an axis passing through the
poles at a rate of once a day and our gravimeter is resting on the earth as the reading is made, the
gravity reading contains information related to the earth's rotation.
We know that if a body rotates, it experiences an outward directed force known as a centrifugal
force. The size of this force is proportional to the distance from the axis of rotation and the rate at
which the rotation is occurring. For a gravimeter located on the surface of the earth, the rate of
rotation does not vary with position, but the distance between the rotational axis and the gravity
meter does vary. The centrifugal force is relatively large at the equator and goes to zero at the
poles, and it always acts away from the axis of rotation. Therefore, this force reduces the
gravitational acceleration we would observe at any point on the earth relative to what would be
observed if the earth were not rotating.
ii. Elevation Variations - Changes in the observed acceleration caused by differences in the
elevations of the observation points (gravity decreases with increasing distance from the
center of the earth).
Therefore, when interpreting data from our gravity survey, we need to make sure that we don't
interpret spatial variations in gravitational acceleration that are related to elevation differences in
our observation points as being due to subsurface geology. Clearly, to be able to separate these
two effects, we are going to need to know the elevations at which our gravity observations are
taken.
iii. Slab Effects - Changes in the observed acceleration caused by the extra mass
underlying observation points at higher elevations. As a first-order correction for
this additional mass, we will assume that the excess mass underneath the
observation point at higher elevation, point B in the figure below, can be
approximated by a slab of uniform density and thickness. Obviously, this
description does not accurately describe the nature of the mass below point B. The
topography is not of uniform thickness around point B and the density of the rocks
probably varies with location.
iv. Topographic Effects - Changes in the observed acceleration related to
topography near the observation point. Although the slab correction described
previously adequately describes the gravitational variations caused by gentle
topographic variations (those that can be approximated by a slab), it does not
adequately address the gravitational variations associated with extremes in
topography near an observation point. Consider the gravitational acceleration
observed at point B shown in the figure below.
These effects can be removed, with good accuracy, from the measured data during data
reduction.
Density is the diagnostic parameter in the gravity method of exploration, and the small anomalies
sought in this method are generally related to local variations in it. The range of variation in
density is very small (about 2 Mg/m³ across common rock types) compared with the ranges of
other parameters in geophysics.
Factors that affect the density of sedimentary rocks are composition, cementation, age, depth of
burial, tectonic processes, porosity and pore fluids. Sediments that remain buried for a long time
consolidate and lithify, resulting in reduced porosity. The degree to which each of these factors
affects rock density is given in the table below. The density contrast between adjacent sedimentary
strata is rarely greater than 0.25 Mg/m³. In general, the average density of sedimentary rocks is
lower than that of igneous and metamorphic rocks.
Igneous rocks tend to be denser than sedimentary rocks, although there is overlap. Density
increases with decreasing silica content, so basic igneous rocks are denser than acid ones.
Similarly, plutonic rocks tend to be denser than their volcanic equivalents, as shown below.
Table: variation of density with silica content and crystal size for selected igneous rocks;
density ranges, with average densities in parentheses, are given in Mg/m³.
The density of metamorphic rocks tends to increase with decreasing acidity and increasing grade
of metamorphism. For example, schists may have lower densities than gneisses. Metamorphosed
sediments such as marble, slate and quartzite are denser than their precursors limestone, shale and
sandstone. Likewise, gneiss versus granite and amphibolite versus basalt show a similar, though
less pronounced, increase in density.
Rock type and density range (Mg/m³):
Quartzite 2.6 - 2.8
Rhyolite 2.4 - 2.6
Rock salt 2.5 - 2.6
Sandstone 2.2 - 2.8
Shale 2.4 - 2.8
Slate 2.7 - 2.8
The most common field procedures in the gravity method are the profile and grid methods. In
both procedures, establishing the location of one or more gravity base stations is the first step.
Because repeated gravity observations will be made at the base station, its location should be
easily accessible from the gravity stations comprising the survey. The spacing between profile
lines and the spacing between stations depend on the target of investigation.
For regional mapping:
Station spacing: 1 - 10 km
Conducted: along roads and tracks
Base station visit: every 1 to several hours
Follow-up and detail:
Station spacing: depends on the target, varying from 30 m to 1 km
Base station visit: every hour
Conducted: along profile lines or regular grids.
Before the results of the survey can be interpreted in geological terms, the raw gravity data have
to be corrected to a common datum, removing from the observed gravity the effects of all factors
other than density.
The instrumental drift and tides can be determined simply by repeating measurements at the
same base station at different times of day, typically 1 – 2 hours. Values at the base station are
known (or assumed to be known) accurately. Data from the base station may be used to
normalize data from other stations.
Before making gravity observations at the survey stations, the survey is initiated by
recording the relative gravity at the base station and the time at which it is measured.
Move the gravimeter to the survey stations and measure the relative gravity at each
station, noting the time at which each reading is taken.
After roughly an hour, return to the base station and re-measure the relative gravity at
this location; again, the time at which the observation is made is noted.
If necessary, go back to the survey stations and continue making measurements, returning
to the base station about every hour.
After recording the gravity at the last survey station, or at the end of the day, return to the
base station and make one final reading of the gravity.
The procedure described above is generally referred to as a looping procedure, with one loop of
the survey being bounded by two occupations of the base station. The looping procedure defined
here is the simplest to implement in the field. Using observations collected by the looping field
procedure, it is relatively straightforward to correct these observations for instrument drift and
tidal effects. The basis for these corrections is the use of linear interpolation to generate a
prediction of what the time-varying component of the gravity field should look like.
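A minimal sketch of that interpolation, with invented base-station times and readings (the station names echo Example 1 below, but the numbers are hypothetical):

# Minimal sketch of the drift and tide correction by linear interpolation
# (times and readings are made up for illustration).
# The base station is read at the start and end of a loop; the time-varying component
# at any intermediate time is interpolated linearly between those two base readings
# and subtracted from the station reading.

base_t0, base_g0 = 9.00, 1250.35     # base station at start of loop: time (h), reading (mGal)
base_t1, base_g1 = 11.00, 1250.47    # base station reoccupied at end of loop

stations = [                         # (station id, time in hours, observed relative gravity, mGal)
    ("158", 9.40, 1251.10),
    ("159", 9.90, 1250.02),
    ("160", 10.50, 1249.65),
]

drift_rate = (base_g1 - base_g0) / (base_t1 - base_t0)      # mGal per hour

for name, t, g_obs in stations:
    predicted_base = base_g0 + drift_rate * (t - base_t0)   # interpolated drift + tide at time t
    g_corrected = g_obs - (predicted_base - base_g0)        # remove the time-varying part
    print(f"station {name}: corrected reading = {g_corrected:.3f} mGal")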
Example 1: Suppose you collected gravity data with a gravimeter at stations 158, 159, 160, 161,
162 and 163. The base station for your survey is at the point denoted 9625, as shown in the
topographic map below. The observed relative gravity and the time of observation at each station
and at the base station are given in the table below. Remove the instrument drift and tides for
each station using linear interpolation.
The value of the temporally varying component of the gravity field at a given time is computed
using the expressions given below.
Tidal and instrumental drift can also be removed from the observed gravity using a tidal and
instrumental drift curve. The drift curve for each gravity survey is plotted with the successive
gravity readings at the base station on the vertical axis and time on the horizontal axis. From
this curve we can determine the tidal and instrumental drift corresponding to the time at which
each gravity reading was taken.
For simplicity we use the latitude correction 0.81 sin(2φ) mGal/km, where φ is the geographic
latitude. Since gravity increases with latitude (both N and S), the correction is positive (added) as
one moves toward the equator and negative (subtracted) as one moves from the equator toward
the north or south pole.
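A minimal sketch of applying that rate (the latitude and station offset below are invented; a northern-hemisphere survey is assumed, so a station north of the base is poleward and its correction is negative):

# Minimal sketch of the latitude correction (illustrative numbers only).
# Rate ~ 0.81 * sin(2 * latitude) mGal per km of north-south distance; added for stations
# nearer the equator than the base, subtracted for stations nearer the pole.

import math

latitude_deg = 42.0        # latitude of the survey area
dist_north_km = 0.5        # hypothetical distance of the station north of the base (km)

rate = 0.81 * math.sin(math.radians(2.0 * latitude_deg))     # mGal/km
correction = -rate * dist_north_km                           # poleward of the base: subtract
print(f"latitude correction = {correction:.4f} mGal")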
2.6.3 Free-air correction
The gravitational attraction decreases with elevation, since it varies inversely with the square of
the distance from the earth’s center. Hence, it is necessary to correct the readings so that all field
readings are reduced to a common datum surface. From Newton’s law, the gravitational
acceleration is g = G me / Re². By substituting the values of G and Re into this expression, the
gravitational acceleration is found to change by about −0.3086 mGal per metre of elevation. The
minus sign indicates that as the elevation increases, the observed gravitational acceleration
decreases. The magnitude of the number means that if two gravity readings are made at the same
location, but one a metre above the other, the reading taken at the higher elevation will be
0.3086 mGal less than the lower one.
The free-air correction is added to or subtracted from the field reading depending on whether the
station lies above (+) or below (−) the reference datum.
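A minimal sketch of the free-air correction (the station elevation is an invented value):

# Minimal sketch of the free-air correction (illustrative numbers only).
# Gravity decreases by about 0.3086 mGal per metre of elevation, so readings taken
# above the datum are corrected upward (+) and readings below it downward (-).

FREE_AIR_GRADIENT = 0.3086     # mGal per metre

station_elevation_m = 12.5     # hypothetical station elevation above the datum (m)
free_air_correction = FREE_AIR_GRADIENT * station_elevation_m
print(f"free-air correction = +{free_air_correction:.3f} mGal")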
The Bouguer correction accounts for the gravitational attraction of the material between the
station and the datum plane, which is ignored in the free-air correction.
Figure: base station (a) and higher station (b) separated in elevation by h; the material between
them is approximated by an infinite slab of density ρ above the reference datum plane.
The two basic assumptions made in formulating the Bouguer slab, (1) uniform density and (2)
infinite horizontal extent, are not usually valid in real field situations.
Corrections based on this simple slab approximation are referred to as the Bouguer slab correction.
It can be shown that the vertical gravitational acceleration associated with a flat slab can be
written simply as 0.04193ρh, where the correction is given in mGal, ρ is the density of the slab
in g/cm³, and h is the elevation difference in metres between the observation point and the
elevation datum.
Notice that the sign of the Bouguer Slab Correction makes sense. If an observation point is at a
higher elevation than the datum, there is excess mass below the observation point that wouldn't
be there if we were able to make all of our observations at the datum elevation. Thus, our gravity
reading is larger due to the excess mass, and we would therefore have to subtract a factor to
move the observation point back down to the datum. Notice that the sign of this correction is
opposite to that used for the elevation correction.
Also, notice that to apply the Bouguer slab correction we need to know the elevations of all of
the observation points and the density of the slab used to approximate the excess mass. In
choosing a density, use an average density for the rocks in the survey area. For a density of
2.67 g/cm³, the Bouguer slab correction is about 0.112 mGal/m.
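A minimal sketch of the slab correction (the density and elevation below are invented values):

# Minimal sketch of the Bouguer slab correction (illustrative numbers only).
# Slab attraction ~ 0.04193 * rho * h mGal, with rho in g/cm^3 and h in metres;
# it is subtracted for stations above the datum (excess mass below the station).

BOUGUER_COEFF = 0.04193        # mGal per (g/cm^3 * m)

density_g_cc = 2.67            # assumed average rock density (g/cm^3)
station_elevation_m = 12.5     # hypothetical station elevation above the datum (m)

bouguer_correction = -BOUGUER_COEFF * density_g_cc * station_elevation_m
print(f"Bouguer slab correction = {bouguer_correction:.3f} mGal")   # about -0.112 mGal per metre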
2.6.5 Terrain correction
Like Bouguer Slab Corrections, when computing Terrain Corrections we need to assume an
average density for the rocks exposed by the surrounding topography. Usually, the same density
is used for the Bouguer and the Terrain Corrections.
To compute the gravitational attraction produced by the topography, we need to estimate the
mass of the surrounding terrain and the distance of this mass from the observation point. The
specifics of this computation will vary for each observation point in the survey because the
distance to the various topographic features varies as the location of the gravity station moves.
As you are probably beginning to realize, in addition to an estimate of the average density of the
rocks within the survey area, to perform this correction we will need knowledge of the locations
of the gravity stations and the shape of the topography surrounding the survey area.
Estimating the distribution of topography surrounding each gravity station is not a trivial task.
One could imagine plotting the location of each gravity station on a topographic map, estimating
the variation in topographic relief about the station location at various distances, computing the
gravitational acceleration due to the topography at these various distances, and applying the
resulting correction to the observed gravitational acceleration. A systematic methodology for
performing this task was formalized by Hammer in 1939. Using Hammer's methodology by hand
is tedious and time consuming. If the elevations surrounding the survey area are available in
computer readable format, computer implementations of Hammer's method are available and can
greatly reduce the time required to compute and implement these corrections. Terrain correction
is always positive.
The Bouguer gravity anomaly is obtained after all the preceding corrections have been applied to
the observed gravity. The terrain correction is not always applied: when it is applied, the result is
termed the complete Bouguer anomaly; if not, it is known as the simple Bouguer anomaly.
Simple Bouguer anomaly: gB = gobs − gi + gn + 0.3086h − 0.04193ρh (mGal)
Example 2: The following data are from a gravity survey at 42° N over a suspected buried
shear zone. The location of each observation point, in metres south of the north end of the profile,
is given in the first column; H.I. indicates the height of the instrument above the ground surface;
Elev is the ground elevation relative to the datum (both need to be considered when computing
the free-air correction); and g-obs is the measured gravity value, which is already free of
instrument drift and tidal effects.
I. Apply the latitude, elevation (free-air) and slab corrections to the data, assuming a
density of 2670 kg/m³.
II. Determine the free-air and simple Bouguer anomalies, and make a table and
a graph of the results.
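A minimal sketch of the reduction chain asked for in Example 2. The survey table itself is not reproduced in this text, so the station values below are hypothetical; as one reasonable convention, the free-air term uses ground elevation plus instrument height while the slab term uses the ground elevation only.

# Minimal sketch of the Example 2 reduction (station values are hypothetical).

import math

LAT_RATE = 0.81         # mGal per km of north-south distance
FREE_AIR = 0.3086       # mGal per metre
BOUGUER = 0.04193       # mGal per (g/cm^3 * m)

latitude_deg = 42.0
density_g_cc = 2.67     # 2670 kg/m^3

def reduce_station(dist_south_m, elev_m, hi_m, g_obs_mgal):
    """Return (free-air anomaly, simple Bouguer anomaly) in mGal for one station."""
    # moving south at 42 degrees N is moving toward the equator, so the latitude correction is added
    lat_corr = LAT_RATE * math.sin(math.radians(2.0 * latitude_deg)) * dist_south_m / 1000.0
    g_fa = g_obs_mgal + lat_corr + FREE_AIR * (elev_m + hi_m)
    g_sb = g_fa - BOUGUER * density_g_cc * elev_m
    return g_fa, g_sb

fa, sb = reduce_station(dist_south_m=200.0, elev_m=5.0, hi_m=0.3, g_obs_mgal=-1.20)
print(f"free-air anomaly = {fa:.3f} mGal, simple Bouguer anomaly = {sb:.3f} mGal")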
2.9 Data presentation, enhancement and interpretation of gravity data
Any gravity map or profile plot is generally a broad-spectrum entity consisting of various
frequency components (high frequency and low frequency). At the two extremes, it contains a
regional component, representing large-scale variations arising from deep-seated, broad
structures, and short-wavelength, small-scale variations due to shallow, local features.
From an exploration point of view, we are often interested in the short-wavelength features, as
they represent the near-surface variations in formation densities and/or geological structures.
However, the large-scale structures can dominate the gravity map to such an extent that it may be
very difficult to recognize smaller or shallower features. One then has to remove the regional
component from the reduced gravity data in order to obtain the residual field, which is of
primary interest in most gravity prospecting work.
1. Graphical method
Example 1: This shows two density structures: (a) higher-density bedrock that dips to the right
and produces a slow decrease in gB from left to right, and (b) a shallow buried cylinder.
The dotted line denotes the regional trend and is obtained by finding the best-fitting straight
line to the data. When the regional trend is subtracted from gB, the anomaly of the cylinder is
much easier to interpret.
Example 2: This features the same dipping bedrock layer and a shallow, low density river
channel. Again, removal of the regional trend makes the effect of the river channel much clearer.
2. Upward continuation is one method of filtering that enhances deeper features.
3. Downward continuation is one method of filtering that enhances shallow features.
4. Horizontal derivatives of gravity: This is effectively another way of removing the
regional trends in the data.
2.9.3 Interpretation
As with all geophysical interpretation, the analysis of gravity data has two distinct aspects:
qualitative and quantitative. The qualitative process is largely map or profile based and
dominates the early stages of a study. The resultant preliminary structural element map is the
cornerstone of the interpretation. Qualitative interpretation involves recognition of:
the nature of discrete anomalous bodies including intrusions and faults
structural styles
Quantitative interpretation
Gravity modeling is usually the final step in gravity interpretation and involves trying to
determine the density, depth and geometry of one or more subsurface bodies. The modeling
procedure commonly uses a residual gravity anomaly. When modeling a residual gravity
anomaly, the interpreter must use the density contrast between the body of interest and the
surrounding material, whereas when modeling Bouguer gravity anomalies the density of the body
itself is used.
Figure: Two-dimensional gravity model. The solid line shows the calculated gravity values due to
the model (b) and the stars are the observed data.
The gravitational acceleration caused by a buried point mass is g = G m / r², where G is the
gravitational constant, m is the mass of the point mass, and r is the distance between the point
mass and our observation point. The figure below shows the gravitational acceleration observed
over a buried point mass. Notice that the acceleration is largest directly above the point mass and
decreases as we move away from it. The vertical component of the gravitational acceleration
caused by the point mass can be written in terms of the angle θ as gz = (G m / r²) cos θ.
Now, it is inconvenient to have to compute r and θ for various values of x before we can
compute the gravitational acceleration. Let's rewrite the above expression in a form that makes it
easy to compute the gravitational acceleration as a function of the horizontal distance x rather
than the distance between the point mass and the observation point, r, and the angle θ.
θ can be written in terms of z and r using the trigonometric relationship between the cosine of an
angle and the lengths of the hypotenuse and the adjacent side of the triangle formed by the angle,
cos θ = z / r. Likewise, r can be written in terms of x and z using the relationship between the
length of the hypotenuse of a triangle and the lengths of the two other sides, known as the
Pythagorean theorem, r² = x² + z². Substituting these into the above expression, the vertical
component of the gravitational acceleration caused by a point mass becomes

gz = G m z / (x² + z²)^(3/2)

Knowing the depth of burial, z, of the point mass, its mass, m, and the gravitational constant, G,
we can compute the gravitational acceleration we would observe over the point mass at various
distances by simply varying x in the above expression.
It can be shown that the gravitational attraction of a spherical body of finite size and mass m is
identical to that of a point mass with the same mass m. Therefore, the expression derived on the
previous page for the gravitational acceleration over a point mass also represents the
gravitational acceleration over a buried sphere. For application with a spherical body, it is
convenient to rewrite the mass, m, in terms of the volume and the density contrast of the sphere
with the surrounding earth using

m = Δρ V = Δρ (4/3) π R³

where V is the volume of the sphere, Δρ is the density contrast of the sphere with the surrounding
rock, and R is the radius of the sphere. Thus, the gravitational acceleration over a buried sphere
can be written as

gz = (4/3) π R³ Δρ G z / (x² + z²)^(3/2)
Notice that our expression for the gravitational acceleration over a sphere contains a term that
describes the physical parameters of the spherical body, its radius, R, and its density contrast, Δρ,
in the form R³ Δρ.
R and Δρ are two of the parameters describing the sphere that we would like to be able to
determine from our gravity observations (the third is the depth to the center of the sphere, z). That
is, we would like to compute predicted gravitational accelerations given estimates of R and Δρ,
compare them with the observed accelerations, and then vary R and Δρ until the predicted
acceleration matches the observed acceleration. The model producing the best fit between
observed and synthetic data is taken as the final result; however, it is never unique! There is an
infinite number of other geological models that fit the data equally well (in the same statistical
sense). Often additional geophysical measurements are required to reduce the ambiguity, or
geological constraints are added to the interpretation.
The maximum gravity anomaly for the sphere is obtained at x = 0; substituting this value of x
into the gravity equation above gives

g_max = (4/3) π R³ Δρ G / z²
Half-width method
The depth of the spherical mass anomaly can be found by measuring the half-width of the
anomaly, X1/2, the horizontal distance at which the anomaly has fallen to half of its maximum
value. For a sphere,

Z = 1.305 X1/2

where Z is the depth to the center of the sphere.
Example 1: Assume a cannon ball with an excess mass of 200 kg is buried 1 metre below the
surface. If the radius of the sphere is 10 cm, calculate g_max and X1/2, expressing your answer
in units of milligals.
Example 2: By measuring the gravity across a buried sphere of lead, you have determined the
half-width and g_max of the anomaly. The density contrast between the lead sphere and the
surrounding rock is 2,000 kg/m³; assume X1/2 = 100 m and g_max = 2 milligals.
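A minimal sketch of Example 1 using the sphere formulas above (the numbers come from the example statement; the half-width relation X1/2 ≈ 0.766 Z is just the inverse of Z = 1.305 X1/2):

# Minimal sketch of Example 1: a 200 kg excess mass buried at z = 1 m.

import math

G = 6.67e-11           # gravitational constant (N m^2 / kg^2)
MGAL = 1.0e-5          # 1 mGal = 1e-5 m/s^2

excess_mass_kg = 200.0
depth_m = 1.0

# maximum anomaly, directly above the sphere (x = 0): g_max = G * m / z^2
g_max = G * excess_mass_kg / depth_m ** 2
print(f"g_max = {g_max / MGAL:.2e} mGal")        # a few thousandths of a mGal

# half-width of the anomaly: X_1/2 = z * sqrt(2**(2/3) - 1) ~ 0.766 * z
x_half = depth_m * math.sqrt(2.0 ** (2.0 / 3.0) - 1.0)
print(f"X_1/2 = {x_half:.3f} m")                 # so Z = 1.305 * X_1/2 recovers the depth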
Chapter Three
Magnetic method
Understanding the magnetic effects associated with Earth materials requires a knowledge of the
basic principles of magnetism. This section reviews the elementary physical concepts that are
fundamental to magnetic prospecting.
If two magnetic poles of strengths P0 and P1 are separated by a distance r, a force (F) exists
between them. When the poles have the same polarity the force pushes them apart, and if
they are of opposite polarity the force is attractive and draws them together. The expression for F
is obtained from Coulomb’s law and is given by:

F = (1 / μ) (P0 P1 / r²)
The constant μ, known as the magnetic permeability, depends upon the magnetic properties of the
medium in which the poles are situated.
The magnetic field strength at a point is defined as the force per unit pole strength which would
be exerted upon a small pole of strength P0 placed at that point. The field strength (H) due to a
pole of strength (P) at a distance (r) is therefore

H = F / P0 = P / (μ r²)
Magnetic susceptibility (K) is the property of a material that determines how strongly a body is
magnetized by an external magnetic field, and it is the fundamental parameter in magnetic
prospecting. Rocks that have a significant concentration of ferro- and/or ferrimagnetic minerals
tend to have the highest susceptibilities. Consequently, basic and ultrabasic rocks have the highest
susceptibilities, acid igneous and metamorphic rocks have intermediate to low values, and
sedimentary rocks have very small susceptibilities. Susceptibility is defined as the ratio of the
intensity of magnetization (I) to the magnetic field strength (H):

K = I / H
The geomagnetic field at or near the surface of the Earth originates largely from within and
around the Earth’s core. Ninety percent of the Earth's magnetic field looks like a magnetic field
that would be generated from a dipolar magnetic source located at the center of the Earth. The
remaining 10% of the magnetic field cannot be explained in terms of simple dipolar sources. As
observed on the surface of the earth, the magnetic field can be broken into three separate
components.
The main field is the largest component of the magnetic field and is believed to be caused by
electrical currents in the Earth's fluid outer core. For exploration work, this field acts as the
inducing magnetic field. The external magnetic field is a relatively small portion of the observed
magnetic field that is generated from magnetic sources external to the earth. This field is
believed to be produced by interactions of the Earth's ionosphere with the solar wind. Hence,
temporal variations associated with the external magnetic field are correlated to solar activity.
The crustal field is the portion of the magnetic field associated with the magnetism of crustal
rocks. This portion of the field contains both magnetization caused by induction from the Earth's
main magnetic field and remanent magnetization.
It has long been recognized that the Earth’s magnetic field changes its direction and intensity with
time. These variations are resolved into secular variations, diurnal variations and variations due to
magnetic storms.
Secular variations are long-term variations in the main magnetic field (changes that occur over
years) that are presumably caused by fluid motion in the Earth's outer core. The rate of change,
although very significant on a geological time scale, does not affect the data acquisition of a
typical exploration survey unless it covers a large geographical area and takes many months
to complete.
Diurnal Variations are variations in the magnetic field that occur over the course of a day and
are related to variations in the Earth's external magnetic field. This variation can be on the order
of 20 to 30 nT per day and should be accounted for when conducting exploration magnetic
surveys.
Magnetometers used specifically in geophysical exploration can be classified into three groups:
the torsion, fluxgate and resonance types. Magnetometers measure horizontal and/or vertical
components of the magnetic field or the total field. There are two main types of resonance
magnetometer: the proton free-precession magnetometer, which is the best known, and the alkali
vapour magnetometer. Both types monitor the precession of atomic particles in an ambient
magnetic field to provide an absolute measure of the total magnetic field (F). The proton
magnetometer has a sensor which consists of a bottle containing a proton-rich liquid, usually
water or kerosene, around which a coil is wrapped, connected to the measuring apparatus
(Reynolds, 1997).
Magnetic data can be acquired in two configurations. These are rectangular grid pattern and
along a traverse (profile). Grid data consists of readings taken at the nodes of a rectangular grid;
traverse data is acquired at fixed intervals along a line. In both traverse and grid configurations,
the station spacing, or distance between magnetic readings, is important.
In ground based surveys, it is important to establish a local base station in an area away from
suspected magnetic targets or magnetic noise and where the local field gradient is relatively flat.
A base station should be quick and easy to relocate and reoccupy. As the survey progresses, the
base station must be reoccupied every half hour or so in order to compile a diurnal variation
curve for later correction.
Airborne magnetic methods are used to determine the Earth's magnetic field and its anomalies by
measurements from an aircraft. Since crustal rocks show different magnetizations, magnetic
measurements can reveal information on the crustal structure.
The most important factors to be specified for any airborne geophysical survey are the flight
height, the traverse line separation and the traverse line orientation (direction). For aeromagnetic
surveys, the selection of line direction depends on two main parameters, the magnetic inclination
in the survey area (sometimes called the magnetic latitude of the area), and the geological strike
that is significant for the investigation.
The preferred flight line direction would be north - south if the anomalies in the area were
distributed randomly. Because regional surveys are conducted over very large areas usually
containing various geological strike directions, a north - south traverse line orientation is usually
preferred for aeromagnetic surveys. On the other hand, if the survey area is known to contain a
pronounced geological strike direction and the magnetic latitude is either very high or very low it
may be advantageous to orient the traverse line direction perpendicular to the geological strike
direction. The advantages of this orientation arise because many of the significant magnetic
features arise from linear features like dykes and or faults, and by orienting the traverse lines at
right angles to these features, we can be confident that only a very few anomalies may be missed
by the selected flight lines.
The sensor height will usually be the distance from the sensor to the surface. As a rule of thumb,
the line spacing should equal the sensor height for complete definition of the anomalous
magnetic field. However, economic considerations may require larger line spacing. Control lines
are flown to allow leveling of the survey data. In small surveys, at least three control lines should
be flown at right angles to the traverse line direction. In large surveys, control lines should be
spaced at intervals of five to ten times the traverse line spacing. A typical flight path lay out is
shown in the figure below.
This step involves the removal of spurious noise and spikes from the data. Such noise can be
caused by cultural influences such as power lines, metal objects and others. It should ideally
include systematic and detailed viewing of all the data in graphic profile form.
The most significant correction is for the diurnal variation (∂B) in the earth’s magnetic field.
In the case of a ground survey, the base station readings are taken every half hour or hour.
Measurements of the total field made at other stations can then easily be adjusted by the variation
read from the diurnal curve. For example, if at the time of the reading at point A in the figure
below the magnetic field has increased by 10 nT, then 10 nT should be subtracted from the value
measured at point A. Similarly, if at B the magnetic field has fallen by 19 nT, the value at B should
be increased by 19 nT. The effect of diurnal variation can also be removed from the raw magnetic
data using linear interpolation, in the same way as the instrument drift and tidal corrections of
gravity data.
In airborne and shipborne surveys it is obviously impossible to return to a base station
frequently. By designing the survey so that the flight or track lines intersect, the data set can be
approximately corrected at the cross-over points.
The strong main field of the Earth must also be removed from each observed magnetic value. This
is done because the main field is dominated by dynamo action in the core and is not related to the
geology of the upper crust. Survey data at any location can be corrected by subtracting the
theoretical field value, obtained from the International Geomagnetic Reference Field (IGRF), from
the observed value.
The magnetic anomaly due to near-surface geological features is then obtained from the formula

ΔF = B_obs ± ∂B_d − B_IGRF

where B_obs is the observed field, ∂B_d the diurnal correction and B_IGRF the theoretical (IGRF)
field value.
Once magnetic data have been fully corrected and reduced to their final form, they are usually
displayed in profile and map form. Contour maps show the areal variation of the magnetic values,
while profile plots show the variation along a specific line.
Any magnetic map or profile contains a regional and a local component. The regional component
represents the response of deep-seated and broad structures; it has low frequency and long
wavelength. The local component represents the response of shallow and local features; it has
high frequency and short wavelength.
Regional magnetic anomalies are thus the response of deeper structures and have long wavelengths
(low frequencies), while residual (local) anomalies are the response of shallow structures and have
short wavelengths (high frequencies). Therefore, depending on the purpose of the survey, the
residual anomaly can be separated from the regional anomaly using different filtering methods.
1. Graphical method
Example 1:
3.7.3 Interpretation
As with all geophysical interpretation, the analysis of magnetic data has two distinct aspects:
qualitative and quantitative.
The qualitative process is largely map and profile based and dominates the early stages of a
study. The resultant preliminary structural element map is the cornerstone of the interpretation.
Qualitative interpretation involves recognition of:
the nature of discrete anomalous bodies including intrusions and faults
structural styles
relative depth from the magnetic profiles
For the two anomalies shown below, anomaly A has a short wavelength compared with anomaly B,
indicating that the magnetic body causing anomaly A is shallower than the body causing B. As
the amplitude of anomaly B is identical to that of anomaly A, despite the causative body being
deeper, this suggests that the magnetization of body B is much greater than that of body A, as
amplitude decreases with increasing separation of the sensor from the magnetized body.
It is possible to obtain a very approximate estimate of depth to a magnetic body using the shape
of the anomaly. There are two methods of depth estimations.
I. Graphical method
II. Slope method (Peters’ half-slope method)
Graphical method
This method is used to determine the depth to the center of a magnetic body. It is preferable
for a simple sphere or horizontal cylinder anomaly. The width of the main peak at half its
maximum value (Fmax/2) is approximately equal to the depth to the center of the magnetic body.
Figure: simple graphical method to estimate the depth to the center of the magnetized body.
Peters’ half-slope method
This method is used to determine the depth to the top of a magnetic body. It is preferable for a
dipping sheet or prism anomaly.
Find the maximum slope of the anomaly.
Construct two lines with slope equal to half the maximum slope that are tangent to the curve.
Measure the horizontal distance x between the two tangent points and determine z using the
formula below.

Z = (x cos α) / n

where Z = depth to the top of the magnetic body, and α = the angle subtended by the normal to the
strike of the magnetic anomaly and true north.
The value of n varies from 1.2 to 2, depending on the thickness of the magnetized body:
n = 1.2 if the body is very thin
n = 2.0 if the body is very thick
n = 1.6 if the body is of intermediate thickness
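A minimal sketch of the half-slope depth estimate (the distance, angle and n value below are invented for illustration):

# Minimal sketch of the Peters' half-slope depth estimate (illustrative numbers only).

import math

x_m = 40.0          # hypothetical horizontal distance between the two half-slope tangent points (m)
alpha_deg = 20.0    # hypothetical angle between the normal to the anomaly strike and true north
n = 1.6             # body of intermediate thickness

z_top = x_m * math.cos(math.radians(alpha_deg)) / n
print(f"estimated depth to the top of the body = {z_top:.1f} m")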
Chapter Four
Electric methods
1. DC Resistivity method
This is an active method that employs measurements of the electrical potential associated with
subsurface electrical current flow generated by a DC source. The main parameter in this method
is the vertical or lateral variation in the resistivity of subsurface materials.
2. Induced Polarization (IP) method
This is an active method that is commonly done in conjunction with DC Resistivity. It employs
measurements of the transient (short-term) variations in potential as the current is initially
removed from the ground. It has been observed that when a current is applied to the ground, the
ground behaves much like a capacitor, storing some of the applied current as a charge that is
dissipated upon removal of the current. Overvoltage decay times and rise times are measured and
are diagnostic of the nature of the subsurface.
Figure: decay of the voltage between times t0 and t1 after the current is removed.
3. Self Potential (SP) method
This is a passive method that employs measurements of naturally occurring electrical potentials
commonly associated with the weathering of sulfide ore bodies. Measurable electrical potentials
have also been observed in association with ground-water flow. The basic idea is to measure
natural potential differences in the subsurface without injecting any current.
Introduction
The electrical resistivity method is used to map the subsurface electrical resistivity structure,
which is interpreted by the geophysicist to determine geologic structure and/or physical
properties of the geologic materials. The basic parameter of a DC electrical measurement is
resistivity. The electrical resistivity of a geologic unit or target is measured in ohm-metres and is
a function of porosity, mineral grain composition, temperature, water saturation and the
concentration of dissolved solids in the pore fluids within the subsurface.
In 1827, Georg Ohm defined an empirical relationship between the current flowing through a
wire and the voltage potential required to drive that current.
V = IR
Ohm found that the current, I, is proportional to the voltage, V, for a broad class of materials that
we now refer to as ohmic materials. The constant of proportionality is called the resistance of the
material and has the units of voltage (volts) over current (amperes), or ohms.
The important parameter for this method is resistivity, not resistance. Why? The problem with
using resistance as a measurement is that it depends not only on the material out of which the
wire is made, but also the geometry of the wire. If we were to increase the length of wire, for
example, the measured resistance would increase. Also, if we were to decrease the diameter of
the wire, the measured resistance would increase. We want to define a property that describes a
material's ability to transmit electrical current that is independent of the geometrical factors.
The quantity that is used is called resistivity (ρ). In the case of the wire, resistivity is defined as
the resistance in the wire, multiplied by the cross-sectional area of the wire, divided by the length
of the wire. The units associated with resistivity are thus ohm·m. Resistivity is a fundamental
parameter of the material making up the wire that describes how easily the wire can transmit an
electrical current. High values of resistivity imply that the material making up the wire is very
resistant to the flow of electricity. Low values of resistivity imply that the material making up the
wire transmits electrical current very easily.
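A minimal sketch of the difference between resistance and resistivity (the wire dimensions and resistance below are invented): ρ = R·A/L removes the geometry from the measurement.

# Minimal sketch of resistivity vs. resistance for a wire (illustrative numbers only).
# rho = R * A / L: dividing out the geometry (area A, length L) leaves a material property.

import math

resistance_ohm = 0.85        # hypothetical measured resistance of the wire (ohm)
length_m = 10.0              # wire length (m)
diameter_m = 0.5e-3          # wire diameter (m)

area_m2 = math.pi * (diameter_m / 2.0) ** 2
rho = resistance_ohm * area_m2 / length_m
print(f"resistivity = {rho:.3e} ohm*m")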
Resistivity surveys give a picture of the subsurface resistivity distribution. To convert the
resistivity picture into a geological picture, some knowledge of typical resistivity values for
different types of subsurface material is important. The resistivity of rocks depends on local
factors such as porosity, temperature, fluid quantity and quality, and the composition of the rock
materials.
Current flow and Equipotential
The usual practice in the field is to apply a direct current (DC) between two electrodes implanted
in the ground and to measure the difference of potential between two additional electrodes that do
not carry current. Usually the potential electrodes are in line between the current electrodes, but in
principle they can be located anywhere.
In a homogeneous earth, current flows radially outward from the source, defining hemispherical
surfaces on which the potential is everywhere equal; such a surface is called an equipotential
surface. The electric potential at a distance R from the source is given by:

V = Iρ / (2πR)
In practice there are always two current electrodes: one acts as a source, through which the current
enters the ground, and one as a sink, to which it returns. Further, we do not measure a potential
as such but always a potential difference, that is, a voltage. The general arrangement of the
electrodes is shown in the figure below:
Figure: collinear four-electrode arrangement, with current electrodes C1 and C2 and potential
electrodes P1 and P2 between them.
The goal in resistivity surveying is to measure the potential difference between two points due to
the current from the two current electrodes. The potential at each potential electrode is determined
by both current electrodes. Suppose that the two current electrodes C1, C2 and the two potential
electrodes P1, P2 are in one line, as above, with C1 the positive and C2 the negative current
electrode.
The potential at P1 due to C1 is

V = (Iρ / 2π)(1 / r1)

The potential at P1 due to C2 is

V = −(Iρ / 2π)(1 / r2)

The total potential at P1 is simply the sum of the two potentials:

V1 = (Iρ / 2π)(1/r1 − 1/r2)
The potential at P2 due to C1 is

V = (Iρ / 2π)(1 / r3)

The potential at P2 due to C2 is

V = −(Iρ / 2π)(1 / r4)

The total potential at P2 is simply the sum of the two potentials:

V2 = (Iρ / 2π)(1/r3 − 1/r4)

ΔV = V1 − V2 = (Iρ / 2π)(1/r1 − 1/r2 − 1/r3 + 1/r4)
If the arrangement of the electrodes (and hence the geometric factor k) is changed, the computed
resistivity value also changes; for this reason the resistivity obtained is called the apparent
resistivity. Knowing the locations of the four electrodes, and by measuring the amount of current
input into the ground, I, and the voltage difference between the two potential electrodes, ΔV, we
can compute the resistivity of the medium, ρa, using the following equation:

ρa = (2π ΔV / I) · 1 / (1/r1 − 1/r2 − 1/r3 + 1/r4)
ρa = k (ΔV / I)
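A minimal sketch of this calculation for a collinear spread (the distances, current and voltage below are invented; r1 = C1P1, r2 = C2P1, r3 = C1P2, r4 = C2P2 as in the derivation above):

# Minimal sketch of the general four-electrode apparent-resistivity calculation
# (electrode distances, current and voltage are illustrative only).

import math

def geometric_factor(r1, r2, r3, r4):
    """k = 2*pi / (1/r1 - 1/r2 - 1/r3 + 1/r4) for the four-electrode spread."""
    return 2.0 * math.pi / (1.0 / r1 - 1.0 / r2 - 1.0 / r3 + 1.0 / r4)

def apparent_resistivity(k, delta_v, current):
    """rho_a = k * dV / I, in ohm*m."""
    return k * delta_v / current

# Wenner-like spacing a = 10 m: r1 = r4 = a, r2 = r3 = 2a
k = geometric_factor(10.0, 20.0, 20.0, 10.0)                 # comes out to 2*pi*a
rho_a = apparent_resistivity(k, delta_v=0.05, current=0.1)   # 50 mV, 100 mA
print(f"k = {k:.1f} m, apparent resistivity = {rho_a:.1f} ohm*m")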
The depth of penetration of the current is directly related to the current electrode spacing: the
greater the current electrode spacing, the greater the depth reached by the current.
What this implies is that by increasing the electrode spacing, more of the injected current will
flow to greater depths, as indicated in the figure above.
How does the presence of depth variations in resistivity affect the flow of electrical current? In
the previous examples, we assumed that the Earth has a constant resistivity. Obviously, this isn't
true or else we wouldn't be trying to map the variation in resistivity throughout the Earth.
Although resistivity could conceivably vary in depth and in horizontal position, we will initially
only consider variations in depth. In addition, we will assume that these depth variations in
resistivity can be quantized into a series of discrete layers, each with a constant resistivity. Thus,
initially we will not consider variations in resistivity in the horizontal direction or continuous
variations with depth.
Shown below are current-flow paths (red) from two current electrodes in two simple two-layer
models. The model to the left contains a high-resistivity layer (250 ohm-m) overlying a lower
resistivity layer (50 ohm-m). The model to the right contains a low-resistivity layer (50 ohm-m)
overlying a higher resistivity layer (250 ohm-m). For comparison, we've also shown the paths
current would have flowed along if the Earth had a constant resistivity (blue) equal to that of the
top layer. These paths are identical to those described previously.
Notice that the current flow in the layered media deviates from that observed in the
homogeneous media. In particular, notice that in the layered media the current flow lines are
distorted in such a way that current preferentially seems to be attracted to the lower-resistivity
portion of the layered media. In the model on the left, current appears to be pulled downward
into the 50 ohm-m layer. In the model on the right, current appears to be bent upward, trying to
remain within the lower resistivity layer at the top of the model. This shouldn't be surprising.
What we are observing is the current's preference toward flowing through the path of least
resistance. For the model on the left, that path is through the deep layer. For the model on the
right, that path is through the shallow layer.
4.2.3 Resistivity Equipment, Electrode Configurations and Field Procedures
DC Resistivity Equipment
Resistivity measurements of the ground are normally made by injecting current through two
current electrodes and measuring the resulting voltage difference at two potential electrodes.
From the current (I) and voltage (V) values, an apparent resistivity (ρa) value is calculated.
There are a number of electrode arrangements (or arrays) used in electrical measurements. By
convention, the two current electrodes and the two potential electrodes are denoted A, B and M, N
respectively. The most commonly used arrays are the Schlumberger, Wenner and dipole-dipole
arrays.
a) Schlumberger array
The Schlumberger array is a symmetrical arrangement in which the points A, M, N and B are
taken on a straight line such that they are placed symmetrically about the center of the spread, O.
When the distance between the potential electrodes is a and s is the half-distance between the
current electrodes, the apparent resistivity is given by:

ρa = π [(s² − a²/4) / a] (ΔV / I)
K = π (s² − a²/4) / a     (geometrical factor)

The depth of penetration of the current is increased by increasing the spacing between the current
electrodes A and B, while the spacing between the potential electrodes is kept constant.
b) Wenner array
The Wenner array is also a symmetrical arrangement in which the points A, M, N and B are taken
on a straight line such that adjacent electrodes are placed at a fixed separation a. The depth of
penetration of the current is increased by increasing a. The apparent resistivity is given by:

ρa = 2πa (ΔV / I),     k = 2πa     (geometric factor)
c) Dipole-dipole array
In the dipole-dipole array, the distance between the current dipole and the potential dipole is
increased in order to increase the depth of penetration. The general electrode arrangement of the
dipole-dipole configuration is shown below.

ρa = π a n (n + 1)(n + 2) (ΔV / I),     k = π a n (n + 1)(n + 2)     (geometric factor)
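A minimal sketch of the geometric factors of the three arrays (the spacings used are invented examples):

# Minimal sketch of the geometric factors for the common arrays (illustrative spacings).

import math

def k_wenner(a):
    return 2.0 * math.pi * a

def k_schlumberger(s, a):
    # s = half current-electrode separation (AB/2), a = potential-electrode separation (MN)
    return math.pi * (s ** 2 - a ** 2 / 4.0) / a

def k_dipole_dipole(a, n):
    # a = dipole length, n = separation between the dipoles in units of a
    return math.pi * a * n * (n + 1) * (n + 2)

print(f"Wenner, a = 10 m:                 k = {k_wenner(10.0):.1f} m")
print(f"Schlumberger, s = 50 m, a = 10 m: k = {k_schlumberger(50.0, 10.0):.1f} m")
print(f"Dipole-dipole, a = 10 m, n = 3:   k = {k_dipole_dipole(10.0, 3):.1f} m")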
Field Procedures
There are two major field procedures for resistivity measurements: sounding and profiling.
Resistivity sounding: In resistivity sounding, which is also known as Vertical Electrical
Sounding (VES), the positions of the electrodes are changed with respect to a fixed point (known
as the sounding point), and the measured values reflect the vertical distribution of resistivity
values in the geologic section. The procedure is used to outline the boundaries between horizontal
horizons in stratified media. By merit of its relative advantages over other arrays, the electrode
configuration most widely used in sounding is the Schlumberger array. In Schlumberger sounding,
the MN values and the corresponding AB values are chosen in order to get overlapping readings
whenever a change-over of MN values occurs.
Example
Resistivity profiling: all the electrodes are moved laterally without changing their relative
configuration. This approach provides the lateral variation of resistivity values within a certain
depth level in the subsurface. The method is hence also called lateral inhomogeneity hunting. It
is a procedure used to map vertical and near-vertical contacts between rock formations and weak
zones. For such constant-depth investigations, the Wenner array is considered convenient due to
its predefined separations and relative ease of field operation.
Example
4.2.4 Data presentation and Interpretation of Resistivity Data
Data presentation
The resistivity sounding data are routinely plotted on double logarithmic graph paper of,
preferably, 62.5 mm modulus. The x-axis is the electrode spacing (L/2 or AB/2), while the y-axis
is the apparent resistivity ρa.
The resistivity profile data are routinely plotted on semi-logarithmic graph paper. The x-axis is
the station position along the profile (linear scale), while the y-axis is the apparent resistivity ρa
(logarithmic scale).
Interpretation
As with all geophysical interpretation, the analysis of electrical resistivity data has two distinct
aspects: qualitative and quantitative
Qualitative Interpretation
Profiling surveys:
Sounding surveys
If the ground is composed of a single homogeneous and isotropic layer of finite resistivity and
infinite thickness, the apparent resistivity curve will be a straight horizontal line.
Consider a ground section which is composed of two homogeneous and isotropic layers, one
with finite resistivity ρ1 and thickness h1 on top, and another having finite resistivity ρ2 and
an infinite thickness (h2 = ∞) at the bottom. The curve will be an ascending type if ρ2/ρ1 > 1 or
will be a descending type if ρ2/ρ1 < 1.
Descending type if ρ2 < ρ1; ascending type if ρ2 > ρ1.
Case 4: four-layer curves
Quantitative interpretation
The objective of this interpretation is to determine the thickness and resistivity of the different
layers and finally to obtain a complete geological picture of the area. The interpretation can be
done manually or by computer.
Manual – time consuming
Computer based – using specially designed software
Two extreme cases:
- ρ2 = ∞, k = 1
- ρ1 = ∞, k = −1
Chapter Five
Seismic exploration
5.1.1 Introduction
Like the DC resistivity method, seismic methods, as typically applied in exploration seismology,
are considered active geophysical methods. In seismic surveying, ground movement caused by
some source is measured at a variety of distances from the source.
There are several different kinds of seismic waves, and they all move in different ways. The two
main types of waves are body waves and surface waves. Body waves can travel through the
earth's inner layers, but surface waves can only move along the surface of the planet like ripples
on water. Earthquakes radiate seismic energy as both body and surface waves.
Seismic body waves can be further subdivided into two classes of waves: P waves and S waves.
There are two classes of surface waves, Love and Rayleigh waves that are distinguished by the
type of particle motion they impose on the medium.
The velocities of seismic waves depend on the density and elastic moduli of the medium:
Vp = √((K + 4μ/3)/ρ)   Vs = √(μ/ρ)
where K = bulk modulus, μ = shear modulus, and ρ = density.
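For example, the sketch below (Python; the modulus and density values are hypothetical, roughly representative of a consolidated rock) evaluates Vp and Vs from the expressions above.

import math

def vp(bulk_modulus, shear_modulus, density):
    """P-wave velocity: Vp = sqrt((K + 4*mu/3) / rho)."""
    return math.sqrt((bulk_modulus + 4.0 * shear_modulus / 3.0) / density)

def vs(shear_modulus, density):
    """S-wave velocity: Vs = sqrt(mu / rho)."""
    return math.sqrt(shear_modulus / density)

# Hypothetical values: K = 37 GPa, mu = 44 GPa, rho = 2650 kg/m^3
K, mu, rho = 37e9, 44e9, 2650.0
print(round(vp(K, mu, rho)), "m/s")   # about 6000 m/s
print(round(vs(mu, rho)), "m/s")      # about 4100 m/s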
Typical P- and S-wave velocities for some materials are given in the table below.
Material | P-wave velocity (m/s) | S-wave velocity (m/s)
Air | 332 | — (fluids do not transmit S waves)
Water | 1400-1500 | —
Petroleum | 1300-1400 | —
Huygens' Principle
Every point on a wave front can be thought of as a new point source for waves generated in the
direction the wave is traveling or being propagated.
Snell’s Law
What happens when seismic waves reach a layer boundary? When seismic waves travel through the earth and reach a layer boundary, they encounter changes in the physical properties of the materials, such as bulk modulus, shear modulus and density. These changes cause the rays to be reflected and refracted. Snell's law describes how the waves are refracted at the boundary.
sin i / sin r = V1 / V2
At the critical angle of incidence ic the refracted ray travels along the boundary (r = 90°):
sin ic / sin 90° = V1 / V2
sin ic = V1 / V2
ic = sin⁻¹(V1 / V2)
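A short sketch (Python; the velocities are hypothetical) of the critical-angle relation ic = sin⁻¹(V1/V2):

import math

def critical_angle_deg(v1, v2):
    """Critical angle from Snell's law; defined only when v2 > v1."""
    if v2 <= v1:
        raise ValueError("No critical refraction: requires v2 > v1")
    return math.degrees(math.asin(v1 / v2))

# Hypothetical velocities: 1500 m/s overburden over 3000 m/s bedrock
print(critical_angle_deg(1500.0, 3000.0))  # 30.0 degrees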
Seismic refraction makes use of critically refracted, first-arrival energy only. The rest of the
wave form is ignored.
Principle of Reciprocity
According to this principle, the travel times for forward shooting and reverse shooting over the same spread are equal.
The seismic refraction technique is illustrated in the photos and schematic drawing in figure
below. An impulsive source creates a seismic wave which travels through the earth. When the
wave front reaches a layer of higher velocity (e.g. bedrock) a portion of the energy is refracted,
or bent, and travels along the refractor as a “head wave” at the velocity of the refractor
(bedrock). Energy from the propagating head wave leaves the refractor and returns to the surface,
where its arrival is detected by a series of geophones and recorded on a seismograph. The angle
of refraction depends on the ratio of velocities in the two materials (Snell’s Law). Travel times
for the impulsive wave-front to reach each geophone are measured from the seismograph
records. From those travel times and distances, seismic velocities in each layer, and depths to
each layer can be calculated and physical properties inferred.
Travel-Time Curves
Plots of the arrival times of the various recorded waves versus offset from the source are called travel-time curves. Travel-time curves are graphs in which time is plotted on the vertical axis and distance from the source on the horizontal axis.
[Figure: shot record and travel-time diagram for a layer of thickness Z1, showing the critical distance Xcrit and the crossover distance Xc]
If you examine the shot record shown above carefully, you can see the three seismic waves
(direct, reflected, and refracted). Using the snapshots or movies of wave propagation presented
earlier, try to identify the three arrivals on this shot record.
Crossover distance (Xc): the offset at which the direct and refracted waves arrive at the same time.
Critical distance (Xcrit): the offset at which the reflected and refracted waves arrive at the same time.
Travel time of the direct wave: t = x / V1
The direct-wave travel-time curve is a straight line passing through the origin. At short offsets the first arrival is the direct wave; at larger offsets the first arrival is the refracted wave.
Reflected wave: waves reflected from the interface. Remember that the reflected wave can never be the first arrival recorded on a given seismogram; its moveout approaches that of the direct wave at very large offsets.
Refracted (head) wave: waves refracted across the interface, travelling along the boundary and then back up to the surface.
Travel time of the head wave: t = x/V2 + 2z1·√(V2² − V1²) / (V1·V2)
The total time taken by the wave that travels along points A and B from the source (S) to the receiver (R) is:
t = tSA + tAB + tBR
  = SA/V1 + AB/V2 + BR/V1,  with SA = BR
  = 2SA/V1 + AB/V2
AB = X − (DA + BE), DA = BE, so AB = X − 2DA;  tan θ = DA/Z, so DA = Z tan θ
cos θ = Z/SA, so SA = Z/cos θ
t = 2Z/(V1 cos θ) + (X − 2Z tan θ)/V2
  = 2Z/(V1 cos θ) + X/V2 − 2Z tan θ/V2
Using sin θ = V1/V2 (so 1/V2 = sin θ/V1):
t = X sin θ/V1 + (2Z/(V1 cos θ))(1 − sin²θ)
  = X sin θ/V1 + 2Z cos²θ/(V1 cos θ)
  = X sin θ/V1 + 2Z cos θ/V1
With sin θ = V1/V2 and cos θ = √(1 − (V1/V2)²), this becomes
t = X/V2 + 2Z·√(V2² − V1²) / (V1·V2)
This is the equation of a straight line with slope 1/V2 and time intercept ti, where
ti = 2Z·√(V2² − V1²) / (V1·V2)
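These two branches are easy to evaluate numerically. The sketch below (Python; the layer velocities and depth are hypothetical) returns the direct-wave time, the head-wave time and the intercept time for a single horizontal refractor.

import math

def t_direct(x, v1):
    """Direct-wave travel time t = x / V1."""
    return x / v1

def t_refracted(x, v1, v2, z):
    """Head-wave travel time t = x/V2 + 2*Z*sqrt(V2^2 - V1^2)/(V1*V2)."""
    return x / v2 + 2.0 * z * math.sqrt(v2**2 - v1**2) / (v1 * v2)

def intercept_time(v1, v2, z):
    """Intercept time ti = 2*Z*sqrt(V2^2 - V1^2)/(V1*V2)."""
    return 2.0 * z * math.sqrt(v2**2 - v1**2) / (v1 * v2)

# Hypothetical model: V1 = 750 m/s, V2 = 3000 m/s, refractor depth Z = 10 m
for x in (0, 15, 30, 45, 60):
    print(x, round(1000 * t_direct(x, 750), 1),
          round(1000 * t_refracted(x, 750, 3000, 10), 1))   # times in ms
print(round(1000 * intercept_time(750, 3000, 10), 1), "ms intercept")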
The refraction travel-time curves for forward and reverse shooting are shown below.
[Figure: forward and reverse refraction travel-time curves, showing the intercept time ti and the crossover distance xc]
How can the layer velocities be determined? Example 1. The travel times define two linear branches (direct and refracted arrivals):
Distance (m) | Time, direct branch (ms) | Time, refracted branch (ms)
0 | 0 | 30
6 | 8 | 32
12 | 16 | 34
18 | 24 | 36
24 | 32 | 38
30 | 40 | 40
36 | 48 | 42
42 | 56 | 44
48 | 64 | 46
54 | 72 | 48
60 | 80 | 50
66 | 88 | 52
72 | 96 | 54
[Travel-time plot of the Example 1 data: time (milliseconds) on the vertical axis versus distance (meters) on the horizontal axis]
If we plot the first-arrival times against offset from the shot instant, the slopes of the two straight-line segments yield the reciprocals of the velocities in each medium.
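A sketch of that calculation for the Example 1 data (Python with NumPy), treating the two time columns of the table as the direct and refracted branches.

import numpy as np

x = np.arange(0, 78, 6)                  # offsets 0-72 m from Example 1
t_dir = np.arange(0, 104, 8)             # direct-branch times (ms)
t_refr = np.arange(30, 56, 2)            # refracted-branch times (ms)

# Slopes of the two straight lines give 1/V (times in ms, so V comes out in m/ms)
slope1 = np.polyfit(x, t_dir, 1)[0]
slope2, intercept = np.polyfit(x, t_refr, 1)

v1 = 1000.0 / slope1                     # 750 m/s
v2 = 1000.0 / slope2                     # 3000 m/s
print(v1, v2, intercept)                 # intercept time ti = 30 ms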
Depth determination
For depth calculation at a shot point there are two approaches: the crossover distance and the intercept time (the intersection between the prolongation of the V2 line and the time axis through the shot point).
t = X/V2 + 2Z·√(V2² − V1²) / (V1·V2)
ti = 2Z·√(V2² − V1²) / (V1·V2)
Z = ti·V1·V2 / (2·√(V2² − V1²))
Using the crossover distance
At the crossover distance the direct and refracted waves arrive at the same time.
Time for the direct wave: t1 = x / V1
Time for the refracted wave: t2 = X/V2 + 2Z·√(V2² − V1²) / (V1·V2)
Setting t1 = t2 at the crossover distance Xc:
Xc/V1 = Xc/V2 + 2Z·√(V2² − V1²) / (V1·V2)
Xc/V1 − Xc/V2 = 2Z·√(V2² − V1²) / (V1·V2)
Solving for Z gives
Z = (Xc/2)·√((V2 − V1)/(V2 + V1))
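Both depth estimates are easy to check numerically. The sketch below (Python) uses values consistent with Example 1 above (V1 = 750 m/s, V2 = 3000 m/s, ti = 30 ms, Xc = 30 m); the two approaches give essentially the same depth.

import math

def depth_from_intercept(ti, v1, v2):
    """Z = ti*V1*V2 / (2*sqrt(V2^2 - V1^2)), with ti in seconds."""
    return ti * v1 * v2 / (2.0 * math.sqrt(v2**2 - v1**2))

def depth_from_crossover(xc, v1, v2):
    """Z = (Xc/2) * sqrt((V2 - V1)/(V2 + V1))."""
    return 0.5 * xc * math.sqrt((v2 - v1) / (v2 + v1))

v1, v2 = 750.0, 3000.0
print(depth_from_intercept(0.030, v1, v2))   # ti = 30 ms  ->  about 11.6 m
print(depth_from_crossover(30.0, v1, v2))    # Xc = 30 m   ->  about 11.6 m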
Dipping-layers
The problem presented by a dipping boundary between layers adds some geometric complexity
to the derivation of the depth formula. Several important concepts of seismic refraction theory
must be introduced at this point. To learn about the geometry of a dipping boundary, the
refraction profile must be reversed. For a single array, a minimum of two shots must be fired,
one from each end of the array. This concept is termed "reversed-profile shooting", and the practice should be followed routinely in all seismic-refraction studies.
The total travel time of waves from shot point A to shot point D, and in the opposite direction from shot point D to shot point A, must be equal; that is, Tu must equal Td, because the same wave path is followed in each case.
The depths (da and db) to the refractor vertically below the spread end points can easily be calculated from the derived perpendicular depths (za and zb) using the expression d = z/cos α. If the layer dips relative to the ground surface, the opposing travel-time curves will be asymmetrical.
The travel time over a refractor dipping at an angle α is given by:
T_ABCD = AB/V1 + BC/V2 + CD/V1
AB = Za/cos θc,  CD = Zb/cos θc
BC = X cos α − (Za + Zb) tan θc
T_ABCD = Za/(V1 cos θc) + [X cos α − (Za + Zb) tan θc]/V2 + Zb/(V1 cos θc)
       = X cos α/V2 + (Za + Zb)/(V1 cos θc) − (Za + Zb) sin²θc/(V1 cos θc)
T_ABCD = X cos α/V2 + (Za + Zb) cos θc/V1
Down-dip travel time
Td = X cos α/V2 + (Za + Zb) cos θc/V1
Substituting V2 = V1/sin θc and Zb = Za + X sin α, the equation becomes
Td = X sin(θc + α)/V1 + 2 Za cos θc/V1
The down-dip apparent velocity is
Vd = V1/sin(θc + α)
Up-dip travel time
Tu = X cos α/V2 + (Za + Zb) cos θc/V1
Substituting V2 = V1/sin θc and Za = Zb − X sin α, the equation becomes
Tu = X sin(θc − α)/V1 + 2 Zb cos θc/V1
and the up-dip apparent velocity is
Vu = V1/sin(θc − α)
Using Vd and Vu we can solve for the dip and the critical angle:
α = ½ [ sin⁻¹(V1/Vd) − sin⁻¹(V1/Vu) ]
θc = ½ [ sin⁻¹(V1/Vd) + sin⁻¹(V1/Vu) ]
V2 = V1/sin θc
Finally, the intercept times can be used to determine the perpendicular depths to the refractor:
Za = Tid · V1 / (2 cos θc)
Zb = Tiu · V1 / (2 cos θc)
Example 1: given V1 = 1500 m/s, Vu = 3200 m/s, Vd = 2870 m/s, Tid = 100 ms and Tiu = 150 ms.
Find
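A sketch of the solution procedure (Python), applying the dip and critical-angle relations above to the Example 1 values; the intercept times are taken as 100 ms and 150 ms.

import math

v1, v_u, v_d = 1500.0, 3200.0, 2870.0      # m/s
ti_d, ti_u = 0.100, 0.150                  # intercept times (s), i.e. 100 ms and 150 ms

alpha = 0.5 * (math.asin(v1 / v_d) - math.asin(v1 / v_u))    # dip angle
theta_c = 0.5 * (math.asin(v1 / v_d) + math.asin(v1 / v_u))  # critical angle
v2 = v1 / math.sin(theta_c)                                  # true refractor velocity

z_a = ti_d * v1 / (2.0 * math.cos(theta_c))   # perpendicular depth below the down-dip shot
z_b = ti_u * v1 / (2.0 * math.cos(theta_c))   # perpendicular depth below the up-dip shot

print(math.degrees(alpha))     # about 1.8 degrees
print(math.degrees(theta_c))   # about 29.7 degrees
print(v2)                      # about 3020 m/s
print(z_a, z_b)                # about 86 m and 130 m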
Example 2: Offsets in meters and times in milliseconds are given below. Answer the following questions based on the data given below.
I. Construct a travel-time curve for both the forward and reverse profiles
II. Determine the number of layers from the graph
III. Estimate the crossover distances and intercept times
IV. Evaluate the velocity and depth of each layer
V. Discuss the dipping nature of the layers
Hidden Layers
Can layers exist in the subsurface that are not observable from first-arrival times? The answer is yes. Layers that cannot be distinguished from first-arrival time information are known as hidden layers. There are three possible causes of hidden layers: velocity inversion, lack of velocity contrast, and the presence of a thin bed with a large velocity contrast.
I. Velocity inversion – where a layer with a low velocity underlies one with a higher velocity, the low-velocity layer may not be detected using the seismic refraction method. No critical refraction can occur in such a situation, and thus no head waves from the interface are produced.
Notice that this travel-time curve is indistinguishable from the curves produced by a model
containing a single interface. Hence, from this data alone you would be unable to detect the
presence of the middle layer. Using the methodology described earlier, you would interpret the
subsurface as consisting of a single layer with a velocity of 1500 m/s (from the slope of the travel-time curve for the direct arrival) underlain by a second layer with a velocity of 5000 m/s (from the slope of the head-wave travel-time curve). Using the value of the intercept time from the graph and the values of the velocities, you would estimate the thickness of the layer as 314 m, and you would be wrong.
II. Thin layers with large velocity contrast – another type of hidden layer is produced by media whose velocity greatly increases over a small change in depth. Consider the model shown below.
Notice that in this model a thin second layer is underlain by a third layer whose velocity is much larger than that of the layers above.
Unlike the previous example, head waves are produced at both interfaces just as described
previously. Because the layer is thin and the velocity of the underlying medium is larger,
however, the head wave coming from the top boundary is never observed as a first arrival!! It is
overtaken by the rapidly traveling head wave coming from the bottom boundary before it can
overtake the direct arrival. The travel-time curve you would observe is shown below.
The red line in the figure shows the travel times for the head wave coming off of the top
boundary. As described above, it is never observed as a first arrival. Therefore, like before, you
would interpret the first arrivals as being generated from a subsurface structure that consists of a
single layer over a halfspace. Again, like before, you can correctly estimate the velocities in the
top layer and the halfspace, but because you missed the middle layer, the depth you would
compute from time intercept to the top of the halfspace would be incorrect.
III. Lack of velocity contrast: if there is little velocity contrast between the layers, it may be extremely difficult to identify the head-wave arrivals from the interface.
5.2 Seismic reflection
Introduction: Seismic reflection surveying is one of the most widely used geophysical techniques. The method was developed in the 1920s and 1930s as a tool for hydrocarbon (oil and gas) exploration in sedimentary basins (Petty, 1976). Financial incentive led the petroleum industry to refine the technique to sophisticated levels by the 1970s; many advances in computer equipment and processing can be attributed to this effort. Its predominant applications are hydrocarbon exploration and research into crustal structure, with depths of penetration of many kilometers. Since around 1980 the method has been applied increasingly to engineering and environmental investigations, where the depths of penetration are typically less than 200 m.
General principles: the reflection method requires a source of seismic energy and an appropriate spread of detectors (geophones) connected to a recorder (seismograph). Seismic waves are reflected back where there is a change in acoustic impedance (Z), which is the product of the seismic velocity (V) and the density (ρ) of each layer (Zi = Vi·ρi for the ith layer).
Data acquisition: Field geometries vary considerably according to the environment (marine vs. land), the nature of the geological problem, and the accessibility of the area. For a seismic reflection survey there are different ways of gathering the data: central shot gathers, end shot gathers and common-midpoint gathers.
If more than one shot location is used, reflections arising from the same point on the interface will be detected at different geophones. This common point of reflection is known as the common midpoint (CMP); the wave is reflected from midway between source and receiver. A seismic reflection survey is commonly conducted in the common-midpoint (CMP) mode. This method provides redundancy of information that enhances the signal-to-noise ratio compared with a single shot.
Arrival time for the reflected wave: when a seismic wave traveling in one layer encounters a layer with a different acoustic impedance, some of the wave is reflected back into the first layer (Fig. 3.28). Snell's law shows that the angle of reflection (θ2) is exactly the same as the angle of incidence (θ1), because both rays travel at the same velocity. The reflected wave therefore follows a V-shaped ray path.
The total arrival time of the wave reflected from the top of the second layer can be determined from the total length of the reflected path (L), which is the sum of the incident ray path (L1) and the reflected ray path (L2), divided by the velocity of the first layer.
Fig. 3.29b illustrates that the time t0 to go vertically down to the interface and straight back up to the shot location is a constant, given by:
t0 = 2h / V1
The travel time for a reflected wave from a horizontal interface overlain by a constant-velocity medium is therefore:
Tf = √(x² + 4h²) / V1,  which can be rearranged into the hyperbola form  Tf²/t0² − x²/(4h²) = 1
Normal moveout (NMO)
Normal moveout is the change in travel time as a function of the offset between the source and the receiver. It is the difference between the travel time along the hyperbola (Tf) and the intercept time (t0):
TNMO (Δt) = Tf − t0
Velocity and depth determination
The travel time for a reflected wave from a horizontal interface overlain by a constant-velocity medium is:
Tf = Δt + t0 = √(t0² + x²/V1²)
(Δt + t0)² = t0² + x²/V1²
Δt² + 2Δt·t0 + t0² = t0² + x²/V1²
If the receiver offsets are small compared to the layer thickness (which is common), then Δt is small and Δt² ≈ 0. (The source-receiver distance is called the offset.) Hence
Δt ≈ x² / (2 t0 V1²)
V1 = x / √(2 t0 Δt)
All of these terms except V1 are known from the t-x graph, so we can solve for V1.
After calculating the velocity of the first layer using the above equation, the thickness of the first layer is determined from:
h1 = t0 · V1 / 2
NB. The most critical parameter in seismic surveying, irrespective of the type or scale of
application, is the determination of seismic velocity. It is this factor which is used to convert
from the time domain (the seismogram) to the depth domain (the geological cross-section).
Consequently, correct analysis of velocities is of vital importance.
Example 1: Below a flat, horizontal surface is a layer of 1500 m thickness that has a constant velocity of 2500 m/s. Twelve detectors are placed at 100 m intervals from the source. Table 5.2 lists the total path length to each detector. Determine t0 and the NMO (ΔT) for the trace corresponding to each detector. List the answers in ms (1 s = 1000 ms).
Table 5.2 Reflection path lengths
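A sketch of this calculation (Python) for the stated model (h = 1500 m, V = 2500 m/s, detectors at 100 m intervals); the path lengths are computed from the geometry rather than read from Table 5.2.

import math

h, v = 1500.0, 2500.0                      # layer thickness (m) and velocity (m/s)
t0 = 2.0 * h / v                           # vertical two-way time = 1.2 s

for i in range(1, 13):                     # twelve detectors at 100 m intervals
    x = 100.0 * i
    path = math.sqrt(x**2 + 4.0 * h**2)    # total reflected path length L
    tf = path / v                          # reflection travel time Tf = L / V
    nmo = tf - t0                          # normal moveout
    print(f"x={x:6.0f} m  L={path:7.1f} m  Tf={1000*tf:7.1f} ms  NMO={1000*nmo:5.1f} ms")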
In this section we will analyze the time-offset response of a layered Earth model. We will assume that our geological model is composed of a stack of n horizontal layers (Figure 1.18). According to Snell's law, at each interface
sin θ1/sin θ2 = v1/v2,  sin θ2/sin θ3 = v2/v3,  …,  sin θn−1/sin θn = vn−1/vn
Multiplying all the left-hand-side (LHS) terms and equating them to the product of all the right-hand-side (RHS) terms gives sin θ1/sin θn = v1/vn. It is easy to see that for the nth layer the following condition is satisfied:
sin θn/vn = sin θ1/v1,  or  p = sin θi/vi = constant (the ray parameter).
We have just proved that in a horizontally stratified medium the horizontal ray parameter p is constant. In Figure 1.18 we are assuming that the emergent wave is of the same type as the incident wave. In this case the total offset x is given by
x = Σ 2 hi tan θi   (summed over the layers above the reflector)
Similarly, we can compute the total time as the sum of the times the wave spends in each layer:
t = Σ 2 hi / (vi cos θi)
Using the fact that the ray parameter is constant (sin θi = p vi), the above two equations can be rearranged as
x(p) = Σ 2 hi p vi / √(1 − p² vi²),   t(p) = Σ 2 hi / (vi √(1 − p² vi²))
The last two equations express the time of the reflection at the nth layer and the distance x in parametric form (p is the parameter).
When multiple reflectors are encountered, multiple NMO hyperbolas are produced
Velocity and depth determination
In a multilayered ground, rays reflected from the nth interface undergo refraction at all higher
interfaces to produce a complex travel path. At offset distances that are small compared to
reflector depths, the travel-time curve is still essentially hyperbolic but the homogeneous top
layer velocity in the above equations (single reflector ) is replaced by the average velocity or the
root-mean square velocity Vrms of the layers overlying the reflector.
The root-mean-square velocity of the section of ground down to the nth interface is given by:
Vrms,n = [ Σ(i=1..n) vi² τi / Σ(i=1..n) τi ]^(1/2)
where vi is the interval velocity of the ith layer and τi is the one-way travel time of the reflected ray through the ith layer.
The interval velocity (vi) of a layer is the thickness of the layer (Δz) divided by the travel time spent in the layer (Δt): vi = Δz/Δt.
Average velocity (Vav): the average velocity of the section of ground down to the nth interface is given by:
Vav,n = Σ(i=1..n) vi τi / Σ(i=1..n) τi
where vi is the seismic velocity of the ith layer and τi is the one-way travel time of the reflected ray through the ith layer.
The root-mean-square velocity (Vrms) is a better approximation than the average velocity down to the nth interface. The total travel time tn of the ray reflected from the nth interface at depth z is therefore given to a close approximation by
tn ≈ √(x² + 4z²) / Vrms,n
and, at offsets small compared with the reflector depth, the NMO for the nth reflector is given by
ΔTn ≈ x² / (2 V²rms,n t0,n)
where t0,n is the zero-offset two-way time to the nth reflector.
The individual NMO value associated with each reflection event may therefore be used to determine a root-mean-square velocity for the layers above that reflector. Values of Vrms down to different reflectors can then be used to compute interval velocities using the Dix formula. The interval velocity Vn of the nth interval is
Vn = [ (V²rms,n tn − V²rms,n−1 tn−1) / (tn − tn−1) ]^(1/2)
where Vrms,n−1, tn−1 and Vrms,n, tn are, respectively, the root-mean-square velocities and reflected-ray travel times to the (n − 1)th and nth reflectors (Dix 1955).
The thickness of each layer is determined from the zero-offset times and the interval velocities; for the nth interval
zn = Vn (tn − tn−1) / 2
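The sketch below (Python; the RMS velocities and two-way times are hypothetical) applies the Dix formula and the thickness relation quoted above.

def dix_interval_velocity(vrms_n, t_n, vrms_prev, t_prev):
    """Dix formula: Vn = sqrt((Vrms_n^2*t_n - Vrms_{n-1}^2*t_{n-1}) / (t_n - t_{n-1}))."""
    return ((vrms_n**2 * t_n - vrms_prev**2 * t_prev) / (t_n - t_prev)) ** 0.5

# Hypothetical zero-offset two-way times (s) and RMS velocities (m/s) to three reflectors
t = [0.0, 0.40, 0.90, 1.40]
vrms = [None, 1800.0, 2100.0, 2400.0]

# First interval: V1 = Vrms down to the first reflector, thickness = V1 * t1 / 2
print(1, vrms[1], vrms[1] * t[1] / 2.0)

for n in range(2, 4):
    v_int = dix_interval_velocity(vrms[n], t[n], vrms[n - 1], t[n - 1])
    thickness = v_int * (t[n] - t[n - 1]) / 2.0   # interval thickness from two-way times
    print(n, round(v_int), round(thickness))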
Dipping reflector
In the case of a dipping reflector the value of the dip φ enters the time-distance equation as an
additional unknown. The equation for the dipping reflector is derived similarly to that for
horizontal layers by considering the ray path length divided by the velocity of the layer. The
travel-time for the dipping interface case can be derived with the aid of Figure 5.3:
where
T² = [ (2 zs)² + x² + 4 x zs sin φ ] / V1²
or
T = ( x² + 4 zs² + 4 x zs sin φ )^(1/2) / V1
The equation still has the form of a hyperbola, as for the horizontal reflector, but the axis of
symmetry of the hyperbola is now no longer the time axis. Proceeding as in the case of a
horizontal reflector, using a truncated binomial expansion, the following expression is obtained:
T ≈ t0 + ( x² + 4 x zs sin φ ) / (2 V1² t0),   where  t0 = 2 zs / V1
Consider two receivers at equal offsets x up-dip and down-dip from a central shot point (Fig.
5.4a). Because of the dip of the reflector, the reflected ray paths are of different length and the
two rays will therefore have different travel times. tx is greater than t-x and the travel time curve
is asymmetric about x = 0. The minimum travel time does not occur at x = 0 and also the
reflection received at x = 0 did not originate beneath x = 0.
Dip moveout Δ Td is defined as the difference in travel times tx and t-x of rays reflected from the
dipping interface to receivers at equal and opposite offsets x and –x.
ΔTd = 2 x sin φ / V1
φ ≈ V1 ΔTd / (2x)
Hence the dip moveout ΔTd may be used to compute the reflector dip φ if V is known. V can be derived from the NMO ΔT which, for small dips, may be obtained with sufficient accuracy by averaging the up-dip and down-dip moveouts.
Fig. 5.4 (a) Geometry of reflected ray paths and (b) time-distance curve for reflected rays from a dipping reflector. ΔTd = dip moveout.
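A short numerical check of the dip-moveout relation (Python; the offset, velocity and moveout values are hypothetical).

import math

def reflector_dip_deg(v1, dt_dip, x):
    """Dip from the dip moveout: phi = asin(V1 * dTd / (2x))."""
    return math.degrees(math.asin(v1 * dt_dip / (2.0 * x)))

# Hypothetical values: V1 = 2000 m/s, dTd = 20 ms at offset x = 1000 m
print(reflector_dip_deg(2000.0, 0.020, 1000.0))   # about 1.15 degrees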
Seismic instrumentation
Typical seismic acquisition systems consist of the following components.
- Seismic Source
- Geophones
- Recording System
Seismic Source
Two types of sources are most commonly used for both refraction and reflection investigations
of the near surface.
- Impact Sources
- Explosive Sources
Impact Sources: Sources that generate seismic energy by impacting the surface of the Earth are
probably the most common type employed. The most commonly used type of impact source is a
simple sledgehammer. In this case, an operator does nothing more than swing the sledgehammer
downward onto the ground. Instead of striking the ground directly, it is most common to strike a
metal plate lying on the ground. The sledgehammer is usually connected to the recording system
by a wire. The moment the sledgehammer strikes the plate, the recording system begins
recording ground motion from the geophones.
Explosive Sources: Explosive sources can impart a large amount of seismic energy into the
ground given their relatively small size. These sources can vary in size and type from small
blasting caps and shotgun shells to larger, two-phase explosives. All explosive sources are
triggered remotely by a device known as a blasting box. The blasting box is connected to both
the explosive and the recording system. At the moment the box detonates the explosive, it also
sends a signal to the recording system to begin recording ground motion from the geophones.
Geophones: Geophones are remarkably simple devices that detect the seismic waves. Different
styles of geophone cases are available for use in different environments.
Recording System: Multi-channel seismographs are widely available from a number of different manufacturers.
Applications of seismic methods:
Oil and petroleum exploration
Stratigraphy
Mapping geologic structure
Landfill investigations
Rock quality
Water table depth and aquifer characterization
Bedrock depth determination and morphology, etc.
Chapter six
Geophysical Well logging
The fragments of rock flushed to the surface during drilling are often difficult to interpret: they have been mixed and leached by the drilling fluid and often provide little information on the intrinsic physical properties of the formations from which they derive. How can the intrinsic physical properties of the formations be obtained? They can be obtained using geophysical well logging, also known as down-hole geophysical surveying or wire-line logging. Well logging means the continuous recording of a physical parameter of the formation with depth.
Advantages: compared to surface geophysical surveys, well logging has the advantage that the measuring devices are in contact with the formations, either directly (measurement in the borehole) or almost directly (measurements through the drilling mud).
Well logging is the youngest branch of applied geophysics; the first measurements were taken for the petroleum industry by Marcel and Conrad Schlumberger in 1927. The Schlumberger brothers developed a resistivity tool to detect differences in the porosity of sandstones. The method has since undergone intensive improvement in technique, instrumentation and interpretation. It is also the most comprehensive of the geophysical methods: it uses the principles of almost all the methods used in surface geophysical work (magnetic, gravity, electrical, EM, seismic, radiometric, etc.).
A geophysical well log measures physical and chemical properties of the formations and fluids in
or around the vicinity of the well, and results in a series of curves plotted on a graph showing
changes in the properties with depth.
Well logging is carried out using specific measuring tools (equipment) lowered down a well to measure a range of physical parameters at various depths. The instrumentation necessary for borehole logging is housed in a cylindrical metal tube known as a sonde. Sondes are suspended in the borehole on a cable; they are lowered to the base of the section of the hole to be logged, and logging is carried out as the sonde is winched back up through the section. Logging data are commonly recorded on a paper strip chart and also on magnetic tape in analogue or digital form for subsequent computer processing.
Logging techniques are very widely used in the investigation of boreholes drilled for
hydrocarbon exploration, as they provide important in situ properties of possible reservoir rocks.
They are also used in hydro-geological and mineral exploration for similar reasons.
Logs are recorded to measure different physical parameters in a well in order to ascertain the capacity of the well to flow fluids (hydrocarbons and water). Many physical parameters can be recorded, depending on the need. The most common geophysical well-logging methods are
1. Electric logging
- Resistivity logging
- Self-potential logging
2. Radioactive logging
- Natural-gamma logging
- Neutron logging
3. Induction logging
4. Sonic logging
5. Fluid logging
- Temperature
- Fluid resistivity
- Flow meter and tracer logging
6. Caliper logging
Electrical logging
The uses of electrical logs, in general, are:
i) Correlation of lithologies and structures on the basis of the log – differentiation between shales, sands, carbonates and evaporites, and identification of permeable beds
ii) Determination of the water saturation Sw and the porosity φ of formations
Resistivity logging
Principle
Resistivity logging has to be carried out in uncased wells. During resistivity logging, current and potential electrodes are lowered into the well. Potential differences, which result from the currents injected at the current electrodes, are measured at the potential electrodes. Various resistivity logging techniques have been developed over the years (e.g. the long-normal and short-normal resistivity logging techniques).
The layouts for long-normal and short-normal resistivity logging are presented in the figure below. For both the long-normal and short-normal setups the same current electrode arrangement is used: one electrode C2 at the bottom of the cable head and another electrode C1 at the land surface. The electrode at the ground surface consists of a metal stake grounded at a considerable distance from the recording instruments. Both C1 and C2 are connected to the current input unit, powered by batteries or a generator set.
Theoretical aspects
The physical parameters measured during resistivity logging are potential differences and current strength. These can be used to compute the formation resistivities of individual subsurface formations in the well. The general equation for computing the apparent resistivity Ra for any down-hole electrode configuration is
Ra = 4π (ΔV / I) / [ 1/(C1P1) − 1/(C2P1) − 1/(C1P2) + 1/(C2P2) ]
where C1, C2 are the current electrodes, P1, P2 the potential electrodes between which the potential difference ΔV is measured, and I is the current flowing in the circuit. This is similar to the equation in Chapter 4, but with a factor of 4π instead of 2π, since the current flows in a full space rather than the half-space associated with surface surveying.
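Assuming the full-space expression quoted above, the sketch below (Python; the electrode separations, ΔV and I are hypothetical) computes the apparent resistivity for a down-hole four-electrode configuration.

import math

def borehole_apparent_resistivity(dv, current, c1p1, c2p1, c1p2, c2p2):
    """Ra = 4*pi*(dV/I) / (1/C1P1 - 1/C2P1 - 1/C1P2 + 1/C2P2) for a full space."""
    geom = 1.0 / c1p1 - 1.0 / c2p1 - 1.0 / c1p2 + 1.0 / c2p2
    return 4.0 * math.pi * (dv / current) / geom

# Hypothetical separations (m), with the return electrodes effectively far away
print(borehole_apparent_resistivity(dv=0.12, current=0.5,
                                    c1p1=0.4, c2p1=50.0, c1p2=30.0, c2p2=80.0), "ohm-m")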
Interpretation
Lithologies with high resistivities include sand and gravel while clay and shale have the lowest
resistivity. Resistivity logs assist in determining the rock zones of highest water yield. This
enables well designers to locate the screens or perforated casing in the most desirable position
with more accuracy than relying on the cutting log.
Self potential (spontaneous potential) logging
Instrumentation
In self-potential (SP) logging a single potential electrode is lowered into the uncased well filled with drilling fluid. A second potential electrode is grounded in the pit containing drilling fluid, or in a wetted area not too far from the well. Between the two electrodes the spontaneous potential generated inside the well is measured with a sensitive voltmeter. This meter is coupled to the revolving chart of the recording unit for graphical display of the spontaneous-potential log. A schematic layout of the spontaneous-potential logging technique is presented in the figure below.
Uses of SP logs:
The SP log is a record of the natural potential between the borehole fluid and the surrounding rocks. The SP circuit closes only in the presence of fluid. Drilling fluid and formation water seldom have the same ion concentration. Since the migration velocities of positive and negative ions are not the same, a charge difference develops between the part of a permeable formation invaded by drilling fluid and the uninvaded part of the formation.
Radioactive logging
The principal radioactive emissions of interest in borehole geophysics are gamma rays and neutrons. Other radioactive products, such as alpha and beta particles, penetrate such small distances through rock that they are not useful for logging.
Principles
The simplest radioactive method in geophysical well logging is the natural-gamma log. These logging tools record the level of naturally occurring gamma-ray emission from the rock around the borehole. The tool records only the total gamma-ray signal, which comes from the radioactive isotopes of potassium (40K), thorium (232Th) and uranium (238U) and the daughter products in their decay series. In natural-gamma logging a specially designed sensor is lowered down the well to record the gamma radiation produced by radioactive materials in the subsurface formations.
Interpretation
Isotopes of uranium, thorium and potassium are responsible for radioactivity and gamma radiation in subsurface rock layers. The radioactive mineral contents of different rock types can be described as follows:
Clay and shale: clay is an unconsolidated sediment consisting of the weathering products of feldspars and micas; shale is consolidated clay and silt. Both may contain relatively high concentrations of the 40K isotope. Therefore, clay and shale usually show high levels of gamma radiation.
Sand and sandstone: as quartz is usually the dominant, non-radioactive mineral, unconsolidated sands and consolidated sandstones show low levels of gamma radiation.
[Example natural-gamma log: radioactivity in API units (0-200) plotted against a clay-sand-clay sequence]
Neutron logging
The probe contains a source of neutrons, and detectors provide a record of the neutron interactions that occur in the vicinity of the borehole. Most of these neutron interactions are related to the quantity of hydrogen present in the groundwater environment.
Chapter seven
Planning and Implementation of Geophysical Exploration
There is no short-cut to developing a good survey style; only by careful survey planning, backed by a sound knowledge of the geophysical methods and their operating principles, can cost-effective and efficient surveys be undertaken within the prevailing constraints.
The general procedure/outline that should be followed during the planning of geophysical surveys is: