The University of Texas at El Paso
Pan-American Center for Earth
and Environmental Studies
Last modified: Tue 07-Jun-2005


Getting Started:
Using and Understanding Gravity Data

G. Randy Keller
Department of Geological Sciences
Pan American Center for Earth and Environmental Studies
University of Texas at El Paso, El Paso, Texas, USA


Introduction
Studies of the Earth's gravity and magnetic fields (and those of other planetary bodies) are prime examples of modern applications of classical Newtonian physics. There are applications that use knowledge of the Earth's gravity field to study topics such as the details of the Earth's shape (geodesy), predicting the orbits of satellites and the trajectories of missiles, and determining the Earth's mass and moment of inertia. However, the gravity studies addressed here are those that involve geophysical mapping and interpretation of features in the Earth's lithosphere (the relatively rigid outer shell that extends to depths of ~100 km beneath the surface). In fact, the emphasis is on the upper crust of the Earth that extends to depths of about 20 km, because it is this region where gravity and magnetic data can best help delineate geologic features related to natural hazards (faults, volcanoes, landslides), natural resources (water, oil, gas, minerals, geothermal energy), and tectonic events such as the formation of mountain belts. Such studies provide elegantly straightforward demonstrations of the applicability of classical physics and digital processing to the solution of a variety of geological problems. These problems vary in scale from very local investigations of features such as faults to regional studies of the structure of tectonic plates.
A big advantage of using gravity data is that a considerable amount of regional data is freely available through universities and governmental agencies. As more detailed data are needed, commercial firms can provide such data in many areas of the world. Finally, relative to most geophysical techniques, the acquisition of land gravity data is very cost effective. The determination of the precise location of the instrument is in fact more complicated than making the actual measurement. Marine and airborne gravity measurements are also common but require complex instrumentation and thus entail greater costs.

Applications
As mentioned above, gravity data are widely available and relatively straightforward to gather, process, and interpret. A particularly nice aspect of gravity techniques is that the instrumentation and interpretative approaches employed are mostly independent of the scale of the investigation. Thus, these techniques can be employed in a wide variety of applications. In addition, images of gravity anomalies are ideal candidates for layers in a Geographic Information System (GIS), and typical image processing software provides numerous mechanisms to merge gravity anomaly data with data sets such as Landsat images.
The regional geophysics section of the Geological Survey of Canada and the western and central regions of the U. S. Geological Survey maintain web sites that include nice case histories demonstrating applications of gravity and magnetic techniques as well as data sets and free software.

Geol. Survey of Canada - Geophysical Applications
USGS Western Geophysical Investigations - Menlo Park
USGS Central Region Mineral Resources - Products
USGS Crustal Imaging and Characterization - Products

A good example of the utility of gravity anomalies is their use to delineate the geometry and lateral extent of basins that contain ground water resources. The sedimentary rocks that fill a basin have low densities and thus produce a negative gravity anomaly that is most intense where the basin is deepest. Gravity modeling is used to provide quantitative estimates of the depth of the basin and thus the extent of the water resource.
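The basin example can be made concrete with the infinite-slab approximation (the same approximation used later in the Bouguer correction). The sketch below is illustrative only: the -20 mGal anomaly and 400 kg/m3 density contrast are assumed numbers, not values from a real survey, and real modeling would account for the basin's finite width and depth-varying density.

```python
import math

G = 6.674e-11  # universal gravitational constant, m^3 kg^-1 s^-2

def slab_anomaly_mgal(density_contrast_kg_m3, thickness_m):
    """Gravity effect of an infinite horizontal slab, in mGal (1 mGal = 1e-5 m/s^2)."""
    return 2.0 * math.pi * G * density_contrast_kg_m3 * thickness_m / 1.0e-5

def basin_depth_m(anomaly_mgal, density_contrast_kg_m3):
    """First-order depth of basin fill implied by an anomaly, slab approximation."""
    return anomaly_mgal * 1.0e-5 / (2.0 * math.pi * G * density_contrast_kg_m3)

# Hypothetical numbers: a -20 mGal low over fill 400 kg/m^3 lighter than basement
depth = basin_depth_m(-20.0, -400.0)   # roughly 1.2 km of fill
```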
In the search for petroleum, gravity data are used in conjunction with other geophysical data to map out the extent and geometry of subsurface structures that form traps for the migrating fluids. For example, salt is a low density substance whose tendency to flow often creates traps. Gravity data have been used to detect and delineate salt bodies from the very early days of geophysical exploration to the present.
Gravity data can often delineate ore bodies in the exploration for mineral resources. Maps of gravity anomalies also reveal structures and trends that may control the location of ore bodies even when the bodies themselves produce little or no gravity anomaly.
Studies of geologic hazards often rely on the use of gravity anomalies to detect faults and evaluate their size. Gravity anomalies also can be used to help study the internal plumbing of volcanoes.
The intent here is to introduce only those terms and concepts necessary to understand the basics of gravity and magnetic techniques. The Gravity and Magnetics Committee of the Society of Exploration Geophysicists maintains a web site that includes a Dictionary of Terms and a link to a Glossary that is maintained by the Integrated Geophysics Corporation.

The Gravity Technique
Technically, what we think of as the force of gravity (g) is the gravitational acceleration due to the sum of the attraction of the Earth's mass and the effects of its rotation. However, it is common practice for geophysicists to say that they are measuring gravity and think of it as a force (a vector) representing the attraction of the Earth on a unit mass (F = m g, which for a unit mass is numerically equal to g). This attraction varies on a global scale and with elevation in ways that can be predicted from basic physics. These variations are of little geologic interest, and raw gravity readings are processed to remove them so that local variations (gravity anomalies) can be detected and mapped.

Measurement of Gravity
The measurement of absolute gravity values is a very involved process that usually requires the use of sophisticated pendulum systems. However, a gravimeter is an elegantly simple and accurate instrument that measures differences in gravity. These instruments were perfected in the 1950's, and although new designs are being developed, most instruments work on the simple principle of measuring the deflection of a suspended mass as a result of changes in the gravity field. A system of springs suspends this mass, and it is mechanically easier to measure the change in tension on the main spring required to bring the mass back to a position of zero deflection than to actually measure the minute deflection of the mass. If gravity increases from the previously measured value, the spring is stretched and the tension must be increased to return it to a zero deflection position. If gravity decreases, the spring contracts and the tension must be decreased. The gravimeter must be carefully calibrated so that the relative readings it produces can be converted into differences in mGals. Thus, each gravimeter has its own calibration constant, or table of constants if the springs do not behave linearly over the readable range of the meter. Instruments that make measurements on land are the most widely used and can easily produce measurements that are correct to 0.1 mGal, and meters with a precision of 0.001 mGal are available. Specially designed meters can be lowered into boreholes or placed in waterproof vessels and lowered to the bottom of lakes or shallow portions of the ocean.
Considering the accelerations that are present on moving platforms such as boats and aircraft and the precision required to make meaningful measurements, it is surprising that gravity measurements are regularly made in such situations. The gravimeters are placed on platforms that minimize accelerations and successive measurements must be averaged, but precisions of 1-5 mGal are obtained in this fashion.
By reading a gravimeter at a base station and then at a particular location (usually called a gravity station, Figure 1), we can convert the difference into an observed gravity value by first multiplying this difference by the calibration constant of the gravimeter. This converts the difference from instrument readings into mGal. This difference is then corrected for meter drift and Earth tides (see below) and added to the established gravity value at the base station to obtain the observed gravity value (Gobs) at the station. This process is to some degree analogous to converting electromagnetic sensor readings from a satellite to radiance values.

The Earth's Gravity Field
The theoretical treatment of the Earth's gravity field is based on potential theory (e.g., Blakely, 1996). The gravitational potential at a point is the work done by gravity as a unit mass is brought from infinity to that point. This concept is less abstract if we realize that mean sea level is an equipotential surface that we call the geoid. However, it is important to remember that equipotential surfaces are not equal gravity surfaces because one differentiates the potential to arrive at the gravity field. In addition, a plumb bob (a weight on a string) always hangs in a direction perpendicular to the geoid. This is the very definition of vertical and is very important in surveying techniques. The technical definition of elevation is height above the geoid. Thus, mapping the geoid is an important consideration for the continents where it can be thought of as the surface sea level would assume if canals connecting the oceans were dug across the continents and the water was allowed to freely flow and reach its equilibrium level.
If the Earth were a perfect sphere consisting of concentric shells of constant density and were not rotating, Newton's law of gravitation would predict the gravitational attraction between the Earth (mass = Me) and a mass (m1) sitting on its surface as:
F = G Me m1 / Re2, so the gravitational acceleration is g = G Me / Re2, where Re is the radius of the Earth and G is the universal gravitational constant.
In actuality, the Earth's gross shape slightly departs from being spherical, there is topography on the continents that can be thought of as variations in Re, the density within the Earth, particularly in the upper crust, varies in a complex fashion, and there is a slow rotation present. However, all of these complications are second order, and Newton's law of gravitation should form the basis for our intuitive understanding of most aspects of gravitational imaging.
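As a quick check on this intuition, the spherical-Earth value of g (and its vertical gradient, which reappears below as the Free Air gradient) can be computed directly from Newton's law; the constants below are standard textbook values.

```python
G = 6.674e-11       # universal gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

# Gravitational acceleration at the surface of a spherical, non-rotating Earth
g = G * M_EARTH / R_EARTH ** 2   # ~9.82 m/s^2, i.e. ~982 Gal

# Differentiating g = G Me / r^2 gives dg/dr = -2 g / r; converting from
# (m/s^2) per metre to mGal per metre (1 mGal = 1e-5 m/s^2) recovers the
# Free Air gradient discussed later in the text.
free_air_gradient_mgal_per_m = (-2.0 * g / R_EARTH) / 1.0e-5   # ~ -0.31
```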
In studies of lithospheric structure, the search is for gravity anomalies (differences between what is expected based on first principles and observed gravity values). With respect to the total gravity field of the Earth, these anomalies are at most only a few parts per thousand in amplitude. Images (maps) of the values of these anomalies are used to infer Earth structure and are well suited to be integrated with other data such as satellite images. For example, a simple overlay of gravity anomalies on a Landsat image provides an easy and effective depiction of how subsurface mass distributions correlate with surface features. Qualitative interpretation of gravity anomalies is no more complex than calling upon Newton's law to tell us that positive anomalies indicate the presence of a local mass excess with negative anomalies indicating local mass deficiencies.
As discussed below, several different types of anomalies have been defined based on what known variations in the Earth's gravity field are considered before calculating the anomaly value. However, we start from a basic formula for the gravitational attraction of a rotating ellipsoid (Figure 2) with flattening, f, derived by Clairaut in 1743. This formula predicts the value of gravity (Gt) at sea level as a function of latitude (φ). In the 20th century, higher order terms were added so that the formula takes the form:
Gt = Ge (1 + f2 sin2 φ + f4 sin4 φ), where f2, f4, and Ge are defined below.
Ge = global average value of the gravitational acceleration at the equator.
W = angular velocity of the Earth's rotation.
m = W2 a / Ge, where a is the equatorial radius (W2 a = centrifugal acceleration at the equator).
f2 = -f + (5/2) m + (1/2) f^2 - (26/7) f m + (15/4) m^2
f4 = -(1/2) f^2 + (5/2) f m
The values of Ge, a, b (the equatorial and polar radii), and f are known to a considerable level of precision but are constantly being refined by a variety of methods. Occasionally, international scientific organizations agree on revised values for these quantities. Thus, all calculated values of gravity anomalies need to be adjusted when these revisions are made. As of 2000, the most commonly used values yield the formula:
Gt = 978031.846 (1 + 0.005278895 sin2 φ + 0.000023462 sin4 φ).
The National Imagery and Mapping Agency maintains a web site (http://164.214.2.59/GandG/pubs.html) that has the latest information on geodetic systems.
The units for gravity measurements are cm/s2 or Gals in honor of Galileo, and the formula above produces values whose units are milliGals (mGal). We learn that the value of the Earth's gravitational attraction is about 980 cm/s2, but this formula shows that at sea level the gravitational attraction of the Earth varies from about 978 cm/s2 at the equator to about 983 cm/s2 at the poles. Gravity surveys on land routinely detect anomalies that have amplitudes of 0.1 mGal and thus have the rather remarkable precision of about 1 part in 10 million. Surveys with a precision of 0.01 mGal are in fact common.
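The formula quoted above is easy to evaluate directly, and doing so reproduces the equator-to-pole range just described:

```python
import math

def theoretical_gravity_mgal(lat_deg):
    """Sea-level theoretical gravity in mGal, from the International
    Gravity Formula as quoted in the text (latitude in degrees)."""
    s = math.sin(math.radians(lat_deg))
    return 978031.846 * (1.0 + 0.005278895 * s ** 2 + 0.000023462 * s ** 4)

g_equator = theoretical_gravity_mgal(0.0)   # ~978032 mGal (~978 Gal)
g_pole = theoretical_gravity_mgal(90.0)     # ~983218 mGal (~983 Gal)
```

The difference of roughly 5200 mGal between pole and equator is the source of the latitude correction discussed below.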
By merely subtracting Gt from an observed value of the gravitational acceleration (Gobs), we calculate the most fundamental value of a gravity anomaly. However, the effects of elevation are so large that such an anomaly value means little except at sea level. Instead, the Free Air, Bouguer, and residual anomaly values described below are calculated and interpreted. Maps of these anomaly values have been constructed and interpreted for decades, and with modern techniques, it is these values that are imaged.

Corrections to Gravity Measurements
In order to arrive at geologically meaningful anomaly values, a series of "corrections" are made to raw observations of differences between gravity measured at a station and a base station. The use of this term is misleading because most of these "corrections" are really adjustments that compensate (at least approximately) for known variations in the gravity field that do not have geological meaning.

Drift Correction
Although gravimeters are simple and relatively stable instruments, they do drift (i.e., the reading does vary slightly with time). Considering the sensitivity of these instruments, one would expect them to be affected by temperature variations, fatigue in the internal springs, and minor readjustments in their internal workings, and these factors are in fact the primary cause of instrument drift. In addition, there are earth tides which cause periodic variations in gravity which may be as large as ~0.3 mGal. In field operations, these factors cause small changes in gravity readings with time. One deals with these changes by making repeated gravity readings at designated stations at fairly regular intervals of time. One usually assumes that the drift is linear between repeated occupations of the designated stations, and over a few hours, this is usually a valid assumption. The repeated values are used to construct a drift curve which is used to estimate the drift for readings which were made at times between those of the repeated readings. Because one encounters so many different situations in real field operations, it is hard to generalize about how one proceeds. However, the key concern is that no period of time should occur which is not spanned by a repeated observation. This is a way of saying that the drift curve must be continuous. If the meter is jarred, a tare (an instantaneous variation in reading) may occur. If one suspects this has occurred, simply return to the last place a reading was made. If there is a significant difference, a tare has occurred, and a simple constant shift (the difference in readings) is made for all subsequent readings.

Tidal Correction
The variations in the gravity field due to Earth tides can be calculated if one makes assumptions about the rigidity of the lithosphere. In fact, the rigidity of the lithosphere can be estimated by studying Earth tides. In practice, it is most straightforward to just consider the Earth tides to be part of the drift while being sure that the repeated readings needed to make drift corrections are made every few hours. This approach has the advantage of adding a general element of quality control.

Latitude Correction
The International Gravity Formula predicts that gravity increases by about 5000 mGal from the equator to the poles. The rate of this increase varies slightly as a function of latitude (φ) but is about 0.8 mGal / km. For a local survey of the gravity field, one can derive the formula for this gradient (1.3049 sin 2φ mGal/mile or 0.8108 sin 2φ mGal/km) by differentiating the International Gravity Formula with respect to latitude. Then a base station is chosen and all gravity readings are corrected for latitude by multiplying the distance a gravity station is north or south of this base station by this gradient. Stations located closer to the pole than the base station have higher readings just because of their geographic position; thus, the correction would be negative. The correction is positive for stations nearer the equator than the base station. However, a preferable approach is to tie the survey to the global base station network and then use the International Gravity Formula to calculate the expected value of gravity, which will vary with latitude. Thus, the first level calculation of the gravity anomaly at the station (Ganomaly = Gobs - Gt) will have the adjustment for latitude built into the computation.

Free Air Correction
In a typical gravity survey, the elevation of the various stations varies considerably (Figure 1), producing significant variations in observed gravity because Newton's law of gravitation predicts that gravity varies with distance from the center of the Earth. The vertical gradient of gravity is about -0.3086 mGal / m. The gravity anomalies we seek to detect (image) are often less than 1 mGal in amplitude, so the magnitude of this gradient requires that we have high precision vertical control for the locations of our gravity stations. This requirement was once the major barrier to conducting gravity surveys, because using traditional surveying methods to establish locations is costly and time consuming, and the number of established benchmarks and other accurately surveyed locations in an area is usually small. However, the emergence of the Global Positioning System (GPS) has revolutionized gravity studies from a data acquisition point of view. Thanks to GPS, a land gravity station can be located almost anywhere. However, the care that must be exercised to routinely obtain GPS locations with sub-meter vertical accuracy should not be underestimated.
One aspect of the variation of gravity with elevation is called the Free Air effect. This effect is due to the change of elevation only, as if the stations were suspended in free air not sitting on land. The vertical gradient of gravity is derived by differentiation with respect to Re. Higher order terms are usually dropped yielding gradients that are not a function of latitude or elevation (0.3086 mGal/m or 0.09406 mGal/ft). Once the gravity values have been established and their locations are accurately determined, the Free Air correction can be calculated. This is done by choosing an elevation datum and simply applying the equation below:
Free Air Correction = FAC = 0.3086 h, where h = (elevation - datum elevation).
With Gobs being observed gravity corrected for drift and tides, the Free Air anomaly (FAA) is then defined as:
FAA = Gobs - Gt + FAC

Bouguer Correction
The mass of the material between the gravity station and the datum also causes a variation of gravity with elevation (Figure 1). This mass effect causes gravity at higher stations to be greater than at stations with lower elevations and thus partly offsets the Free Air effect. To calculate the effect of this mass, a model of the topography must be constructed and its density must be estimated.
The traditional approach is crude but has been proven to be effective. In this approach, each station is assumed to sit on a slab of material that extends to infinity laterally and to the elevation datum vertically (Figure 1). The formula for the gravitational attraction of this infinite slab is derived by employing a volume integral to calculate its mass. The resulting correction is named for the French geodesist Pierre Bouguer:
Bouguer Correction = BC = 2πGρh, where G is the universal gravitational constant, ρ is the density of the slab, and h = (elevation - datum elevation).
As discussed below, the need to estimate density for the calculation of the Bouguer correction is a significant source of uncertainty in gravity studies. With Gobs being observed gravity corrected for drift and tides, the Bouguer anomaly (BA) is then defined as:
BA = Gobs - Gt + FAC - BC
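The reduction chain of the last few sections can be sketched as follows. The station values in the example are invented for illustration, and the standard 2.67 gm/cc (2670 kg/m3) crustal density is assumed:

```python
import math

G = 6.674e-11  # universal gravitational constant, m^3 kg^-1 s^-2

def free_air_correction_mgal(h_m):
    # 0.3086 mGal per metre of height above the elevation datum
    return 0.3086 * h_m

def bouguer_correction_mgal(h_m, density_kg_m3=2670.0):
    # Infinite-slab (Bouguer) effect of the rock between station and datum
    return 2.0 * math.pi * G * density_kg_m3 * h_m / 1.0e-5

def simple_bouguer_anomaly(g_obs, g_t, h_m, density_kg_m3=2670.0):
    """BA = Gobs - Gt + FAC - BC, all in mGal, with Gobs already
    corrected for drift and tides."""
    fac = free_air_correction_mgal(h_m)
    bc = bouguer_correction_mgal(h_m, density_kg_m3)
    return g_obs - g_t + fac - bc

# Invented station: 1000 m above the datum, Gobs exceeding Gt by 150 mGal
ba = simple_bouguer_anomaly(980150.0, 980000.0, 1000.0)  # roughly +347 mGal
```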
If terrain corrections (see below) are not applied, the term simple Bouguer anomaly is used. If they have been applied, the term complete Bouguer anomaly is used. A second order correction to account for the curvature of the Earth is often added to this calculation.

Terrain Correction
Nearby topography (hills and valleys) attracts the mass in the gravimeter (valleys are considered to have negative density with respect to the surrounding rocks) and reduces the observed value of gravity. The terrain correction is the calculated effect of this topography and is always positive (a hill pulls up on the mass in the gravimeter and a valley is a mass deficiency). In mountainous regions, these corrections can be as large as tens of mGals. The corrections have traditionally been made using Hammer charts (Hammer, 1939) to estimate the topographic relief by dividing it into compartments. There have been a number of refinements to this approach as it has been increasingly computerized, but the basic idea has remained unchanged. However, the increasing availability of digital terrain data is on the verge of revolutionizing the way in which terrain corrections are calculated. There are many new approaches being developed, but the general goal is the same. This goal is to construct a detailed terrain model and calculate the gravitational effect of this terrain on individual gravity readings. These approaches can also be considered as having replaced the Bouguer slab approximation with a more exact calculation, because the goal of the Bouguer and topographic corrections is to estimate the gravitational effect of the topography above the elevation datum out to a large radius from the gravity station. It is common for this radius to be 165 km.

Eötvös Correction
Technological advances have made it possible to measure gravity in moving vehicles such as boats and aircraft. However, the motion of the gravimeter changes the centrifugal acceleration it experiences and thus the gravity it measures. This variation is linearly related to the east-west component of the velocity (vew) of the gravimeter. The correction for this effect is named for the Hungarian geophysicist R. Eötvös and is positive when the gravimeter is moving westward and negative when it is moving eastward. The navigation data from the survey are used to calculate vew, and the equation for the correction is as follows:
Eötvös correction = EC = 2 W cos φ vew, where W is the angular velocity of the Earth's rotation.

Isostatic Correction
Isostasy is the process working in the Earth that causes the pressure at some depth (most studies place this depth at 30 to 100 km) to be approximately equal over most regions of the Earth. If this pressure is equal, isostatic balance has been achieved, and we say that the area is compensated. Thus, we think of the mass represented by a mountain range as being compensated by a mass deficiency at depth. The tendency toward isostatic balance causes regional Bouguer gravity anomalies to be substantially negative over mountains and substantially positive over oceanic areas. These large scale anomalies mask anomalies due to shallow (upper crustal) geologic features (e.g., Bechtel et al., 1985). The delineation of upper crustal features is often the goal of gravity studies. Thus, a variety of techniques have been proposed to separate and map the effects of isostatic equilibrium. The isostatic corrections calculated by these techniques attempt to estimate the gravitational effects of the masses that compensate topography and remove them from the Free Air or Bouguer anomaly values. A popular approach is the calculation of the isostatic residual (Simpson et al., 1986).

The Role of Density
A knowledge of the density of various rock units is essential in gravity studies for several reasons. In fact, a major limitation in the quantitative interpretation of gravity data is the need to estimate density values and to make simplifying assumptions about the distribution of density within the Earth. The Earth is complex, and the variations in density in the upper few kilometers of the crust are large. The use of a single average density in Bouguer and terrain corrections is thus a major source of uncertainty in the calculation of values for these corrections. This fact is often overlooked as we worry about making very precise measurements of gravity and then calculate anomaly values whose accuracy is limited by our lack of detailed information on density.
A basic step in the reduction of gravity measurements to interpretable anomaly values is calculation of the Bouguer correction, which requires an estimate of density. At any specific gravity station, one can think of the rock mass whose density we seek as being a slab extending from the station to the elevation of the lowest gravity station in the study area (Figure 1). If the lowest station is above the datum (as is usually the case), each station shares a slab which extends from this lowest elevation down to the datum, so this portion of the Bouguer correction is a constant shared by all of the stations (Figure 1).
No one density value is truly appropriate, but when using the traditional approach it is necessary to use one value when calculating Bouguer anomaly values. When in doubt, the standard density value for upper crustal rocks is 2.67 gm/cc.
In order to make terrain corrections, a similar density estimate is needed. However, in this case, the value sought is the average density of the topography near a particular station. It is normal to use the same value as was used in the Bouguer correction, but this need not necessarily be the case when the topography and geology are complex.
As mentioned in the discussion of the Bouguer correction, modern digital elevation data are making it possible to construct realistic models of topography that include laterally varying density. Although preferable, this approach still requires the estimation of density of the column of rock between the Earth's surface and the reduction datum. From a traditional point of view, this approach represents a merging of the Bouguer and terrain corrections that are then applied to Free Air anomaly values. One can also extend this approach to greater depths and vary the density laterally, and consider it a geologic model of the upper crust that attempts to predict Free Air anomaly values. The Bouguer and terrain corrections then become unnecessary since the topography simply becomes part of the geologic model which is being constructed.
When one begins to construct computer models based on gravity anomalies, densities must be assigned to all of the geologic bodies that make up the model. Here one needs to use all of the data at hand to come up with these density estimates. Geologic mapping, drill hole data, measurements on samples from the field, etc. are examples of information one might use to estimate density.

Measurements of Density
Density can be measured (or estimated) in many ways. In general, in situ measurements are better because they produce average values for fairly large bodies of rock that are in place. With laboratory measurements, one must always worry about the effects of porosity, temperature, saturating fluids, pressure, and small sample size as factors that might make the values measured unrepresentative of rock in place.
Many tabulations of typical densities for various rock types have been compiled (e.g., Telford et al., 1990). Thus one can simply look up the density value expected for a particular rock type (Table 1).
Samples can be collected during field work and brought back to the laboratory for measurement. The density of cores and cuttings available from wells in the region of interest can also be measured.
Most wells that have been drilled during the exploration for petroleum, minerals, and water are surveyed by down hole geophysical logging techniques, and these geophysical logs are a good source of density values. Density logs are often available and can be used directly to estimate the density of rock units encountered in the subsurface. However in many areas, sonic logs (seismic velocity) are more common than density logs. In these areas, the Nafe-Drake or a similar relationship between seismic velocity and density (e.g., Barton, 1986) can be used to estimate density values.
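The Nafe-Drake relation itself is usually applied as a tabulated curve. As a stand-in, Gardner's rule of thumb (Gardner et al., 1974), another widely used velocity-density relation for sedimentary rocks, gives the flavor of such a conversion; it is an empirical approximation and is not appropriate for crystalline rocks:

```python
def gardner_density_g_cc(vp_m_s):
    """Gardner's empirical rule: bulk density in g/cc from P-wave
    velocity in m/s. Reasonable for sedimentary rocks only."""
    return 0.31 * vp_m_s ** 0.25

rho_slow = gardner_density_g_cc(3000.0)  # ~2.3 g/cc for a 3 km/s section
rho_fast = gardner_density_g_cc(5000.0)  # ~2.6 g/cc for a 5 km/s section
```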
The borehole gravity meter is an excellent (but rare) source of density data. This approach is ideal because it infers density from down hole measurements of gravity. These measurements are thus in situ averages based on a sizable volume of rock not just a small sample.
The Nettleton technique (Nettleton, 1939) involves picking a place where the geology is simple and measuring gravity across a topographic feature. One then calculates the Bouguer gravity anomaly profile using a series of density values. If the geology is truly simple, the gravity profile will be flat when the right density value is used in the Bouguer and terrain corrections.
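A minimal sketch of the Nettleton idea: compute Bouguer-corrected values over the topographic feature for a range of trial densities and keep the one whose profile is flattest, judged here by its correlation with topography. The profile below is synthetic (a hill of 2.4 g/cc over flat geology), terrain corrections are ignored, and 0.04193 mGal per (g/cc x m) is the infinite-slab coefficient:

```python
def pearson(x, y):
    """Pearson correlation; a perfectly flat profile returns 0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0.0 or syy == 0.0:
        return 0.0
    return sxy / (sxx * syy) ** 0.5

def nettleton_density(heights_m, free_air_mgal, trial_densities_g_cc):
    """Return the trial density whose Bouguer-corrected profile is least
    correlated with topography (Nettleton, 1939), slab approximation."""
    best = None
    for rho in trial_densities_g_cc:
        ba = [fa - 0.04193 * rho * h for fa, h in zip(free_air_mgal, heights_m)]
        r = abs(pearson(heights_m, ba))
        if best is None or r < best[0]:
            best = (r, rho)
    return best[1]

# Synthetic profile: hill of true density 2.4 g/cc over flat geology
heights = [0.0, 50.0, 120.0, 200.0, 120.0, 50.0, 0.0]
faa = [0.04193 * 2.4 * h for h in heights]
rho_est = nettleton_density(heights, faa, [2.0, 2.2, 2.4, 2.67, 2.9])
```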
One can also use a group of gravity readings in an area and simply find the density value where the correlation between topography and Bouguer anomaly values disappears.

Enhancement of Gravity Anomalies (Filtering)
Gravity and magnetic anomalies whose wavelengths are long relative to the dimensions of the geologic objectives of a particular investigation are called regional anomalies. Because shallow geologic features can have large lateral dimensions, one has to be careful, but regional anomalies are usually thought to reflect the effects of relatively deep features. Anomalies whose wavelengths are similar to the dimensions of the geologic objectives of a particular investigation are called local anomalies. In the processing of gravity data, it is usually preferable to attempt to separate the regional and local anomalies prior to interpretation. The regional anomaly can be estimated employing a variety of analytical techniques. Once this is done, the simple difference between the observed gravity anomalies and the interpreted regional anomaly is called the residual anomaly.
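A deliberately simple sketch of regional-residual separation along a single profile: take the regional to be a long moving average and the residual to be what remains. Real work operates on the 2-D grid with better-behaved filters, and the profile here is synthetic (a linear regional trend plus one short-wavelength local anomaly):

```python
def running_mean_regional(values, half_width):
    """Crude regional field: a centered moving average whose window is
    long compared with the local anomalies of interest. Edge nodes use
    whatever part of the window is available."""
    n = len(values)
    regional = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        window = values[lo:hi]
        regional.append(sum(window) / len(window))
    return regional

# Synthetic profile: broad linear trend plus a +5 mGal local anomaly
profile = [0.5 * x + (5.0 if 18 <= x <= 22 else 0.0) for x in range(40)]
regional = running_mean_regional(profile, 10)
residual = [p - r for p, r in zip(profile, regional)]
```

The local anomaly survives in the residual (positive values near its center), while the broad trend is largely removed.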
The techniques used to separate regional and local gravity anomalies take many forms and can all be considered as filtering in a general sense (e.g., Blakely, 1996). Many of these techniques are the same as those employed in enhancing traditional remote sensing imagery. The process usually begins with a data set consisting of Bouguer gravity anomaly or magnetic anomaly values, and the first step is to produce an anomaly map such as the one shown in Figure 3.

Gridding
The initial step in processing gravity and magnetic data is the creation of a regular grid from the irregularly spaced data points. This step is required even to create a simple contour map, and in general-purpose software it may not receive the careful attention it deserves, even though all subsequent results depend on the fidelity of this grid as a representation of the actual data. On land, gravity data tend to be very irregularly spaced, with areas of dense data and areas of sparse data. This irregularity is often due to topography, in that mountainous areas generally have more difficult access than valleys and plains. It may also be due to difficulty in gaining access to private property and sensitive areas. In the case of marine data, the measurements are dense along the tracks that the ships follow, with relatively large gaps between tracks. Airborne and satellite gravity measurements involve complex processing that is beyond the scope of this discussion. However, once these data are processed, the remainder of the analysis is similar to that of land and marine data.
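Gridding irregular station data can be sketched with a simple inverse-distance scheme. This is only a stand-in for the more sophisticated routines (such as minimum curvature) found in dedicated gravity packages, and the function name is illustrative:

```python
import numpy as np

def idw_grid(x, y, values, xi, yi, power=2.0):
    """Grid irregularly spaced readings onto the nodes defined by the
    1-D axes xi, yi using inverse-distance weighting. Each node value
    is a positive-weight average of the readings, so the grid can never
    produce values outside the range of the data."""
    XI, YI = np.meshgrid(xi, yi)
    grid = np.empty(XI.shape)
    for idx in np.ndindex(XI.shape):
        d = np.hypot(x - XI[idx], y - YI[idx])
        if d.min() < 1e-12:                 # node falls on a reading
            grid[idx] = values[d.argmin()]
        else:
            w = 1.0 / d**power
            grid[idx] = np.sum(w * values) / np.sum(w)
    return grid
```

A routine like this honors individual readings exactly when a node coincides with one, but unlike minimum curvature it does not exploit the inherent smoothness of the gravity field.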
There are a number of software packages that have been designed for the processing of gravity data, and several gridding techniques are available in these packages. The minimum curvature technique works well and is illustrative of the desire to honor individual data points as much as possible while recognizing that gravity has an inherent smoothness due to the behavior of the Earth's gravity field. In this technique, the surface of minimum curvature is fitted to the data points surrounding a particular grid node, and the value on this surface at the node is determined. One can intuitively conclude that the proper grid interval is approximately the mean spacing between readings in an area. A good gridding routine should honor individual gravity values and not produce spurious values in areas of sparse data. Once the gridding is complete, the grid interval (usually 100's of meters) can be thought of as being analogous to the pixel interval in remote sensing imagery.

Filtering
The term filtering can be applied to any of the various techniques that attempt to separate anomalies on the basis of their wavelength and/or trend (e.g., Blakely, 1996). The term separate is a good intuitive one because the idea is to construct an image (anomaly map) and then use filtering to separate anomalies of interest to the interpreter from other interfering anomalies (see regional versus local anomalies above). In fact, fitting a low-order polynomial surface (3rd order is often used) to a grid to approximate the regional is a common practice. Subtracting the values representing this surface from the original grid values then creates a residual grid that represents the local anomalies.
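The polynomial regional-residual separation just described amounts to a least-squares surface fit. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def polynomial_regional(grid, order=3):
    """Fit a polynomial surface of the given order to a grid of anomaly
    values by least squares and return (regional, residual) grids."""
    ny, nx = grid.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = x.ravel() / nx                       # scale for conditioning
    y = y.ravel() / ny
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, grid.ravel(), rcond=None)
    regional = (A @ coef).reshape(grid.shape)
    return regional, grid - regional
```

Applied to a grid containing a broad trend plus a compact local anomaly, the residual isolates the local feature while the smooth surface absorbs the trend.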
In gravity studies, the familiar concepts of high pass, low pass, and bandpass filters are applied in either the frequency or spatial domains. In Figure 4 and Figure 5 for example, successively longer wavelengths have been removed from the Bouguer anomaly map shown in Figure 3. At least to some extent, these maps enhance anomalies due to features in the upper crust at the expense of anomalies due to deep-seated features.
Directional filters are also used to select anomalies based on their trend. In addition, a number of specialized techniques based on the physics of the gravity and magnetic fields have been developed for the enhancement of anomaly maps; these are discussed below. The various approaches to filtering can be mathematically sophisticated, but the choice of filter parameters or the design of the convolution operator always involves a degree of subjectivity. It is useful to remember the basic steps in enhancing a map of gravity anomalies in order to emphasize features in the Earth's crust:
1) First remove a conservative regional trend from the data. The choice of regional is somewhat subjective, but a careful choice can greatly help in interpretation (e.g., Simpson et al., 1986). Because the goal is to remove long wavelength anomalies, this step consists of applying a gentle high pass filter. Over most continental areas, Bouguer anomaly values are large negative numbers, so the usual practice of padding the edges of a grid with zeros prior to applying a Fourier transform and filtering will create large edge effects. One way to avoid this effect is to first remove the mean from the data and to grid an area larger than the image to be displayed. In areas where large regional anomalies are present, it may be best to fit a low-order polynomial surface to the gridded values and then continue the processing with the residual values with respect to this surface.
2) One can then apply additional filters as needed to remove unwanted wavelengths or trends.
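The edge effects caused by zero-padding data with a large negative mean, and the benefit of removing the mean first, can be demonstrated directly. A sketch (the function name and the Gaussian filter shape are illustrative choices, not a standard from any particular package):

```python
import numpy as np

def fft_highpass(grid, dx, cutoff_m, demean=True):
    """Gentle high pass: zero-pad the grid, remove a Gaussian low-pass
    component with wavelength scale cutoff_m (same units as dx), and
    return the short-wavelength residual."""
    mean = grid.mean() if demean else 0.0
    g = grid - mean
    ny, nx = g.shape
    gp = np.pad(g, ((0, ny), (0, nx)))            # zero padding
    ky = np.fft.fftfreq(2 * ny, d=dx)
    kx = np.fft.fftfreq(2 * nx, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kr = np.hypot(KX, KY)                         # cycles per meter
    lowpass = np.exp(-(kr * cutoff_m) ** 2)       # Gaussian low-pass
    lp = np.real(np.fft.ifft2(np.fft.fft2(gp) * lowpass))[:ny, :nx]
    return g - lp
```

High-passing a perfectly featureless field of typical continental Bouguer values (around -200 mGal) should return zero everywhere; without mean removal, the zero padding manufactures large spurious anomalies along the edges.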
In addition to the usual wavelength filters, a variety of specialized filters have been developed for gravity data that include:
Upward continuation - A process (low pass filter) by which a map is constructed simulating the result if the survey had been conducted on a plane at a higher elevation. This process is based on the physical fact that the further the observation is from the body causing the anomaly, the broader the anomaly. It is mathematically stable because it attenuates short wavelength anomalies rather than amplifying them.
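In the frequency domain, upward continuation is simply multiplication by exp(-2*pi*k*h), where k is the radial wavenumber and h the continuation height. A sketch (the function name is illustrative):

```python
import numpy as np

def upward_continue(grid, dx, height):
    """Upward-continue a gridded anomaly by `height` (same units as the
    grid spacing dx) using the operator exp(-2*pi*k*h)."""
    ny, nx = grid.shape
    ky = np.fft.fftfreq(ny, d=dx)
    kx = np.fft.fftfreq(nx, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)                   # radial wavenumber, cycles/m
    spec = np.fft.fft2(grid) * np.exp(-2 * np.pi * k * height)
    return np.real(np.fft.ifft2(spec))
```

A useful check is a point source: continuing its field upward by h must reproduce the field of the same source observed from a depth greater by h. Flipping the sign of the exponent gives downward continuation, which amplifies short wavelengths and hence noise.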
Downward continuation - A high pass filter by which a map simulating the result if the survey had been conducted on a plane at a lower elevation is constructed. In theory, this process enhances anomalies due to relatively shallow sources. However, care should be taken when applying this process to anything but very clean, densely-sampled data sets, because of the potential for amplifying noise due to mathematical instability.
Vertical derivatives - In this technique, the vertical rate of change of the gravity or magnetic field is estimated (usually the 1st or 2nd derivative). This is a specialized high pass filter, but the units of the resulting image are no longer milligals or nanoteslas, and the values cannot be modeled without special manipulation of the modeling software. As in the case of downward continuation, care should be taken when applying this process to anything but very clean data sets because of the potential for amplifying noise. This process has some similarities to non-directional edge enhancement techniques used in the analysis of remote sensing images.
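In the frequency domain the first vertical derivative corresponds to multiplication by the radial wavenumber, which is what makes it a sharpening high pass. A sketch (function name illustrative; the sign convention here takes the derivative positive toward increasing depth):

```python
import numpy as np

def vertical_derivative(grid, dx, order=1):
    """Estimate the vertical derivative of a gridded field with the
    frequency-domain operator (2*pi*k)**order. The result has units of
    the input field per meter (per meter squared for order=2), and the
    operator amplifies short wavelengths, i.e. noise."""
    ny, nx = grid.shape
    KX, KY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    K = 2 * np.pi * np.hypot(KX, KY)       # radial wavenumber, rad/m
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * K**order))
```

For a point source at depth d, the peak of the first vertical derivative should approach the analytic value 2/d^3 (for a unit-strength source), which makes a convenient check.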
Strike filtering - This technique is directly analogous to the directional filters used in the analysis of remote sensing images. In gravity processing, the goal is to remove the effects of some linear trend with a particular azimuth. For example, in much of the central U.S., the ancient processes that formed the Earth's crust created a northeast-trending structural fabric that is reflected in gravity and magnetic maps in the area and can obscure other anomalies. Thus, one might want to apply a strike-reject filter that deletes linear anomalies whose trends (azimuths) range from N30°E to N60°E.
Horizontal gradients - In this technique, faults and other abrupt geologic discontinuities (edges) are detected based on the high horizontal gradients that they produce. Simple difference equations are usually employed to calculate the gradients along the rows and columns of the grid. A linear maximum in the gradient is interpreted as a discontinuity such as a fault. These features are easy to extract graphically to be used as an overlay on the original gravity or magnetic map or on products such as Landsat images.

Computer Modeling
In most applications of gravity techniques, the data processing and qualitative interpretation of maps are followed by a quantitative interpretation in which a profile (or grid) of anomaly values is modeled by constructing an earth model whose calculated gravitational and/or magnetic effect closely approximates the observed profile (or grid). Modeling of profiles of anomaly values has become commonplace and should be considered a routine part of any investigation of the subsurface. For example, a model for a profile across Figure 3 is shown in Figure 6. In its simplest form, the process of constructing an earth model is one of trial-and-error iteration in which one's knowledge of the local geology, data from drill holes, and other data such as seismic surveys are valuable constraints. As the modeling proceeds, one must make choices concerning the density and geometry of the bodies of rock that make up the model. In the absence of any constraints (which is rare), the process is subject to considerable ambiguity, since there will be many subsurface structural configurations that can fit the observed data. With some constraints, one can usually feel that the process has yielded a very useful interpretation of the subsurface. However, ambiguities will always remain, just as they do in all other geophysical techniques aimed at studying the structure of the subsurface.
There are countless published articles on possible mathematical approaches to the modeling. However, for the two-dimensional case (i.e., the modeling of profiles drawn perpendicular to the structural grain in the area of interest) a very flexible and easy approach is used almost universally. This technique is based on the work of Hubbert (1948), Talwani et al. (1959), and Cady (1980), although many groups have written their own versions of this software with increasingly effective graphical interfaces and output. The original computer program was published by Talwani et al. (1959), and Cady (1980) was among the first to introduce an approximation (called 2 1/2 D) that allows for a certain degree of three-dimensionality. In the original formulation of Hubbert (1948), the earth model was composed of bodies of polygonal cross section that extended to infinity in and out of the plane of the profile of gravity readings. In the 2 1/2 D formulation, the bodies can be assigned finite strike-lengths in both directions. Today, anyone can have a 2 1/2 D modeling program running on their PC.
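The core of the 2-D approach, the line integral around a polygonal body, is compact enough to sketch. The version below follows the widely reproduced form of the algorithm given by Blakely (1996) for the Hubbert/Talwani formulation; the function name is illustrative, and the 2 1/2-D finite-strike-length extension of Cady (1980) is not included:

```python
import numpy as np

def gravity_2d_polygon(xs, zs, xv, zv, rho):
    """Vertical gravity (mGal) at station (xs, zs) due to an infinitely
    long 2-D body of polygonal cross section (Hubbert/Talwani line
    integral). Coordinates in meters, z positive down, vertices listed
    in clockwise order as seen with z pointing down; rho is the density
    contrast in kg/m^3. The station must not coincide with a vertex."""
    G = 6.674e-11
    total = 0.0
    n = len(xv)
    for i in range(n):
        x1, z1 = xv[i] - xs, zv[i] - zs
        x2, z2 = xv[(i + 1) % n] - xs, zv[(i + 1) % n] - zs
        th1, th2 = np.arctan2(z1, x1), np.arctan2(z2, x2)
        if z1 == z2:                       # horizontal edge (limit case)
            total += z1 * (th2 - th1)
            continue
        alpha = (x2 - x1) / (z2 - z1)
        beta = (x1 * z2 - x2 * z1) / (z2 - z1)
        factor = beta / (1.0 + alpha**2)
        term1 = 0.5 * np.log((x2**2 + z2**2) / (x1**2 + z1**2))
        total += factor * (term1 - alpha * (th2 - th1))
    return 2.0 * G * rho * total * 1e5     # m/s^2 -> mGal
```

A good sanity check is a very wide, thin rectangular body, whose attraction at a central station should approach the infinite Bouguer slab value 2*pi*G*rho*t.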
The use of three-dimensional approaches is not as common as it should be because of the complexity of constructing and manipulating the earth model. However, there are many 3-D approaches available (e.g., Blakely, 1996). As discussed above, a full 3-D calculation of the gravitational attraction of the topography using a modern digital terrain model is the ultimate way to calculate Bouguer and terrain corrections as well as to construct earth models. This type of approach will be employed more often in the future as terrain data and the needed computer software become more readily available.
Gravity modeling is an ideal field in which to apply formal inverse techniques. This is a fairly complex subject mathematically. However, the idea is to let the computer automatically make the changes in a starting earth model that the interpreter constructs. Thus, the interpreter is saved from tedious "tweaking" of the model to make the observed and calculated values match. In addition, the thinking is that the computer will be unbiased relative to a human. The process can also give some formal estimates of the uncertainties in the interpretation. Inverse modeling packages are readily available and can also run on PCs.

References
Barton, P. J., The relationship between seismic velocity and density in the continental crust - a useful constraint: Geophysical J. Royal Astronomical Soc., 87: 195-208 (1986).
Bechtel, T. D., D. W. Forsyth, and C. J. Swain, Mechanisms of isostatic compensation in the vicinity of the East African Rift, Kenya: Geophysical J. Royal Astronomical Soc., 90: 445-465 (1987).
Blakely, R. J., Potential theory in gravity and magnetic applications. Cambridge University Press, Cambridge, 1996.
Cady, J. W., Calculation of gravity and magnetic anomalies of finite-length right polygonal prisms. Geophysics, 45: 1507-1512 (1980).
Hammer, S., Terrain corrections for gravimeter stations. Geophysics, 4: 184-194 (1939).
Hubbert, M. K., A line integral method for computing gravimetric effects of two-dimensional masses. Geophysics, 13: 215-225 (1948).
Nettleton, L. L., Determination of density for reduction of gravimeter observations. Geophysics, 4: 176-183 (1939).
Robinson, E. S., and C. Coruh, Basic Exploration Geophysics, John Wiley and Sons, New York, 1988.
Simpson, R. W., R. C. Jachens, R. J. Blakely, and R. W. Saltus, A new isostatic map of the conterminous U. S. with a discussion on the significance of isostatic residual anomalies: Journal of Geophysical Research, 91: 8348-8372 (1986).
Talwani, M., J. L. Worzel, and M. Landisman, Rapid computations for two-dimensional bodies with application to the Mendocino submarine fracture zone: Journal of Geophysical Research, 64: 49-59 (1959).
Telford, W. M., L. P. Geldart, and R. E. Sheriff, Applied Geophysics, 2nd ed., Cambridge University Press, Cambridge, 1990, pp. 6-61.
Table 1: Typical density values for common rock types
Volcanic ash 1.8 gm/cc
Salt 2.0 gm/cc
Unconsolidated sediments 2.1 gm/cc
Clastic sedimentary rocks 2.5 gm/cc
Limestone 2.6 gm/cc
Dolomite 2.8 gm/cc
Intrusive granites 2.65 gm/cc
Crystalline upper crust 2.7 gm/cc
Mafic intrusions 2.9 gm/cc
Lower crust 3.0 gm/cc
Upper mantle 3.35 gm/cc


NOTE: The effects of porosity, temperature, saturating fluids, and pressure cause these values to vary by at least +/- 0.1 gm/cc.
