
Using INTREPID Cookbooks (C12)

The INTREPID Cookbooks explain how to solve common geophysical problems using INTREPID tools.

Our senior geophysicists have either prepared or closely supervised the preparation of all explanations, suggestions and worked examples. The Cookbooks are a valuable resource that combines sound geophysical experience with comprehensive instructions for using INTREPID’s powerful suite of processing and interpretation tools.

Although the suggestions in the Cookbooks are based on the best of our knowledge and experience, they cannot anticipate your data’s idiosyncrasies. After you have studied and perhaps carried out our worked examples, you will need to adapt the procedures to suit your own situation.

The intent here is to distribute a diverse range of actual survey data and practical scenarios without overwhelming the beginner with anything too complex. Explore to see if what you want is here, then follow the cookbook recommendations to help you see how to solve your problems. Intrepid Geophysics runs regular training courses on each of the topics featured here, with much more detail and practical know-how. We advise you to attend a training course in your area of interest if you possibly can, as it is impossible to pick up all the available skills and techniques without some interaction with the team.

In this chapter

INTREPID conventions

Parent topic: Using INTREPID Cookbooks (C12)

File names

Generally speaking, you can use any name that is acceptable to your operating system for INTREPID datasets, but we recommend that you note the following:

  • We recommend that you do not use spaces in file, folder or vector dataset field names. Use ‘_’ instead.
  • INTREPID uses the folder or vector dataset field names.
  • INTREPID is file-compatible between Windows and Linux. This means that you can create and process the same datasets with either version of INTREPID. There are some small issues to remember, however.
  • Windows file systems are normally case insensitive but case preserving. For example, if a dataset is called MyGrid.ers, it will always show as MyGrid.ers. However, if you type mygrid.ers when specifying the dataset, INTREPID locates MyGrid.ers for you.
  • Linux file systems are normally case sensitive and case preserving. This means that you could have two datasets existing side by side, one called MyGrid.ers and the other called mygrid.ers.
  • To minimise risk of confusion, it may be easier to avoid capital letters and spaces in file names.
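These recommendations can be applied up front when assembling a project. A minimal sketch (the helper function is illustrative, not part of INTREPID):

```python
import re

def normalise_dataset_name(name: str) -> str:
    """Apply the naming recommendations above.

    Spaces become underscores and letters are lowercased, so the same
    name behaves identically on case-insensitive (Windows) and
    case-sensitive (Linux) file systems.
    """
    return re.sub(r"\s+", "_", name.strip()).lower()

print(normalise_dataset_name("My Grid.ers"))  # my_grid.ers
```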

Differences between Windows and other versions

The examples in the Cookbook illustrate the Windows version of INTREPID. The Linux version operates in almost exactly the same way. Although menus and dialog boxes will look a little different, they will contain the same elements.

Where there is a difference between the Windows and Linux versions, the Cookbook contains a clear explanation. You should easily be able to complete the worked examples using either platform.

About the sample data for the INTREPID Cookbooks

Parent topic: Using INTREPID Cookbooks (C12)

Availability of sample data for exercises

We provide a variety of sample data to help you learn to use INTREPID.

Reinstalling sample data for exercises

While you are learning to use INTREPID you may make changes in the supplied sample datasets. If you want a ‘factory reset’ of the data in Windows, follow these steps:

  1. Make a copy of any results that you want to keep and place it outside the INTREPID installation folder.
  2. Run the INTREPID installer that you downloaded and select the Repair option.

For Linux, extract the required folders from the installation file yourself or contact our technical support service for help.

Location of sample data for Cookbooks

Where install_path is the path of your INTREPID installation, the project directory for the Cookbooks sample data is install_path/sample_data/cookbooks.


For information about installing or reinstalling the sample data, see the relevant section in “About the sample data for the INTREPID Cookbooks” in Using INTREPID Cookbooks (C12).

For a description of INTREPID datasets, see Introduction to the INTREPID database (G20). For more detail, see INTREPID database, file and data structures (R05).

How to run the INTREPID worked example datasets

Parent topic: Using INTREPID Cookbooks (C12)

If you wish to evaluate any of the worked examples, contact us and we will issue you an evaluation licence.

If you do not wish to alter the worked example solutions data provided, you can create separate versions of dataset fields or files by, for example, adding 1 to the original name. For instance, if you are creating your own version of naudy_ns, you could call it naudy_ns1. If you are creating your own version of the phillips field in the northsea..DIR dataset, you could call it phillips1.

Cookbook chapters—structure and symbols

Parent topic: Using INTREPID Cookbooks (C12)

Each of the following chapters contains a combination of background discussion, flowcharts, procedures, worked examples and hints and tips for one or more INTREPID processes.

Worked example format

Each worked example contains a brief introduction and some or all of the following sections:


This section contains a list of the main features of the tools you will use and a brief introduction to the exercise you will carry out.

Steps to follow

This section gives detailed illustrated steps for the exercise that you will carry out.

Additional information and tips

Tip: Tip text contains background information that will help your understanding of INTREPID but is not essential for completing the worked example.

Check points and short cuts in case studies

These worked examples contain several processing stages and in a real situation you would use the output from one process as the input for another.

You may have limited time for completing a worked example and may therefore wish to skip some steps and processes. The worked examples have a number of features to speed your progress:

  • Solution datasets and files provided: You never need to have completed a previous process in order to perform one that is of interest.
  • Optional steps: Worked examples often contain steps which would be routine for intermediate or advanced users, or which only serve to illustrate progress through the example. We mark these steps as optional, using the symbol appearing to the left of this paragraph. If you omit an optional step you may need to use a solution dataset or file for some subsequent step. Steps where you compare data from before and after a process are also marked as optional.
  • Should you complete this process? Sections with this heading are indicated by the symbol shown at the left of this paragraph. They give advice about the relevance of the processes in the worked example to your requirements.
  • Task specification (.task) files: We sometimes provide a task file to perform a process that is part of a worked example. We use the symbol shown to the left of this paragraph wherever you can use a task file. You can use task files to set parameters for a process automatically, instead of using menus and dialog boxes, and to perform the task in batch mode.
To use a task file for setting parameters
  1. Choose Open Taskfile... from the File menu of the tool you are using.
  2. Locate and select the task file (they always have a .task extension) in the File Name list.
  3. Choose Open. INTREPID will load the task specifications into the tool.

Symbols for INTREPID dataset types and domains

The Cookbook uses the following symbols to denote dataset types and domains for filters.

Dataset type symbols

This symbol indicates that the process may be used with line datasets.


This symbol indicates that the process may be used with point datasets.


This symbol indicates that the process may be used with grid datasets.


Domain symbols

This symbol indicates that the process occurs in the spatial or time domains.


This symbol indicates that the process occurs in the spectral domain.


Sample data and related cookbooks

Parent topic: Using INTREPID Cookbooks (C12)

We provide complete sets of original data and auxiliary files with the INTREPID cookbooks. The sections in this cookbook overview match the folders in the sample_data/cookbooks folder.

In this overview:

Airborne electrical magnetic data

Parent topic: Sample data and related cookbooks

The cookbook data for this is located in the folder sample_data/cookbooks/AirborneElectricalMagnetic.

Continental scale data

Parent topic: Sample data and related cookbooks

Gravity, magnetic and gamma datasets covering continent-sized regions

The cookbook data for this is located in the folder sample_data/cookbooks/Continental_Scale.

Geostatistics
Parent topic: Sample data and related cookbooks

The cookbook data for this is located in the folder sample_data/cookbooks/Geostatistics.

Gravity
Parent topic: Sample data and related cookbooks

The cookbook data for this is located in subfolders of the folder sample_data/cookbooks/gravity

In this section:

INTREPID gravity methods

Geodetic or plane-coordinate relative gravity survey data processing is complex. Intrepid, in partnership with GA, developed a 16-stage method for the field reduction of gravity loop data that copes with multiple meters, multiple meter readers, and one or more absolute gravity stations to tie to. It also offers a variety of meter drift corrections, and it shows a map of the gravity survey design. Finally, at stage 16, it reduces the redundant/multiple readings to a "Principal Facts" dataset while estimating the precision of most of the survey.
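The simplest of the meter drift corrections mentioned above is a linear closure correction over a loop that starts and ends at the same base station. A sketch of the idea (illustrative only; the 16-stage method offers more sophisticated drift models than this):

```python
def correct_linear_drift(times, readings):
    """Remove linear meter drift from a closed gravity loop.

    The loop starts and ends at the same base station, so any change
    in the base reading over the loop is attributed to meter drift and
    removed linearly in time. times are hours since the first reading;
    readings are dial readings converted to mGal.
    """
    drift_rate = (readings[-1] - readings[0]) / (times[-1] - times[0])
    return [r - drift_rate * (t - times[0]) for t, r in zip(times, readings)]

# Base station read twice (start and end), two stations in between;
# the base appears to have drifted by +0.06 mGal over 3 hours.
times = [0.0, 1.0, 2.0, 3.0]
readings = [1000.00, 1012.34, 1005.67, 1000.06]
corrected = correct_linear_drift(times, readings)
# after correction, the two base readings agree
```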

Managing multiple surveys is also something that cannot be left to chance. The original intent was to keep the gravity survey loops in a future-proof ASCII form, so that when more observations come in, all observations, not just the latest, can be re-adjusted.

In fact, Australia has some confused versions of regional and national compilations, as duplicates and corrections are applied by different people in different organizations, leading to "pimples", or inconsistencies, in the errors of the compiled regional datasets. These become more obvious as further, newer survey data are added to the historic observations. For a geophysicist this is something of a nightmare, as deciding which observations to ignore becomes a matter of judgement rather than science.

The older Canberra 1:250 000 sheet compilation is supplied together with the Goulburn 1:100 000 sheet data. Viewing both together in 3DExplore map view starts to show many of the issues that emerge over time. (The surveys use differing conventions for field names, so you can use the Alias feature to jointly create a grid of both datasets.) Then comes appending/merging a new survey onto the back of a master survey.

Gravity—Land
Parent topic: Gravity

The cookbook data for this is located in the folder sample_data/cookbooks/gravity/land.


About the data

Original datasets

These datasets are the original data for the Gravity worked examples. You can find them in folder gravity/land/L_5_RAW




Contains 3 GMLS from the Goulburn region of New South Wales


Has the contents of AGSOWEEK1.DAT and an additional two GMLS from the same region


This is a sample Scintrex gravimeter format import file


This is a sample Scintrex gravimeter format import file




A sample of CG5 gravimeter binary format data


A sample of CG6 gravimeter ASCII format data

Terrain grid datasets

Use these datasets to calculate terrain corrections for the imported AGSO data. You can find the data in folder gravity/land/L_2_GRIDS.


Terrain grid for AGSO gravity data


Terrain from SRTM


An observed Free Air gravity grid for the Iron Ore Brockman study area

There are also imported and processed versions of this data for you to use and examine. You can find the data in folder gravity/land/L_1_DB


This follows the defined AGSO practice of gathering gravity field data, then processing it to reduce the data to principal facts.
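Reduction to principal facts ends with standard anomaly values. The textbook formulas involved can be sketched as follows (1980 International Gravity Formula, free-air gradient 0.3086 mGal/m; a hedged illustration, not INTREPID's code):

```python
import math

def normal_gravity_mgal(lat_deg):
    """Normal gravity from the 1980 International Gravity Formula, in mGal."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    s2_2 = math.sin(2.0 * math.radians(lat_deg)) ** 2
    return 978032.7 * (1.0 + 0.0053024 * s2 - 0.0000058 * s2_2)

def anomalies(obs_g_mgal, lat_deg, height_m, density=2.67):
    """Return (free-air, simple Bouguer) anomalies in mGal.

    Free-air correction: +0.3086 mGal per metre of height.
    Simple Bouguer slab: 0.04193 * density (g/cm^3) * height (m) mGal.
    """
    free_air = obs_g_mgal + 0.3086 * height_m - normal_gravity_mgal(lat_deg)
    bouguer = free_air - 0.04193 * density * height_m
    return free_air, bouguer

fa, bg = anomalies(978001.84, lat_deg=0.0, height_m=100.0)
# fa is near zero; bg also subtracts the slab effect of the 100 m of rock
```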

Land gravity data examples

These are samples of gravity field data ready for import into the gravity tool of INTREPID via the Import wizard, or directly in batch mode using the task files supplied.

This wizard is found in the Gravity Corrections tool, via the File menu. See Gravity corrections - Land, Marine, Airborne and Satellite (T54).

There are four main styles of input here:

Australian standard ASCII

This format is designed to be future proof and completely self contained. Examples include survey9604 from Cobar, survey9893 from the Pilbara, and a progressive, work-in-progress use of the tool with AGSO_Week1, then weeks 1 & 2, of the Goulburn sheet survey 9705.

CG3 style of data import

This is designed to use the straight dump from the CG3 meter in ASCII format, including its header and line-format readings. Extra position data from a GPS is supplied in a couple of free-format variations. Note that the first record tells the program whether the data is geodetic or metric, and which field is East, North and so on. Finally, a separate, very small ASCII file records the tie-in of the survey to an absolute value.

CG5 style of data import

The Scintrex meter has gone fully binary, with a new indexed-sequential database format. As this binary format has not proven popular, we also show use of a straight dump from the CG5 meter in ASCII format, including its header and line-format readings.

CG6 style of data import

This Scintrex format is new to INTREPID Version 6 - thanks to MGL for its supply.

Gravity—Layer cake forward model

Parent topic: Gravity

This is an example of a forward model of gravitational effects over the transition zone from offshore to land, as the MOHO deepens.

The cookbook data for this is located in the folder sample_data/cookbooks/gravity/layer_cake_forward_model.

Gravity—Moving Platform

Parent topic: Gravity

The data in this section is for the more traditional LaCoste and Romberg style of gravity acquisition on a boat, followed by the technology to combine diverse marine gravity and bathymetry surveys to get a best-coherence fit to all available data. This is an exercise for the older zero-length-spring marine instruments.

For more current methods, which use more recent instruments and the new SEA-G tool designed to work with these instruments, see Marine levelling.

The cookbook data for this is located in the folder sample_data/cookbooks/gravity/moving_platform.

We provide a ship-borne gravity dataset—L&R data from meter S133. You can find it in folder gravity/moving_platform/MP_4_Rawdata


Gravity—Simple inversion

Parent topic: Gravity

Murthy & Rao developed a method that takes a profile of gravity data across a basin and, using quick calculations and simple assumptions, finds the indicated depth of the basin.

You can use this method with the datasets in the simple_inversion folder

Reference: Murthy & Rao, short note, Computers & Geosciences, Vol 15, No 7, pp 1149-1156. The paper describes a filter that determines depths to the top of the basement surface below each point of a gravity anomaly profile along a line.

For demonstrating the Murthy and Rao simple inversion method, use the data and .task files in the simple_inversion folder.

For details about the method, see “Murthy and Rao method” in Other depth estimation methods (C04)
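As a first-pass illustration of the physics behind the method, each anomaly value along the profile can be converted to a depth with the infinite-slab formula; Murthy and Rao refine this with lateral effects and a depth-varying density contrast, so treat the sketch below as a teaching aid only (the function name and example values are ours):

```python
def basin_depth_slab(delta_g_mgal, density_contrast=-0.4):
    """First-pass basement depth from the infinite-slab formula.

    depth = anomaly / (2*pi*G*density_contrast). With the anomaly in
    mGal and the contrast in g/cm^3, the constant 2*pi*G is 0.04193
    mGal per metre per g/cm^3. A negative anomaly over sediments with
    a negative density contrast yields a positive depth.
    """
    return [g / (0.04193 * density_contrast) for g in delta_g_mgal]

# Three stations along a profile, anomalies in mGal.
depths = basin_depth_slab([-8.4, -12.6, -6.3])
```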


Gravity—Terrain correction

Parent topic: Gravity

The cookbook data for this is located in the folder sample_data/cookbooks/gravity/terrain_correction.

For demonstrating gravity terrain correction, use these examples.


Gravity—See also tensor and vector methods

Parent topic: Gravity

We also provide examples of tensor and vector gravity methods. See:


Parent topic: Sample data and related cookbooks

The data for Interpretation operations is located in the subfolders of cookbooks/Interpretation

Cookbooks (general interpretation)

In this section

Diamonds
Parent topic: Interpretation

The cookbook data for this is located in the folder sample_data/cookbooks/Interpretation/Diamonds.


Minerals
Parent topic: Interpretation

The cookbook data for this is located in the folder sample_data/cookbooks/Interpretation/Minerals.


Note: The extensions for magnetic tensors are not covered here.

About the data

You can find example .task files and data in the folders shown in the table.







An aeromagnetic dataset



A magnetic field grid derived from the line dataset


First vertical derivative of magnetic field data - units nT/m



Separation filtering, to extract the deeper sources and suppress the near-surface sources

Data in other locations



Microlevelled magnetic grid

Petroleum
Parent topic: Interpretation

The cookbook data for this is located in the folder sample_data/cookbooks/Interpretation/petroleum.


Magnetic compensation

Parent topic: Sample data and related cookbooks

The cookbook data for this is located in the folder sample_data/cookbooks/Magnetic_Compensation.

Marine levelling

Parent topic: Sample data and related cookbooks

The cookbook data for this is located in the folder sample_data/cookbooks/Marine_Levelling.


For the more historic approach from the original LaCoste methods, see Gravity—Moving Platform.

Quality assurance and quality control (QAQC)

Parent topic: Sample data and related cookbooks

Routine tasks for quality assurance and quality control of magnetic, radiometric and electromagnetic data.

The cookbook data for this is located in the folder sample_data/cookbooks/QAQC.

Radiometrics
Parent topic: Sample data and related cookbooks

The cookbook data for this is located in the folder sample_data/cookbooks/Radiometrics.


Synthetic models

Parent topic: Sample data and related cookbooks

Datasets with synthetic data for comparison with modelling results and other observed data.

We have now released and made more generally available in INTREPID V6 a very extensive forward modelling language based upon several algorithmic approaches. In particular, all the FACET-style codes developed by Horst Holstein and Intrepid over the last 10 years are now more easily accessible, using a modelling language and the new interactive tool. The code uses a vector-style formulation for its notation and the derivation of all possible combinations of potential fields and their gradients. The FACET style uses XYZ nodes, and lists of connections of these nodes to define planar facet faces.

The other most important modelling-language style covers HOT-SPOT location of simple-geometry bodies, such as a sphere, prism, dyke, contact or cylinder. You then add a couple of geometry measures and some properties and request the forward model.
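The simplest of these bodies, a buried sphere, reduces to the textbook point-mass formula, which gives a feel for what such a forward model computes (a sketch of the standard formula, not the INTREPID modelling language itself):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_gz_mgal(x_m, depth_m, radius_m, drho_kgm3):
    """Vertical gravity of a buried sphere at horizontal offset x.

    gz = G * M * z / (x^2 + z^2)^(3/2), with M = (4/3) pi R^3 times
    the density contrast; the result is converted from m/s^2 to mGal.
    """
    mass = 4.0 / 3.0 * math.pi * radius_m ** 3 * drho_kgm3
    return G * mass * depth_m / (x_m ** 2 + depth_m ** 2) ** 1.5 * 1e5

# Anomaly profile over a 100 m radius sphere, 300 m deep,
# with a +500 kg/m^3 density contrast.
profile = [sphere_gz_mgal(x, 300.0, 100.0, 500.0) for x in range(-1000, 1001, 100)]
```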

Importantly, you can now insert all of these synthetic models into any pre-existing observed geophysics grid. This leads to the famous Kimberlite Farm of vertical cylinders of varying dimensions and susceptibilities, used to investigate how easy it is to find a regular-shaped body in a background of geology and instrument noise. It was this style of exercise that determined that a detection limit of 1 Eotvos in the vertical gravity gradient would be needed to find diamond pipes reliably.

The cookbook data for this is located in the folder sample_data/cookbooks/SyntheticModels.

Tensors
Parent topic: Sample data and related cookbooks

We include with INTREPID a cut-down set of tensor data from the training course that Intrepid has available. We include field data for both FTG and Falcon, and some model data so that you can see some simple examples. The important issue of terrain correction can be examined using the Aurizonia dataset.

The earliest public domain dataset for Falcon is from Broken Hill, Australia. We provide this dataset and methods for you to learn about the Falcon workflow, developed within Intrepid Geophysics.

We provide data and task files for Falcon and FTG gravity gradiometry, with worked examples of tensor gridding, denoising and filtering, including tensor integration, depths, worming and phase checking of the Falcon signal. It is reasonable to assume that gravity tensor data ends up giving frequencies/wavelengths comparable to a traditional TMI survey; thus we are seeing a five-fold increase in resolving power with these new surveys.

The cookbook data for this is located in the folder sample_data/cookbooks/tensors.


More details about tensor datasets

Parent topic: Tensors

In this section:

Extracting profiles

Demonstration of how to extract a profile of tensors from a grid.

The cookbook data for this is located in the folder sample_data/cookbooks/tensors/Aurizonia.







The FTG data in an INTREPID format


Aurizonia _mitre

Aurizonia_no mitre

SLERP gridding, no denoising



Raw data. Note that the projection is SAD69 / SUTM24


We include a large number of .task files for gridding, interpretation, levelling, processing and terrain correction.



SRTM grid for the terrain correction.

Falcon tensors

The cookbook data for this is located in the folder sample_data/cookbooks/tensors/BrokenHill_Falcon

Broken Hill





Line database with a Falcon tensor field





Terrain corrected Falcon grid
Integrated estimate of Gz from Falcon
Free Air Falcon grid




Create a line database
Create a Falcon tensor grid



ASCII dump of survey data

More examples of tensor data

Throughout sample_data we include many .task files for every tool that does tensor work.

The INTREPID sample_data/examples/datasets folder includes many examples of vector and tensor fields.

Model examples

ASCII grids in SEMI format are used to demonstrate how the MITRE tensor filter works on spikes in the data.

An example dataset is sample_data/examples/datasets/TensorGradients/tensor_grid.semi

Vector fields
Parent topic: Sample data and related cookbooks

This section contains mostly airborne gravity vector survey data from Sanders, illustrating how to capture the geophysics field in a variety of vector formats supported by Intrepid, rather than as scalar components. You can explore import and gridding, plus statistics and angular polar diagrams. The primary aim of acquiring vector gradients in our industry has been to create higher-frequency representations of the draped elevation field via an enhanced gridding method, thus allowing a wider line spacing during survey acquisition.

The cookbook data for this is located in the folder sample_data/cookbooks/VectorField.


For vector data we include a Sanders Geophysics example from Timmins.

You can find it in the Cookbooks folder, under VectorField/Databases/Sanders.

The dataset has a three-component gravity vector signal, which is a good way to show how gravity can be treated more satisfactorily using a vector field type in the database.