
Survey quality assurance and control (C28)

Important Note: This manual is still being written and may not yet contain the depth of information that we plan to include.

For more information please contact our technical support service.

In this cookbook:

Introduction and scope

Parent topic: Survey quality assurance and control (C28)

Quality Control (QC) is the action of verifying the quality of some data that you are collecting or processing. In the context of data surveys it refers to the actions taken during acquisition.

Quality Assurance (QA) is a system for managing your work to yield acceptable quality results. For example, you might have a set of quality-assessing and enhancing procedures that you always apply to your data acquisition and processing (and you keep records of having done so).

The term QAQC refers generally to both the QA system and the QC actions that form part of it.

This chapter has an overview of QAQC that includes further processing after data acquisition, but its practical examples focus on QAQC for data acquisition itself.

Methods excluded from this cookbook:

Sample data and tasks

Parent topic: Survey quality assurance and control (C28)

Location of sample data for Cookbooks

Where install_path is the path of your INTREPID installation, the project directory for the Cookbooks sample data is install_path/sample_data/cookbooks.

For example, if INTREPID is installed in
C:/Intrepid/Intrepid 6.0.4.304babe910c_x64,
then you can find the sample data at
C:/Intrepid/Intrepid 6.0.4.304babe910c_x64/sample_data/cookbooks

For information about installing or reinstalling the sample data, see the relevant section in “About the sample data for the INTREPID Cookbooks” in Using INTREPID Cookbooks (C12).

For a description of INTREPID datasets, see Introduction to the INTREPID database (G20). For more detail, see INTREPID database, file and data structures (R05).

Overview of QAQC and its current trends

Parent topic: Survey quality assurance and control (C28)

QAQC seeks to minimise factors that corrupt geophysical data. The factors arise from a number of sources.

In this section:

Survey flight or cruise factors

Survey quality can be protected by such things as:

  • Flying or cruising to specifications
  • Avoiding turbulence
  • Ensuring equipment is calibrated and working correctly
  • Repeating the starting lines of the flight or cruise at the end
  • Checking the spacing of readings (usually with a first order time series)
  • De-spiking and trimming the survey and then gridding it to compare with adjacent flights

Example QC methods:

Restrictions on ‘raw’ data

Due to ITAR (International Traffic in Arms Regulations—see https://en.wikipedia.org/wiki/International_Traffic_in_Arms_Regulations) restrictions, access to raw data may be limited. Falcon and Full Tensor Gradient (FTG) data are often not supplied in raw form.

For more information please contact our technical support service.

Additional factors for advanced geophysical data types

Recent, more sophisticated and powerful techniques and forms of data, such as tensors and AEM (airborne electromagnetic data, gathered by transmitting an electromagnetic signal from a system attached to a plane or helicopter), yield more detailed data but are more sensitive to quality issues. We need to use appropriate QAQC methods developed from researched techniques. These surveys need more planning and testing before we can be confident of good results.

For example:

  • Tensor and vector data require the rotational state of the equipment to be known
  • AEM requires X and Z calibration (see also Grid data quality)

Post acquisition QAQC

QAQC tasks after acquisition include (depending on the type of data):

  • Magnetic compensation
  • Altitude correction
  • Removing spikes
  • Radon correction
  • De-striping and decorrugation
  • Crossover checking

Terrain has a strong effect on the data and so you need a good terrain dataset to use for this, particularly with airborne gravity and gravity gradiometry.

Grid data quality

Grids are an interpolation between the survey lines and provide a useful contiguous view of the survey area. They do, however, omit a lot of detail.

Higher dimension data such as tensors need more precise instrument calibration and a more critical gridding algorithm, but can yield a higher density of grid cells.

For AEM grids we can check quality by:

  • Comparing early and late time grids to test coherence
  • Performing 2.5D inversion—this can reveal poor compensation for bird motion and poor noise models. All components of the measured B field signal are rigorously tested for coherence. This is superior to traditional one dimensional inversion or CDI production, which are more open to misinterpretation.
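The early/late time coherence test can be illustrated with a simple correlation between co-located grid cells. The following Python fragment is a sketch only; the function name and the use of Pearson correlation are our assumptions, not a prescribed INTREPID method.

```python
import math

def grid_coherence(early, late):
    """Pearson correlation between co-located early- and late-time grid
    cells; null cells are given as None and skipped.  Low coherence can
    flag noisy or poorly compensated areas."""
    pairs = [(e, l) for e, l in zip(early, late)
             if e is not None and l is not None]
    n = len(pairs)
    mean_e = sum(e for e, _ in pairs) / n
    mean_l = sum(l for _, l in pairs) / n
    cov = sum((e - mean_e) * (l - mean_l) for e, l in pairs)
    var_e = sum((e - mean_e) ** 2 for e, _ in pairs)
    var_l = sum((l - mean_l) ** 2 for _, l in pairs)
    return cov / math.sqrt(var_e * var_l)
```

A coherence near 1 suggests the early and late time grids share structure; values near 0 can flag incoherent areas worth closer inspection.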

Human error and QAQC specifications

Humans are subject to errors, persuasion and taking the easy path. Not controlling this aspect of our nature can lead to expensive deficiencies in our data. Robust QAQC specifications are a line in the sand to give you the best benefit from your expensive surveys.

Regional and continental scale surveys

There is currently a trend towards regional or continental-scale databases. INTREPID has tools for combining surveys at the continental level and adjusting them for seamless merging. The quality of the results is, however, only as good as that of their components.

The geoscience authorities in some countries, including Australia, require copies of all geophysical surveys. The countries with this practice are able to draw on a better library of surveys and are therefore likely to have superior regional and continental scale surveys.

Quality issues with older surveys

Over time the quality of geophysical surveys has improved. In some older surveys, for example:

  • Gravity data have less accurate locations
  • Radiometric data may have poor or lost calibration of crystals, and the coefficients used may be lost
  • Magnetic data may be without records of the corrections such as diurnals. There may not be enough overlaps between surveys to use as a basis for adjustment. Where there is a desire for continental coverage it is tempting to use poor surveys to fill gaps. The INTREPID Grid Merge tool can help to make the best of a mix of surveys in a continental-scale map.

The current outlook

QC technology is not applied uniformly across the industry, and is sometimes inappropriate for the new data types being acquired. Government contract specifications can help. The general availability of improved software tools, together with trained operators, is also an emerging requirement.

In time, consistent, coherent, regional compilations of airborne geophysical data will open up a new range of applications for these data. Large regional anomalies can now be better appreciated and interpreted. A significant benefit is the ability to apply quantitative modelling and data processing techniques to large areas. These methods have the potential to provide significant new insights into the geology and more useful continental scale compilations.

Reference: FitzGerald, D.J., Quality Control in Airborne Geophysics, Extended Abstracts - 16th SAGA Biennial Conference & Exhibition 2019

QC specifications for aerial surveys

Parent topic: Survey quality assurance and control (C28)

This section has examples of QC specifications for a number of survey types.

In this section:

Aerial surveys generally

Parent topic: QC specifications for aerial surveys

Specification examples:

  • The planned drape height may not be exceeded by more than 15 m over 2 km (an absolute measure), or by more than 50 m at any point
  • The ground clearance may not exceed 125% of the nominal clearance over 5 km (a relative measure), or 150% at any point
  • The line separation may not exceed 125%, or be less than 75%, of the planned separation over 2 km, and may not exceed 150% or fall below 50% at any point
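Checks like these can be sketched in code. The fragment below is an illustrative Python sketch, not an INTREPID tool; the function name and the "sustained exceedance over the whole window" reading of the 2 km rule are our assumptions.

```python
def drape_violations(distance_m, agl_m, planned_m,
                     window_m=2000.0, window_limit_m=15.0, point_limit_m=50.0):
    """Flag samples where the flown height exceeds the planned drape.

    distance_m : cumulative along-line distance per sample (metres)
    agl_m      : actual flying height per sample
    planned_m  : planned drape height per sample
    Returns (point_fails, window_fails): indices breaking the at-any-point
    limit, and window starts where the exceedance is sustained over the
    whole 2 km window.
    """
    excess = [a - p for a, p in zip(agl_m, planned_m)]
    point_fails = [i for i, e in enumerate(excess) if e > point_limit_m]
    window_fails = []
    for i, d in enumerate(distance_m):
        # excess values for samples within window_m ahead of this sample
        in_win = [e for dj, e in zip(distance_m, excess) if d <= dj <= d + window_m]
        # the window fails if the exceedance is sustained above the limit
        if in_win and min(in_win) > window_limit_m:
            window_fails.append(i)
    return point_fails, window_fails
```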

Warning: Safety considerations always prevail over data conformance.

Magnetic data

Parent topic: QC specifications for aerial surveys

Specification examples:

  • The diurnal variation may not exceed 5 nT over 5 min over linear background
  • The 4th difference of the magnetic noise may not exceed 0.1 nT

Tip: Some scattered spikes do not affect the results. See the discussion in the detailed instructions: 4th difference noise in a magnetics survey.
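The 4th difference used in this specification is the standard five-point stencil applied along the line. A minimal Python sketch of the calculation (illustrative only; INTREPID supplies its own implementation):

```python
def fourth_difference(values):
    """4th difference of a reading series using the five-point stencil
    a[i-2] - 4*a[i-1] + 6*a[i] - 4*a[i+1] + a[i+2]; two samples are
    lost at each end of the line."""
    return [values[i - 2] - 4 * values[i - 1] + 6 * values[i]
            - 4 * values[i + 1] + values[i + 2]
            for i in range(2, len(values) - 2)]

def exceeds_spec(values, threshold_nt=0.1):
    """True where the 4th difference magnitude breaks the 0.1 nT example
    specification."""
    return [abs(d) > threshold_nt for d in fourth_difference(values)]
```

A smooth (constant or linear) series gives zero 4th difference; a single spike produces the characteristic -4, +6, -4 signature around it.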

Radiometric data

Parent topic: QC specifications for aerial surveys

Specification examples:

  • The survey is delayed when there is rain or the ground is wet
  • Spectral stability must be of an acceptable level
  • Thorium peak resolution must be below 7%. Usually a Th source test is required before each flight. There are two methods:
      • Before each flight, hold welding rods close to the sensor for 5 or 10 minutes and measure thorium
      • Examine the sum spectrum of each flight
  • Calibrations:
      • Pad calibrations, using standard concrete blocks (‘pads’) with known radioactive element concentrations
      • Low level stack—a set of flights at different altitudes to measure height attenuation and set appropriate coefficients
      • High level stack—a high level flight to establish cosmic and aircraft background radiation
      • Dynamic calibration range

We are not aware of agreed noise specifications for radiometric surveys.

Gravity—Airborne gravity gradiometry (AGG) data

Parent topic: QC specifications for aerial surveys

AGG includes both Falcon and Full Tensor Gradient (FTG) data.

Most AGG specifications are defined as the standard deviation per line—68% of all values must be below the threshold. This can be problematic with very short or very long lines: one noise burst on a short line may make it fail, while long stretches of elevated noise may be averaged out on a long line. Our survey specialist uses the local standard deviation to identify areas or line parts with elevated noise.

‘Raw’ data is run through the Post Mission Compensation (PMC) process. This is most commonly done using Lockheed Martin (LM) software. Ensure that you use the same version of the software for the whole survey. INTREPID methods for this—For more information please contact our technical support service.

Within each line, the standard deviation of:

  • The (normalised) inline sum (after band-pass filter between 0.033 and 0.1 Hz) may not exceed 15 E (using full tensor gradient),
  • The AB difference for both the pair of UV and the pair of NE accelerometers may never exceed 5 E.

The AB difference in readings for accelerometers A and B is (A–B) / 2.

  • The vertical accelerations may not exceed 100 mGal.
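The AB difference and its point check can be sketched as follows. This is an illustrative Python fragment, not the contractual QC implementation; the function names are ours and the threshold is the example value above.

```python
def ab_difference(a_readings, b_readings):
    """AB difference of paired accelerometer readings: (A - B) / 2."""
    return [(a - b) / 2.0 for a, b in zip(a_readings, b_readings)]

def line_passes(a_readings, b_readings, limit_e=5.0):
    """Example check: the AB difference may never exceed 5 E on a line."""
    return all(abs(d) <= limit_e
               for d in ab_difference(a_readings, b_readings))
```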

Our survey specialist runs two additional noise tests:

  • The first is obtained from the 4th difference of the raw tensor channels.
  • The second is derived from the residual of a full tensor noise reduction filter applied to the tensor data.

Thresholds are empirical and the tests are not part of the survey contract. However, they usually highlight the problematic lines. We can then inspect them visually and decide what action to take.

Airborne electromagnetic (AEM) data

Parent topic: QC specifications for aerial surveys

Specification examples: For more information please contact our technical support service.

Magnetotelluric data

Parent topic: QC specifications for aerial surveys

Specification examples: For more information please contact our technical support service.

INTREPID tools for QA

Parent topic: Survey quality assurance and control (C28)

We developed many of the INTREPID tools specifically for QA. INTREPID QA processes include:

Other, more general purpose INTREPID tools have an important role to play in QA, such as:

In this cookbook we include some examples of QC operations that combine a number of the INTREPID tools and can be run using a .task file.

If you want to carry out the QAQC operations interactively, the .task files examples in this cookbook contain all of the required settings for the demonstration tasks.

Methods for Quality Assurance

Any discontinuity in field data can be regarded as an erroneous measurement, as potential fields must be continuous (fields satisfy Laplace’s equation).

These discontinuities can be identified by calculating the 2nd Vertical Derivative of the field. This enhances the higher frequency data, which includes both the noise attributed to collection and the noise attributed to the gridding process.

The 2nd Vertical Derivative may identify artefacts along a grid’s merged edges, which exist due to insufficient padding used during the process. With sufficient padding the grid, along a row or column, will be harmonic.

The 2nd Vertical Derivative is the preferred method of identifying erroneous measurements or noise; however, a Fourier filter can also be used to identify discontinuities in the data. While a Fourier series can quickly approximate a continuous field with a few low frequency components, it produces poor results near discontinuities, where many high frequency components are needed for a good approximation.

fourierdiscon.gif

A poorly fitted Fourier series is therefore indicative of noisy data. Many filters and processes performed on field data are applied in Fourier space, so it is also important that data is continuous, or close to it, prior to processing.
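For a harmonic field, taking the nth vertical derivative corresponds to multiplying each Fourier component by its wavenumber to the nth power. A minimal numpy sketch of a 2nd vertical derivative for a single profile (illustrative only; INTREPID's grid tools implement the full 2D operation):

```python
import numpy as np

def second_vertical_derivative(profile, dx=1.0):
    """2nd vertical derivative of a potential-field profile, computed in
    the Fourier domain by scaling each wavenumber k by k**2 (a standard
    result for harmonic fields).  High frequencies, and hence noise and
    discontinuities, are strongly amplified."""
    spectrum = np.fft.rfft(profile)
    k = 2.0 * np.pi * np.fft.rfftfreq(len(profile), d=dx)
    return np.fft.irfft(spectrum * k ** 2, n=len(profile))
```

Because the k**2 factor grows with frequency, any high-frequency collection or gridding noise dominates the output, which is exactly why the 2nd Vertical Derivative is useful for QC.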

Using INTREPID for QC operations—examples

Parent topic: Survey quality assurance and control (C28)

In this section:

Calculating QC data

Parent topic: Using INTREPID for QC operations—examples

Using the Project Manager, examine the dataset in 3D Explore.

Calculate the QC data.

Could this be done with 3D Explore?

qaqc-3dexp-calc.png

4th difference noise in a magnetics survey

Parent topic: Using INTREPID for QC operations—examples

Follow these steps:

  1. Calculate the 4th difference noise from raw and compensated magnetics.
  2. Create a flag field to show where the noise exceeds the threshold.
  3. Calculate the line statistics, including the 4th difference noise range.
  4. Calculate the percentage of individual readings that exceed the threshold.
     The 4th difference range per line can be large but may originate from only one spike or jump. On the other hand, there could be a number of spikes, indicating a general malfunction of the instrument. To check the density of spikes, we calculate the percentage of readings that exceed the threshold.
  5. Check visually all lines with data that exceed the threshold. Record comments on the results of the visual check in the data—add a comment column using 3D Explore.
qaqc-add-comment-field.png

Example comment: ‘Magnetic noise produced by several individual spikes—no degradation of data quality’.

Carry out these steps for all relevant channels, including those that you have introduced through your own methodology and are not in the contract. In this cookbook we demonstrate one or two channels only.
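The per-line statistics in the steps above can be sketched as follows. This is illustrative Python only; the dictionary layout and function name are our assumptions.

```python
def line_qc_summary(diff4_by_line, threshold_nt=0.1):
    """Per-line QC summary: range of the 4th difference values and the
    percentage of readings whose magnitude exceeds the threshold.
    diff4_by_line maps a line name to its 4th difference values."""
    summary = {}
    for line, d4 in diff4_by_line.items():
        over = sum(1 for v in d4 if abs(v) > threshold_nt)
        summary[line] = {
            "range": max(d4) - min(d4),
            "pct_over": 100.0 * over / len(d4),
        }
    return summary
```

A line with a large range but a tiny percentage over the threshold suggests a single spike; a large percentage suggests a general instrument problem.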

Occasional spikes

Usually there is a maximum value set for the 4th difference. In this case no observations are allowed to exceed the threshold, so the data must be 100% within specifications.

Some individual spikes, if widely dispersed, do not affect the final result even though they exceed the threshold. To allow these harmless spikes to remain and to save unnecessary work, we measure their density and fail any line segments that have more than a set number of spikes.

It is important, in any case, to inspect the affected lines visually.

If the instruments are functioning correctly and there are no cultural interferences then a high quality survey would have less than one occasional spike per kilometre.

Individual spikes or jumps can be removed later by processing.
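The spike-density rule described above can be sketched as follows. This is an illustrative Python fragment; the fixed 1 km segmentation and the one-spike allowance are our assumptions.

```python
def failing_segments(spike_positions_m, line_length_m,
                     segment_m=1000.0, max_spikes=1):
    """Split a line into fixed-length segments and fail any segment that
    holds more than max_spikes spikes.  Spike positions are given as
    along-line distances in metres."""
    n_seg = int(line_length_m // segment_m) + 1
    counts = [0] * n_seg
    for p in spike_positions_m:
        counts[int(p // segment_m)] += 1
    return [i for i, c in enumerate(counts) if c > max_spikes]
```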

Checking navigation

Parent topic: Using INTREPID for QC operations—examples

Vertical navigation

Check the vertical navigation using drape difference or ground clearance.

See the instructions in Comparing flight path of planned and actual surveys visually.

Horizontal navigation

Horizontal navigation for wide spaced surveys (for example, full tensor gradient) generally only requires a visual check. Pilots are usually accurate to 5 or 10 m.

For narrow spaced surveys we need to check the percentage deviation for an allowed distance.

How to do this with INTREPID—For more information please contact our technical support service.
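The percentage-deviation check for narrow spaced surveys can be sketched as follows. This is illustrative Python; it assumes the cross-track offsets from the planned line have already been computed.

```python
def spacing_qc(crosstrack_offsets_m, nominal_spacing_m, point_pct=50.0):
    """Express each cross-track offset as a percentage of the nominal
    line spacing and return the indices of samples beyond the
    at-any-point limit (50% in the earlier example specification)."""
    pct = [100.0 * abs(d) / nominal_spacing_m for d in crosstrack_offsets_m]
    return [i for i, p in enumerate(pct) if p > point_pct]
```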

Diurnal correction

Parent topic: Using INTREPID for QC operations—examples

Standard: May not exceed 5 nT over 5 minutes.

From each individual observation:

  1. Examine 2.5 min into the past and 2.5 min into the future
  2. Interpolate linearly
  3. Measure the deviation from the interpolated line.

The deviation may not be larger than 5 nT
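The three steps above can be sketched as follows. This is an illustrative Python fragment, assuming base-station readings are supplied as matching time and field series; the handling of line ends is our assumption.

```python
def diurnal_deviation(times_s, field_nt, half_window_s=150.0):
    """For each base-station reading, interpolate linearly between the
    nearest readings at least 2.5 min in the past and in the future, and
    return the deviation from that interpolated line.  Readings without
    a full window on both sides get None."""
    out = []
    for t, v in zip(times_s, field_nt):
        before = [(tj, vj) for tj, vj in zip(times_s, field_nt)
                  if tj <= t - half_window_s]
        after = [(tj, vj) for tj, vj in zip(times_s, field_nt)
                 if tj >= t + half_window_s]
        if not before or not after:
            out.append(None)
            continue
        t0, v0 = before[-1]  # closest qualifying reading in the past
        t1, v1 = after[0]    # closest qualifying reading in the future
        interp = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        out.append(abs(v - interp))
    return out
```

Any deviation larger than 5 nT breaks the example standard.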

How to do diurnal checks with INTREPID—For more information please contact our technical support service.

Examining noise from turbulence

Parent topic: Using INTREPID for QC operations—examples

You can grid noise channels. For example:

  1. Calculate the local standard deviation of turbulence
  2. Grid the local standard deviation with a small cell size. Turn ‘fill holes’ off.

This gives you a spatial overview of exaggerated turbulence and can indicate bad lines or spatially related elevated areas produced by, for example, a mountain range or coast line, which cannot be avoided. Ideally noise bursts are randomly distributed.
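Step 1 above, the local standard deviation, can be sketched as a rolling window calculation. This is illustrative Python; the window length is an assumption and, as discussed in the notes that follow, affects the values.

```python
import statistics

def local_stddev(values, window=5):
    """Rolling (local) standard deviation: each sample gets the standard
    deviation of a centred window of readings.  Windows are truncated at
    the ends of the line."""
    half = window // 2
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - half): i + half + 1]
        out.append(statistics.pstdev(chunk) if len(chunk) > 1 else 0.0)
    return out
```

The resulting channel can then be gridded (with hole filling off) to give the spatial overview described above.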

Turbulence notes

There is currently no specification for the local standard deviation of ILS (iterated least squares—see https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares) noise or turbulence.

The length of the applied window obviously affects the values, and so can ground speed. This is generally non-contractual for the surveyor, but it allows gridding and visualising noise and turbulence. The local standard deviation can be higher than the standard deviation for entire lines and can therefore exceed the threshold set for entire lines. There are two scenarios:

  • Scenario 1— Local elevated turbulence does not show as local ILS (or AB difference) noise bursts. This means that the gravity data are not affected.
  • Scenario 2— Elevated local turbulence does show up in the gravity data. Randomly separated noise bursts may not affect the data if dispersed enough, as for spikes (see Occasional spikes). If an entire line or a major part of a line is affected, a partial re-fly may be needed, despite the line passing nominal specifications. This also requires visual inspection.

There is also the possibility that persistent high turbulence, say, over a mountain range or ocean-land boundary, cannot be avoided. This may result in degraded data quality in that region.

If you check the turbulence and noise local standard deviation grids you should see spatial consistency. If all elevated values extend over small distances only and are randomly distributed (no spatial consistency), then they will not affect final data quality.

Radiometrics

Parent topic: Using INTREPID for QC operations—examples

How to do this with INTREPID— For more information please contact our technical support service.

Our survey expert calculates the sum spectrum for each flight and examines it to see whether the peaks are in the right positions.

Other QAQC tasks include:

  • Cosmic and aircraft background
  • Height attenuation
  • Sensitivity coefficients.

Visualising over a Digital Elevation Model

Parent topic: Using INTREPID for QC operations—examples

For more information please contact our technical support service.

Examples of aerial survey QA procedures

Parent topic: Survey quality assurance and control (C28)

In this section:

Example 1—Typical QA outline for ground data

Parent topic: Examples of aerial survey QA procedures

The following steps are for a typical QA procedure for ground data, as used by one of our geophysicists:

  1. Check data for spikes and other spurious values, remove these and replace voids by interpolation.
  2. (For line data, if there appears to be noise in the data) Run de-noising filters over the raw data.
  3. Perform Diurnal Correction.
  4. Remove the IGRF.
  5. (If there are strong line-to-line striations—a large heading effect) Perform zero level correction.
  6. Level the data using a microlevelling technique suitable for ground surveys
  7. Grid the data. Gridding is basically the interpolation between data points or lines. Select the gridding method based on data quality and the objective of the survey. The grid resolution is normally 20%–25% of the line spacing.
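The grid resolution rule of thumb in step 7 can be expressed directly. A trivial Python sketch (the midpoint choice within the 20–25% band is our assumption):

```python
def choose_cell_size(line_spacing_m, lo=0.20, hi=0.25):
    """Grid resolution rule of thumb: cell size is normally 20-25% of
    the line spacing; here we simply take the midpoint of that band."""
    return line_spacing_m * (lo + hi) / 2.0
```

For example, a 200 m line spacing gives a cell size of about 45 m.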

Example 2—Quality control schedule and criteria for re-flight

Parent topic: Examples of aerial survey QA procedures

Here is an example of survey flight specifications for an electromagnetic survey by helicopter.

Quality control

  • High level tests—flown each day or flight to determine the system background for the day’s recording sorties
  • Internal daily QC by our processing specialists will determine re-flights as they occur
  • All data, in preliminary form, will be made available for review on a daily basis by the client representative

Re-flight criteria

Lines or sections thereof shall be re-flown at no cost to the customer:

  • Where the flight path of the helicopter deviates from the planned flight path by more than 40 m over a distance of 2 km or greater,
  • Where line average of the residual (high pass) standard deviation of the last electromagnetic channel is greater than 20 mV for the entire line,
  • Where the terrain clearance is exceeded by 10 m for distances over 2 km, except where such lines breach air regulations, or, in the opinion of the pilot, put aircraft and crew at risk,
  • Where the noise envelope of the magnetic records exceeds 0.1 nT for more than 1 km, the affected segments of the data will be reflown crossing at least two tie lines,
  • Where a magnetic diurnal variation greater than 10 nT in 10 minutes occurs, either on flight lines or tie lines,
  • Where the along line sampling exceeds 12 m for distances over 2 km.
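The last criterion can be sketched as a windowed check on sample spacing. This is illustrative Python; treating consecutive oversized gaps as a single run is our assumption.

```python
def sampling_violations(distance_m, max_gap_m=12.0, run_m=2000.0):
    """Find runs where the along-line sample spacing exceeds max_gap_m
    for more than run_m metres.  distance_m is the cumulative along-line
    distance per sample; returns the starting gap index of each bad run."""
    gaps = [b - a for a, b in zip(distance_m, distance_m[1:])]
    starts, run_start, run_length = [], None, 0.0
    for i, g in enumerate(gaps):
        if g > max_gap_m:
            if run_start is None:
                run_start = i
            run_length += g
            # record the run once its total length passes the limit
            if run_length > run_m and run_start not in starts:
                starts.append(run_start)
        else:
            run_start, run_length = None, 0.0
    return starts
```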

Comparing flight path of planned and actual surveys visually

Parent topic: Survey quality assurance and control (C28)

Aerial surveys have a plan but, due to weather conditions and the normal movements of the aircraft, the flight path deviates slightly from that plan. INTREPID allows for these deviations, but, also, you may want to check that they are within your expectations of the survey.

Using 3D Explore you can visualise the deviations and the plan together and check the positional accuracy of the flight.

The task in this section compares the path of an airborne survey with that of its plan—the notional survey altitude versus the actual flying height.

Note that this example shows a method of visually reviewing a survey, not a method of calculating statistics or flagging data that is outside specifications. For this, add a new field and apply a formula in it to show that data. For more information please contact our technical support service.

The .task file

Here is the .task file for the example:

# Example of simple planned airborne survey versus realised survey
# the notional altitude of the survey, vs the actual flying height
# some lines were not flown on the eastern side.
# there is a field that captures planned line distances - this can be checked with the survey distance tool
Intrepid3DExplore {
  mapView {
    titleText: "Planned airborne survey versus Actual flown"
    datasetGroupConfig {
      datasetConfig {
        visible: true
        opacity: 100.0
        legendConfig {
          show: false
          customTitle: "line"
          isHorizontal: false
          length: 0.2
          thickness: 0.03
          format: 2
          numberOfLabels: 5
          foreground: Foreground
          labelFontSize: 10
          titleFontSize: 12
          xLocation: 0.5
          yLocation: 0.12
        }
        pointLabels {
          show: false
          fontSize: 16
          foreground: Foreground
          field: -1
        }
        lineNumbers {
          show: true
          fontSize: 5
          foreground: Foreground
        }
        filename: "${cookbook}/QAQC/Q1_DB/Planned_survey..DIR"
        lutMode: IndependentLUT
        lutConfig {
          name: "pseudo_pits256"
          userDataBounds {
            minimum: 81010.0
            maximum: 89080.0
          }
          visualBounds {
            minimum: 81010.0
            maximum: 89080.0
          }
          type: HistoLut
        }
        signalName: "line"
        signalBand: 0
        signalQuery: SIGNAL_MAGNITUDE
        xcol: "x"
        ycol: "y"
        zcol: "planned_elevation"
      }
      datasetConfig {
        visible: true
        opacity: 100.0
        legendConfig {
          show: false
          customTitle: "Line"
          isHorizontal: false
          length: 0.2
          thickness: 0.03
          format: 2
          numberOfLabels: 5
          foreground: Foreground
          labelFontSize: 10
          titleFontSize: 12
          xLocation: 0.5
          yLocation: 0.12
        }
        pointLabels {
          show: false
          fontSize: 16
          foreground: Foreground
          field: -1
        }
        lineNumbers {
          show: true
          fontSize: 5
          foreground: Red
        }
        filename: "${cookbook}/QAQC/Q1_DB/Actual_survey..DIR"
        lutMode: IndependentLUT
        lutConfig {
          name: "pseudo_pits256"
          userDataBounds {
            minimum: 81010.0
            maximum: 89080.0
          }
          visualBounds {
            minimum: 81010.0
            maximum: 89080.0
          }
          type: HistoLut
        }
        signalName: "Line"
        signalBand: 0
        signalQuery: SIGNAL_MAGNITUDE
        xcol: "x"
        ycol: "y"
        zcol: "altitude"
      }
      gridAxes {
        show: true
        scale: 40.0
        autoscale: true
        hideOverlappingLabels: false
      }
      Scale: 5.0
      Offset: 0.0
    }
    title {
      show: true
      fontSize: 16
      foreground: Foreground
      xLocation: 0.5
      yLocation: 0.95
    }
  }
}

The .task file loads the survey plan and the actual survey results into 3D Explore, where you can pan, zoom and rotate the display to closely examine the deviations. In this image the colours indicate deviations from the survey plan.

Locations of files for this example:

  • The .task file: {install_path}/sample_data/cookbooks/QAQC/Q2_TASKS/PlannedvsActualsurvey.task
  • The survey and survey plan datasets
    {install_path}/sample_data/cookbooks/QAQC/Q1_DB

Results shown in 3D Explore

Acquired data below the planned altitude

qaqc-planned-and-actual-survey.png

Acquired data above the planned altitude

qaqc-planned-and-actual-survey-flip.png

Removing data points that are too close to each other

Parent topic: Survey quality assurance and control (C28)

In this example an airborne survey has some flight lines that are too close to each other for good quality gridding. The closeness causes ‘tares’ that detract from the final result. You can use the methods shown in this task to clean up parts of a survey that are ‘over-constrained’ and may show errors that are not significant.

We identify and compare close line segments and delete the segment of lowest quality. To assess quality we use a statistical function to estimate the level of noise in the line segments that are too close. Based on that we select the segment to keep and the one to delete.

We create a new data field with data type DT_R4, give it a default value and then calculate the 4th difference for every observation, using the function diff4(raw_mag). Non-zero values indicate noise—the further from zero the greater the noise level.
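The quality score can be sketched as follows. This is illustrative Python mirroring the 1.0 / stddev(diff4) idea; it assumes the 4th difference values on each segment are not all equal (a nonzero standard deviation).

```python
import statistics

def line_quality(diff4_values):
    """Quality score in the spirit of the approach above: 1 / stddev of
    the 4th difference, so a quieter (less noisy) segment scores higher.
    Assumes a nonzero standard deviation."""
    return 1.0 / statistics.pstdev(diff4_values)

def keep_better(diff4_a, diff4_b):
    """Of two overlapping segments, keep the one with the higher score."""
    return "a" if line_quality(diff4_a) >= line_quality(diff4_b) else "b"
```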


The task file below uses the Spreadsheet Editor tool (see Spreadsheet Editor (T15)) and then the Clip lines tool (see Clip lines (T20)) to carry out this process.

# example of an airborne survey where some of the flight lines get too close to each other
# when gridded, the output contains "tares" that detract
# use this tool to clean up parts of a survey that are "over constrained" and show errors.
# Extended function to guide which line is better quality
# changes X/Y alias
IntrepidTask {
  FileManager {
    Action: CopyTable
    Input: "${tutorial}/Intrepid_datasets/EBA_DBs/ebagoola_S..DIR"
    Output: "./ebagoola_S..DIR"
  }
}
IntrepidTask {
  dbedit {
    Action {
      Type: OpenField; # actually opens the database with all the fields
      Name: "./ebagoola_S..DIR"
    }
    # create a new data field, specifying a field name, data type and initial value
    Action {
      Type: CreateField
      Name: "diff4"
      Dtype: DT_R4
      GroupBy: false
      Initial: "ndiff4(raw_mag)" # calculate the fourth difference for every observation; non-zero indicates more noise
    }
    Action {
      Type: CreateField
      Name: "Quality"
      Dtype: DT_R4
      GroupBy: true
      Initial: "Quality = 1.0 / groupstddev(diff4)"
    }
  }
}
IntrepidTask {
  ClipLine {
    Xin: "ebagoola_S..DIR/x";
    Yin: "ebagoola_S..DIR/y";
    LineType: "ebagoola_S..DIR/linetype";
    Quality: "ebagoola_S..DIR/Quality"; # the pre-computed best estimate of the noise; a higher number means a better line
    Xout: "ebagoola_S..DIR/E_Clip";
    Yout: "ebagoola_S..DIR/N_Clip";
    Vout: "ebagoola_S..DIR/Vout"; # the quality field for this line
    MinimumSeparation: 200.0;
    MinimumSegmentLength: 50;
  }
}

The following screen images show the line data before and after the correction.

qaqc-eba-orig-xy.png
qaqc-eba-clip-xy.png

You can find this example task file here {install_path}/sample_data/cookbooks/QAQC/Q2_TASKS/clip_line_data_using_quality_ebagoola.task

Multi-task QA example

Parent topic: Survey quality assurance and control (C28)

In this example we show how you can automate a sequence of QC operations using a single .task file.

The example task file that we discuss here is available in the sample data that we supply with INTREPID:

{install_path}/sample_data/cookbooks/QAQC/Q2_TASKS
/P1292_raw_mag.task

This .task file demonstrates many of the routine tasks for QC of magnetic, radiometric and digital elevation model data.

Note that this .task file does not refer to any sample data supplied with INTREPID. We have included it as an example of a more complex QC operation. After reading this section and studying the .task file, you could adapt it to process your own survey.

This .task file is designed to run in INTREPID 5.6.2 and later.

Authors:

  • Philip Heath created the original sequence in 2016 for the PACE Gawler Craton magnetic and radiometrics digital elevation model survey QA.
  • Des Fitzgerald converted it to a .task file in 2017.

Overview

The .task file prepares the data as follows

  1. Imports raw data to an INTREPID dataset.
  2. Exports raw magnetic data to an ESRI shapefile.
  3. Performs a projection conversion.
  4. Calculates a survey distance report file.
  5. Creates all the fields needed for the QC process.
  6. Calculate statistics—short, long and line.
  7. Creates a .kml file.

The .task file

# This task file undertakes many of the routine tasks that need doing to undertake QAQC on mag, rad and dem data. # It imports aseg-gdf2 data, exports an esri shapefile, performs a projection conversion, calculates a survey # distance file and long and short stat files. It also creates a kml file, and grids. # It creates all the fields needed for the QA process # Constucted December 2016 by Philip Heath for the PACE Gawler Craton Mag Rad DEM Survey QA # converted to a task file, Des fitzgerald 2017 # There are 7 sections: Raw Elevation, Raw Magnetics, Raw Radiometrics, Final Elevation, Final Magnetics, Final Radiometrics, and Grids. # There is a quick explanation of each process along the way. # This file was tested in and is designed to run in Intrepid 5.6.2 and later # Please familiarise yourself with this file before using. You may need to make minor changes. # Things you might have to change: # - input file names (Processes 1, 12, 23, 34, 55, 66) # - the DDF files (Processes 1, 12, 23, 34, 55, 66) # NB, there is also a protobuf DDF syntax!! # - the line numbers to define the linetypes are correct (Process 5, Action 12; Pro16, Act11; P27, A11; P48, A11; P59, A11; P70, A11) # - the grid cell size and origin (Processes 77 to 87) # - if there’s a section/process you don’t need to use I recommend (using notepad++), use right-alt, shift and the mouse to select the first row of a portion # it out # This file is designed to run from the C:/UserData/Documents/Gawler_QC directory # To use this file: # Right click on this file in Intrepid and select ‘Run Job File’ # DEFINE PARAMETERS # Usage: fmanager -batch P1292_raw_mag.task # name: “automatic Raw magnetics QA / QC data” description: “It imports aseg-gdf2 data, exports an esri shapefile, performs a projection conversion, calculates a survey” “distance file and long and short stat files. 
It also creates a kml file, and grids.” “It creates all the fields needed for the QA process”; Repeat { survey_path: [“J:/proj/gap_proj/magnetic_radiometric_surveys/2017_isa_region/QC/28Aug2017”]; Intrepid_db: [“J:/proj/gap_proj/magnetic_radiometric_surveys/2017_isa_region/QC/28Aug2017/raw_mag/Intrepid/P1292_raw_mag..DIR”]; Project_number: [“P1292”]; DatasetID: [“28Aug2017”]; Input_DAT_file: [“RAWMAG.dat”]; DDF_file [“P1292_raw_mag_3.ddf”]# actually, there is a protobuf definition for ddf files as well, which is more complete! #################################################################### # PROCESS 54: IMPORT RAW MAG DATA IntrepidTask { Import { Input: “$survey_path/raw_mag/data/$Input_DAT_file”; Output: “$Intrepid_db”; Format: ASCIICOLUMNS; AsciiColumns { FixedLength: false; SkipRecords: 0; StopOnError: true; ReportDiagnostics: true; DDF: “$survey_path/../DDF_files/$(DDF_file)”; } } } #################################################################### # PROCESS 55: MAKE A DATASET ID FOLDER IN THE GIS DIRECTORY # IntrepidTask { FileManager { Input: “$Intrepid_db”; Action: Command;# not yet implemented Script: “mkdir”; Args: “$survey_path/../../gis/$DatasetID”; Pause: false; } } #################################################################### # PROCESS 56: EXPORT RAW MAG TO AN ESRI SHAPE FILE IntrepidTask { ExportDB { Input: “$Intrepid_db”; Output: “$survey_path/../../gis/$DatasetID/$Project_number_$DatasetID_raw_mag”; Format: ARCSHAPE; UseNulls: true; ArcShape { OnePerGroup: false; GroupBy: true; NonGroupBy: false; AsGeodetic: false; } } } #################################################################### # PROCESS 57: RAW MAG PROJECTION CONVERSION TO easting_gda94 AND northing_gda94 ### NOTE ### CHANGE ACCORDING TO PROJECT IntrepidTask { ProjConv { XIN: “$Intrepid_db/longitude_gda94”; YIN: “$Intrepid_db/latitude_gda94”; XOUT: “$Intrepid_db/X_long”; YOUT: “$Intrepid_db/Y_lat”; OutputProjection { map_projection: “MGA54”; spatial_datum: “GDA94”; } 
            UpdateSurveyInfo: true;
        }
    }

    ####################################################################
    # PROCESS 58: CREATE SURVEY DISTANCE FILE FOR RAW MAG
    IntrepidTask {
        SurveyDistance {
            DataBase: "$Intrepid_db";
            ReportFile: "$survey_path/raw_mag/statistics/$Project_number_$DatasetID_raw_mag_survey_dist.rpt";
            Accurate: true;
            Altitude: 0.0;
        }
    }

    ####################################################################
    # PROCESS 59: ADD THE QAQC FIELDS TO THE RAW MAG INTREPID DATASET
    IntrepidTask {
        dbedit {
            Action { Type: OpenField; Name: "$Intrepid_db"; }
            # PROCESS 59: ACTION 1: CREATE A FID_Diff FIELD
            Action { Type: CreateField; Name: "FID_Diff"; Dtype: DT_R4; GroupBy: false; }
            # PROCESS 59: ACTION 2: CREATE A ground_speed FIELD
            Action { Type: CreateField; Name: "ground_speed"; Dtype: DT_R4; GroupBy: false; }
            # PROCESS 59: ACTION 3: CREATE A linetype FIELD
            Action { Type: CreateField; Name: "linetype"; Dtype: DT_I2; GroupBy: true; Initial: "2"; }
            # PROCESS 59: ACTION 4: CREATE A line_repeat FIELD
            Action { Type: CreateField; Name: "line_repeat"; Dtype: DT_I2; GroupBy: true; Initial: "0"; }
            # PROCESS 59: ACTION 5: CREATE A Diff_X FIELD
            Action { Type: CreateField; Name: "Diff_X"; Dtype: DT_R4; GroupBy: false; }
            # PROCESS 59: ACTION 6: CREATE A Diff_Y FIELD
            Action { Type: CreateField; Name: "Diff_Y"; Dtype: DT_R4; GroupBy: false; }
            # PROCESS 59: ACTION 7: CREATE AN east_diff FIELD
            Action { Type: CreateField; Name: "east_diff"; Dtype: DT_R4; GroupBy: false; }
            # PROCESS 59: ACTION 8: CREATE A north_diff FIELD
            Action { Type: CreateField; Name: "north_diff"; Dtype: DT_R4; GroupBy: false; }
            # PROCESS 59: ACTION 9: POPULATE THE FID_Diff FIELD
            Action { Type: Replace; ThenAction: "FID_Diff = diff(FID)"; }
            ### NOTE ### CHANGE ACCORDING TO FID SAMPLE INTERVAL
            # PROCESS 59: ACTION 10: POPULATE THE ground_speed FIELD
            Action { Type: Replace; ThenAction: "ground_speed = sqrt(sqr(diff(easting_gda94)) + sqr(diff(northing_gda94)))/((FID_Diff)/1)"; }
            ### NOTE ### CHANGE ACCORDING TO LINE NUMBERING SYSTEM
            # PROCESS 59: ACTION 11: POPULATE THE linetype FIELD
            Action { Type: Replace; IfCondition: "LINE > 70000"; ThenAction: "linetype = 4"; }  # tie line
            # PROCESS 59: ACTION 12: POPULATE THE line_repeat FIELD
            Action { Type: Replace; IfCondition: "(LINE/10) - Int((LINE/10)) != 0"; ThenAction: "line_repeat = 1"; }
            # PROCESS 59: ACTION 13: POPULATE THE Diff_X FIELD
            Action { Type: Replace; ThenAction: "Diff_X = easting_gda94 - X_long"; }
            # PROCESS 59: ACTION 14: POPULATE THE Diff_Y FIELD
            Action { Type: Replace; ThenAction: "Diff_Y = northing_gda94 - Y_lat"; }
            # PROCESS 59: ACTION 15: POPULATE THE east_diff FIELD
            Action { Type: Replace; IfCondition: "linetype == 2"; ThenAction: "east_diff = abs(diff(easting_gda94))"; ElseAction: "east_diff = Null"; }  # remove coords for a tie line
            # PROCESS 59: ACTION 16: POPULATE THE north_diff FIELD
            Action { Type: Replace; IfCondition: "linetype == 4"; ThenAction: "north_diff = abs(diff(northing_gda94))"; ElseAction: "north_diff = Null"; }  # remove coords for a traverse line
        }
    }

    ####################################################################
    # PROCESS 60: ALIAS THE LINETYPE IN THE RAW MAG INTREPID DATASET
    IntrepidTask {
        FileManager {
            Input: "$Intrepid_db";
            Action: EditSurveyInfo;
            Alias_Code: FA_LINE_TYPE;
            Alias_String: "linetype";
        }
    }

    ####################################################################
    # PROCESS 63: STATISTICS
    # PROCESS 63: ACTION 1: CALCULATE SHORT STATISTICS FOR THE RAW MAG DATA
    IntrepidTask {
        FileManager {
            #Action: Command;  # not yet implemented
            #Script: "fmanager -short $Intrepid_db";
            #Pause: false;
            Action: Statistics;
            Input: "$Intrepid_db";
        }
    }
    # PROCESS 63: ACTION 2: RENAME AND MOVE THE SHORT STATS FILE
    IntrepidTask {
        FileManager {
            Action: Rename;
            Input: "$survey_path/raw_mag/Intrepid/jobs/fmanager.rpt";
            Output: "$survey_path/raw_mag/statistics/$Project_number_raw_mag_short.rpt";
        }
    }
    # PROCESS 63: ACTION 3: CALCULATE LONG STATISTICS FOR THE RAW MAG DATA
    IntrepidTask {
        FileManager {
            #Action: Command;  # not yet implemented
            #Script: "fmanager -long $Intrepid_db";
            #Args: " ";
            #Pause: false;
            Action: Statistics;
            Input: "$Intrepid_db";
        }
    }
    # PROCESS 63: ACTION 4: RENAME AND MOVE THE LONG STATS FILE
    IntrepidTask {
        FileManager {
            Action: Rename;
            Input: "$survey_path/raw_mag/Intrepid/jobs/fmanager.rpt";
            Output: "$survey_path/raw_mag/statistics/$Project_number_raw_mag_long.rpt";
        }
    }
    ## PROCESS 63: ACTION 5: CALCULATE LINE STATISTICS FOR THE RAW MAG DATA
    #IntrepidTask {
    #    FileManager {
    #        Action: Command;  # not yet implemented
    #        Script: "fmanager.exe -lstats $Intrepid_db";
    #        Pause: false;
    #        Action: LineStatistics;
    #        Input: "$Intrepid_db";
    #    }
    #}
    # PROCESS 63: ACTION 6: RENAME AND MOVE THE LSTATS STATS FILE
    IntrepidTask {
        FileManager {
            Action: Rename;
            Input: "$survey_path/raw_mag/Intrepid/jobs/fmanager.rpt";
            Output: "$survey_path/raw_mag/statistics/$Project_number_raw_mag_lstats.rpt";
        }
    }

    ####################################################################
    # PROCESS 65: CREATE A KML FILE FOR THE RAW MAG DATA
    IntrepidTask {
        ExportDB {
            Input: "$Intrepid_db";
            Output: "$survey_path/../../gis/$DatasetID/$Project_number_$DatasetID_raw_mag.kml";
            Format: KML;
            UseNulls: true;  # do we really know the standard KML null value?
        }
    }
}
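The derived fields in Process 59 encode simple per-sample checks: FID_Diff exposes fiducial gaps or duplicates, ground_speed converts position differences into distance per fiducial (scaled by the fiducial sample interval), and linetype and line_repeat are decoded from the survey's line numbering convention (lines above 70000 are ties; a last digit other than 0 flags a repeat line). The following Python sketch is illustrative only, not INTREPID syntax; the helper names and the sample fiducials, line numbers and coordinates are invented for demonstration.

```python
import math

def diff(values):
    """First difference, with None for the first sample (no predecessor)."""
    return [None] + [b - a for a, b in zip(values, values[1:])]

def qc_fields(fid, line, easting, northing, fid_interval=1.0):
    """Mirror the Process 59 field calculations on plain Python lists."""
    fid_diff = diff(fid)
    de, dn = diff(easting), diff(northing)
    out = []
    for i in range(len(fid)):
        if fid_diff[i] is None:
            speed = None
        else:
            # ACTION 10: distance per fiducial, scaled by the fiducial
            # sample interval (the task file assumes an interval of 1)
            speed = math.hypot(de[i], dn[i]) / (fid_diff[i] / fid_interval)
        linetype = 4 if line[i] > 70000 else 2        # ACTION 11: tie vs traverse
        line_repeat = 1 if line[i] % 10 != 0 else 0   # ACTION 12: repeat lines
        out.append((speed, linetype, line_repeat))
    return out

# Two samples on traverse line 10010, then one on tie line 70011 (a repeat)
rows = qc_fields(fid=[100, 101, 102],
                 line=[10010, 10010, 70011],
                 easting=[500000.0, 500070.0, 500070.0],
                 northing=[6800000.0, 6800000.0, 6800030.0])
```

Diff_X and Diff_Y (Actions 13 and 14) serve a similar cross-checking purpose: the supplied geodetic coordinates are reprojected independently (X_long, Y_lat in Process 57) and differenced against the contractor-supplied easting_gda94/northing_gda94, so any projection inconsistency shows up as a non-zero difference.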