
Need Help: Automating Kriging Using ModelBuilder or Python

09-12-2013 05:58 AM
AnushuyaRamakrishnan
New Contributor
I am using ArcMap 10.2.

My project deals with predicting PM and ozone concentrations for 2004-2006 (1,098 days) using known concentrations at 99 grid points throughout the county.

I am trying to predict the unknown concentrations at mothers' residences using ordinary kriging.

This is what I am doing:

1. Use Geostatistical Wizard
2. Select Kriging/Cokriging
3. Input the Data layers and specifications
4. Select Finish
5. Read the model parameters and click OK
6. Generate Kriged predictions
7. Right-click the kriged layer and convert it to a raster
8. Use the Spatial Analyst Extraction toolset
9. Select "Extract Values to Points"
10. Generate extracted values of the kriged predictions
11. Select the Conversion toolbox
12. Choose Raster to ASCII
13. Generate the ASCII table.
14. Alternatively, use the Sample tool and input multiple rasters to generate an output table.

My question:

1. I need some guidance on automating this kriging process using ModelBuilder or Python.

2. Is there any way to directly copy and paste the raster values into Excel?

Can anyone provide me a lead on this?
53 Replies
AnushuyaRamakrishnan
New Contributor
Hi Eric,

I have a few questions:

What is the source file called kriging.layer mentioned in the example? Does it refer to the layer file containing all the days of observation, or does it refer to one day of the Z field?

How do we get the Z field (containing PM and ozone data for different days of observation) into the model source = ..... script?

Can we generate the XML files you mentioned for a bunch of days and loop them through the kriging process, or are we required to change the input and generate the XML files for range, sill, and nugget one day at a time?

Just wanted a clarification on this.

Thank you

Anushuya
EricKrause
Esri Regular Contributor
The Create Geostatistical Layer tool takes a model source as input.  This model source can be a geostatistical layer (either a layer in ArcMap or saved as a .lyr file on disk) or an XML model source.  For your purposes, the XML will be easier to use.  The tool reads all the interpolation parameters from the model source (type of kriging, nugget, range, sill, transformations, etc) and applies them to a new dataset.  This is useful for something like temperature data taken daily.  You can build the model for one day and easily apply that model to each subsequent day.
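For example, applying a saved model source to a new day's data looks roughly like this (a minimal sketch; the paths, field name, and the dataset-plus-field string form of the second argument are placeholders to adapt to your data):

import arcpy

# reuse the interpolation parameters stored in the model source on a new dataset
arcpy.GACreateGeostatisticalLayer_ga("C:/temp/kriging.lyr",
                                     "C:/temp/temperature_day2.shp TEMP",
                                     "outGALayer")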

However, if you're using different kinds of data (temperature, elevation, pollution, etc), you don't want to keep using the same model over and over because the interpolation parameters will not fit different types of data.  So, you need to tell the tool to recalculate all these parameters (explained below) for each new dataset. 

You only need to manually create one XML file source.  You can do this with the Geostatistical Wizard.  Open the Wizard, choose Ordinary kriging, and give it a dataset.  When you click Finish, the Method Report screen will pop up that shows all the parameters that you used.  Click "Save..." and save the XML file in a convenient location.  You then need to open the XML file with a text editor (Cooktop is a useful and free XML editor, but you can do it with Notepad too).  Inside the XML, you'll see all of the parameters, and most of them will have auto = "false" after them.  This auto flag tells the tool whether to recalculate that parameter or keep it fixed when it is used as a model source.  For every parameter that you want to be recalculated, you need to change the flag to auto = "true" and save the XML.  You only need to do this once, and you'll keep reusing this same model source.
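If you'd rather not edit the flags by hand, you can flip them programmatically. Here is a minimal Python sketch, assuming the flags are stored as auto attributes on the parameter elements (which is what you'll see in the saved file):

import xml.etree.ElementTree as ET

# flip every auto="false" flag to auto="true" so the tool recalculates those parameters
tree = ET.parse("C:/temp/myXML.xml")
for node in tree.getroot().iter():
    if node.get("auto") == "false":
        node.set("auto", "true")
tree.write("C:/temp/myXML.xml")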

You can then set up a loop in Python to iterate through your datasets.  For each dataset, you'll use the XML as the model source in Create Geostatistical Layer.  This will generate a geostatistical layer for each dataset, where the interpolation parameters have been recalculated, and you can use these layers to create the ASCII files that you described in your first post.
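The loop could look something like this (an untested sketch; the shapefile path and the daily field names are placeholders, and the dataset/field syntax of the second argument may vary by version):

import arcpy

xml_model = "C:/temp/myXML.xml"                     # model source with auto flags set to "true"
day_fields = ["Y04Mar01", "Y04Mar02", "Y04Mar03"]   # one Z field per day (placeholders)

for field in day_fields:
    ga_layer = "ga_" + field
    # recalculate the kriging parameters for this day's data
    arcpy.GACreateGeostatisticalLayer_ga(xml_model, "C:/temp/merge1.shp " + field, ga_layer)
    # export the geostatistical layer to a raster, then to ASCII
    out_raster = "C:/temp/krig_" + field + ".tif"
    arcpy.GALayerToGrid_ga(ga_layer, out_raster)
    arcpy.RasterToASCII_conversion(out_raster, "C:/temp/krig_" + field + ".asc")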

The difficulty in automating Ordinary/Simple kriging is one of the many reasons we made Empirical Bayesian Kriging as a geoprocessing tool.
AnushuyaRamakrishnan
New Contributor
Hi Eric,

Thank you for the detailed clarification.

I absolutely agree with you. EBK is a sophisticated model built into the Geostatistical Wizard and is very easy to automate.

Hopefully my advisors will be in favor of EBK; we are yet to reach a decision on EBK vs. ordinary kriging.

Thanks Eric

Anushuya
EricKrause
Esri Regular Contributor
You might be able to convince them with cross-validation and validation.  EBK typically gives better results, and this is how you can demonstrate it.  Take a couple of your datasets, interpolate them with both ordinary kriging and EBK, and compare the results.
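A rough sketch of that comparison (untested; the shapefile path and field name are placeholders):

import arcpy

# EBK layer for one day's data
arcpy.EmpiricalBayesianKriging_ga("C:/temp/merge1.shp", "Y04Mar02", "ebk_layer")

# ordinary kriging layer built from the saved XML model source
arcpy.GACreateGeostatisticalLayer_ga("C:/temp/myXML.xml",
                                     "C:/temp/merge1.shp Y04Mar02", "ok_layer")

# cross-validate both; a lower root-mean-square error is better
for lyr in ("ebk_layer", "ok_layer"):
    cv = arcpy.CrossValidation_ga(lyr)
    print(lyr + ": RMS = " + str(cv.rootMeanSquare))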
AnushuyaRamakrishnan
New Contributor
Hi Eric,

Thank you for the suggestion.  I will certainly try that.

EBK is relatively new and has not been reported in the scientific literature for predicting pollutant concentrations in adverse birth outcomes studies like the ones we are involved in.

When we send our manuscript for review, there will be questions about the validity of the model and its applicability in related studies.

As ordinary kriging is an established model reported in many manuscripts, using it in the study will eliminate questions about its validity.

Thanks

Anushuya
AnushuyaRamakrishnan
New Contributor
Hi Eric,

I am still not clear about the flow of the Python script for ordinary kriging.



# Process: Kriging
arcpy.gp.Kriging_sa(merge1_csv_Events_lyr, "Y04Mar02", Kriging_PM04Mar2,
                    "Spherical 0.003512 0.339420 4.530740 0.517570",
                    "3.5120478474024E-03", "VARIABLE 12",
                    Output_variance_of_prediction_raster)

(merge1_csv_Events_lyr contains PM data for all the days of the study; Y04Mar02 is the Z field for one day.)

I attach the XML file that I generated for one day of observation as ZIP file.

Do we build a model in ModelBuilder and give that model's name as input for xmlPath?

What is happening in these steps:

xmlPath = "/model[@name = 'Kriging']/model[@name = 'Variogram']/value[@name = 'Range']"
newValue = 0
outModel = "C:/gapyexamples/output/outModel.xml"
model source="C:\\Users\\aramakrishnan\\Desktop\\Grid combo1\\merge1.csv Events"
xmlPath= "/model[@name = 'Kriging']/model[@name = 'Variogram']/model[@name = 'VariogramModel']/value[@name = 'Sill']"
newValue = 1
outModel = "C:/gapyexamples/output/outModel.xml"
model source="C:\\Users\\aramakrishnan\\Desktop\\Grid combo1\\merge1.csv Events"
xmlPath= "/model[@name = 'Kriging']/model[@name = 'Variogram']/value[@name = 'Nugget']/@auto;\
newValue = 2
outModel = "C:/gapyexamples/output/outModel.xml"

For this step, what is the inLayer? Do we carry out this step after calling in the range, sill, and nugget values?

# Execute CreateGeostatisticalLayer
arcpy.GACreateGeostatisticalLayer_ga(inLayer, inData, outLayer)

Please help me to understand this better.

Thank you

Anushuya
EricKrause
Esri Regular Contributor
I see that you're using the Kriging tool from Spatial Analyst.  This tool is not the same as the kriging methods implemented in the Geostatistical Wizard.  If you want to use that tool, it's simple to automate, but it doesn't have advanced options like detrending, transformations, anisotropy, etc.  The workflow I'm describing is about how to automate kriging from the Geostatistical Wizard.

I'll talk more about the workflow, but I need to know what version of ArcGIS you have.
AnushuyaRamakrishnan
New Contributor
I am using ArcMap 10.2.

Thanks

Anushuya
EricKrause
Esri Regular Contributor
Ok, that makes things easier.  We created the GeostatisticalDatasets arcpy class for 10.2, and it will be useful for you. 

Here is the general workflow (notice that you don't need to use any XPATH commands):
1.  Manually create an XML model source and change all the auto flags to "true".  (You already did this.)
2.  Here is a quick shell of the Python code (replace the file paths and names with your data):

import arcpy

# model source created in step 1, with the auto flags set to "true"
newDataset = arcpy.GeostatisticalDatasets("C:\\temp\\myXML.xml")
newDataset.dataset1 = "C:\\temp\\myFeatureClass1.shp"   # feature class to interpolate
newDataset.dataset1Field = "myField1"                   # Z field to interpolate
arcpy.GACreateGeostatisticalLayer_ga("C:\\temp\\myXML.xml", newDataset, "outGALayer1")


In the above code, "myXML.xml" is the XML file that you created in step 1, and "newDataset" is the GeostatisticalDatasets object that controls the datasets and fields that you want to interpolate.  Setting newDataset.dataset1 points the object at one of your feature classes, and newDataset.dataset1Field sets the field.  GACreateGeostatisticalLayer then creates the new geostatistical layer, using myXML as the model source and the newDataset object as the data, and outputs a geostatistical layer called "outGALayer1".  Again, you only need to make the XML file once, and you don't need any XPATH code.

3.  You need to iterate this so that it cycles through the different datasets and fields.  This will create a geostatistical layer for each dataset, with the interpolation parameters recalculated for each new dataset.
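For example (an untested sketch; the field names are placeholders for your daily Z fields):

import arcpy

xml = "C:/temp/myXML.xml"
newDataset = arcpy.GeostatisticalDatasets(xml)
newDataset.dataset1 = "C:/temp/myFeatureClass1.shp"

for field in ["Y04Mar01", "Y04Mar02", "Y04Mar03"]:
    newDataset.dataset1Field = field
    arcpy.GACreateGeostatisticalLayer_ga(xml, newDataset, "outGALayer_" + field)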
XIANWANG
New Contributor

Hi, I am reading this information, which is very useful for me. But I am using version 10. There is no arcpy.GeostatisticalDatasets in version 10, right? What should I do for this general workflow:

newDataset = arcpy.GeostatisticalDatasets("C:\\temp\\myXML.xml")
newDataset.dataset1 = "C:\\temp\\myFeatureClass1.shp"
newDataset.dataset1Field = "myField1"
arcpy.GACreateGeostatisticalLayer_ga("C:\\temp\\myXML.xml", newDataset, "outGALayer1")

Thanks very much!
