POST
You can index the model with an XML path like:

/model[@name='Kriging']/model[@name='Variogram']/model[@name='VariogramModel'][0]/value[@name='Range']

Putting the index number in brackets after the model name tells the tool which one to go to. If you don't supply an index number, it will use the first one it finds. An index of "0" refers to the first element, "1" to the second, "2" to the third, and so on.
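If you're scripting this, here's a minimal arcpy sketch (the file name is hypothetical; Get Model Parameter is exposed as GAGetModelParameter in the Geostatistical Analyst toolbox):

import arcpy

arcpy.CheckOutExtension("GeoStats")

# Hypothetical model source: any geostatistical layer or XML model file.
ga_model = "C:/data/kriging_model.lyr"

# Query the range of the first (index 0) variogram model.
xpath = ("/model[@name='Kriging']/model[@name='Variogram']"
         "/model[@name='VariogramModel'][0]/value[@name='Range']")

result = arcpy.GAGetModelParameter_ga(ga_model, xpath)
print(result.getOutput(0))  # the Range value, returned as a string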
Posted 02-01-2013 10:54 AM

POST
I'm sorry, I messed up. If you only change the auto flags for the major and minor ranges, the tool will calculate them assuming the other parameters are correct. Since those other parameters were calculated for the very first dataset, they will not match new datasets, and the calculation of the major/minor range will be corrupted. You'll need to tell the model to recalculate everything before querying the ranges.

I've attached a .zip file containing an XML file with all the flags changed so that the anisotropy parameters are properly recalculated. Use this file as the model source in Create Geostatistical Layer, and use the same XML path codes as above to query the major/minor ranges with Get Model Parameter. Also, if you want to query the angle of the major range, use this XML path code in Get Model Parameter:

/model[@name='Kriging']/model[@name='Variogram']/model[@name='VariogramModel']/value[@name='Direction']

Sorry for the mix-up, but this workflow should work now.
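To sketch that in Python (file and field names are hypothetical, and I'm assuming the arcpy GeostatisticalDatasets helper class to pair the model with a new dataset):

import arcpy

arcpy.CheckOutExtension("GeoStats")

# Hypothetical path to the attached XML model (all auto flags set).
model_xml = "C:/data/kriging_all_auto.xml"

# Hypothetical new dataset to refit the model against.
ga_data = arcpy.GeostatisticalDatasets(model_xml)
ga_data.dataset1 = "C:/data/new_points.shp"
ga_data.dataset1Field = "VALUE"

# Recalculate every model parameter for the new dataset.
arcpy.GACreateGeostatisticalLayer_ga(model_xml, ga_data, "recalc_layer")

# Query the angle of the major range from the recalculated layer.
direction = arcpy.GAGetModelParameter_ga(
    "recalc_layer",
    "/model[@name='Kriging']/model[@name='Variogram']"
    "/model[@name='VariogramModel']/value[@name='Direction']")
print(direction.getOutput(0))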
Posted 02-01-2013 09:01 AM

POST
You need to do something similar to the steps in this topic. After you create a geostatistical layer using kriging with anisotropy turned on, call the Set Model Parameter gp tool with your geostatistical layer as the model source, and set the "auto" flag for the major and minor ranges to "true":

Parameter XML path:
/model[@name='Kriging']/model[@name='Variogram']/model[@name='VariogramModel']/value[@name='Range']/@auto
/model[@name='Kriging']/model[@name='Variogram']/model[@name='VariogramModel']/value[@name='MinorRange']/@auto

Parameter value: true; true

Save the output XML file in a convenient location. Next, use the Create Geostatistical Layer gp tool: give it the XML file you just saved as the model source, along with a new dataset. Run the tool and you'll get a new geostatistical layer where the major and minor ranges have been recalculated.

You can then query the major and minor ranges by running the Get Model Parameter gp tool twice. For the model source, give it the geostatistical layer you created with Create Geostatistical Layer, and use the following XML path codes:

/model[@name='Kriging']/model[@name='Variogram']/model[@name='VariogramModel']/value[@name='Range']
/model[@name='Kriging']/model[@name='Variogram']/model[@name='VariogramModel']/value[@name='MinorRange']

The first path returns the major range; the second returns the minor range. You only need to run Set Model Parameter once: you can iterate through your datasets and keep re-using that XML file in Create Geostatistical Layer. For each new geostatistical layer, use Get Model Parameter to query the major and minor ranges. A sketch of this loop in Python follows below. I hope that was clear. Let me know if you run into problems.
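Here's a rough Python sketch of the whole loop (all file and field names are hypothetical; in scripting, Set Model Parameter takes one path/value pair per call, so I run it twice and chain the outputs):

import arcpy

arcpy.CheckOutExtension("GeoStats")

base = ("/model[@name='Kriging']/model[@name='Variogram']"
        "/model[@name='VariogramModel']")

# Step 1 (run once): flag the major and minor ranges for recalculation.
arcpy.GASetModelParameter_ga("C:/data/kriging_model.xml",
                             base + "/value[@name='Range']/@auto",
                             "true", "C:/data/auto_major.xml")
arcpy.GASetModelParameter_ga("C:/data/auto_major.xml",
                             base + "/value[@name='MinorRange']/@auto",
                             "true", "C:/data/auto_both.xml")

# Step 2 (repeat per dataset): refit the model, then query both ranges.
for i, shp in enumerate(["C:/data/points_a.shp", "C:/data/points_b.shp"]):
    ga_data = arcpy.GeostatisticalDatasets("C:/data/auto_both.xml")
    ga_data.dataset1 = shp
    ga_data.dataset1Field = "VALUE"
    layer = "ga_layer_{0}".format(i)
    arcpy.GACreateGeostatisticalLayer_ga("C:/data/auto_both.xml", ga_data, layer)

    major = arcpy.GAGetModelParameter_ga(layer, base + "/value[@name='Range']")
    minor = arcpy.GAGetModelParameter_ga(layer, base + "/value[@name='MinorRange']")
    print(shp + ": major=" + major.getOutput(0) + ", minor=" + minor.getOutput(0))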
Posted 01-30-2013 12:38 PM

POST
With no nugget, kriging is an exact interpolator. If you use GA Layer To Points and predict back to the input point locations, you should get perfect predictions. Specifically, the input point will get a weight of 1 and all other neighbors will get a weight of 0. In cross-validation, you throw out the input point before predicting back to that location, so the model can't simply assign it a weight of 1 (it has been removed from the dataset). That is why an exact interpolator will still have cross-validation errors.
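For example (layer and file names are hypothetical), you can verify this with the GA Layer To Points gp tool:

import arcpy

arcpy.CheckOutExtension("GeoStats")

# Predict back to the original input locations; with no nugget, the
# predictions should reproduce the measured values exactly.
arcpy.GALayerToPoints_ga("Kriging Layer", "C:/data/samples.shp",
                         "VALUE", "C:/data/check_predictions.shp")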
Posted 01-28-2013 06:26 AM

POST
Yes, you would want to compare the RMSE between EBK and IDW. A large root-mean-square standardized error usually indicates that the model is unstable. The most common reason is that the Gaussian semivariogram becomes very unstable when the nugget is very small compared to the sill. Note that Stable with parameter = 2 and K-Bessel with parameter = 10 both correspond to the Gaussian semivariogram (it is a special case of both).

EDIT: Oh, I understand what you were asking. It doesn't make much sense to compare the RMS and average standard error across different models, but it is useful to compare them within the same model: if the difference between them is large, it indicates that the model may have problems.
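For reference, all of these statistics are available from the Cross Validation gp tool; a minimal sketch with a hypothetical layer name:

import arcpy

arcpy.CheckOutExtension("GeoStats")

cv = arcpy.CrossValidation_ga("EBK Layer")  # hypothetical layer name

# Within a single model, a large gap between the next two numbers
# suggests the model has problems.
print("Root mean square error: " + str(cv.rootMeanSquare))
print("Average standard error: " + str(cv.averageStandard))
print("RMS standardized: " + str(cv.rootMeanSquareStandardized))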
Posted 01-10-2013 12:58 PM

POST
I think the tool you're looking for is Central Feature. It finds the feature with the smallest cumulative distance to all the other features.
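A one-line example (paths are hypothetical); the tool lives in the Spatial Statistics toolbox:

import arcpy

# Find the feature with the smallest total distance to all other features.
arcpy.CentralFeature_stats("C:/data/sites.shp", "C:/data/central_site.shp",
                           "EUCLIDEAN_DISTANCE")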
Posted 01-10-2013 07:04 AM

POST
What happens when you try to export the geostatistical layer to a raster?
Posted 01-10-2013 06:30 AM

POST
Yes, you can change the extent of a geostatistical layer. Right-click the layer in ArcMap's Table of Contents and choose "Properties." Go to the Extent tab and specify the new extent.
Posted 01-09-2013 10:28 AM

POST
Glad I could help. Feel free to ask more questions if anything else comes up.
Posted 01-07-2013 12:26 PM

POST
I've looked into why the Voronoi map is ignoring the top-right and lower-left cells. The problem is that when you have gridded points, the Delaunay triangulation is not unique. Since we define polygon neighbors at the triangulation step (the first step of creating the Voronoi polygons), our algorithm drops the top-right and lower-left neighbors; a different but equally valid implementation would drop the upper-left and lower-right polygons instead. We could fix this by defining neighbors after the polygons are created, but that would slow down the tool. We'll have to think about whether the performance hit is worth it, especially considering that Focal Statistics is specifically built to deal with gridded data.
Posted 01-03-2013 09:02 AM

POST
I'll look into why the top-right and lower-left cells are being ignored, but after asking around, I think the tool you want to use is Focal Statistics. It gives lots of options for defining cell neighbors, and you can calculate the standard deviation of these neighbors. And I'm not surprised at the fairly weak R^2 between standard deviation and entropy. Because entropy works with classified values (rather than raw values), the entropy map tends to be smoother. Entropy also has a maximum, but standard deviation has no maximum. So, two cells can have the same entropy value but still have very different standard deviations. If you look at your scatterplot, you can even see this; there are clear vertical columns that all have the same entropy, but the standard deviations vary a lot. This variance is what is pulling down the R^2.
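A quick sketch of Focal Statistics (raster names are hypothetical) using a 3 x 3 cell window and the standard deviation statistic:

import arcpy
from arcpy.sa import FocalStatistics, NbrRectangle

arcpy.CheckOutExtension("Spatial")

# Standard deviation of each cell's 3 x 3 neighborhood.
out = FocalStatistics("C:/data/surface.tif",
                      NbrRectangle(3, 3, "CELL"), "STD")
out.save("C:/data/local_std.tif")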
Posted 01-02-2013 06:35 AM

POST
The steps you've outlined are correct. Unfortunately, there's no way to make a continuous OLS surface. You'll need to make separate rasters for each resolution you want to test, but you only need to krige the residuals once: you can export the interpolated residual surface to any cell size and extent that you want.

I now understand what you're trying to do with the correlation coefficient. However, I don't think a correlation coefficient will work here: if you want to correlate a single point to the mean of its neighbors, you'll only be able to calculate a single coefficient for the entire surface (since you need repeated samples), so it won't help you decide which particular locations should be given preference.

The first thing that comes to mind is the Voronoi Map tool. It's an interactive graphical tool, and if you use Standard Deviation, Entropy, or Interquartile Range, you'll get an estimate of the local variability. Small local variability indicates that the predictions are more constant in that area, so such areas may be good candidates for new sites because they can be better represented by a single value. Note that you'll need to convert your rasters to points to run the tool; the conversion step is sketched below.
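The conversion is a single Raster To Point call (file names are hypothetical):

import arcpy

# The Voronoi Map tool needs point input, so convert the raster first.
arcpy.RasterToPoint_conversion("C:/data/residual_surface.tif",
                               "C:/data/residual_points.shp", "VALUE")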
Posted 12-28-2012 08:01 AM

POST
Also, we strongly suggest using a projected coordinate system rather than a Lat-Long GCS. Distance calculations get badly distorted under Lat-Long, and that distortion propagates through all the kriging calculations.
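Re-projecting first is one gp tool call (datasets are hypothetical; substitute a projected coordinate system appropriate for your study area):

import arcpy

# Project Lat-Long points to a projected CRS (here UTM zone 17N) before kriging.
sr = arcpy.SpatialReference("NAD 1983 UTM Zone 17N")
arcpy.Project_management("C:/data/points_gcs.shp",
                         "C:/data/points_utm.shp", sr)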
Posted 12-26-2012 10:33 AM

POST
Also, if you're going to do kriging on the residuals of your MLR model, recalculate the model without using Lat as a covariate. Otherwise you'll be "double-counting" (for lack of a better phrase) the spatial location.
Posted 12-26-2012 10:25 AM

POST
If you already have a good MLR model, I wouldn't try to use the covariates as cokriging variables. If you want to try it anyway: in the Geostatistical Wizard, when you choose kriging on the first page, you can enter up to four datasets. The first one you enter is the variable you will interpolate, and the three additional datasets will be used as cokriging variables.

"Kriging" is often called "residual kriging," and there's a reason for this: you always perform kriging on the residuals of some model. That model can be almost anything, but "regression kriging," "kriging with external drift," "universal kriging," and "linear mixed model kriging" all generally refer to the simultaneous estimation of the covariate coefficients and the kriging parameters. However, you may find success with a sequential estimation: first calculate the coefficients with your MLR model, then calculate the residuals and perform Simple kriging on them (use Simple kriging rather than Ordinary kriging because you know that the mean of the residuals is 0), and finally add the interpolated residuals back into the MLR predictions. You'll lose some power because the parameters are estimated sequentially rather than simultaneously, but you should still get defensible results. A sketch of the final combination step is below.

As for comparing the value at one point to the average of the neighboring points, the Semivariogram/Covariance Cloud is probably the best way to visualize this, but the result is a graph rather than a single correlation coefficient. If you really need to calculate the correlation coefficient (and you're OK with ignoring spatial correlation in the analysis), we have a tool called Neighborhood Selection that selects the neighbors of an input (x,y) location (use the same neighborhood parameters that you used in kriging). It will probably take a lot of work, but I'm sure you can write a Python script that does what you're trying to do. I've never personally done this, so I don't want to try to outline an algorithm, but all the tools are there to accomplish this task.
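The final combination step of the sequential approach is simple raster algebra (raster names are hypothetical; the trend surface comes from your MLR coefficients, and the residual surface from Simple kriging):

import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")

trend = Raster("C:/data/mlr_trend.tif")          # MLR predictions
resid = Raster("C:/data/kriged_residuals.tif")   # Simple-kriged residuals

# Final prediction = trend + interpolated residuals.
final = trend + resid
final.save("C:/data/regression_kriging.tif")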
Posted 12-26-2012 10:22 AM