POST — 04-27-2020 07:55 AM
Subset Features is used to split the data into "training" and "test" subsets. You build the interpolation model as normal on the training subset, using whichever interpolation method and parameters you decide. You then run the GA Layer To Points tool to predict and validate at the test subset: specify the field with the measured values in the test subset and run the tool. The output will be a feature class with all of the usual validation statistics for each individual feature. The Predicted and Error fields will always appear, but some models will also create Standard Error, Standardized Error, and Normal Value fields. You can then create scatter plot charts using these fields. While they are not created automatically like they are for cross validation, they can all be built as simple scatter plots:
- Predicted: the field of measured values and the Predicted field.
- Error: the field of measured values and the Error field.
- Standardized Error: the field of measured values and the Standardized Error field.
- Normal QQ Plot: the Normal Value and Standardized Error fields.

The reason these do not appear in a pop-up window like the cross validation results is that the pop-up is a property of geostatistical layers; feature classes cannot display it.
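A minimal arcpy sketch of that workflow is below. The dataset, field, and output names are placeholders, IDW stands in for whichever interpolation method you actually choose, and the tool signatures follow the Geostatistical Analyst documentation, so verify the parameter order for your version.

```python
import arcpy

arcpy.env.workspace = r"C:\data\demo.gdb"  # placeholder workspace

# 1. Split the data into training and test subsets (80/20 split here).
arcpy.ga.SubsetFeatures("samples", "samples_train", "samples_test",
                        80, "PERCENTAGE_OF_INPUT")

# 2. Build the interpolation model on the training subset (IDW as an example).
arcpy.ga.IDW("samples_train", "VALUE", "idw_layer")

# 3. Predict to the test subset; "VALUE" is the field of measured values there,
#    so the output feature class gets Predicted and Error fields per feature.
arcpy.ga.GALayerToPoints("idw_layer", "samples_test", "VALUE", "validation_points")
```

From validation_points you can then build the scatter plots listed above in the Charts pane.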

POST — 04-27-2020 07:40 AM
Hi again, Regarding the degree of GPI, we do allow degrees as high as 9, but as you said, we rarely recommend more than 3. Results very often become unstable and unpredictable when using high-degree polynomials. A polynomial curve can change direction one fewer time than its degree (e.g., quadratic curves bend once, cubic curves bend twice), so a degree-9 polynomial can bend eight times, and these bends can be unpredictable and unrepresentative of the data. If it looks like you need a degree higher than 3, we usually recommend using Kernel Interpolation or Local Polynomial Interpolation; it is usually better to build low-degree polynomials locally than to build high-degree polynomials globally. However, these are just general recommendations. If there is something about your data where a high-degree global polynomial works best, you can of course use it. -Eric
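If it helps, here is a hedged arcpy sketch of the two alternatives mentioned above: a single global degree-3 polynomial versus local degree-1 polynomials. Dataset and field names are placeholders, and the power keyword (the polynomial order) follows the documented tool parameters, so double-check it against your version.

```python
import arcpy

arcpy.env.workspace = r"C:\data\demo.gdb"  # placeholder workspace

# Global Polynomial Interpolation: one degree-3 polynomial fit to all points.
arcpy.ga.GlobalPolynomialInterpolation("samples", "VALUE", "gpi_degree3", power=3)

# Local Polynomial Interpolation: many degree-1 polynomials fit in overlapping
# neighborhoods, usually more stable than a single high-degree global fit.
arcpy.ga.LocalPolynomialInterpolation("samples", "VALUE", "lpi_degree1", power=1)
```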

POST — 04-08-2020 07:07 AM
Hi Bankim, Are you performing Diffusion Interpolation using the Geostatistical Wizard or the geoprocessing tool? If it's the Wizard, it will not honor geoprocessing environments. If you have a layer that you created in the Wizard, and you want to change its extent, you can use the Create Geostatistical Layer geoprocessing tool. Provide the old layer, provide the datasets used to create the layer, and give the new layer a name. This tool will honor geoprocessing environments, so you can set the extent environment, and the new layer will be identical to the old layer except with the new extent. Thanks, Eric
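For reference, a hedged sketch of that workflow in Python is below. The paths, field name, and extent coordinates are placeholders; the tool's Python name (GACreateGeostatisticalLayer) and the GeostatisticalDatasets helper follow the Geostatistical Analyst documentation, so confirm them for your release.

```python
import arcpy

# New extent for the rebuilt layer (placeholder coordinates).
arcpy.env.extent = arcpy.Extent(300000, 4200000, 400000, 4300000)

# Point the datasets object at the same source data the Wizard layer used.
wizard_layer = r"C:\data\diffusion_wizard.lyrx"
datasets = arcpy.GeostatisticalDatasets(wizard_layer)
datasets.dataset1 = r"C:\data\demo.gdb\samples"
datasets.dataset1Field = "VALUE"

# Recreate the layer; unlike the Wizard, this honors the extent environment.
arcpy.ga.GACreateGeostatisticalLayer(wizard_layer, datasets, "diffusion_new_extent")
```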

POST — 02-21-2020 08:27 AM
Hi again Tim, I talked with support services and took a closer look at your data. While the contouring artifacts are pretty severe in the area of the screenshot, they are due to the issues that I highlighted in my previous post. They are so severe in this area because of the features with OIDs 42 and 24: OID 42 has one of the largest measured values in the dataset, and OID 24 has one of the smallest, but they are only ~18.5 feet apart. Since IDW is an exact interpolation method, the surface must pass through the points exactly, so the values of the layer must change rapidly over a very short distance. This distance is actually shorter than the separation distance of the background grid that is used for the contouring, and that is why the contours are struggling to respect the values of the measured points. I would highly recommend treating the contours of the geostatistical layer as a quick preview of the surface. For analytically robust contours, you should export the geostatistical layer to a raster and contour the raster. Thanks again for your feedback, and sorry it took so long for you to get a resolution to your problem. -Eric
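A minimal sketch of the export-then-contour route is below, assuming a geostatistical layer named idw_layer, a placeholder geodatabase, and a Spatial Analyst license for the Contour tool; verify the GA Layer To Rasters keywords for your version.

```python
import arcpy

arcpy.env.workspace = r"C:\data\demo.gdb"  # placeholder workspace

# Persist the interpolation to a raster (the layer's function is evaluated at every cell center).
arcpy.ga.GALayerToRasters("idw_layer", "idw_prediction", "PREDICTION")

# Contour the exported raster; 5 is an example contour interval.
arcpy.CheckOutExtension("Spatial")
arcpy.sa.Contour("idw_prediction", "idw_contours", 5)
```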

POST — 01-24-2020 03:02 PM
Hi Tim, sorry for the long delay, I am only seeing this question now. This is going to be a bit of a long answer, but hopefully it will clear up what is happening.

To understand this behavior, you need to understand a bit about what a geostatistical layer is. Don't think of it like a feature or raster layer; think of it like a function. It is entirely in-memory and performs on-the-fly calculations at given coordinates. The layer contains references to the input points and the interpolation parameters, and whenever it needs to calculate a value at a location, it calls a function that calculates the value (for example, the Identify tool actually calls the function at the location you click and displays the result of the calculation in the pop-up). In this sense, the geostatistical layer does not even know its own values except at locations where it has already calculated them, which are stored as cache files.

The actual contours that you see are generated by contouring a coarse triangular grid behind the scenes, and the contour lines will have lots of imprecision due to the coarseness of the grid. This will often result in predicted values that are on the wrong side of the contour lines, as you're seeing. Everything about geostatistical layers is optimized for performance and model investigation rather than cartographic correctness. The layers draw very quickly, and the on-the-fly calculations are what allow the layer to, for example, display cross validation results automatically (since the geostatistical layer contains all the references that are needed to perform cross validation). The contours are only meant to provide a preview of what the surface actually looks like, and they are not intended to be perfectly cartographically correct.

When exporting the geostatistical layer to a raster, the function is called at the center of every raster cell and persisted to a raster dataset, so that is the recommended way to display the final result. Using the Contour tool on the exported raster will provide the most analytically correct contour lines, and it is what I would do if I were in your situation.

In the Appearance tab of the geostatistical layer (in the Ribbon above), there is an option for "Presentation." This will use a finer grid for the contours. It will not resolve all problems with values on the wrong sides of the contour lines, but it will reduce them. The presentation option also appears in the GA Layer To Contour tool.
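If you want the finer contours from a script rather than the Appearance tab, here is a hedged one-liner using GA Layer To Contour; the layer and output names are placeholders, and the "CONTOUR"/"PRESENTATION" keywords follow the tool documentation, so confirm them for your version.

```python
import arcpy

# Draft quality is the default; "PRESENTATION" uses the finer background grid.
arcpy.ga.GALayerToContour("kriging_layer", "CONTOUR",
                          r"C:\data\demo.gdb\kriging_contours", "PRESENTATION")
```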

POST — 09-09-2019 10:42 AM
There is no nugget for the cross-covariance (it's not wrong to say that nugget = 0). The Major Range is shared between all three models (the semivariogram for the primary, the semivariogram for the secondary, and the cross-covariance between them). The indices for the Partial Sill indicate the model: Partial Sill [0][0] is for the primary dataset, Partial Sill [1][1] is for the secondary dataset, and Partial Sill [0][1] is for the cross-covariance. For your model, the cross-covariance is:
- Type = Gaussian
- Nugget = 0
- Range = 150.5006
- Partial Sill = -0.001909066

The sill is the partial sill plus the nugget, so for your model you can just call it the sill instead of the partial sill, since the nugget is 0.
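Written out with the values above, that last relationship is simply:

```latex
\mathrm{sill} = \mathrm{nugget} + \mathrm{partial\ sill} = 0 + (-0.001909066) = -0.001909066
```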

POST — 09-09-2019 08:37 AM
Hi Ramon, A negative cross-covariance means that the primary and secondary datasets are negatively spatially correlated. Kriging assumes that each dataset is positively autocorrelated with itself, but the two datasets are allowed to be negatively correlated with each other. This means that in areas where one dataset has a large value, the other dataset tends to have a small value (and vice versa), and kriging is able to use this information to improve the predicted values of the primary dataset. -Eric

POST — 08-28-2019 09:39 PM
Anyone reading this topic and considering performing a similar methodology should also read the following topic; it contains a lot of relevant information: https://community.esri.com/thread/239276-adding-or-removing-new-sites-to-assess-changes-in-the-kriging-standard-error

POST — 08-28-2019 09:20 PM
If you employ any kind of transformation (Normal Score, logarithmic, etc.), then the standard errors and the predictions will be dependent, and you will get different standard errors for different values you assign to the new point. There is no perfect methodology for assigning a value to the new points, but interpolating the value from the original measured points is what is done in practice (this is what Densify Sampling Network does automatically). Using GA Layer To Points just ensures that the value will be justifiable no matter the kriging model.

The issue about predictions and standard errors being independent can definitely be confusing, as you'll often see statements along the lines of "the predicted values of ordinary kriging are independent of the standard errors," with no qualifying statements or hints that there is more to the story. While the statement is technically true, it is very easy to misunderstand due to terminology. You might read that statement and assume that you can use Ordinary Kriging with a logarithmic transformation and the standard errors will be independent of the predictions, but they won't be. The problem is that the actual name for ordinary kriging with a log transformation is "Ordinary Lognormal Kriging," not just "Ordinary Kriging." So the statement about predictions and standard errors being independent for ordinary kriging models was not meant to apply to ordinary lognormal kriging models.

POST — 08-28-2019 01:25 PM
Also, if you want to experiment with the location of the new points rather than have them selected at the location of the largest standard error, just create the new point (or multiple new points) anywhere you want in place of running Densify Sampling Network. You should then use GA Layer To Points to predict the values of the new points and merge those values into the original data before recalculating the new standard errors (a short sketch follows below). You alluded to wanting to do something like this in this post: Optimising monitoring networks using Kriging. Similarly, if you want to add, say, 5 new points at a time between recalculations of the average standard error, just specify 5 new points in Densify Sampling Network and follow the same workflow.
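A short sketch of that manual variant, assuming a hand-digitized point feature class called candidate_points and placeholder dataset names; in practice you would also map the Predicted field into your measurement field during the Merge, which is omitted here.

```python
import arcpy

arcpy.env.workspace = r"C:\data\demo.gdb"  # placeholder workspace

# Predict values at the hand-placed candidate locations from the current model.
arcpy.ga.GALayerToPoints("kriging_layer", "candidate_points", None, "candidates_predicted")

# Append the predicted candidates to the original measurements before
# rebuilding the layer and recalculating the average standard error.
arcpy.management.Merge(["samples", "candidates_predicted"], "samples_plus_candidates")
```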

POST — 08-28-2019 01:12 PM
Hi Simon, There are probably multiple ways to do this, but if I were going to do it myself, I would do the following:
1. Make sure the Output Type of your kriging model is set to Standard Error of Prediction.
2. Export the geostatistical layer to a raster using GA Layer To Rasters (or GA Layer To Grid in ArcMap), and use the Get Raster Properties tool to find the average value of the standard error raster. This will be the first value in your network density graph.
3. Using your kriging model, run Densify Sampling Network and choose to create only a single new point. This will create a new point at the location of the largest standard error. You can use the "Input weight raster" parameter to define your study area so that the new point will not be created outside of it (give a weight of 0 or NoData to all cells outside the study area).
4. Merge the output of Densify Sampling Network and the original data that was used to create the kriging model into a new dataset, and map the "Value" field into the field that you used to interpolate (this value and the StdErr field come from interpolating the value at the new location). This creates a new dataset with all of the original measurements and one new value.
5. Use Create Geostatistical Layer to create a new geostatistical layer for the merged dataset containing the newly created point. Provide the original kriging model as the model source. This creates a new geostatistical layer with the new dataset that uses the same parameters as the original kriging model.
6. Export the new geostatistical layer to a standard error raster, calculate the new average standard error with Get Raster Properties, and write the average to your network density graph.
7. Repeat steps 3 through 6 as many times as you need in order to get the average standard error beneath whatever threshold you need. Make sure in the Create Geostatistical Layer step to use the original model (without any of the invented points) for every iteration; do not use the model from the previous iteration.

The workflow is a bit long, but it can be automated (see the sketch below). It boils down to sequentially creating a new point at the location of the largest standard error within the study area, merging its predicted value into the original dataset, recalculating the new average standard error, and repeating until the average standard error is smaller than some value you specify. Please let me know if any of this is unclear or if you have any other questions. -Eric
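Here is a hedged sketch of how the loop could be automated with arcpy. Everything in it is a placeholder or an assumption to verify against your version's documentation: the dataset, field, and raster names; the "PREDICTION_STANDARD_ERROR" output type; the in_weight_raster keyword of Densify Sampling Network; and the GeostatisticalDatasets helper used with Create Geostatistical Layer. The field mapping in the Merge step (mapping "Value" into your measurement field) is also omitted for brevity.

```python
import arcpy

arcpy.env.workspace = r"C:\data\demo.gdb"

ORIGINAL_MODEL = "kriging_se_layer"   # kriging layer, Output Type = Standard Error of Prediction
ORIGINAL_POINTS = "samples"           # measured points used to build the model
WEIGHT_RASTER = "study_area_weights"  # 0 or NoData outside the study area
THRESHOLD = 0.5                       # stop once the average standard error drops below this

def mean_std_error(ga_layer, out_raster):
    """Export the layer to a standard error raster and return its mean value."""
    arcpy.ga.GALayerToRasters(ga_layer, out_raster, "PREDICTION_STANDARD_ERROR")
    return float(arcpy.management.GetRasterProperties(out_raster, "MEAN").getOutput(0))

avg_se = mean_std_error(ORIGINAL_MODEL, "se_raster_0")  # first value of the graph
merged = ORIGINAL_POINTS
iteration = 0

while avg_se > THRESHOLD:
    iteration += 1

    # One new point at the location of the largest standard error inside the study area.
    new_point = f"new_point_{iteration}"
    arcpy.ga.DensifySamplingNetwork(ORIGINAL_MODEL, 1, new_point,
                                    in_weight_raster=WEIGHT_RASTER)

    # Merge the new point into the running dataset (map "Value" into the
    # measurement field with field mappings in a real run).
    out_merge = f"samples_densified_{iteration}"
    arcpy.management.Merge([merged, new_point], out_merge)
    merged = out_merge

    # Rebuild the layer from the merged data, always using the ORIGINAL model as the source.
    datasets = arcpy.GeostatisticalDatasets(ORIGINAL_MODEL)
    datasets.dataset1 = merged
    datasets.dataset1Field = "VALUE"  # placeholder measurement field
    new_layer = f"kriging_iter_{iteration}"
    arcpy.ga.GACreateGeostatisticalLayer(ORIGINAL_MODEL, datasets, new_layer)

    # New average standard error: the next value of the network density graph.
    avg_se = mean_std_error(new_layer, f"se_raster_{iteration}")
    print(iteration, avg_se)
```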

POST — 08-28-2019 11:06 AM
Hi Simon, It is indeed strange that your standard errors are increasing as the network densifies. In general, this shouldn't happen, but I can think of at least one situation where it could. Predictions and standard errors in kriging are only independent for a fixed mean function (either a trend or a constant) and when no transformation is applied. The default kriging model in the Geostatistical Wizard is Simple Kriging with a normal score transformation, so there will generally be dependence between the predicted value and the standard error. Therefore, when you add a new fake point and assign it a value, the resulting standard errors will depend on that value. I could imagine the standard errors growing larger and larger if the fake value was not given much thought (and especially if the normal score transformation tries to recalculate itself for the invented value). Try performing your workflow again, but turn off the transformation on the second page of the Geostatistical Wizard (this will make the predictions and standard errors independent). I think you should then see the standard errors decrease for denser networks. -Eric

POST — 08-27-2019 08:06 AM
When performing cross validation, each point is hidden from the calculation, so the tool displays the prediction and standard error computed from all of the remaining points (it does this for each point in turn). GA Layer To Points instead calculates the prediction and standard error using all of the points. Since GA Layer To Points has information that cross validation doesn't have (the actual measured value at the location), you should expect to see smaller standard errors from GA Layer To Points.
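To see the difference side by side, you can run both tools against the same layer; a hedged sketch with placeholder layer, dataset, and field names is below.

```python
import arcpy

arcpy.env.workspace = r"C:\data\demo.gdb"  # placeholder workspace

# Leave-one-out results: each point is hidden from its own prediction.
arcpy.ga.CrossValidation("kriging_layer", "cv_points")

# Predictions at the same measured locations with no points hidden; expect the
# standard errors here to be smaller than those in cv_points.
arcpy.ga.GALayerToPoints("kriging_layer", "samples", "VALUE", "validated_points")
```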

POST — 08-23-2019 10:49 AM
Hi, I just noticed that you were asking about creating a Voronoi Map from polygons rather than points. If you just need the geometry of the Voronoi Map and do not need the local statistics, you can do the following:
1. Convert the polygons to point centroids using the "Feature To Point" geoprocessing tool.
2. Use the centroids as input to the "Create Thiessen Polygons" tool.
3. Optionally, clip the Thiessen polygons to a boundary using the "Clip" tool.

This will give the same polygons as the Voronoi Map ESDA tool in ArcMap. Again, it will not have the local statistics, but it will create the same polygons.
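A short sketch of those three steps in Python (placeholder dataset names; the "INSIDE" option keeps each centroid inside its polygon):

```python
import arcpy

arcpy.env.workspace = r"C:\data\demo.gdb"  # placeholder workspace

# 1. Polygons to centroid points.
arcpy.management.FeatureToPoint("input_polygons", "poly_centroids", "INSIDE")

# 2. Thiessen (Voronoi) polygons from the centroids.
arcpy.analysis.CreateThiessenPolygons("poly_centroids", "voronoi_polygons", "ALL")

# 3. Optional: clip the Thiessen polygons to a study-area boundary.
arcpy.analysis.Clip("voronoi_polygons", "boundary", "voronoi_clipped")
```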

POST — 08-23-2019 09:30 AM
Hi Liliana, Unfortunately, the Voronoi Map tool is not available in ArcGIS Pro. Most Exploratory Spatial Data Analysis (ESDA) tools from ArcMap are available as charts in ArcGIS Pro, but the Voronoi Map is not one of them. -Eric Krause