POST
@brghtwk The main idea behind the tool is to investigate the impact of slightly changing the semivariogram parameters of an existing model (often called a "sensitivity analysis"). Since you must choose specific values for the semivariogram parameters, it is reassuring if you get nearly the same results with slightly different parameter values. If the predictions change a lot in response to small changes in the semivariogram parameters, then your results may reflect only the arbitrary parameter choices rather than an accurate representation of the underlying process. The tool tests this by adding random noise to the semivariogram parameters (range, nugget, sill, etc.) and recomputing the predictions. You must provide the initial model (a geostatistical layer) using the Geostatistical Wizard, where you define the semivariogram model (Spherical, Exponential, etc.) along with the initial parameter values. The Semivariogram Sensitivity tool then adds the noise to those parameters.
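The perturbation idea can be sketched in plain Python. This is a hypothetical illustration of the concept, not the tool's actual implementation: perturb the parameters of a spherical semivariogram by a small random percentage and see how much the modeled semivariance at a fixed lag moves. All parameter values and the lag are made up.

```python
import math
import random

def spherical_semivariogram(h, nugget, partial_sill, range_):
    # Spherical model: rises from the nugget and levels off at the sill.
    if h == 0:
        return 0.0
    if h >= range_:
        return nugget + partial_sill
    r = h / range_
    return nugget + partial_sill * (1.5 * r - 0.5 * r ** 3)

def perturb(value, percent, rng):
    # Add uniform random noise of +/- percent to a parameter.
    return value * (1.0 + rng.uniform(-percent, percent))

rng = random.Random(42)
base = {"nugget": 0.1, "partial_sill": 1.0, "range_": 500.0}
lag = 250.0

# Recompute the semivariance at one lag under 100 perturbed parameter sets.
values = []
for _ in range(100):
    params = {k: perturb(v, 0.10, rng) for k, v in base.items()}
    values.append(spherical_semivariogram(lag, **params))

spread = max(values) - min(values)
print(f"gamma({lag}) varies over an interval of width {spread:.3f}")
```

If the spread (and, in the real tool, the spread of the resulting predictions) stays small, the model is not overly sensitive to the exact parameter choices.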
Posted 01-31-2023 07:56 AM

POST
Hi @soutomiguel, In 2D, Empirical Bayesian Kriging will filter some measurement uncertainty by assuming the nugget effect is entirely measurement error, but it assumes that the measurement error is the same for every feature. If you want to use different measurement errors for different features with EBK in 2D, it is simpler to do with the EBK Regression Prediction tool. The tool requires at least one raster as an explanatory variable, but if you give it a constant-valued raster, that is equivalent to not using an explanatory variable at all. Use the Create Constant Raster tool, and set the constant value to the mean of the elevation values. Hope that helps! -Eric
Posted 12-09-2022 10:27 AM

POST
@MouYi Apologies for the delay. Here is the citation for the paper: A. Gribov, K. Krivoruchko, and J. M. Ver Hoef, "Modeling the Semivariogram: New Approach, Methods Comparison and Case Study," in: T. C. Coburn, J. M. Yarus, R. L. Chambers (Eds.), Stochastic Modeling and Geostatistics: Principles, Methods, and Case Studies, Vol. II, The American Association of Petroleum Geologists, pp. 45-57, 2006. An updated link is available on the author's website: https://sites.google.com/site/agribov
Posted 09-23-2022 02:48 PM

POST
@PamButler There was some instability yesterday (the files failed to download a couple of times before finally succeeding), but it seems to be resolved today. Please try downloading them again. Thank you!
Posted 08-11-2022 12:43 PM

POST
@JustinLee Great question! Forest-based Classification and Regression does not make any normal distribution assumption about the data. Generally speaking, outliers and extreme values will be most problematic for the model. Ideally, you'll have a roughly even spread of values between the minimum and maximum, but there's no requirement that the distribution be bell-shaped.
Posted 05-26-2022 10:32 AM

POST
Glad you were able to resolve the problem! Voxel layers use GPU processing for rendering, so restarting the computer and/or updating the graphics drivers tends to resolve these kinds of drawing issues.
Posted 05-26-2022 09:48 AM

POST
Sorry for the late reply, but by default, the voxel layer stretches up from a minimum height. You can change this to use the raw z-coordinates in the Elevation tab of the layer properties page (right-click the voxel layer -> Properties).
Posted 05-23-2022 12:20 PM

POST
@ttgrdias If all of the input data values are positive (no negatives or zeros), then the Log Empirical transformation ensures all predictions will be positive.
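The reason a log-based transformation guarantees positive predictions can be shown with a simplified sketch. This uses plain inverse distance weighting in log space, not the actual Log Empirical transformation (which is EBK-specific and more sophisticated); the point is only that the back-transform exp() can return nothing but positive numbers. All coordinates and values are made up.

```python
import math

def idw_predict_log(points, values, target, power=2.0):
    """Illustrative only: interpolate in log space, then back-transform.

    Because exp() is always positive, the back-transformed prediction
    is positive whenever all input values are positive.
    """
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v  # exact at a data location
        w = 1.0 / d ** power
        num += w * math.log(v)  # requires v > 0
        den += w
    return math.exp(num / den)

points = [(0, 0), (1, 0), (0, 1), (1, 1)]
values = [0.2, 5.0, 0.01, 12.0]  # all positive, no zeros
pred = idw_predict_log(points, values, (0.4, 0.6))
print(pred)  # strictly positive by construction
```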
Posted 05-03-2022 06:04 AM

POST
Hi @Lacin_Ibrahim, The Visualize Space Time Cube in 2D tool will re-create the most recent result of Emerging Hot Spot Analysis for the analysis variable. If you rerun EHSA on the same variable, the Visualize Space Time Cube in 2D tool will create the output of that most recent EHSA run. Please let me know if you have any other questions or need any clarifications. -Eric
Posted 04-25-2022 06:51 AM

POST
Hi Elijah, I'll try with an example of Inverse Distance Weighting with five points: p1, p2, p3, p4, and p5. Each of these points has a location and a measured value.

Cross validation would start by removing p1. It would then use p2, p3, p4, and p5 to predict the value of p1. In IDW, this means taking the weighted average of the values of p2 to p5 (weighted by inverse distance). This results in a prediction (called the cross validation prediction) that can be compared to the measured value of p1. Next, p2 would be removed, and p1, p3, p4, and p5 would be used to predict the value at the location of p2 (note that p1 is added back to the dataset after being cross validated). The same is done for p3, p4, and p5, each using the other four points. This produces five cross validation errors that are used to calculate, among other things, the root mean square error of the IDW model.

But when actually making the prediction surface (after cross validation), all points are used to make the predictions. The surface also predicts values everywhere, including at the input point locations. So, what will it predict at, say, the location of p3? The prediction is the weighted average of all the points p1, p2, p3, p4, and p5, weighted by the inverse distance to p3. But the distance from p3 to itself is zero, which gives the value of p3 a weight of infinity. This forces the predicted value to be exactly equal to the measured value at p3. This is what makes IDW an "exact" interpolation method.

Please let me know if that still is not clear. -Eric
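The five-point walk-through can be sketched directly in Python (the coordinates and values below are hypothetical, chosen only to make the example runnable):

```python
import math

def idw(points, values, target, power=2.0):
    # Inverse distance weighted average; exact at data locations.
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v  # infinite weight -> prediction equals the measured value
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

points = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 4)]  # p1..p5 (made up)
values = [10.0, 12.0, 9.0, 14.0, 11.0]

# Leave-one-out cross validation: predict each point from the other four.
errors = []
for i in range(len(points)):
    pred = idw(points[:i] + points[i + 1:],
               values[:i] + values[i + 1:],
               points[i])
    errors.append(pred - values[i])

rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"cross validation RMSE = {rmse:.3f}")

# The final surface uses all five points and is exact at each of them:
assert idw(points, values, points[2]) == values[2]
```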
Posted 04-14-2022 01:30 PM

POST
@Elijah Each horizontal line of points in the graph appears to have the same Measured value on the y-axis (or very close to equal). However, they each have a different Predicted value on the x-axis. Having many repeated values in the field you used to interpolate would produce this kind of graph. This isn't necessarily a problem, but you should look into it and confirm that the repeated values are expected in your data.
Posted 03-08-2022 10:07 AM

BLOG
@ttgrdias Unless you have a physical reason to think the EIF should be some specific value (say, a value estimated from ocean or wind currents), I would personally let the software estimate the inflation factor. Regarding configuring parameters to improve cross validation results, any parameter (search neighborhood or otherwise) can potentially improve the results. In my experience (and this is just a general statement), as long as you use at least 10-15 neighbors total (EBK3D uses between 12 and 24 neighbors by default), you won't see a lot of improvement by including more neighbors in the search neighborhood. Again, in my experience, spending extra time configuring the values of the Subset size, Order of trend removal, and Elevation inflation factor parameters often provides the best model improvement for 3D interpolation.
Posted 01-26-2022 09:22 AM

BLOG
@ttgrdias When you optimize the EIF, it will use the value that minimizes the Root-Mean-Square cross validation error (RMSE), keeping all other parameters fixed. In other words, this is the inflation factor that allows the model to most accurately predict back to the input point locations. Since the optimization is data-driven and only minimizes a single number (the RMSE), it can be sensitive to things like outliers, value distributions, and spatial configurations of the points. If these properties are not consistent across all of your datasets, you should generally expect them to estimate different EIF values.
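The optimization idea (minimize the leave-one-out RMSE over candidate values of one parameter while holding everything else fixed) can be sketched with a stand-in parameter. Since the EIF optimization internals aren't shown here, this hypothetical example tunes IDW's power parameter instead; all data are made up.

```python
import math

def idw(points, values, target, power):
    # Inverse distance weighted average; exact at data locations.
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

def loo_rmse(points, values, power):
    # Leave-one-out cross validation RMSE for one candidate parameter value.
    errs = []
    for i in range(len(points)):
        pred = idw(points[:i] + points[i + 1:],
                   values[:i] + values[i + 1:],
                   points[i], power)
        errs.append(pred - values[i])
    return math.sqrt(sum(e * e for e in errs) / len(errs))

points = [(0, 0), (1, 2), (3, 1), (2, 3), (4, 4), (1, 4)]
values = [5.0, 6.5, 7.0, 6.0, 9.0, 6.8]

# Grid search: keep everything else fixed, pick the value minimizing RMSE.
candidates = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
best = min(candidates, key=lambda p: loo_rmse(points, values, p))
print(f"best power = {best}, RMSE = {loo_rmse(points, values, best):.3f}")
```

Because the chosen value depends only on a single summary number computed from the data, datasets with different outliers, value distributions, or point configurations will generally select different values, which is exactly why different datasets estimate different EIFs.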
Posted 01-26-2022 07:00 AM

POST
Hi @ArnieWaddell1 The graph shows the distribution of each field in the cross validation table (change the field with the Field pulldown) using a kernel density estimation. Think of it like a smoothed histogram of the Measured values, Predicted values, or Errors. Overlaying the Measured and Predicted fields is the most important use of the Distribution tab: it lets you compare the distribution of the true values and the cross validated predictions. Ideally, the two should have very similar distributions, and big deviations might indicate problems in the interpolation model. -Eric
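A kernel density estimate really is just a smoothed histogram. Here is a minimal Gaussian KDE sketch with made-up Measured and Predicted values (the Distribution tab's exact kernel and bandwidth rule are not documented here, so treat both as assumptions):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a density function: a 'smoothed histogram' of the samples."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(
            math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
        )
    return density

measured = [4.1, 4.8, 5.0, 5.3, 6.2, 6.9, 7.0]   # hypothetical field values
predicted = [4.4, 4.9, 5.1, 5.2, 6.0, 6.5, 7.2]  # hypothetical CV predictions

f_meas = gaussian_kde(measured, bandwidth=0.5)
f_pred = gaussian_kde(predicted, bandwidth=0.5)

# Evaluate both curves on a shared grid, as the Distribution tab overlays them.
for x in [4.0, 5.0, 6.0, 7.0]:
    print(f"x={x}: measured {f_meas(x):.3f}, predicted {f_pred(x):.3f}")
```

If the two curves track each other closely across the grid, the model's predictions reproduce the distribution of the true values well.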
Posted 01-06-2022 02:17 PM

POST
Hi @Alexandra_Br, please try the links again. They were using "http" addresses, and I have updated them to "https". Please let me know if you still have any issues accessing them.
Posted 12-08-2021 02:04 PM
Title | Kudos | Posted
---|---|---
 | 1 | 10-02-2024 06:45 AM
 | 2 | 08-23-2024 09:18 AM
 | 1 | 07-19-2024 07:09 AM
 | 1 | 08-21-2012 09:47 AM
 | 1 | 07-05-2024 09:08 AM