POST
@jyothisril Many of the datasets do have undefined coordinate systems, and I unfortunately do not know the original spatial references. Many are likely custom coordinate systems of small study areas.
Posted 05-03-2023 11:05 AM

BLOG
@giancarlociotoli This is a very good question that the Geostatistical Analyst team spent quite a lot of time thinking about and debating. We came to the conclusion that this tool should only be used for predictive purposes and is not suitable for explanatory purposes. This is why explanatory variable coefficients and PCA loadings are not provided by the tool. Without going into too much detail, the problems arise from EBKRP's subset mixing methodology. Different subsets perform PCA independently, and their loadings are often wildly different, even for the same explanatory variables. Within a single subset this is not a problem, but there is no clear way to meaningfully aggregate different loadings in areas of transition between subsets. EBKRP mixes only the final predictive distributions across subsets, and this mixing produces stable predictions. However, that does not imply that mixing the individual components of the models produces stable estimates of those components. In our experimentation, we found that attempting to mix components produced unstable coefficients and uninterpretable loadings, even while the predictions themselves remained stable. Because of this, we only recommend the tool for predictive purposes, not explanatory purposes (this is also why the word "Prediction" is explicitly in the name of the tool). - Eric Krause
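To make the aggregation problem concrete, here is a small toy sketch in NumPy/scikit-learn. This is not EBKRP itself; the data and the subset split are made up for illustration. It simply shows that PCA loadings fit independently on two subsets of the same explanatory variables are only defined up to sign and rotate with sampling noise, so there is no meaningful way to average the two loading matrices in a transition zone.

```python
# Toy illustration (not EBKRP): PCA loadings fit independently on two subsets
# of the same data can differ substantially, including sign flips, so there is
# no obvious way to average them where subsets overlap.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Two correlated explanatory variables measured at 200 locations (made up).
n = 200
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)
X = np.column_stack([x1, x2])

# Split into two "subsets" and fit PCA separately in each.
loadings_a = PCA(n_components=2).fit(X[:100]).components_
loadings_b = PCA(n_components=2).fit(X[100:]).components_

print("Subset A loadings:\n", loadings_a)
print("Subset B loadings:\n", loadings_b)
# The loadings are only defined up to sign and rotate with sampling noise,
# so element-wise averaging of the two matrices is not meaningful.
```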
Posted 04-03-2023 12:29 PM

POST
@RyanSnead Sorry for being very late with this reply, but this can be accomplished with the Neighborhood Summary Statistics tool in the Spatial Statistics toolbox.
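For anyone scripting this, a minimal sketch of calling the tool from Python is below. The dataset and field names are placeholders, and the parameter order is assumed from memory, so check the Neighborhood Summary Statistics tool reference before running it.

```python
# Minimal sketch; names are placeholders and the parameter order is assumed
# from memory -- verify against the tool documentation.
import arcpy

arcpy.env.workspace = r"C:\data\project.gdb"   # assumed workspace

arcpy.stats.NeighborhoodSummaryStatistics(
    "wells",          # input point features (placeholder name)
    "wells_nss",      # output feature class with the neighborhood statistics
    ["NITRATE"],      # field(s) to summarize over each feature's neighborhood
)
```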
Posted 03-13-2023 12:50 PM

POST
@brghtwk The main idea behind the tool is to investigate the impact of slightly changing the semivariogram parameters of an existing model (often called a "sensitivity analysis"). Since you must choose specific values for the semivariogram parameters, it is reassuring if you get nearly the same results with slightly different parameter values. If the predictions change a lot for small changes in the semivariogram parameters, then your results may only reflect your arbitrary parameter choices rather than an accurate representation of the underlying process. The tool tests this by adding random noise to the semivariogram parameters (range, nugget, sill, etc.) and recomputing the predictions. You provide the initial model as a geostatistical layer created in the Geostatistical Wizard, where you define the semivariogram model (Spherical, Exponential, etc.) along with its initial parameter values. The Semivariogram Sensitivity tool then adds the noise to those parameters.
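To see the idea outside ArcGIS, here is a small NumPy sketch. It is purely illustrative: the five points, the exponential semivariogram form, the baseline parameters, and the ±10% perturbation are all made-up choices, not the tool's internals. It perturbs the nugget, partial sill, and range and recomputes an ordinary kriging prediction at one location to see how much it moves.

```python
# Plain-NumPy sketch of a semivariogram sensitivity analysis: perturb the
# nugget/partial sill/range and see how much the ordinary kriging prediction
# at one location changes.
import numpy as np

rng = np.random.default_rng(0)

# Five sample points (x, y) with values, and one prediction location (made up).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 1.5]])
vals = np.array([1.2, 0.9, 1.5, 1.1, 1.8])
target = np.array([0.5, 0.5])


def gamma(h, nugget, psill, rng_param):
    """Exponential semivariogram (one common parameterization)."""
    g = nugget + psill * (1.0 - np.exp(-3.0 * h / rng_param))
    return np.where(h > 0, g, 0.0)


def ok_predict(nugget, psill, rng_param):
    """Ordinary kriging prediction at `target` for the given parameters."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d0 = np.linalg.norm(pts - target, axis=1)
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d, nugget, psill, rng_param)
    A[n, n] = 0.0
    b = np.append(gamma(d0, nugget, psill, rng_param), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ vals


base = ok_predict(nugget=0.05, psill=1.0, rng_param=2.0)
# Perturb each parameter by up to +/-10% and recompute the prediction.
perturbed = [
    ok_predict(0.05 * f1, 1.0 * f2, 2.0 * f3)
    for f1, f2, f3 in rng.uniform(0.9, 1.1, size=(100, 3))
]
print(f"base prediction: {base:.3f}")
print(f"perturbed predictions range: {min(perturbed):.3f} to {max(perturbed):.3f}")
```

If the perturbed predictions stay close to the base prediction, the model is not overly sensitive to the exact parameter values you chose.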
Posted 01-31-2023 07:56 AM

POST
Hi @soutomiguel, In 2D, Empirical Bayesian Kriging will filter some measurement uncertainties by assuming the nugget effect is entirely measurement error, but it assumes that the measurement error is the same for every feature. If you want to use different measurement errors for different features in EBK in 2D, it is simpler to do this with the EBK Regression Prediction tool. The tool requires at least one raster as an explanatory variable, but if you give it a constant value, that is equivalent to not using an explanatory variable. Use the Create Constant Raster tool, and set the constant value to the mean of the elevation values. Hope that helps! -Eric
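A rough sketch of those two steps in Python follows; the workspace, dataset, and field names are placeholders, the parameter order is assumed from memory, and the per-feature measurement error field is noted only in a comment because its exact parameter name should be confirmed in the EBK Regression Prediction documentation.

```python
# Rough sketch; names are placeholders and parameter order is assumed from
# memory -- verify against the Create Constant Raster and EBK Regression
# Prediction tool documentation before running.
import arcpy
from arcpy.sa import CreateConstantRaster

arcpy.env.workspace = r"C:\data\project.gdb"   # assumed workspace
arcpy.CheckOutExtension("Spatial")
arcpy.CheckOutExtension("GeoStats")

# 1) Constant explanatory raster set to the mean of the measured values, so it
#    carries no information. Set the cell size and extent (tool parameters or
#    environment settings) so the raster covers the study area.
mean_value = 42.0                              # replace with your data's mean
CreateConstantRaster(mean_value, "FLOAT").save("constant_mean")

# 2) EBK Regression Prediction with the constant raster as the only
#    explanatory variable. Supply the per-feature measurement errors through
#    the tool's measurement error field parameter (see the tool help for the
#    exact parameter name).
arcpy.ga.EBKRegressionPrediction(
    "samples",         # point features with the measured values (placeholder)
    "TEMPERATURE",     # dependent variable field (placeholder)
    "constant_mean",   # constant raster from step 1
    "ebkrp_layer",     # output geostatistical layer
)
```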
Posted 12-09-2022 10:27 AM

POST
@MouYi Apologies for the delay. Here is the citation for the paper: A. Gribov, K. Krivoruchko, and J. M. Ver Hoef, "Modeling the Semivariogram: New Approach, Methods Comparison and Case Study", in: T. C. Coburn, J. M. Yarus, R. L. Chambers (Eds.), Stochastic Modeling and Geostatistics: Principles, Methods, and Case Studies, Vol. II, The American Association of Petroleum Geologists, pp. 45–57, 2006. An updated link is available on the author's website: https://sites.google.com/site/agribov
Posted 09-23-2022 02:48 PM

POST
@PamButler There was some instability yesterday (they failed to download a couple times before finally succeeding), but it seems to be resolved today. Please try to download them again, thank you!
Posted 08-11-2022 12:43 PM

POST
@JustinLee Great question! Forest-based Classification and Regression does not make any normal distribution assumption about the data. Generally speaking, outliers and extreme values will be most problematic for the model. Ideally, you'll have a roughly even spread of values between the minimum and maximum, but there's no requirement that the distribution be bell-shaped.
Posted 05-26-2022 10:32 AM

POST
Glad you were able to resolve the problem! Voxel layers use GPU processing for rendering, so restarting the computer and/or updating graphics drivers tends to resolve these kinds of drawing issues.
Posted 05-26-2022 09:48 AM

POST
Sorry for the late reply, but by default, the voxel layer stretches up from a minimum height. You can change this to use raw z-coordinates in the Elevation tab of the layer property page (right click voxel layer -> Properties).
Posted 05-23-2022 12:20 PM

POST
@ttgrdias If all of the input data values are positive (no negatives or zeros), then the Log Empirical transformation ensures all predictions will be positive.
Posted 05-03-2022 06:04 AM

POST
Hi @Lacin_Ibrahim, The Visualize Space Time Cube in 2D tool re-creates the most recent Emerging Hot Spot Analysis result for the analysis variable. If you rerun EHSA on the same variable, Visualize Space Time Cube in 2D will then reproduce the output of that most recent EHSA run. Please let me know if you have any other questions or need any clarifications. -Eric
Posted 04-25-2022 06:51 AM

POST
Hi Elijah, I'll try with an example of Inverse Distance Weighting with five points: p1, p2, p3, p4, and p5. Each of these points has a location and a measured value.

Cross validation would start by removing p1. It would then use p2, p3, p4, and p5 to predict the value of p1. In IDW, this means taking the weighted average of the values of p2 to p5 (weighted by inverse distance). This will result in some prediction (called the cross validation prediction) that can be compared to the measured value of p1. Next, p2 would be removed, and p1, p3, p4, and p5 would be used to predict to the location of p2 (note that p1 is added back to the dataset after being cross validated). The same is done for p3, p4, and p5, each using the other four points. This produces five cross validation errors that are used to calculate, among other things, the root mean square error of the IDW model.

But when actually making the prediction surface (after cross validation), all points are used to make the predictions. The surface also predicts values everywhere, including at the input point locations. So, what will it predict at, say, the location of p3? The prediction is the weighted average of all the points p1, p2, p3, p4, and p5, weighted by the inverse distance to p3. But the distance from p3 to itself is zero, which gives the value of p3 a weight of infinity. This forces the predicted value to be exactly equal to the measured value at p3. This is what makes IDW an "exact" interpolation method. Please let me know if that still is not clear. -Eric
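If it helps to see the mechanics, here is a small NumPy sketch of that walk-through (the coordinates, values, and power of 2 are made up for illustration). It computes the five leave-one-out cross validation errors and the RMSE, then shows that the final surface built from all five points returns the measured value exactly at an input location.

```python
# NumPy sketch of leave-one-out cross validation for IDW with five points,
# plus the "exact interpolator" behavior at an input location.
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
values = np.array([10.0, 12.0, 11.0, 13.0, 20.0])
power = 2


def idw(location, xy, z):
    """IDW prediction at `location` from points `xy` with values `z`."""
    d = np.linalg.norm(xy - location, axis=1)
    if np.any(d == 0):                 # zero distance -> infinite weight,
        return z[d == 0][0]            # so the prediction equals the measured value
    w = 1.0 / d**power
    return np.sum(w * z) / np.sum(w)


# Leave-one-out cross validation: remove each point, predict it from the rest.
cv_errors = []
for i in range(len(coords)):
    keep = np.arange(len(coords)) != i
    pred = idw(coords[i], coords[keep], values[keep])
    cv_errors.append(pred - values[i])

rmse = np.sqrt(np.mean(np.square(cv_errors)))
print(f"cross validation RMSE: {rmse:.3f}")

# The final surface uses all points, so at an input location (p3 here) the
# zero distance forces the prediction to equal the measured value exactly.
print(idw(coords[2], coords, values), "==", values[2])
```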
Posted 04-14-2022 01:30 PM

POST
@Elijah Each horizontal line of points in the graph appears to have the same Measured value on the y-axis (or very close to equal). However, they each have a different Predicted value on the x-axis. Having many repeated values in the field you used to interpolate would produce this kind of graph. This isn't necessarily a problem, but you should look into it and confirm that the repeated values are expected in your data.
Posted 03-08-2022 10:07 AM

BLOG
@ttgrdias Unless you have a physical reason to think the EIF is some specific value (say, a value estimated from ocean or wind currents), I would personally let the software estimate the inflation factor. Regarding configuring parameters to improve cross validation results, any parameter (search neighborhood or otherwise) can potentially improve the results. In my experience (and this is just a general statement), as long as you use at least 10-15 neighbors total (EBK3D uses between 12 and 24 neighbors by default), you won't see a lot of improvement by including more neighbors in the search neighborhood. Again, in my experience, spending extra time configuring the values of the Subset size, Order of trend removal, and Elevation inflation factor parameters often provides the best model improvement for 3D interpolation.
Posted 01-26-2022 09:22 AM