POST
Part of the confusion is that, in principle, GWR doesn't require the weights to be assigned in any particular way, so textbooks usually just give generic formulas that can apply to any weighting scheme you want. However, as the name Geographically Weighted Regression suggests, the weight is almost always some function of the geographic distance between the prediction location and the neighboring features (where closer neighbors get higher weights and, thus, more influence on the model). Kernel functions are the most common way to assign these weights, with the weight decreasing with distance according to one of many possible kernels: https://en.wikipedia.org/wiki/Kernel_(statistics)

In ArcGIS Pro, the "Local Weighting Scheme" parameter lets you choose between Bisquare and Gaussian kernel functions. In the very last image you posted, the blue cone around the prediction location is a visualization of the kernel. Imagine the height of that cone as the weight assigned to a neighbor: features close to the middle get the highest weight, and the weight decreases to zero beyond a certain radius around the prediction location.
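If it helps to see what those kernels look like, here is a minimal Python sketch of the two weighting functions. It uses the standard textbook forms of the bisquare and Gaussian kernels; the exact scaling ArcGIS Pro uses internally may differ, and the distances and bandwidth below are made up for illustration.

```python
import numpy as np

def bisquare_weight(d, bandwidth):
    """Bisquare kernel: weight falls to exactly zero at the bandwidth distance."""
    w = (1.0 - (d / bandwidth) ** 2) ** 2
    return np.where(d < bandwidth, w, 0.0)

def gaussian_weight(d, bandwidth):
    """Gaussian kernel: weight decays smoothly and never quite reaches zero."""
    return np.exp(-0.5 * (d / bandwidth) ** 2)

# Distances from a prediction location to five hypothetical neighbors
distances = np.array([100.0, 500.0, 1000.0, 2000.0, 5000.0])
bandwidth = 2500.0  # illustrative bandwidth, in the same units as the distances

print(bisquare_weight(distances, bandwidth))  # closer neighbors get larger weights
print(gaussian_weight(distances, bandwidth))
```

With the bisquare kernel, every neighbor farther than the bandwidth gets a weight of exactly zero, while the Gaussian kernel only approaches zero asymptotically.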
10-25-2023 01:18 PM

POST
It might be possible to use geostatistics, but I suspect it would be better to use a classification workflow. Please look into the "Forest-based Classification and Regression" tool.
09-18-2023 09:22 AM

POST
@jyothisril Many of the datasets do have undefined coordinate systems, and I unfortunately do not know the original spatial references. Many are likely custom coordinate systems of small study areas.
05-03-2023 11:05 AM

BLOG
@giancarlociotoli This is a very good question that the Geostatistical Analyst team spent quite a lot of time thinking about and debating. We came to the conclusion that this tool should only be used for predictive purposes and is not suitable for explanatory purposes. This is why explanatory variable coefficients and PCA loadings are not provided by the tool.

Without going into too much detail, problems arise with EBKRP's subset mixing methodology. Different subsets perform PCA independently, and their loadings are often wildly different, even for the same explanatory variables. Within a single subset, this is not a problem, but there is no clear way to meaningfully aggregate different loadings in areas of transition between subsets. EBKRP mixes only the final predictive distributions across subsets, and this produces stable predictions. However, that does not imply that mixing the individual components of the models produces stable estimates of a mixed component. In our experimentation, we found that attempting to mix components produced unstable coefficients and uninterpretable loadings, even while the predictions themselves remained stable. Because of this, we only recommend the tool for predictive purposes, not explanatory purposes (this is also why the word "Prediction" is explicitly in the name of the tool). - Eric Krause
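To give a rough feel for why loadings do not aggregate cleanly, here is a small, purely illustrative Python sketch (this is not EBKRP's actual subsetting or mixing logic, and the data are synthetic): PCA run independently on two subsets of the same variables generally returns different, and possibly sign-flipped, loadings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated explanatory variables over 200 hypothetical locations
x1 = rng.normal(size=200)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=200)
data = np.column_stack([x1, x2])

def pca_loadings(subset):
    """Loadings (principal directions) from an independent PCA on one subset."""
    centered = subset - subset.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt  # rows are component loadings

# Split into two local subsets, loosely mimicking independent subset models
subset_a, subset_b = data[:100], data[100:]
print(pca_loadings(subset_a))
print(pca_loadings(subset_b))
# The loadings generally differ between subsets (and can even flip sign),
# so there is no single "loading" that describes a location where subsets overlap.
```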
04-03-2023 12:29 PM

POST
@RyanSnead Sorry for being very late with this reply, but this can be accomplished with the Neighborhood Summary Statistics tool in the Spatial Statistics toolbox.
03-13-2023 12:50 PM

POST
@brghtwk The main idea behind the tool is to investigate the impact of slightly changing the semivariogram parameters of an existing model (often called a "sensitivity analysis"). Since you must choose specific values for the semivariogram parameters, it is reassuring if you get nearly the same results using slightly different parameters. If the predictions change a lot for small changes of the semivariogram parameters, then your results may only reflect the arbitrary parameter choices rather than an accurate representation of the underlying process. The tool tests this by adding random noise to the semivariogram parameters (range, nugget, sill, etc.) and recomputing the predictions. You must provide the initial model (a geostatistical layer), which you create in the Geostatistical Wizard by defining the semivariogram model (Spherical, Exponential, etc.) along with its initial parameter values. The Semivariogram Sensitivity tool then adds the noise to those parameters.
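To illustrate only the general idea (the actual tool perturbs the full geostatistical model and recomputes predictions, which this sketch does not do), here is a minimal Python example that jitters the parameters of a spherical semivariogram and checks how much the modeled semivariance changes; all parameter values and percentages are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def spherical_semivariogram(h, nugget, partial_sill, range_):
    """Standard spherical semivariogram model."""
    h = np.asarray(h, dtype=float)
    inside = nugget + partial_sill * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)
    return np.where(h <= range_, inside, nugget + partial_sill)

# Hypothetical fitted parameters and lag distances
nugget, partial_sill, range_ = 0.2, 1.0, 1500.0
lags = np.linspace(100.0, 3000.0, 6)
baseline = spherical_semivariogram(lags, nugget, partial_sill, range_)

# Perturb each parameter by up to +/- 10 percent and recompute the model many times
results = []
for _ in range(100):
    jitter = 1.0 + rng.uniform(-0.1, 0.1, size=3)
    results.append(spherical_semivariogram(lags, nugget * jitter[0],
                                           partial_sill * jitter[1],
                                           range_ * jitter[2]))
results = np.array(results)

# A small spread relative to the baseline suggests the model is not overly
# sensitive to the exact parameter choices at these lags.
print(baseline)
print(results.std(axis=0))
```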
01-31-2023 07:56 AM

POST
Hi @soutomiguel, In 2D, Empirical Bayesian Kriging will filter some measurement uncertainty by assuming the nugget effect is entirely measurement error, but it assumes that the measurement error is the same for every feature. If you want to use different measurement errors for different features in EBK in 2D, it is simpler to do with the EBK Regression Prediction tool. The tool requires at least one raster as an explanatory variable, but if you give it a constant value, it is equivalent to not using an explanatory variable. Use the Create Constant Raster tool, and set the constant value to the mean of the elevation values. Hope that helps! -Eric
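If it's useful, here is a rough arcpy sketch of that workaround (the paths, field name, and cell size are placeholders, and you can of course run both tools from their dialogs instead):

```python
import arcpy
from arcpy.sa import CreateConstantRaster

arcpy.CheckOutExtension("Spatial")

# Hypothetical inputs: a point feature class and the field with measured values
points = r"C:\data\project.gdb\elevation_points"  # placeholder path
field = "Elevation"                               # placeholder field name

# Mean of the measured values, used as the value of the constant raster
with arcpy.da.SearchCursor(points, [field]) as cursor:
    values = [row[0] for row in cursor if row[0] is not None]
mean_value = sum(values) / len(values)

# Constant raster covering the study area (the cell size of 100 is illustrative)
constant_raster = CreateConstantRaster(mean_value, "FLOAT", 100,
                                       arcpy.Describe(points).extent)
constant_raster.save(r"C:\data\project.gdb\constant_mean")

# Supply this raster as the explanatory raster when you run the
# EBK Regression Prediction tool (Geostatistical Analyst toolbox),
# along with your per-feature measurement error field.
```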
12-09-2022 10:27 AM

POST
@MouYi Apologies for the delay. Here is the citation for the paper: A. Gribov, K. Krivoruchko, and J. M. Ver Hoef, "Modeling the Semivariogram: New Approach, Methods Comparison and Case Study", in: T. C. Coburn, J. M. Yarus, R. L. Chambers (Eds.), Stochastic modeling and geostatistics: Principles, methods, and case studies, Vol. II, The American Association of Petroleum Geologists, pp. 45–57, 2006. An updated link is available on the author's website: https://sites.google.com/site/agribov
09-23-2022 02:48 PM

POST
@PamButler There was some instability yesterday (they failed to download a couple times before finally succeeding), but it seems to be resolved today. Please try to download them again, thank you!
08-11-2022 12:43 PM

POST
@JustinLee Great question! Forest-based Classification and Regression does not make any normal distribution assumption about the data. Generally speaking, outliers and extreme values will be most problematic for the model. Ideally, you'll have a roughly even spread of values between the minimum and maximum, but there's no requirement that the distribution be bell-shaped.
05-26-2022 10:32 AM

POST
Glad you were able to resolve the problem! Voxel layers use GPU processing for rendering, so restarting the computer and/or updating graphics drivers tends to resolve these kinds of drawing issues.
05-26-2022 09:48 AM

POST
Sorry for the late reply, but by default, the voxel layer stretches up from a minimum height. You can change this to use raw z-coordinates on the Elevation tab of the layer properties page (right-click the voxel layer -> Properties).
05-23-2022 12:20 PM

POST
@ttgrdias If all of the input data values are positive (no negatives or zeros), then the Log Empirical transformation ensures all predictions will be positive.
05-03-2022 06:04 AM

POST
Hi @Lacin_Ibrahim, The Visualize Space Time Cube in 2D tool will re-create the most recent result of Emerging Hot Spot Analysis for the analysis variable. If you rerun EHSA on the same variable, the Visualize Space Time Cube in 2D tool will then create the output of that most recent EHSA run. Please let me know if you have any other questions or need any clarifications. -Eric
04-25-2022 06:51 AM

POST
Hi Elijah, I'll try with an example of Inverse Distance Weighting with five points: p1, p2, p3, p4, and p5. Each of these points has a location and a measured value.

Cross validation would start by removing p1. It would then use p2, p3, p4, and p5 to predict the value of p1. In IDW, this means taking the weighted average of the values of p2 to p5 (weighted by inverse distance). This will result in some prediction (called the cross validation prediction) that can be compared to the measured value of p1. Next, p2 would be removed, and p1, p3, p4, and p5 would be used to predict the value at the location of p2 (note that p1 is added back to the dataset after being cross validated). The same is done for p3, p4, and p5, each using the other four points. This produces five cross validation errors that are used to calculate, among other things, the root mean square error of the IDW model.

But when actually making the prediction surface (after cross validation), all points are used to make the predictions. The surface also predicts values everywhere, including at the input point locations. So, what will it predict at, say, the location of p3? The prediction is the weighted average of all the points p1, p2, p3, p4, and p5, weighted by the inverse distance to p3. But the distance from p3 to itself is zero, which gives the value of p3 a weight of infinity. This forces the predicted value to be exactly equal to the measured value at p3. This is what makes IDW an "exact" interpolation method. Please let me know if that still is not clear. -Eric
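Here is a small numeric sketch of that leave-one-out procedure in Python. The point locations and values are made up, and this is a simplified IDW (a fixed power of 2 and no search neighborhood), so it only illustrates the mechanics described above.

```python
import numpy as np

# Five hypothetical points: (x, y) locations and measured values
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
values = np.array([10.0, 12.0, 11.0, 15.0, 13.0])
power = 2  # IDW power parameter

def idw_predict(target, xy, z, power=2):
    """Inverse Distance Weighted prediction at one target location."""
    d = np.linalg.norm(xy - target, axis=1)
    if np.any(d == 0):                 # target coincides with a data point:
        return z[np.argmin(d)]         # infinite weight -> measured value returned
    w = 1.0 / d ** power
    return np.sum(w * z) / np.sum(w)

# Leave-one-out cross validation: predict each point from the other four
cv_errors = []
for i in range(len(values)):
    others = np.delete(np.arange(len(values)), i)
    pred = idw_predict(coords[i], coords[others], values[others], power)
    cv_errors.append(pred - values[i])
print("Cross validation RMSE:", np.sqrt(np.mean(np.square(cv_errors))))

# The final surface uses all five points, so predicting at p3's location
# simply returns p3's measured value (IDW is an exact interpolator).
print(idw_predict(coords[2], coords, values, power), values[2])
```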
04-14-2022 01:30 PM