POST
I should have been clearer about this, but the GWR model as a whole does not have a condition number. However, every local regression has one. Some locations may have large condition numbers (meaning that the coefficients in those areas are unstable and unreliable), while other locations have low condition numbers, meaning that the coefficients there are more reliable and precise. I'm also not completely clear what you mean by rerunning GWR multiple times. If you rerun it with the same data, you should get the same coefficients each time. The condition number is more related to whether you should trust the values of the coefficients.
12-01-2023 11:27 AM

POST
I've heard variations of that phrasing many times, and I don't think it's wrong, but I'd argue there are better ways to conceptualize the condition number. It's more about the stability of the estimated coefficients for a given set of explanatory variable values. The coefficients are estimated by inverting a matrix of data values, and the condition number measures how sensitive the coefficients are to small changes in those data values. For low condition numbers, you can alter or remove some of the data, and the coefficients will not drastically change (in other words, the estimated coefficients are stable). But for matrices with very large condition numbers, even small changes to the data values can wildly change the estimated coefficients (meaning that the estimated coefficients are not stable or reliable).

This is a bit easier to understand using simple numbers rather than matrices. Inverting a matrix with a large condition number is analogous to taking the reciprocal of a number that is very close to 0. For example, the inverse of 0.001 is 1,000, and the inverse of 0.0001 is 10,000. Even though 0.001 and 0.0001 are very close in absolute value (both are close to 0), their inverses are very different (1,000 vs 10,000). To put it another way, for values very close to 0, the inverse is very sensitive to small changes in the number. Condition numbers measure this same stability of the inverse for matrices rather than single numbers.

I hope that helps, and let me know if any of that was not clear. There are also many resources available for learning about condition numbers, as they are usually taught in linear algebra courses rather than geography or statistics.
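To make the analogy concrete, here is a small sketch (using NumPy, with made-up numbers of my own) showing that a design matrix with a large condition number produces coefficients that swing wildly under tiny changes to the data:

```python
import numpy as np

# Numbers close to 0 have very different inverses.
print(1 / 0.001, 1 / 0.0001)  # 1000.0 10000.0

# Matrix analogue: two nearly collinear explanatory variables.
X = np.array([[1.0, 1.001],
              [2.0, 2.001],
              [3.0, 2.999]])
y = X @ np.array([1.0, 1.0])  # true coefficients: [1, 1]

print(np.linalg.cond(X))  # in the thousands -> ill-conditioned

beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Perturb the data by at most 0.001...
y_perturbed = y + np.array([0.001, -0.001, 0.001])
beta_perturbed, *_ = np.linalg.lstsq(X, y_perturbed, rcond=None)

# ...and the estimated coefficients move by hundreds of times
# the size of the perturbation: the estimates are unstable.
print(beta, beta_perturbed)
```

With a well-conditioned matrix (columns that are not nearly collinear), the same perturbation would barely move the coefficients.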
11-30-2023 01:55 PM

POST
Part of the confusion is that, in principle, GWR doesn't require the weights to be assigned in any particular way, so textbooks usually give generic formulas that can apply to any weighting scheme you want. But as the name Geographically Weighted Regression suggests, the weight is almost always some function of the geographic distance between the prediction location and the neighboring features (closer neighbors get higher weights and, thus, more influence on the model). Kernel functions are the most common way to assign these weights, where the weight decreases with distance according to one of many possible kernels: https://en.wikipedia.org/wiki/Kernel_(statistics)

In ArcGIS Pro, the "Local Weighting Scheme" parameter lets you choose between Bisquare and Gaussian kernel functions. In the very last image you posted, the blue cone around the prediction location is a visualization of the kernel. Imagine the height of that cone being the weight assigned to a neighbor: features close to the middle get the highest weight, and the weight decreases to zero beyond a certain radius around the prediction location.
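For illustration, here are standard textbook forms of the two kernels (a Python sketch; the exact formulas ArcGIS Pro uses internally may differ in details such as bandwidth scaling):

```python
import numpy as np

def bisquare_weight(d, bandwidth):
    """Bisquare kernel: weight drops to exactly 0 at the bandwidth."""
    w = (1.0 - (d / bandwidth) ** 2) ** 2
    return np.where(d < bandwidth, w, 0.0)

def gaussian_weight(d, bandwidth):
    """Gaussian kernel: weight decays smoothly, never quite reaching 0."""
    return np.exp(-0.5 * (d / bandwidth) ** 2)

# Weights for neighbors at increasing distances from the prediction location.
distances = np.array([0.0, 500.0, 1000.0, 2000.0])
print(bisquare_weight(distances, bandwidth=1000.0))  # 1.0, 0.5625, 0.0, 0.0
print(gaussian_weight(distances, bandwidth=1000.0))
```

Either way, the shape matches the cone in your image: maximum weight at the prediction location, tapering off with distance.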
10-25-2023 01:18 PM

POST
It might be possible to use geostatistics, but I suspect it would be better to use a classification workflow. Please look into the "Forest-based Classification and Regression" tool.
09-18-2023 09:22 AM

POST
@jyothisril Many of the datasets do have undefined coordinate systems, and I unfortunately do not know the original spatial references. Many are likely custom coordinate systems of small study areas.
05-03-2023 11:05 AM

BLOG
@giancarlociotoli This is a very good question that the Geostatistical Analyst team spent quite a lot of time thinking about and debating. We came to the conclusion that this tool should only be used for predictive purposes and is not suitable for explanatory purposes. This is why the tool does not provide explanatory variable coefficients or PCA loadings.

Without going into too much detail, problems arise with EBKRP's subset mixing methodology. Different subsets perform PCA independently, and their loadings are often wildly different, even for the same explanatory variables. Within a single subset, this is not a problem, but there is no clear way to meaningfully aggregate different loadings in areas of transition between subsets. EBKRP mixes only the final predictive distributions across subsets, and this produces stable predictions. However, that does not imply that mixing individual components of the models produces stable estimates of a mixed component. In our experimentation, attempting to mix components produced unstable coefficients and uninterpretable loadings, even while the predictions themselves remained stable. Because of this, we only recommend the tool for predictive purposes, not explanatory purposes (this is also why the word "Prediction" is explicitly in the name of the tool). - Eric Krause
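As a minimal toy illustration of one reason loadings resist aggregation (my own example, not the EBKRP implementation): PCA loadings are only defined up to sign, so two subsets can legitimately return equivalent but opposite loading vectors, and averaging them destroys the information:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 3))

# Loadings of the first principal component: the top eigenvector
# of the covariance matrix of the explanatory variables.
cov = np.cov(data, rowvar=False)
_, vecs = np.linalg.eigh(cov)
loadings = vecs[:, -1]

# The negated vector describes exactly the same component:
# it explains the same variance...
flipped = -loadings
print(np.allclose(loadings @ cov @ loadings,
                  flipped @ cov @ flipped))  # True

# ...so two subsets may report either one, and a naive average
# of the two "equivalent" loading vectors collapses to zero.
print((loadings + flipped) / 2)  # all zeros
```

Sign flips are only the simplest case; across real subsets the loadings can differ in magnitude and order as well, which is why no meaningful mixed loading exists.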
04-03-2023 12:29 PM

POST
@RyanSnead Sorry for being very late with this reply, but this can be accomplished with the Neighborhood Summary Statistics tool in the Spatial Statistics toolbox.
03-13-2023 12:50 PM

POST
@brghtwk The main idea behind the tool is to investigate the impact of slightly changing the semivariogram parameters of an existing model (often called a "sensitivity analysis"). Since you must choose specific values for the semivariogram parameters, it is reassuring if you get nearly the same results using slightly different parameter values. If the predictions change a lot for small changes of the semivariogram parameters, then your results may only reflect the arbitrary parameter choices rather than an accurate representation of the underlying process.

The tool tests this by adding random noise to the semivariogram parameters (range, nugget, sill, etc.) and recomputing the predictions. You provide the initial model as a geostatistical layer created in the Geostatistical Wizard, where you define the semivariogram model (Spherical, Exponential, etc.) along with initial parameter values. The Semivariogram Sensitivity tool then adds the noise to those parameters.
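To sketch the idea in code (a simplified Python illustration with hypothetical parameter values, not the tool's actual computation, which perturbs the parameters and recomputes the kriging predictions themselves):

```python
import numpy as np

def spherical(h, nugget, partial_sill, range_m):
    """Spherical semivariogram: rises from the nugget and levels off
    at the sill (nugget + partial sill) beyond the range."""
    s = np.minimum(np.asarray(h, dtype=float) / range_m, 1.0)
    return nugget + partial_sill * (1.5 * s - 0.5 * s ** 3)

# Hypothetical fitted parameters from the Geostatistical Wizard.
nugget, partial_sill, range_m = 0.1, 1.0, 5000.0
h = np.linspace(0.0, 8000.0, 81)
base = spherical(h, nugget, partial_sill, range_m)

# Sensitivity analysis: jitter each parameter by a few percent and
# check how far the semivariogram moves from the original model.
jitter = np.random.default_rng(42)
for _ in range(5):
    f = 1.0 + jitter.normal(0.0, 0.05, size=3)
    perturbed = spherical(h, nugget * f[0],
                          partial_sill * f[1], range_m * f[2])
    print(np.max(np.abs(perturbed - base)))  # small values -> stable model
```

If those deviations (and, in the real tool, the resulting prediction changes) stay small, the model is not overly sensitive to the exact parameter choices.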
01-31-2023 07:56 AM

POST
Hi @soutomiguel, In 2D, Empirical Bayesian Kriging will filter some measurement uncertainty by assuming the nugget effect is entirely measurement error, but it assumes that the measurement error is the same for every feature. If you want to use different measurement errors for different features with EBK in 2D, it is simpler to do with the EBK Regression Prediction tool. That tool requires at least one raster as an explanatory variable, but if you provide a constant raster, it is equivalent to not using an explanatory variable: use the Create Constant Raster tool and set the constant value to the mean of the elevation values. Hope that helps! -Eric
12-09-2022 10:27 AM

POST
@MouYi Apologies for the delay. Here is the citation for the paper: A. Gribov, K. Krivoruchko, and J. M. Ver Hoef, "Modeling the Semivariogram: New Approach, Methods Comparison and Case Study," in: T. C. Coburn, J. M. Yarus, R. L. Chambers (Eds.), Stochastic Modeling and Geostatistics: Principles, Methods, and Case Studies, Vol. II, The American Association of Petroleum Geologists, pp. 45-57, 2006. An updated link is available on the author's website: https://sites.google.com/site/agribov
09-23-2022 02:48 PM

POST
@PamButler There was some instability yesterday (they failed to download a couple times before finally succeeding), but it seems to be resolved today. Please try to download them again, thank you!
08-11-2022 12:43 PM

POST
@JustinLee Great question! Forest-based Classification and Regression does not make any normal distribution assumption about the data. Generally speaking, outliers and extreme values will be most problematic for the model. Ideally, you'll have a roughly even spread of values between the minimum and maximum, but there's no requirement that the distribution be bell-shaped.
05-26-2022 10:32 AM

POST
Glad you were able to resolve the problem! Voxel layers use GPU processing for rendering, and restarting the computer and/or updating graphics drivers tends to resolve these kinds of drawing issues.
05-26-2022 09:48 AM

POST
Sorry for the late reply, but by default, the voxel layer stretches up from a minimum height. You can change this to use raw z-coordinates in the Elevation tab of the layer property page (right click voxel layer -> Properties).
05-23-2022 12:20 PM

POST
@ttgrdias If all of the input data values are positive (no negatives or zeros), then the Log Empirical transformation ensures all predictions will be positive.
05-03-2022 06:04 AM