POST
Hi @EugenioJairEscobarSánchez,
There are a few things to keep in mind. First, the GWR tool that you are using in ArcMap is not the same as the one available in ArcGIS Pro. The newer GWR was designed to match the implementation of the GWR4 software (as it was at the time) and has been tested to give matching results. Second, there is no single "correct" bandwidth; there are a variety of methods to estimate it, and they agree or disagree with each other to varying extents. I am not certain which method ArcMap's GWR tool uses to optimize the bandwidth, but I can verify that Fotheringham (the "father" of GWR) approved of the methodology and created at least one tutorial using the ArcMap tool: https://gwr.maynoothuniversity.ie/wp-content/uploads/2016/01/GWR_Tutorial.pdf
Posted 02-12-2025 08:35 AM
POST
Hi @GarethNash, I'm only seeing this thread now. The "Custom 3D points" option will produce a netCDF file with predictions at the specified locations, but it isn't structured in a way that can be represented as a voxel layer. Only the gridded points option can produce a voxel layer.
Posted 01-24-2025 01:35 PM
POST
Apologies, the above reply was from me, but I accidentally posted from the wrong account.
Posted 01-16-2025 04:52 AM
IDEA
The idea has been implemented in ArcGIS Pro 3.4 as the Directional Trend tool, available in the Geostatistical Analyst toolbox (Utilities toolset) and the Spatial Statistics toolbox (Measuring Geographic Distributions toolset). The tool re-creates the polynomial trend lines in the X-Z plane as a scatter plot chart on a feature layer. The tool is available at all license levels (a Geostatistical Analyst license is not required).
Posted 11-07-2024 02:17 PM
POST
Hi @MasoodShaikh,
Kriging is an "inexact" interpolation method: the prediction surface does not pass perfectly through the values of the input points and instead tends to smooth predictions, so the range of predictions is usually narrower than the range of the original data values. The weaker the autocorrelation and the noisier the data, the more it tends to smooth. For your data, it's likely that you have many locations where high values are very close to low values, and the kriging surface effectively smooths over the highs and lows.
There are a couple of things you can do. First, you can use an interpolation method like Radial Basis Functions (aka splines) or Inverse Distance Weighting, which will always honor the range of the input data values.
Second, you can disable the Nugget effect of kriging, which will force it to honor the input data range. You can do this on the semivariogram page of the Geostatistical Wizard by changing the “Model Nugget” option to “Disable”. However, not using a nugget effect can often create strange artifacts in the output, so it is generally not recommended. If you do this, pay close attention to strange behavior in the resulting surface.
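For the first option, here is a rough arcpy sketch (the dataset path and field name are placeholders, and the optional parameters are left at their defaults):
import arcpy
# Placeholder inputs -- substitute your own point dataset and value field
arcpy.ga.IDW(
    r"C:\data\samples.gdb\sample_points",  # input point features
    "VALUE",                               # field to interpolate
    "IDW_layer",                           # in-memory geostatistical layer
    r"C:\data\samples.gdb\idw_out"         # optional output raster
)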
-Eric
Posted 10-04-2024 06:39 AM
POST
Hi @GarethNash,
Geostatistical layers are in-memory layers and are not stored on disk, so they will be gone if you restart the kernel. However, they can be saved as layer files with the Save To Layer File geoprocessing tool, which lets you reopen them in a later session.
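For example (the layer name and output path are placeholders):
import arcpy
# "Kriging_layer" is a placeholder name for the in-memory geostatistical layer
arcpy.management.SaveToLayerFile("Kriging_layer", r"C:\data\kriging_layer.lyrx")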
-Eric
Posted 10-02-2024 06:45 AM
POST
I mean the second. And, yes, if the new dataset has fundamentally different properties than the first, it will be quite inaccurate. This is only appropriate in cases like daily measurements of the same data, where the changes in the data values will be relatively small, and you're willing to sacrifice a small amount of accuracy in order to not have to manually perform kriging in the Geostatistical Wizard every day.
Kriging is never particularly safe to do in an automated environment, but if it's something you really need to do, we recommend using Empirical Bayesian Kriging for it. It is available as a geoprocessing tool, so setting up automation is relatively simple.
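For example, a minimal scripted run might look like this (the paths and field name are placeholders; the many optional parameters are left at their defaults):
import arcpy
# Placeholder inputs -- substitute the day's point feature class and value field
arcpy.ga.EmpiricalBayesianKriging(
    r"C:\data\daily.gdb\measurements_today",  # input point features
    "VALUE",                                  # field to interpolate
    "EBK_layer",                              # output geostatistical layer
    r"C:\data\daily.gdb\ebk_today"            # optional output raster
)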
Posted 09-27-2024 06:06 AM
POST
Hi @JV_,
EBK 3D is not an exact interpolation method, meaning that the range of predicted values is usually narrower than the range of the original values. This is also true for all kriging methods, 2D and 3D. The difference in the Symbology range is because geostatistical layers build their class breaks based on the input data values, while voxel layers (or rasters in 2D) build their symbology from the actual values of the voxels/raster cells. Usually, the range of the output will be narrower than the range of the input (called "smoothing").
For a longer explanation, please see this older blog post. It was written for ArcMap and talks about rasters, but the reasoning is the same for EBK 3D and the range of values of voxel layers: https://www.esri.com/arcgis-blog/products/product/analytics/understanding-geostatistical-analyst-layers/
If you have access to ArcGIS Pro 3.2 or later, you can use the IDW 3D tool, which will honor the minimum and maximum of the original data. The range of the voxel layer will still change a small amount because the calculations are made at the 3D center of every voxel, so unless your smallest/largest values align with the exact center of a voxel, the range will be slightly narrower than that of the original points.
Please let me know if you have any other questions or need any clarifications.
-Eric Krause
Posted 08-23-2024 09:18 AM
POST
Hi @MatthewPoppleton,
Yes, that equation from the ArcGIS 9 documentation is the formula for the K-Bessel semivariogram used for all instances of K-Bessel in Geostatistical Analyst. As you said, the detrended version performs a first-order trend removal, then applies the K-Bessel formula to the detrended values.
Also, the K-Bessel semivariogram is often called the "Matérn" semivariogram in other geostatistical literature, so you might be able to find more information searching for that keyword instead.
-Eric
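For reference, one common parameterization of the Matérn (K-Bessel) semivariogram is written as (Geostatistical Analyst's exact scaling of the range parameter may differ):
\gamma(h) = \tau^2 + \sigma^2\left[1 - \frac{1}{2^{\nu-1}\,\Gamma(\nu)}\left(\frac{h}{\rho}\right)^{\nu} K_{\nu}\!\left(\frac{h}{\rho}\right)\right]
where K_{\nu} is the modified Bessel function of the second kind, \nu controls the smoothness, \rho is a range-like scale parameter, \sigma^2 is the partial sill, and \tau^2 is the nugget.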
Posted 07-19-2024 07:09 AM
POST
Hi @MWmep013,
The reason for the discrepancy is that the GWR tool was reimplemented in ArcGIS Pro 2.3, and the previous version (equivalent to ArcMap) was deprecated. Among other things, the newer version uses a different and more common formula for global and local R-squared and optimizes bandwidths differently. The newer version follows the design and formulas of the GWR4 software (not from Esri).
While you will not find it in the Geoprocessing pane, the deprecated version can still be used through arcpy (for example, in a Python Notebook or the Python Window) with arcpy.stats.GeographicallyWeightedRegression(). You can see the documentation for the deprecated version here: https://pro.arcgis.com/en/pro-app/latest/tool-reference/spatial-statistics/geographically-weighted-regression.htm
Using the deprecated version should provide the same results as the ArcMap version. As for why using Distance Band lowers the R-squared, I am not certain, but it likely has something to do with your particular data.
Please let me know if you have any other questions.
-Eric
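For example, a minimal call to the deprecated tool looks something like this (the field names and paths are placeholders; see the linked documentation for the full parameter list):
import arcpy
# Deprecated (ArcMap-equivalent) GWR -- placeholder data, fields, and output
arcpy.stats.GeographicallyWeightedRegression(
    r"C:\data\analysis.gdb\tracts",    # input features
    "MEDIAN_INCOME",                   # dependent variable
    "PCT_COLLEGE",                     # explanatory variable(s)
    r"C:\data\analysis.gdb\gwr_out",   # output feature class
    "ADAPTIVE",                        # kernel type
    "AICc"                             # bandwidth method
)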
Posted 07-05-2024 09:08 AM
POST
Hi @ZhichengZhong,
ArcGIS does not have a GTWR tool, but in principle, including time should not change the question of whether and how to test for significant explanatory variables. While the GWR tool does provide significance results for local models, it's understandable that other software packages and publications do not. The problem is that GW(T)R isn't really a single model: it is a collection of local models, each estimated at the location of an input feature. Further, these models are correlated with each other because they often share the same features in their neighborhoods. This creates problems related to multiple hypothesis testing: you are effectively performing N times as many tests as you have explanatory variables (one set per feature), and you should be cautious in interpreting any particular p-value when performing so many hypothesis tests. This is why, generally, explanatory variables are chosen using global models like OLS, and statistics like R-squared and AIC are used to determine how much better GW(T)R does compared to the global model.
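As a hypothetical illustration (the numbers are made up): with 500 input features and 4 explanatory variables, GW(T)R produces 500 × 4 = 2,000 local coefficient p-values, and at a 0.05 significance level roughly 0.05 × 2,000 = 100 of them would be expected to look "significant" by chance alone, even if no variable had any real effect.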
Posted 06-24-2024 07:21 AM
POST
Hi @af2k24,
I don't believe there is any way to do this. If I'm understanding correctly, you want to take the coefficients from the original model and apply them to the new rasters. However, EBK Regression Prediction will rebuild the coefficients for any new rasters, estimating them from the input features that you provide; you can't save the coefficients and reuse them, unfortunately.
The only thing that comes to mind is forecasting the input point values to 2070-2099 and using them along with the forecasted rasters to get a prediction surface for 2070-2099. Though that is obviously easier said than done, especially if you do not have historical point data to build a forecast model.
-Eric
Posted 06-12-2024 08:41 AM
POST
Hi @EToon,
Unfortunately, as you found, this is not going to work. The iterators in ModelBuilder are designed to work with Feature Class and Field type parameters, but the Input Datasets parameter of the Create Geostatistical Layer tool is a custom parameter type (called a Geostatistical Value Table), where the input dataset(s) and the field(s) are contained in a single parameter. This is because different model sources require different fields. For your case, only a dataset and a field are required, but if you had performed cokriging with two datasets, for example, you would need to provide two feature classes and two fields. Other model sources would require other combinations of fields.
Do your datasets and fields happen to have consistent names? Something like data1, data2, data3, etc.? If so, this should be relatively simple to do in Python (I can help with this; a rough sketch is at the end of this post). But if they all have completely different names, you would need to type each out individually, which probably would not save much time over just doing it manually.
Sorry for the bad news, but I don't know any way around this.
-Eric
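Here is roughly what that Python loop could look like. The XML model source, geodatabase path, and dataset/field names are placeholders, and the exact "<dataset> <field>" string accepted for the Geostatistical Value Table is described in the Create Geostatistical Layer documentation:
import arcpy
import os
# Placeholder paths -- substitute your own model source and geodatabase
model_source = r"C:\data\kriging_model.xml"
gdb = r"C:\data\inputs.gdb"
# Assumes feature classes named data1, data2, ... each with a field named value1, value2, ...
for i in range(1, 11):
    dataset = os.path.join(gdb, "data{}".format(i))
    value_table = "{} value{}".format(dataset, i)   # "<dataset> <field>" string
    arcpy.ga.GACreateGeostatisticalLayer(model_source, value_table, "ga_layer_{}".format(i))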
Posted 06-05-2024 08:29 AM
IDEA
This has been included in the product plan for ArcGIS Pro 3.4. The reimplementation will be a geoprocessing tool that creates a customized scatter plot chart on a feature layer that displays the projected scatter plot and trend line of the XZ plane.
In ArcGIS Pro 3.3, you can create this scatter plot manually with customized Arcade code using these steps:
1. On a feature layer, create a scatter plot chart by right-clicking the layer -> Create Chart -> Scatter Plot.
2. In the Chart Properties pane, for the "Y-axis Number", provide the analysis field.
3. For the "X-axis Number", click the "Set an expression" button to the right of the pulldown menu.
4. Paste the Arcade code at the end of this post into the "Expression" code block (make sure that the "Language" at the top is set to "Arcade").
5. Change the second line of the code to any desired direction. The direction is provided as degrees clockwise from north: 0 is north, 90 is east, 180 is south, and 270 is west.
6. Click OK. The directional trend scatter plot will be displayed in the Chart pane. You can click the "Set an expression" button again and change the direction, and the scatter plot will update to show the trend in the new direction.
7. To show the polynomial trend line in the scatter plot, check the "Show trend line" checkbox in the Chart Properties pane, choose "Polynomial" from the dropdown, and provide a desired "Trend Order".
// Input direction as clockwise degrees from north
var angleFromNorth = 0;
// Convert direction to counterclockwise radians from east
var adjustedAngleDegrees = 90 - angleFromNorth;
adjustedAngleDegrees = adjustedAngleDegrees % 360;
var angleInRadians = adjustedAngleDegrees * PI / 180;
// Return x-coordinate of rotated coordinate system
return Centroid($feature).X * Cos(angleInRadians) + Centroid($feature).Y * Sin(angleInRadians)
Posted 05-23-2024 07:02 PM
POST
Hi @NakkyEkeanyanwu,
I think the major confusion is that Dimension Reduction is not selecting a subset of the variables that you provide. Instead, it uses all variables to construct new "components", and each component is a weighted sum of all the variables. As a very simple example, say you have four variables (A, B, C, and D) and you want to create one component (reducing the dimension from four to one). The component might look something like this (I am making up these coefficients):
Component = 0.7*A + 0.2*B + 0.6*C - 0.1*D
In essence, the component uses all variables, and the weights (the coefficients) indicate how "important" each variable is in the component. These coefficients are the eigenvector of the component, and the associated eigenvalue indicates how much of the total variability of the four variables is captured in the component. Frequently, a large percentage of the total variability of all variables can be captured in just a few components, and this is what drives methods like the Broken stick and Bartlett's test. They try to find a compromise between minimizing the number of components and maximizing the amount of variability captured by the components.
Determining how many components to create is the most difficult part of Principal Component Analysis, so various methodologies are used to help you decide. In an ideal case, you see some components account for a large percent of variance (PCTVAR field), then a sudden drop in the percent. However, for your data, I don't really see this; the variability captured by each component seems to drop steadily, and I think this is why Bartlett's method is recommending a large number of components. That said, using 7 components certainly seems justifiable here as well. Really, you could justify any number between 3 and 28.
Regarding only 28 components explaining 100% of the variance, this means that two of the variables you provided are redundant: their information is fully accounted for by other variables. If I'm reading your screenshots correctly, you use total population as a variable, and you also use the populations of particular subgroups. If the populations of the subgroups add up to the total population (or very close to it), then there is redundancy, since the total population is captured by the sum of the subgroup populations. I suspect this is happening for two variables, resulting in 28 components that account for all of the variability.
Please let me know if you have any other questions.
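To make the "weighted sum" idea concrete, here is a small generic sketch in Python using made-up random data (this is standard PCA, not the Dimension Reduction tool itself):
import numpy as np
# Made-up data: 100 observations of four variables (A, B, C, D)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
# Standardize, then eigen-decompose the correlation matrix
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
# Sort from largest to smallest eigenvalue
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
# First component = weighted sum of all four variables
weights = eigenvectors[:, 0]       # the eigenvector (coefficients)
component1 = Xs @ weights          # one component value per observation
# Percent of total variability captured by each component (the PCTVAR analogue)
pct_var = 100 * eigenvalues / eigenvalues.sum()
print(np.round(weights, 2), np.round(pct_var, 1))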
Posted 04-30-2024 08:56 AM