POST
The Trend Analysis ESDA tool has horizontal and vertical slider bars that allow you to rotate the graph. As you move these sliders, you'll see the projected points (the green and blue points) change accordingly, and the trend lines will change to fit the new projected points. By aligning the graph at different angles, you can see how the trend changes in different directions. When you use trend removal in the Geostatistical Wizard, these directional trends will be automatically detected using local polynomial interpolation, and the software will do its best to remove them before fitting the semivariogram. One caveat is that it is often difficult (if not impossible) to differentiate trend, autocorrelation, and anisotropy; they can all present themselves in ways that look identical. Does that answer your question?
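To make the idea concrete, here is a minimal pure-Python sketch of fitting and removing a first-order directional trend. This is a global polynomial fit on made-up gridded data, not the wizard's local polynomial interpolation; the residuals are what the semivariogram would then be fit to.

```python
import random

random.seed(42)

# Hypothetical gridded samples with a west-to-east trend: z = 2 + 0.5*x + noise
pts = [(x, y, 2.0 + 0.5 * x + random.gauss(0.0, 1.0))
       for x in range(10) for y in range(10)]

xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
zs = [p[2] for p in pts]
mx, my, mz = sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs)

# On a full grid, x and y are orthogonal after centering, so the
# first-order trend coefficients reduce to two simple regression slopes.
b1 = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / sum((x - mx) ** 2 for x in xs)
b2 = sum((y - my) * (z - mz) for y, z in zip(ys, zs)) / sum((y - my) ** 2 for y in ys)
b0 = mz - b1 * mx - b2 * my

# Residuals after trend removal: fit the semivariogram to these.
residuals = [z - (b0 + b1 * x + b2 * y) for x, y, z in pts]
print(b1, b2)   # b1 near 0.5, b2 near 0: the trend is east-west only
```

Because the trend here was put in by construction, the fit recovers it almost exactly; with real data you would not know in advance how much of the surface is trend versus autocorrelation, which is the caveat above.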
11-23-2011 05:40 AM

POST
The optimal model parameters completely depend on your data; there isn't any one technique that is guaranteed to work. Have you tried the Optimize Model button at the top? Try using the K-Bessel or Stable semivariogram types, then press the Optimize Model button. Also, try changing Variable to Semivariogram and optimize the model again. If that doesn't work, you may want to try removing trends (the option appears on the Wizard page right before the semivariogram).
11-21-2011 05:26 AM

POST
Use the Neighborhood Selection tool, give it the same searching neighborhood parameters you used in the IDW model, and use the point feature class you used in IDW for the input points. For the x and y coordinates, use the center of the raster cell. This will make a layer of the points that were used as neighbors for that cell. If you just want to do IDW with inverse distance squared as the weight, note that this is what IDW does by default.
11-18-2011 09:44 AM

POST
We do not have that option. We decided long ago that the best way to proceed with kriging is to start with ESDA, then proceed to variography. Outliers should be detected and removed in the ESDA step.
11-17-2011 07:24 AM

POST
For bimodal data like this, I suggest using Simple kriging with a Normal Score Transformation. Change the "Approximation method" to Gaussian Kernels. Also, if your data is clustered, consider cell declustering before the normal score transformation.
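For reference, here is a minimal sketch of the basic rank-based normal score transform. This is the simple empirical version, not the Gaussian-kernels approximation mentioned above, and the sample values are made up:

```python
from statistics import NormalDist

def normal_score_transform(values):
    """Rank-based normal score transform: replace each value with the
    standard-normal quantile of its empirical rank (ties kept in input order)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    scores = [0.0] * n
    for rank, i in enumerate(order, start=1):
        scores[i] = nd.inv_cdf((rank - 0.5) / n)   # Hazen plotting position
    return scores

# Made-up bimodal sample (two clusters of values):
data = [1.0, 2.0, 2.1, 3.0, 10.0, 11.0, 11.2, 12.0]
scores = normal_score_transform(data)
print(scores)   # roughly symmetric about 0, no longer bimodal
```

Kriging is then done on the scores, and predictions are back-transformed through the inverse mapping; the Gaussian-kernels option smooths this empirical mapping so the back-transform behaves better in the tails.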
11-16-2011 06:41 AM

POST
The x-axis units are the map units. So, if your data is in meters, the x-axis is in meters. If the scale is in the form (Distance) x 10^(-2), then this means that if you see 0.12 on the x-axis, it corresponds to 12 meters, because 12 x 10^(-2) = 0.12.
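In code form, this is just undoing the scale factor (a trivial sketch, assuming the axis label means the plotted value is the distance multiplied by 10^(-2)):

```python
# The axis shows (Distance) x 10^(-2), so divide the plotted value
# by 10^(-2) to recover the distance in map units.
axis_value = 0.12
distance = round(axis_value / 10 ** -2, 6)   # rounded to absorb float noise
print(distance)   # 12.0 map units (meters, if the data is in meters)
```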
11-11-2011 06:06 AM

POST
The optimize-power option finds the power value that results in the lowest root-mean-square error. You can emulate this by iterating through power values in your script. With each iteration, use the Cross Validation tool to find the RMS, and see which value results in the lowest RMS.
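A bare-bones sketch of that loop, using a hypothetical pure-Python IDW and leave-one-out cross validation in place of the actual geoprocessing tools (the sample points and candidate power range are made up):

```python
import math

def idw_predict(known, target, power):
    """Inverse-distance-weighted prediction at `target` from (x, y, z) points."""
    num = den = 0.0
    for x, y, z in known:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return z          # exact hit on a data point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

def loo_rms(points, power):
    """Leave-one-out cross-validation root-mean-square error."""
    sq = 0.0
    for i, (x, y, z) in enumerate(points):
        others = points[:i] + points[i + 1:]
        err = idw_predict(others, (x, y), power) - z
        sq += err * err
    return math.sqrt(sq / len(points))

# Made-up sample points (x, y, value):
pts = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0), (1.0, 1.0, 3.0),
       (2.0, 2.0, 5.0), (0.5, 0.5, 2.0), (1.5, 1.0, 3.2), (1.0, 1.5, 3.1)]

candidates = [p / 10 for p in range(5, 41)]        # powers 0.5 .. 4.0
best_power = min(candidates, key=lambda p: loo_rms(pts, p))
print(best_power, loo_rms(pts, best_power))
```

In a real script you would replace `idw_predict`/`loo_rms` with runs of the IDW and Cross Validation tools, keeping the same loop-and-compare structure.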
11-10-2011 07:01 AM

POST
Good to hear. If anything comes up, feel free to ask more questions.
11-07-2011 08:23 AM

POST
We have a tool to do exactly what you need. It's called Densify Sampling Network, and it's in the Geostatistical Analyst toolbox, under the Sampling Network Design toolset. You will use your kriging layer as input, then specify the number of new locations for lowering the overall prediction standard errors. It will create a point feature class that determines the best new locations for monitoring sites. However, this technique attempts to minimize the overall standard errors; it won't give preference to particular locations (like cities).

If you're only interested in low standard errors near cities of interest, there's another technique you can try. The trick with this technique is that the standard errors only depend on the locations of the data points, not the data values themselves (unless you applied a transformation). In other words, if you add a new point and give it a value of 1, the standard error surface will be the same as if you used a value of 1 million (this is obviously not true for the prediction surface). So, you can create artificial new points and test how they will affect the standard error surface. The easiest way to do this is to copy your original point feature class, then append new points near the cities you're interested in (you can give them any data value you want). Then use the Create Geostatistical Layer tool: give it the original kriging layer you created with the original points, then provide the appended point feature class for the input dataset(s). Run the tool, then look at the standard error surface and decide whether it's acceptable.

But remember, the approach in the previous paragraph only works if you did not apply a transformation when doing the kriging. If you did, then the standard errors actually do depend on the data values, so you can't just make up new data values.

As for your idea, I don't think it will work. At least, I don't see how it would work, but maybe I'm just missing something.
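To see why the standard errors depend only on the locations, note that the kriging variance formula never touches the measured values. A small self-contained sketch (simple kriging with a made-up exponential covariance model and made-up coordinates, not Geostatistical Analyst's implementation):

```python
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def sk_variance(locs, target, sill=1.0, rng=3.0):
    """Simple-kriging variance at `target`. Note the inputs: data LOCATIONS
    only, never the measured values (exponential covariance, made-up params)."""
    def cov(a, b):
        h = math.hypot(a[0] - b[0], a[1] - b[1])
        return sill * math.exp(-3.0 * h / rng)
    C = [[cov(a, b) for b in locs] for a in locs]
    c0 = [cov(a, target) for a in locs]
    lam = solve(C, c0)
    return sill - sum(l * c for l, c in zip(lam, c0))

locs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
v_before = sk_variance(locs, (0.5, 0.5))
# Append an artificial point near the target; its (unknown) value is irrelevant:
v_after = sk_variance(locs + [(0.6, 0.5)], (0.5, 0.5))
print(v_before, v_after)   # adding the nearby point lowers the variance
```

This is exactly the property the artificial-points trick exploits: you can test candidate monitoring locations before any value is ever measured there.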
11-04-2011 09:04 AM

POST
Good questions. For your first question, the simplest explanation is that these weights are what the kriging equations indicate is the best linear unbiased predictor for the new location, but I realize that isn't very helpful conceptually. Probably the best way to think about this phenomenon is that a point that receives a negative weight does not provide useful or unique information; in other words, the information that the point provides is already contained in other neighboring points.

For your second question, I think the problem is that you're treating semivariances and standard errors as the same thing (i.e., one is the square of the other), but they're actually very different. Semivariances are just squared differences between pairs of measured values that are binned together and averaged by distance. The prediction standard errors come from the kriging equations, which require a semivariogram in order to calculate. So the semivariances and the prediction standard errors are related, but the relationship is complicated and indirect.

If your goal is to get all the prediction standard errors under 20 cm, adjusting the range (the x-axis of the semivariogram) and other semivariogram parameters generally is not going to work. To reduce standard errors, you need to take more samples in the areas of high standard errors. If you're seeing the semivariances take on roughly the same values as the prediction standard errors of the output, it's probably just a coincidence; there's no mathematical reason that they would be the same.

Hope that helps. Feel free to ask more questions if that wasn't clear.
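A tiny illustration of how a screened point can receive a negative weight: simple kriging with two collinear data points and a hypothetical Gaussian covariance model (all numbers made up). The near point carries essentially all of the far point's information, and the far point's weight goes negative:

```python
import math

# Gaussian covariance c(h) = exp(-h^2). Data at x=1 and x=2 on a line,
# prediction location at x=0, so the x=1 point screens the x=2 point.
def cov(h):
    return math.exp(-h * h)

c11, c12, c22 = cov(0.0), cov(1.0), cov(0.0)   # data-to-data covariances
c10, c20 = cov(1.0), cov(2.0)                  # data-to-prediction covariances

# Solve the 2x2 simple-kriging system [c11 c12; c12 c22] [w1 w2] = [c10 c20]
det = c11 * c22 - c12 * c12
w1 = (c22 * c10 - c12 * c20) / det
w2 = (c11 * c20 - c12 * c10) / det
print(w1, w2)   # w1 is positive; w2 comes out negative (screening)
```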
11-04-2011 07:04 AM

POST
This is a deceptively complicated question. Let's try to narrow things down: are you trying to predict the proportion of households (in each block group) that will receive this benefit in 2012, and to predict this proportion from data taken in 2010? Assuming this is correct, you need to build some kind of model for making these predictions. The statistical errors of this model will dictate the appropriate level of confidence (doubling the standard error is a common method). As for how you would build this model, I really don't have any good recommendations. You need to talk to a statistician who specializes in population dynamics over time.
10-17-2011 11:45 AM

POST
As for which is better, it's really a judgment call. Personally, I still like the model on the left because both the root-mean-square and the average standard error are lower than in the model on the right. A large difference between the RMS and the average standard error can indicate model problems, but a root-mean-square standardized of 0.85 indicates that the problem is not severe in this case. The one point on the x-axis of the LPI model is also concerning. When we change the graphic, we'll find an example where a lower RMS clearly does not imply a better model.
10-07-2011 08:57 AM

POST
For block predictions, it's just an areal average. Block standard errors are more complicated than that, and I think that's what your colleague is getting at. GA Layer to Grid performs block predictions from a kriging model you've already made. It does not calculate block standard errors, which is the big difference from true "block kriging". True block kriging tries to reduce calculations by incorporating the areal units into the kriging step, and it handles the standard errors for the polygons differently (the calculation of block standard errors is not a simple areal average of the point standard errors). Sorry for the terminology issues; I probably should have consistently said "block prediction" rather than "block kriging." Again, we decided not to implement full block kriging because geostatistical simulations do the same job a lot better.
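As a concrete illustration of the areal-average part (toy numbers, not tool output):

```python
# A block prediction is the average of the point predictions over a
# discretization of the block. Hypothetical 2x2 discretization of one polygon:
point_predictions = [3.1, 3.4, 2.9, 3.2]
block_prediction = sum(point_predictions) / len(point_predictions)
print(block_prediction)   # about 3.15

# The block standard error, by contrast, is NOT the average of the point
# standard errors: it also depends on the covariances between the
# discretization points, which is what true block kriging accounts for.
```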
10-07-2011 08:50 AM

POST
So you already have a polygon feature class with a field indicating the slope? If that's the case, right-click the layer in ArcMap and open its properties. Under the Symbology tab, click "Quantities" on the left, then "Graduated colors." For the Value field, choose the field containing slopes, then select a Color Ramp that you like. Then click Classify, change Classes to 1 and Method to Manual, click the Break Value on the right, and type 15. Click OK. This should symbolize all polygons below 15% slope as one color and all polygons above 15% as another color. Does that answer your question?
10-07-2011 07:14 AM

POST
Good to hear you found the tool. I highly recommend using it to aggregate predictions to polygons. When researchers talk about block kriging, they usually mean something a lot more complicated than just averaging to polygons: typically, analyzing how the point-to-point semivariogram compares to the polygon-to-polygon semivariogram after doing areal averaging of the discretized surface. But if you just want to aggregate kriging predictions to polygons, you don't really need to worry about all that.
10-07-2011 07:00 AM