POST

If you do your trend removal well, you shouldn't see a big difference between Universal and Ordinary/Simple kriging. As a general rule, we suggest Simple kriging because it supports Normal Score Transformations. The distinction between Ordinary/Simple and Universal is subtle and a bit confusing. The trend removal page essentially just changes your input data by removing the trend that you fit; the semivariogram is then fit to the detrended data. The difference is that Universal kriging goes back and re-fits a global trend model to the already-detrended data (it has to do this for the Universal kriging equations to work). If you did your trend removal well, there should be very little global trend left to re-estimate, and Ordinary/Simple and Universal kriging should give very similar results.
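To make the detrending idea concrete, here is a minimal numpy sketch (illustrative only, not Geostatistical Analyst code; all names and values are made up) of what trend removal does before the semivariogram is fit:

```python
import numpy as np

# Hypothetical coordinates and values with a strong north-south trend.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
z = 0.5 * y + rng.normal(0, 1, 50)  # trend plus noise, standing in for real data

# Fit a first-order (planar) trend, which is what the trend removal page does.
A = np.column_stack([np.ones_like(x), x, y])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

residuals = z - A @ coef  # the detrended data the semivariogram is fit to

# If the trend was removed well, re-fitting a trend to `residuals` recovers
# almost nothing, which is why Universal and Ordinary/Simple kriging end up
# nearly identical.
```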
08-06-2012 01:21 PM

POST

I've experienced this slow-down when doing data frame clipping. The issue for me was that the polygon had way too many vertices. If you use the Simplify Polygon tool on the clipping polygon, it should start working quickly again.
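A hedged arcpy sketch (the feature class names and tolerance are made up; check the tool's help for your version):

```python
import arcpy

# Cut down the vertex count of the clipping polygon; the tolerance is purely
# illustrative -- pick one appropriate to your map scale.
arcpy.cartography.SimplifyPolygon(
    "clip_polygon",             # hypothetical input clipping polygon
    "clip_polygon_simplified",  # output with far fewer vertices
    "POINT_REMOVE",             # fast vertex-removal algorithm
    "100 Meters")               # maximum allowable offset (illustrative)
```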
08-06-2012 07:39 AM

POST

We do not have spatio-temporal kriging in ArcGIS, so you won't be able to model the temporal correlation directly in the kriging. However, the Densify Sampling Network tool suggests new monitoring station locations based on (among other things) kriging standard errors. The tool can also take an optional weight raster, which lets you weight certain locations more heavily than others. I don't have a specific recommendation for how to build it, but you may want to model this weight raster on temporal trends in the data, as in the sketch below.
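One hypothetical starting point (plain numpy, not an ArcGIS tool; all data here is invented): compute a least-squares trend slope for each station's time series, then interpolate those slopes into a raster to use as the weight.

```python
import numpy as np

# Hypothetical data: rows are monitoring stations, columns are yearly values.
years = np.arange(2000, 2012)
measurements = np.random.default_rng(1).normal(size=(30, years.size))

# Slope of a least-squares line through each station's time series; a larger
# absolute slope means that station's values are changing faster over time.
slopes = np.polyfit(years, measurements.T, deg=1)[0]

# Interpolating |slope| into a raster (with IDW, say) and passing it as the
# weight raster would favor new stations in rapidly changing areas.
station_weights = np.abs(slopes)
```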
07-30-2012 10:19 AM

POST

Once you have the fishnet polygons, you can use the Subset Features gp tool to get a random sample of a particular size. Note: Subset Features writes the randomly sampled polygons to a new feature class. Depending on what you want to do with the sample, this may be an advantage or a disadvantage of the tool.
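A hedged arcpy sketch (names are made up, and the parameter order and keywords are recalled from the 10.x documentation, so verify against your version's help):

```python
import arcpy

# Randomly sample 100 of the fishnet polygons into a new feature class.
arcpy.ga.SubsetFeatures(
    "fishnet_polygons",   # hypothetical input feature class
    "fishnet_sample",     # new feature class holding the random sample
    None,                 # optional output for the unsampled remainder
    100,                  # sample size
    "ABSOLUTE_VALUE")     # treat 100 as a count rather than a percentage
```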
07-16-2012 01:11 PM

POST

Unless you have a good reason to prefer a particular model (for example, expert knowledge about the physics of these depths), a general rule is to choose the model that gives the lowest root-mean-square error. However, you also need to check the other model diagnostics, particularly the RMS-Standardized and the Normal QQ plot. Try lots of candidates with different semivariogram models, and choose the one that looks best. In my experience, a combination that works quite often is Simple kriging with a Normal Score Transformation and a K-Bessel semivariogram.
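If you end up comparing many candidates, you can script the comparison with the Cross Validation gp tool; a hedged sketch (the layer names are made up, and the result-object property names should be verified in your version's help):

```python
import arcpy

# Compare cross-validation diagnostics for candidate geostatistical layers
# saved out of the Wizard; layer names are hypothetical.
for layer in ["depth_sk_kbessel.lyr", "depth_ok_spherical.lyr"]:
    cv = arcpy.CrossValidation_ga(layer)
    # Look for the lowest root-mean-square error and a
    # root-mean-square-standardized value close to 1.
    print(layer, cv.rootMeanSquare, cv.rootMeanSquareStandardized)
```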
07-10-2012 08:49 PM

POST

The software will automatically ignore polygons with missing data; if the polygons are coded correctly, you shouldn't have to do anything. Also, be careful with extrapolation. Areal interpolation is based on simple kriging, so it is inherently bad at extrapolation: as you move away from the source polygons, the predictions converge to the mean value of the polygons. Read the two help topics we have on areal interpolation. The workflow topic shows a Rate (Binomial) example, but the workflow is basically the same for all three data types:
http://resources.arcgis.com/en/help/main/10.1/index.html#//0031000000q8000000
http://resources.arcgis.com/en/help/main/10.1/index.html#/Using_areal_interpolation_to_perform_polygon_to_polygon_predictions/0031000000qm000000/
07-10-2012 07:20 AM

POST

The formula for Binomial (Rate) areal interpolation is in this paper (it's free to download):
http://www.sciencedirect.com/science/article/pii/S1878029611000053
It accounts for varying population sizes by applying a correction to the empirical semivariogram so that polygons with larger populations exert more influence on the model.
07-09-2012 08:31 AM

POST

The best analysis method depends on a lot of things, particularly what kind of data you have. My first thought is Areal Interpolation, which is available now in ArcGIS 10.1. However, if you have a lot of covariates, you'll probably have better luck with a regression model.
07-05-2012 12:00 PM

POST

These are the two courses designed to help you learn interpolation with Geostatistical Analyst (Dan already posted the first one):
http://training.esri.com/gateway/index.cfm?fa=catalog.webCourseDetail&courseid=2052
http://training.esri.com/gateway/index.cfm?fa=catalog.webcoursedetail&courseid=2128
If you do those two courses in that order, you should have a pretty good idea of how to perform interpolation in ArcGIS. There's only so much we can teach in 3-hour web courses, but we packed in quite a lot. You might also find this one useful:
http://training.esri.com/gateway/index.cfm?fa=catalog.webCourseDetail&courseid=2053
06-14-2012 08:22 AM

POST

Aye, this is embarrassing. It's been so long since I used Indicator/Probability kriging that I forgot exactly how they work. For binary data, it doesn't make sense to do Probability kriging, and there's a bit more to Indicator kriging than I remembered. Set the Threshold value to 0.5; then, after creating your surface, classify areas with a predicted value below 0.5 as "0" and areas above 0.5 as "1". You can do this by symbolizing the geostatistical layer, or you can convert it to a raster and use the Con tool in Spatial Analyst.
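If you take the raster route, here's a hedged sketch (the layer and raster names are made up):

```python
import arcpy
from arcpy.sa import Con, Raster

arcpy.CheckOutExtension("Spatial")

# Export the geostatistical layer to a raster (GA Layer To Grid), then
# reclassify around the 0.5 threshold; all names here are illustrative.
arcpy.GALayerToGrid_ga("indicator_kriging_layer", "ik_surface")

binary = Con(Raster("ik_surface") > 0.5, 1, 0)  # 1 above 0.5, 0 otherwise
binary.save("ik_binary")
```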
06-11-2012 01:52 PM

POST

If you've done the Geostatistical Analyst tutorial, you shouldn't have any problems performing Indicator or Probability kriging. The process is the same. As for kriging on ordinal data, I know there has been research done in this area, but I'm not up-to-date, and we don't have that capability in Geostatistical Analyst.
06-11-2012 10:37 AM

POST

Indicator kriging and Probability kriging are both designed to work with binary data. Choose kriging and supply your binary dataset on the first page of the Geostatistical Wizard. On the second page, choose Indicator or Probability from the kriging types, then proceed as usual. The prediction surface will represent the probability that the binary variable is a 1.
http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#//00310000004n000000.htm
http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#/Understanding_probability_kriging/00310000004r000000/
06-11-2012 09:30 AM

POST

http://getthegistofit.blogspot.com/2012/04/pollution-exposure-risk-in-washington.html Give that a read. They start with Kernel Density, then they do post-processing on the results. I can't really comment on the legitimacy of the methodology, but it's a good place to start.
06-07-2012 01:26 PM

POST

If you just want a rough measure of relative pollution exposure, consider using the Kernel Density tool in Spatial Analyst. Sum up the pollution released from each location, and use it as the Population field. You won't be able to actually estimate the total exposure at a given location, but you'll be able to say that certain locations are more exposed than others. As for estimating the actual amount of exposure, I don't know how you would do that. You would probably need to talk to a physicist.
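A hedged arcpy sketch (the feature class and field names are made up):

```python
import arcpy
from arcpy.sa import KernelDensity

arcpy.CheckOutExtension("Spatial")

# Relative exposure surface: release sites weighted by the total pollution
# each one released over the year (summed beforehand into a field).
density = KernelDensity("release_sites", "total_released")
density.save("relative_exposure")
```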
06-07-2012 01:22 PM

POST

You need to ask yourself what an interpolation would mean for your data. If you used your data to predict a value at an unmeasured location, how would you interpret the value of the prediction? If your data is about the amount of pollution released, then the interpretation of the prediction would be "the amount of pollution released at this location over the year." But what if there is no factory at the new location? In the case of pollution release, it's only being released from particular locations. It doesn't really make sense to interpolate a variable that only occurs at discrete points on the map. If you want to make a map of pollution levels, your data needs to be random samples of pollution levels, not measurements of pollution release from discrete locations. There may be physics models that can predict pollution levels from data about pollution release, but ordinary kriging is not the way to do this.
06-07-2012 11:49 AM