I have been using the Gaussian Geostatistical Simulations (GGS) tool to incorporate uncertainty into my final kriged products. The data I'm working with are trend values of particulate matter, which tend to be highly variable in areas with wildfires, such as the Northwest. The first figure shows the initial Simple Kriging surface along with all of my site values (n = 160). Each site has its own uncertainty, so I have been running conditional GGS realizations to incorporate that uncertainty (per the conversation here: Kriging input with uncertainties). The second figure shows the mean output after 500 realizations. What I'm trying to understand is: why do the maximum kriged values shift away from the 0.97 and 0.68 sites (in Idaho and Montana) after running the simulations? And why does Figure 2 show a "hot spot" in a region (over northern Nevada) with almost no data points?
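In case the workflow is unclear: the surface in Figure 2 is just the per-cell mean across the 500 simulated rasters. Something along these lines (the workspace path and the "sim" raster prefix are placeholders, not my actual names) would produce the same kind of summary with Spatial Analyst:

```python
# Placeholder paths/prefix; per-cell mean of the simulated rasters,
# the same kind of summary shown in Figure 2.
import arcpy
from arcpy.sa import CellStatistics

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\ggs_output"       # folder where the GGS realizations were written

sim_rasters = arcpy.ListRasters("sim*")      # the 500 simulated surfaces
mean_surface = CellStatistics(sim_rasters, "MEAN", "DATA")
mean_surface.save(r"C:\ggs_output\ggs_mean.tif")
```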
I've also attached the standard deviation values at each site (Figure1_standardeviation), if that could be of help.
Thank you.
There seems to be a cluster of standard deviation values around 0.42 in that region. Any reason why? That area isn't as poorly represented as other areas.
That area has higher standard deviations because of wildfires in the region combined with the infrequent sampling interval (one sample every 3 days). Not every site sees every wildfire, and a fire can be missed entirely if no sample is taken on that day. As a result, trends in that region tend to have higher standard deviations/uncertainty.
I'll add to the original post that these are trend values. Sorry if there was any confusion. Thanks for the help!
Crystal
You are supplying Input Conditioning Features to the GGS tool?
-Steve
Yes, I supply the same variable used in the original Simple Kriging layer as the Conditioning Features, and I put the standard deviation values in the Conditioning Measurement Error field.
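To be concrete, my setup looks roughly like this when run from Python. The layer, field, and path names below are placeholders rather than my real ones, and the keyword parameter names follow my reading of the tool's help, so double-check them against your version:

```python
# Rough sketch of the GGS call (names are placeholders, not the actual data):
import arcpy

arcpy.CheckOutExtension("GeoStats")

arcpy.ga.GaussianGeostatisticalSimulations(
    in_geostat_layer="PM_trend_SimpleKriging",        # geostatistical layer from the original Simple Kriging
    number_of_realizations=500,
    output_workspace=r"C:\ggs_output",
    output_simulation_prefix="sim",
    in_conditioning_features="pm_trend_sites",        # the same 160 monitoring sites
    conditioning_field="TREND",                       # same variable used in the Simple Kriging
    raster_stat_type="MEAN",                          # write the mean surface across realizations
    conditioning_measurement_error_field="STD_DEV",   # per-site standard deviation
)
```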