I have been using the Gaussian Geostatistical Simulation (GGS) tool to incorporate uncertainty into my final kriging products. The data I'm looking at are trend values of particulate matter, which tend to be highly variable in areas with wildfires such as the Northwest. The first figure shows the initial Simple Kriging surface along with all of my site values (n = 160). Each site has its own uncertainty, so I have been using conditional GGS runs to incorporate it (per the conversation here: Kriging input with uncertainties). The second figure shows the mean output after 500 GGS runs. What I'm trying to understand is: why do the maximum kriged values shift away from the 0.97 and 0.68 sites (in Idaho and Montana) after running the simulations? And why does Figure 2 show a "hot spot" over northern Nevada, a region with almost no data points?
I've also attached the standard deviation values at each site (Figure1_standardeviation), in case that helps.
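For reference, here is a minimal numpy sketch of the perturb-and-krige loop I'm describing (not the ArcGIS GGS tool itself): each realization draws site values from their individual uncertainties, kriges them, and the realizations are averaged at the end. All of the numbers below (site locations, values, standard deviations, and the exponential covariance model) are invented stand-ins, not my actual data or fitted variogram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the monitoring sites: locations, trend
# values, and per-site standard deviations (all invented for this demo).
sites = rng.uniform(0, 100, size=(12, 2))
values = rng.uniform(-0.5, 1.0, size=12)
site_sd = rng.uniform(0.05, 0.3, size=12)

# Prediction grid (25 x 25 nodes over the same toy domain).
gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])


def cov(h, sill=0.25, len_scale=30.0):
    """Assumed exponential covariance model; substitute your fitted one."""
    return sill * np.exp(-h / len_scale)


def simple_krige(obs, mean):
    """Simple kriging of `obs` at the site locations onto `grid`,
    using the known (assumed) mean."""
    d_ss = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
    K = cov(d_ss)                        # site-to-site covariances
    d_gs = np.linalg.norm(grid[:, None] - sites[None, :], axis=-1)
    k = cov(d_gs)                        # grid-to-site covariances
    w = np.linalg.solve(K, k.T)          # kriging weights, (n_sites, n_grid)
    return mean + w.T @ (obs - mean)


# Per-realization loop: perturb each site value by its own standard
# deviation, krige, and accumulate (the question uses 500 realizations;
# fewer here to keep the demo quick).
n_real = 200
acc = np.zeros(len(grid))
for _ in range(n_real):
    obs = values + rng.normal(0.0, site_sd)
    acc += simple_krige(obs, mean=values.mean())

mean_surface = (acc / n_real).reshape(gx.shape)
```

Averaging many realizations like this smooths local extremes toward the assumed mean, which is the behavior I'm seeing around the Idaho and Montana maxima.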