Hello!
For my master's thesis, I'm using the High/Low Clustering (Getis-Ord General G) tool to analyze the similarity in spatial value distribution between a raster showing the accumulation of anomalously low freezing temperatures sustained during Spring 2007 and a time series of MODIS EVI rasters from the 2007 growing season showing vegetative productivity before and after the freeze. These two inputs have been combined into a time series of "association indexes" (two sets of time series, actually: one for freeze vs. average EVI, and another for freeze vs. the 2007 EVI anomaly), and I have applied the Getis-Ord tool to each of these rasters.
The statistics returned for each of these rasters are Observed General G, Expected General G, Variance, Z-Score, and p-value. In all the literature I've been able to obtain about Getis-Ord (which includes Lauren's explanations from the old forum, the Resource Center documentation on the tool published by ESRI, and the original Getis and Ord article from 1992), I haven't been able to find anything that discusses the meaning of the General G statistic itself. All I've been able to find is that you use the Z-Score and p-value to assess the degree of clustering vs. dispersion of high/low values. And my results do make sense when interpreted using only the Z-Score, even though the observed and expected General G values returned were 0.000004 and 0.000005 throughout the whole set, with Z-scores ranging from -5 to 40.
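For context on why tiny G values can still yield large Z-scores: as I understand it, the tool standardizes the difference between the observed and expected General G by the square root of the reported variance, so the Z-score depends only on that ratio, not on the absolute size of G. A minimal sketch of that relationship (the G values are of the same magnitude as mine, but the variance here is a made-up illustrative number, not from my output):

```python
import math

def general_g_z_score(g_observed, g_expected, variance):
    """Standardize the General G statistic into a z-score:
    z = (G_observed - E[G]) / sqrt(Var[G])."""
    return (g_observed - g_expected) / math.sqrt(variance)

# Illustrative: a difference of 1e-6 between observed and expected G,
# with a (hypothetical) variance of 1e-14, gives |z| = 10.
z = general_g_z_score(0.000004, 0.000005, 1e-14)
print(z)  # -10.0
```

So a difference in the sixth decimal place is perfectly capable of producing the Z-scores I'm seeing, as long as the variance is correspondingly tiny.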
So, my question is: what exactly does the General G statistic mean? I know it typically ranges from 0 to 1, but what can I understand directly from the General G statistic itself that I cannot infer indirectly from its associated Z-Score?
Thanks very much to anyone who can offer help or insight! It is greatly appreciated!
Karl