I'm interpolating a series of bathymetric points using kriging, and then exporting the result to a raster. I'm struggling to determine an output resolution that is actually supported by the characteristics of my input point data. Does anyone have any advice?

Two factors I'd like to take into account:

1. The density of the input points (such that increased density = increased resolution, and vice versa).
2. The 'complexity' of the seafloor (or other real-world surface), perhaps calculated as the variance of the z-values of the input points (in this case depth). A less complex surface (low variance) could be interpolated at a higher resolution because we'd have more confidence that z-values remain consistent between points, while a more complex surface (high variance) would be interpolated at a lower resolution to avoid introducing false precision.
RE factor 1, I read a great paper - Hengl, T. 2006. Finding the right pixel size. Computers & Geosciences. 32: 1283-1298 - that suggests "...the grid resolution should be at most half the average spacing between the closest point pairs...". I've been working off this calculation, but would welcome other ideas.
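For reference, this is roughly how I've been computing that rule of thumb. It's just a sketch: brute-force nearest-neighbour search in NumPy (a KD-tree would scale better for large surveys), and the synthetic points are only there to make it runnable.

```python
import numpy as np

def suggested_resolution(points_xy):
    """Half the mean nearest-neighbour spacing (Hengl 2006 rule of thumb).

    points_xy: (n, 2) array of projected x/y coordinates (e.g. metres).
    """
    # Full pairwise distance matrix; fine for a few thousand soundings,
    # use a KD-tree (scipy.spatial.cKDTree) for bigger datasets.
    d = np.linalg.norm(points_xy[:, None, :] - points_xy[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each point's zero self-distance
    mean_spacing = d.min(axis=1).mean()  # mean distance to the closest neighbour
    return mean_spacing / 2.0

# Example with synthetic soundings scattered over a 1 km x 1 km area
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, size=(500, 2))
print(f"suggested cell size: {suggested_resolution(pts):.1f} m")
```

For roughly uniform point spacing this gives a sensible cell size; for strongly clustered survey lines the mean nearest-neighbour distance can be misleadingly small, which is part of why I'm asking.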
RE factor 2, any ideas for how to use the variance of the input z-values as a scale factor for resolution? Has anyone heard of this being done before?
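To make the question concrete, here is the kind of thing I have in mind. It is purely a sketch: the reference z-range (`z_range_ref`) and the maximum coarsening factor (`max_factor`) are arbitrary tuning knobs I made up, not anything from the literature.

```python
import numpy as np

def variance_scaled_resolution(base_res, z, z_range_ref=10.0, max_factor=4.0):
    """Coarsen a base cell size as depth variability grows.

    base_res    : cell size from point density alone (e.g. the Hengl rule)
    z           : array of depth values at the input points
    z_range_ref : std. dev. treated as "maximally complex" (my assumption)
    max_factor  : how much coarser the most complex surface gets (my assumption)
    """
    rel = min(np.std(z) / z_range_ref, 1.0)   # 0 = flat seabed, 1 = very variable
    return base_res * (1.0 + rel * (max_factor - 1.0))

# A flat seabed keeps the base resolution; a rough one is coarsened
flat = np.full(100, 50.0)
rough = np.array([0.0, 20.0] * 50)
print(variance_scaled_resolution(10.0, flat))   # unchanged
print(variance_scaled_resolution(10.0, rough))  # coarser
```

I suspect a global variance is too blunt (it ignores spatial structure entirely), so maybe a local variance in a moving window, or the variogram itself, would be a better complexity measure. That's really what I'm asking about.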
I'd be glad to hear of any other factors I should take into consideration. Thank you.