I am trying to do an uncertainty analysis on some bathymetric survey data. We have both single-beam and multi-beam bathymetry, and I want to thin the points brought in from .xyz files from 0.25 m spacing to 5 m, 10 m, and so on; essentially, I am looking to plot divergence as a function of point density. I have been experimenting but cannot find what seems like it should be a fairly simple operation: take the mean of a group of points (spaced about 25 cm apart) within each 5x5 m cell (about 150 cells, created as polygons with a fishnet), then repeat at coarser resolutions, for about 80 survey sites and both survey types.
What I have for my test site is the raw survey data as a point feature class (created with a Python script that reads a folder of .xyz files and writes each out as a point feature class in a file geodatabase), a fishnet of 5x5 m polygons clipped to the survey extent, and a Feature To Point layer with a point at the center of each polygon. I am thinking there should be some way to populate the z-value field I added to the Feature To Point layer with the mean value of the raw points falling within each polygon.
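In case it helps clarify what I am after, the core operation (group ~0.25 m points into square cells of a chosen size and average their z-values) can be sketched in plain Python outside of ArcGIS. This is just an illustration, not my actual workflow: the function name, the grid origin, and the sample coordinates are all made up, and in practice the cells would come from the fishnet rather than being computed on the fly.

```python
def grid_mean_z(points, cell_size, origin=(0.0, 0.0)):
    """Average z-values of xyz points falling in square cells of cell_size.

    points: iterable of (x, y, z) tuples.
    Returns a dict mapping (col, row) cell indices to mean z.
    """
    sums, counts = {}, {}
    for x, y, z in points:
        # Index of the cell containing this point, relative to the grid origin.
        key = (int((x - origin[0]) // cell_size),
               int((y - origin[1]) // cell_size))
        sums[key] = sums.get(key, 0.0) + z
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

# Hypothetical example: four closely spaced soundings, all inside the
# first 5x5 m cell, averaged to a single value.
pts = [(0.0, 0.0, 10.0), (0.25, 0.0, 12.0), (0.25, 0.25, 11.0), (4.9, 4.9, 13.0)]
print(grid_mean_z(pts, 5.0))  # → {(0, 0): 11.5}
```

Rerunning the same function with cell_size=10.0 would give the next coarsening step, which is the "5 m, 10 m, etc." sequence described above.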
Thanks a bunch,