I'm trying to understand how these two parameters (search radius and cell size) are useful for kernel density surfaces.
I think the method used to calculate the search radius matters, not just the radius value itself. The tool documentation describes the default search radius in terms of the standard distance of the point pattern. If you use too small a distance, you can get a noticeably different pattern, e.g. isolated peaks or rings around individual points.
This approach to calculating a default radius generally avoids the "ring around the points" phenomenon that often occurred with sparse datasets.
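To make the "standard distance" idea concrete, here is a minimal Python sketch of a Silverman-style rule of thumb that derives a default search radius from the standard distance and the median distance to the mean center. The specific coefficients (0.9, the n^-0.2 factor, the ln 2 term) are illustrative assumptions and may not match what your software actually computes.

```python
# Hypothetical sketch of a standard-distance-based default search radius
# (a Silverman-type rule of thumb); treat the constants as assumptions,
# not the exact formula any particular tool implements.
import numpy as np

def default_search_radius(xy):
    """xy: (n, 2) array of point coordinates in projected map units."""
    n = len(xy)
    mean_center = xy.mean(axis=0)
    # Distances of each point from the mean center
    d = np.linalg.norm(xy - mean_center, axis=1)
    # Standard distance: RMS distance from the mean center
    sd = np.sqrt((d ** 2).mean())
    # Median distance to the mean center (more robust to outliers)
    dm = np.median(d)
    # Silverman-style rule combining the two dispersion measures
    return 0.9 * min(sd, np.sqrt(1.0 / np.log(2)) * dm) * n ** (-0.2)

rng = np.random.default_rng(0)
pts = rng.normal(loc=[500_000, 4_200_000], scale=750, size=(200, 2))
print(f"default search radius ~ {default_search_radius(pts):.1f} map units")
```

The point of a rule like this is that the radius scales with how dispersed the pattern is and shrinks slowly as the number of points grows, which is why the default behaves more gracefully on sparse datasets than a hand-picked small radius.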
The cell size is going to determine how the result is represented. If the cell size is too coarse (i.e. you use the default 1/250 of the width/height max), you may not like the appearance of your results and/or you may be overgeneralizing the actual pattern. There is no definitive rule; you need to consider how sparse your point pattern is (i.e. the number of points and whether they are distributed widely or clustered into groupings). If your observation points are highly clustered, don't expect the results for the space between clusters to be very meaningful.
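To see how the cell size affects the output independently of the search radius, here is a rough sketch that evaluates the same kernel density surface on a coarse grid (the 1/250-of-extent default mentioned above) and on a finer one. The quartic kernel and the 1/250 rule are assumptions carried over from the discussion, not a reproduction of any particular tool.

```python
# Hedged sketch: same points, same radius, two cell sizes.
# The quartic (Epanechnikov-squared) kernel is chosen only for illustration.
import numpy as np

def kde_surface(xy, radius, cell_size):
    xmin, ymin = xy.min(axis=0) - radius
    xmax, ymax = xy.max(axis=0) + radius
    xs = np.arange(xmin, xmax, cell_size)
    ys = np.arange(ymin, ymax, cell_size)
    gx, gy = np.meshgrid(xs, ys)
    density = np.zeros_like(gx)
    for px, py in xy:
        d2 = (gx - px) ** 2 + (gy - py) ** 2
        # Quartic kernel: nonzero only within the search radius
        density += np.clip(1.0 - d2 / radius**2, 0.0, None) ** 2
    # Scale so each point contributes unit mass over its kernel disk
    return density * 3.0 / (np.pi * radius**2)

rng = np.random.default_rng(1)
pts = rng.normal(scale=500, size=(50, 2))
extent = pts.max(axis=0) - pts.min(axis=0)
default_cell = max(extent) / 250      # coarse, default-style cell size
fine_cell = default_cell / 2          # finer grid for comparison
for cs in (default_cell, fine_cell):
    surf = kde_surface(pts, radius=800, cell_size=cs)
    print(f"cell size {cs:7.2f} -> grid {surf.shape}, peak density {surf.max():.6f}")
```

Halving the cell size quadruples the number of cells but does not add any information beyond what the search radius already smoothed in; it only changes how finely that smoothed surface is sampled, which is why a coarse grid can look blocky while a very fine grid mostly costs storage and processing time.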