I recently watched the video on "performing proper density analysis," and overall it is a solid tutorial that introduces a systematic procedure for this type of analysis. However, I have serious concerns about the proposed methodology for selecting the distance bandwidth. The idea of incremental autocorrelation is sound and well supported in the literature, although it is usually done using a correlogram. My problem lies with aggregating the data so that the random field is represented as counts.
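For readers unfamiliar with the correlogram approach mentioned above, here is a minimal numpy sketch of a distance-band Moran's I correlogram. The function names, the binary (0/1) weights, and the Euclidean distance bands are my own assumptions for illustration, not the tutorial's method:

```python
import numpy as np

def morans_i(values, weights):
    # Global Moran's I for a real-valued variable under a
    # binary spatial weights matrix with a zero diagonal.
    z = values - values.mean()
    w_sum = weights.sum()
    if w_sum == 0:
        return np.nan  # no neighbors at this bandwidth
    return values.size * (z @ weights @ z) / (w_sum * (z @ z))

def correlogram(coords, values, bands):
    # Moran's I evaluated over a sequence of distance bands
    # (inclusive upper bounds) -- i.e. incremental autocorrelation.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    return [morans_i(values, ((dist > 0) & (dist <= b)).astype(float))
            for b in bands]
```

Plotting the returned values against the bands gives the correlogram; the key point is that `values` here must be a real-valued attribute measured at each location, not a count manufactured by clustering.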
To apply the proposed method you must first apply some sort of spatial clustering, which is inherently distance based. In doing so you condition the autocorrelation of the data on the distance criterion used to cluster the points, and the resulting counts embed that distance relationship. In addition, these aggregated counts do not represent a true random field. When the goal is to understand the spatial structure of the data, this becomes a chicken-and-egg argument: you cannot cluster your data and then expect an unbiased estimate of the spatial structure.

Moran's I also has assumptions that can be violated. First and foremost, the values being tested must represent a real-valued random field. By representing them as counts resulting from an aggregation, you are almost certainly violating this assumption. Ripley (1991) goes as far as to state that Moran's I is invalid without a continuous random field. I admit I am still undecided on that point, but I do know that aggregated count data does not represent a random field variable and is not appropriate for Moran's I. ESRI should consider adding something like the F-hat statistic, which can be used on unmarked data. An alternative for testing distance bandwidths on unmarked data that is already available in the Spatial Statistics Toolbox is Ripley's K statistic.
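To make the alternative concrete, here is a naive numpy sketch of the Ripley's K estimator operating directly on unmarked point coordinates, so no aggregation to counts is needed. This is a bare-bones version with no edge correction (so it is biased near the study-area boundary); the function name and arguments are mine, not the Toolbox's API:

```python
import numpy as np

def ripleys_k(coords, radii, area):
    # Naive K-hat(r) = (area / n^2) * #{ordered pairs (i, j), i != j,
    # with d_ij <= r}, for an unmarked point pattern in a region of
    # known area. No edge correction; illustration only.
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    intensity = n / area
    return np.array([((dist > 0) & (dist <= r)).sum() / (n * intensity)
                     for r in radii])
```

Under complete spatial randomness K(r) is approximately pi * r^2, so departures from that curve across a range of radii indicate clustering or dispersion at those distances, which is exactly the bandwidth question without first clustering the data.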
Ripley, B.D. (1991). Statistical Inference for Spatial Processes. Cambridge University Press.