I can't speak to how ESRI has the Nearest Neighbor Index (NNI) implemented, but the standard formulation of the statistic does not create a set of random points to test against. Like many point pattern statistics, the NNI uses an expected value as the random null, based on the number of observations per unit area of the extent; thus the sensitivity to changing the extent. The NNI can also be biased if the point process is inhomogeneous, since the expected value assumes a homogeneous Poisson (complete spatial randomness) process. Needless to say, from a spatial process standpoint there are obvious limitations to the NNI, but it is useful, in some cases, as an exploratory statistic, although I would be cautious drawing inference from it.
The NNI, without boundary correction, is calculated as follows:
D = sum( nndist(x) ) / N
Expected = 0.5 / SQRT(N / A)
NNI = D / Expected
Where: N = number of observations, nndist = vector of nearest neighbor distances, D = average nearest neighbor distance, A = area of the analysis extent, Expected = expected mean distance under the random null.
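To make the calculation concrete, here is a minimal sketch of the statistic in Python (the function name `nni` and the brute-force neighbor search are my own choices, not any particular package's implementation, and there is no boundary correction):

```python
import math

def nni(points, area):
    """Average Nearest Neighbor Index, no boundary correction.

    points: list of (x, y) tuples; area: area of the analysis extent (A).
    """
    n = len(points)
    # D: mean nearest-neighbor distance, found by brute force
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        nearest = min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(points)
            if j != i
        )
        total += nearest
    d = total / n
    # Expected mean distance under the random null: 0.5 / sqrt(N / A)
    expected = 0.5 / math.sqrt(n / area)
    return d / expected
```

For example, four points at the corners of a unit square (A = 1) each have a nearest neighbor at distance 1, so D = 1, Expected = 0.5 / sqrt(4) = 0.25, and the NNI is 4. Note that nothing in the function draws random numbers: the same points and area always return the same value.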
I have no idea how or if the ESRI implementation performs boundary correction.
So the answer to your question: "Why if you repeat the same analysis over and over again would you get the same numbers? Isn't part of the equation a random distribution of same numbers of points that would seemingly change the output numbers?"
No stochasticity is introduced into the statistic through a randomization process, so if everything stays fixed you should get exactly the same answer every time.