I hope the answer does not require Geostatistical Analyst!

To calculate these statistical values you have to first sort the values by ascending order.

Q2 is the median: the total number of values divided by two gives you the position of the middle value in terms of n. So if you had 100 values, the median is the (value at position 50) plus the (value at position 51), divided by 2. If you had 101 values, the median would be the value at position 51.
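That rule can be written in a few lines of plain Python (no Geostatistical Analyst needed); a minimal sketch:

```python
def median(values):
    """Median (Q2): sort, then take the middle value, or the
    mean of the two middle values when the count is even."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2:                              # odd count: single middle value
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2.0     # even count: average the two middle values

print(median([3, 1, 2]))     # -> 2
print(median([4, 1, 3, 2]))  # -> 2.5
```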

I have several hundred shape files with values that I want to apply this calculation to.

I would like to use a combination of Python and ModelBuilder to calculate these, calculate an upper and lower "fence", and then find the values that fall above and below these "fences".
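Assuming the usual Tukey definition of the fences (Q1 - 1.5*IQR and Q3 + 1.5*IQR — the question doesn't specify, so treat the convention as an assumption), the whole fence-and-outlier step can be sketched in plain Python:

```python
def quartiles(values):
    """Q1, Q2, Q3 by splitting the sorted data at the median
    (one common convention; other conventions interpolate differently)."""
    s = sorted(values)
    n = len(s)
    def med(seq):
        m = len(seq) // 2
        return seq[m] if len(seq) % 2 else (seq[m - 1] + seq[m]) / 2.0
    lower = s[:n // 2]         # half below the median
    upper = s[(n + 1) // 2:]   # half above the median
    return med(lower), med(s), med(upper)

def fences(values, k=1.5):
    """Tukey fences: Q1 - k*IQR and Q3 + k*IQR (k=1.5 is the usual choice)."""
    q1, _, q3 = quartiles(values)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
low, high = fences(data)
outliers = [v for v in data if v < low or v > high]
print(outliers)   # -> [100]
```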

I had the workflow all worked out, until I tried to create a new sequential number field based on the ascending sort.

I found this code:

def autoIncrement(start=0, step=1):
    i = start
    while 1:
        yield i
        i += step

incrementCursor = arcpy.UpdateCursor(table_name)  # There is no guarantee of order here
incrementer = autoIncrement(10, 2)
for row in incrementCursor:
    row.setValue(field, incrementer.next())  # Note use of next method
    incrementCursor.updateRow(row)
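The generator half of that snippet is plain Python and can be sanity-checked outside ArcGIS; the cursor half needs a live session, so it is shown here only as a comment, with the newer arcpy.da.UpdateCursor variant and a made-up field name "SEQ":

```python
from itertools import islice

def autoIncrement(start=0, step=1):
    """Yield start, start+step, start+2*step, ... forever."""
    i = start
    while True:
        yield i
        i += step

# The generator alone, starting at 10 and stepping by 2:
print(list(islice(autoIncrement(10, 2), 5)))   # -> [10, 12, 14, 16, 18]

# Inside ArcGIS the same idea with the newer data-access cursor would be
# (the field name "SEQ" and the shapefile name are placeholders):
# with arcpy.da.UpdateCursor("myshapefile.shp", ["SEQ"]) as cursor:
#     counter = autoIncrement(1, 1)
#     for row in cursor:
#         row[0] = next(counter)   # next(gen) works in both Python 2 and 3
#         cursor.updateRow(row)
```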

I changed table_name to the name of my shapefile (the one with the target field), changed field to the name of the field in that shapefile, and selected the column in the open attribute table.

When I pressed Enter at the end of updateRow(row) it just went down one line. I pressed it again and it went back to >>>, but none of the rows were updated. I thought the cursor operated on what you have selected in your map document.

so I need to

1. Tell each shapefile to sort the target values in ascending order (used the Sort tool with an iterator in ModelBuilder)

2. Create a script that populates a new field with sequential numbers based on the new sort order.
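For step 2, the sequential number is just each record's rank in the ascending sort, and that rank can be computed without physically reordering the rows at all; a sketch in plain Python (the values list stands in for the target field read in record order):

```python
values = [40.0, 10.0, 30.0, 20.0]   # target field, in original record order

# argsort: record positions in ascending order of value
order = sorted(range(len(values)), key=lambda i: values[i])

# rank: for each record, its 1-based position in that sort
rank = [0] * len(values)
for seq, i in enumerate(order, start=1):
    rank[i] = seq

print(rank)   # -> [4, 1, 3, 2]: record 0 holds the largest value, record 1 the smallest
```

The rank list lines up with the original record order, so it can be written straight into the new field with an update cursor without sorting the shapefile first.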

Any help on this is much appreciated!

Dan

The trick is to read the column of data, as Darren Wiens pointed out in his SearchCursor approach (you can use plain NumPy if you are using Python 2.7, or SciPy if you have Python 3.4 installed).

In pure Python/NumPy/SciPy there are the FeatureClassToNumPyArray and TableToNumPyArray methods ... you just specify the FC or table that you are using and the field or fields that you want. The data are read into memory and you process them in NumPy or SciPy (SciPy isn't needed for most things) ... then you do one of two things.
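As a sketch of that read-and-process step: arcpy.da.FeatureClassToNumPyArray returns a NumPy structured array, so once the data are in memory the statistics are ordinary NumPy. Here a hand-built structured array stands in for the result of the arcpy call (the field names are made up), and the quartile/fence work happens entirely outside ArcGIS:

```python
import numpy as np

# Stand-in for:
#   arr = arcpy.da.FeatureClassToNumPyArray("myshapefile.shp", ["FID", "VALUE"])
arr = np.array([(0, 40.0), (1, 10.0), (2, 30.0), (3, 20.0)],
               dtype=[("FID", "<i4"), ("VALUE", "<f8")])

vals = arr["VALUE"]
q1, q2, q3 = np.percentile(vals, [25, 50, 75])   # note: interpolates between values
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

flagged = arr[(vals < low_fence) | (vals > high_fence)]   # records outside the fences
print(q2, flagged["FID"])
```

Note that np.percentile interpolates by default, so its quartiles can differ slightly from a split-at-the-median hand calculation on small samples.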


My preference is to use the join approach. If I want to skip the join and make a permanent FC or table, I just read the whole FC/table in, specify the field for processing, do the work, then send it back out using NumPyArrayToFeatureClass or NumPyArrayToTable.
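A minimal sketch of that round trip, with the arcpy read/write calls left as comments since they need an ArcGIS session (file and field names are placeholders); only the NumPy sort in the middle actually runs here:

```python
import numpy as np

# arr = arcpy.da.FeatureClassToNumPyArray("in.shp", ["FID", "VALUE", "SHAPE@XY"])
arr = np.array([(0, 40.0), (1, 10.0), (2, 30.0)],
               dtype=[("FID", "<i4"), ("VALUE", "<f8")])

srt = np.sort(arr, order="VALUE")   # whole records reordered by the target field
print(srt["FID"])                   # -> [1 2 0]

# Send it back out as a new feature class:
# arcpy.da.NumPyArrayToFeatureClass(srt, "out.shp", ["SHAPE@XY"])
# ... or join computed results back to the original instead:
# arcpy.da.ExtendTable("in.shp", "FID", results_array, "FID")
```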

I haven't used a SearchCursor, UpdateCursor or InsertCursor in quite a while.

So the choice is yours. Sorting data on the OID or FID field is no sweat in NumPy (or SciPy), so all your work can be unravelled back to the original order. I use this approach regularly to produce shapefiles sorted by geometric properties for some of the work I do.
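That unravelling is just an inverse permutation in NumPy; a small sketch:

```python
import numpy as np

vals = np.array([40.0, 10.0, 30.0, 20.0])   # values in original record order
order = np.argsort(vals)                    # permutation that sorts them
sorted_vals = vals[order]                   # work on the sorted copy...

inverse = np.argsort(order)                 # inverse permutation
restored = sorted_vals[inverse]             # ...then unravel to original order
print(np.array_equal(restored, vals))       # -> True
```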

The tools are there. Esri has quietly, without fanfare, given access to a lot of things in the vector and raster worlds that would otherwise require a higher license level, simply because you can do the work in NumPy or SciPy and bring it back into ArcMap. The nice thing is, once you have an array, you can work in a variety of other environments or languages which offer different capabilities.

So the choice is yours ... you can use a combination that maximizes arcpy (i.e. via cursors) or one that minimizes arcpy, with most of the emphasis on NumPy.

Good luck... have a look at Darren's approach should you have immediate needs.

PS

My NumPy Repository is where I house musings and works related to array work. If you already have a background in that kind of work, it may be of interest to you at some stage (it assumes a certain level of Python familiarity, and the works presented require Python 3.4+ ... which you can get by installing ArcGIS Pro or by using an Anaconda distribution, which includes the whole stack).