Scipy ndimage morphology … I am betting that is the first thought that popped into your head.  A new set of tools for the raster arsenal.

But wait!  Their terminology is different from that used by ArcGIS Pro or even ArcMap.

What tools are there?  Lots of filtering tools, but some of the interesting ones are below.

from scipy import ndimage as nd

dir(nd)
[... snip ...
'affine_transform', ... , 'center_of_mass', 'convolve', 'convolve1d', 'correlate',
'correlate1d', ... 'distance_transform_edt', ... 'filters', ... 'generic_filter',
... 'geometric_transform', ... 'histogram', 'imread', 'interpolation', ... 'label',
'labeled_comprehension', ... 'measurements', ... 'morphology', ...  'rotate', 'shift',
'sobel',...]

 

I previously covered distance_transform_edt, which performs the equivalent of Euclidean Distance and Euclidean Allocation, in this post:

Euclidean distance, allocation and other stuff...  

 

Let's begin with a raster constructed from 3x3 windows containing a repeating pattern.

You can construct your own rasters using numpy, then apply NumPyArrayToRaster to the result.  I have covered this before, but here is the code to produce the raster.

import numpy as np

def _make_arr_():
    """Make an array with repeating patterns
    """
    a = np.arange(1, 10).reshape(3, 3)
    aa = np.repeat(np.repeat(a, 3, axis=1), 3, axis=0)
    aa = np.tile(aa, 3)
    aa = np.vstack((aa, aa))
    return aa
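
To push that out to a raster, something along these lines (the origin point, cell size and output path here are made-up values for illustration):

import arcpy

aa = _make_arr_()                        # the demo array... shape (18, 27)
ras = arcpy.NumPyArrayToRaster(aa, arcpy.Point(300000, 5025000), 1, 1)
ras.save(r"C:\temp\demo.tif")            # hypothetical output location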

 

---- The raster (image) ----

 

---- (1) Expand ----

Expand the zone 1 values by 2 distance units

 

---- (2) Shrink ----

Shrink the zone 1 values by 1 distance unit.

 

---- (3) Regiongroup ----

Produce unique regions from the zones.  Observe that the zone 1 values in the original are given unique values first.  Since there were 6 zone 1s, they are numbered from 1 to 6.  Zone 2 gets numbered from 7 to 12, and that pattern of renumbering repeats.

 

---- (4) Aggregate ----

Aggregation of the input array using the mode produces a spatial aggregation retaining the original values.  Each original 3x3 kernel is reduced to a single cell with 3 times the width and height of the originals (9x the area).

---- (5) Some functions to perform the above ----

 

In these examples, buff_dist refers to cells.

For the aggregation, there are a number of options as shown in the header.

Generally, integer data should be restricted to the mode, min, max, range and central (value) statistics, since the median and mean upscale to floating point values.  That can, of course, be accommodated by using the median_low and median_high functions from the python statistics package:
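
For example, on an even count of integers, where the plain median upscales to a float:

import statistics

vals = [1, 2, 3, 9]
statistics.median(vals)       # 2.5 ... a float
statistics.median_low(vals)   # 2   ... remains one of the input integers
statistics.median_high(vals)  # 3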

 

statistics — Mathematical statistics functions — Python 3.7.1rc1 documentation 

 

So think of a function that you want.  Filtering is a snap, since you can 'stride' an array with any kernel you want, using plain numpy or the builtin functions from scipy.

 

Enough for now...

 

def expand_(a, val=1, mask_vals=0, buff_dist=1):
    """Expand/buffer a raster's `val` zone by a distance (in cells)
    """
    if isinstance(val, (list, tuple)):
        m = np.isin(a, val, invert=True).astype('int')
    else:
        m = np.where(a==val, 0, 1)
    dist, idx = nd.distance_transform_edt(m, return_distances=True,
                                          return_indices=True)
    alloc = a[tuple(idx)]                     # value of the closest `val` cell
    a0 = np.where(dist<=buff_dist, alloc, a)  # within the buffer... allocate
    return a0


def shrink_(a, val=1, mask_vals=0, buff_dist=1):
    """Shrink/unbuffer a raster's `val` zone by a distance (in cells)
    """
    if isinstance(val, (list, tuple)):
        m = np.isin(a, val, invert=False).astype('int')
    else:
        m = np.where(a==val, 1, 0)
    dist, idx = nd.distance_transform_edt(m, return_distances=True,
                                          return_indices=True)
    alloc = a[tuple(idx)]                     # value of the closest cell outside the zone
    m = np.logical_and(dist>0, dist<=buff_dist)
    a0 = np.where(m, alloc, a)                # within the shrink band... reallocate
    return a0
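
A quick usage sketch, tying these back to sections (1) and (2) using the demo array from _make_arr_:

aa = _make_arr_()                      # the demo raster from above
e = expand_(aa, val=1, buff_dist=2)    # (1) expand zone 1 by 2 cells
s = shrink_(aa, val=1, buff_dist=1)    # (2) shrink zone 1 by 1 cell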


def regions_(a, cross=True):
    """currently testing regiongroup
    np.unique will return values in ascending order

    """

    if (a.ndim != 2) or (a.dtype.kind != 'i'):
        msg = "\nA 2D array of integers is required, you provided\n{}"
        print(msg.format(a))
        return a
    if cross:
        struct = np.array([[0,1,0], [1,1,1], [0,1,0]])
    else:
        struct = np.array([[1,1,1], [1,1,1], [1,1,1]])
    #
    u = np.unique(a)
    out = np.zeros_like(a, dtype=a.dtype)
    details = []
    is_first = True
    for i in u:
        z = np.where(a==i, 1, 0)
        s, n = nd.label(z, structure=struct)
        details.append([i, n])
        m = np.logical_and(out==0, s!=0)
        if is_first:
            out = np.where(m, s, out)
            is_first = False
            n_ = n
        else:
            out = np.where(m, s+n_, out)
            n_ += n
    details = np.array(details)
    details = np.c_[(details, np.cumsum(details[:,1]))]
    return out, details


def aggreg_(a, win=(3,3), agg_type='mode'):
    """Aggregate an array using a specified window size and an aggregation
    type

    Parameters
    ----------
    a : array
        2D array to perform the aggregation
    win : tuple/list
        the shape of the window to construct the blocks
    agg_type : string aggregation type
        max, mean, median, min, mode, range, sum, central

    """

    blocks = block(a, win=win)
    out = []
    for bl in blocks:
        b_arr = []
        for b in bl:
            if agg_type == 'mode':
                uni = np.unique(b).tolist()
                cnts = [len(np.nonzero(b==u)[0]) for u in uni]
                idx = cnts.index(max(cnts))
                b_arr.append(uni[idx])
            elif agg_type == 'max':
                b_arr.append(np.nanmax(b))
            elif agg_type == 'mean':
                b_arr.append(np.nanmean(b))
            elif agg_type == 'median':
                b_arr.append(np.nanmedian(b))
            elif agg_type == 'min':
                b_arr.append(np.nanmin(b))
            elif agg_type == 'range':
                b_arr.append((np.nanmax(b) - np.nanmin(b)))
            elif agg_type == 'sum':
                b_arr.append(np.nansum(b))
            elif agg_type == 'central':
                b_arr.append(b[b.shape[0]//2, b.shape[1]//2])  # the central value
            else:
                tweet("\naggregation type not found {}".format(agg_type))
                b_arr.append(b[b.shape[0]//2, b.shape[1]//2])  # fall back to central
        out.append(b_arr)
    out = np.array(out)
    return out
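
aggreg_ leans on a block function from my arraytools module.  If you don't have it, here is a minimal stand-in that splits a 2D array into non-overlapping win-sized blocks (it assumes ragged edges can simply be trimmed):

def block(a, win=(3, 3)):
    """Split a 2D array into non-overlapping win-sized blocks."""
    r, c = win
    b = a[:a.shape[0] - a.shape[0] % r, :a.shape[1] - a.shape[1] % c]
    return b.reshape(b.shape[0]//r, r, b.shape[1]//c, c).swapaxes(1, 2)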

Your standard fare in raster world

 

Fill in some holes … 'buffer' a line... allocate space to stuff in space... identify unique regions

Fancy names.  Nibble, Euclidean distance, Euclidean allocation, Regiongroup

 

---- (1) The task ----

Start with a raster or array.  Fancy labels in the top left, some random-ish color scheme with values noted in the middle.

Now, zero ( 0 ) we will say is nodata.  The other numbers represent some class value. 

 

Fill in the 'gaps'... aka, nodata, with the value of a cell with data based on the closest distance.  Euclidean, crow flies, for our purposes, but it need not be. 

Go! 

What did you get for cells A04, H01 and H02?  What about cells D07 and H08?

 

 

---- (2) The reveal ----

Let's see how we did.

A04 - 2 ... can't be 1, because the diagonal to '1' is 1.414 * the cell width, so 2 it is

H01 - 2 ... could have been a 2 or a 3, because both are 3 cells away from cells with values

H02 - 3 ... no brainer

D07 - 3 ... could have been 3 or 4, but 3 wins

H08 - 3 ... 3 is 2 cells away, and 1 and 5 are a greater distance on an angle

 

---- (3) The distances ----

Pretty boring and obvious... but for completeness.

 

I don't do maps, but the dominant colors are 0 or root 2, as you can see from the spiffy ArcGIS Pro symbology tab.  So, no big surprises.

 


---- (4) Allocation around linear features ----

 

Yes, that is possible too, sort of like a variable buffer, trade area, but currently just a simple Euclidean spatial allocation.

 

 

---- (5) The references ----

 

 Euclidean distance

Euclidean Allocation

A NumPy/SciPy plus arcpy solution is what I used:

 

import sys
import numpy as np
from scipy import ndimage as nd
from arcpy.geoprocessing import env
from arcpy.arcobjects import Point
from arcgisscripting import NumPyArrayToRaster, RasterToNumPyArray

env.overwriteOutput = True

def euclid_calc(a, mask_vals=0, dist=True, alloc=True):
    """Calculate the euclidean distance and/or allocation

    Parameters:
    a : array
        numpy float or integer array
    mask_vals : number, list or tuple
        If a single number is provided, a `mask` will be created using it.  A
        list or tuple of values can be used to provide multiple value masking.
    dist : boolean
        True, the distance of the closest non-masked value to the masked cell
    alloc : boolean
        True, the value of the closest non-masked value to the masked cell
    """

    m = np.isin(a, mask_vals).astype('int')
    dist_arr, idx = nd.distance_transform_edt(m, return_distances=True,
                                              return_indices=True)
    alloc_arr = a[tuple(idx)]     # allocation... value of the closest non-masked cell
    if not dist:
        dist_arr = None           # honor the option flags on return
    if not alloc:
        alloc_arr = None
    return dist_arr, alloc_arr

# ----------------------------------------------------------------------
# .... final code section
if len(sys.argv) == 1:
    testing = True
    r_in = r"C:\GIS\A_Tools_scripts\Raster_tools\Data\poly.tif"
    a = RasterToNumPyArray(r_in)
    dist, alloc = euclid_calc(a, mask_vals=0, dist=True, alloc=True)
else:
    testing = False
    r_in = sys.argv[1]
    r_out = sys.argv[2]
    LLx = float(sys.argv[3])       # ---- tool arguments arrive as text
    LLy = float(sys.argv[4])
    cell_sze = float(sys.argv[5])
    a = RasterToNumPyArray(r_in)
    dist, alloc = euclid_calc(a, mask_vals=0, dist=True, alloc=True)
    #
    # specify a, dist or alloc below... this could be a tool choice if needed
    #
    r0 = NumPyArrayToRaster(a, Point(LLx, LLy), cell_sze) 
    r0.save(r_out)

 

Of course, when run from within the amazing Spyder python IDE, you can simply skip saving to an output raster if needed, or try various other options from within the scipy slate other than distance_transform_edt.

 

 

 

On to other "special analyst" tools soon.

Striding - Sliding - Moving - Rolling

 

One of my favorite topics.

 

It was an innocuous question

 

 How to count the adjacent cells that have a different value in a raster dataset? 

 

Obviously! Focal statistics in the Spatial Analyst extension. 

There has to be one of them … not focal mean, median, min, max, std dev, var, range... that leaves majority, minority and variety - close, but no vape.  The catch was that Mark needed to know the number of cells in the immediate neighborhood that differed in value from the core/focal cell.  As such, variety wouldn't cut it, since all 9 cells in a 3x3 window are considered without comparison to the focal cell.  I will address the final puzzle at the end, but let us begin with some basics.

 

Striding function basics

Begin with a basic array with 4 rows and 5 columns.  This is a raster of course, because I could save it out to esri grid or tif format.

Now if I begin shifting a 3x3 window over the top of the raster, we begin to form the data shown in the strided output below.  But how did I arrive at it?

 

Steps

    • pick a nodata value... in this case I have decided that 0 represents nodata.
    • convert the nodata values to "not a number"... nan
    • pad the raster by 1 cell using a constant value (nan) on all 4 sides
    • slide beginning at the top left of the padded array.  In this case a 3x3 moving window was used.  The 'window' is stepped 1 column at a time until the end of the first row is reached, then a step down a row follows this.
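
A minimal sketch of those steps, using numpy's sliding_window_view as a stand-in for my own padding/striding defs (assumes numpy >= 1.20; the toolbox versions are linked further down):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.array([[0, 0, 1, 0, 2],
              [0, 1, 1, 2, 0],
              [3, 0, 3, 0, 4],
              [3, 3, 4, 5, 5]], dtype='float64')

a[a == 0] = np.nan                                 # steps 1 and 2... 0 is nodata
ap = np.pad(a, 1, mode='constant', constant_values=np.nan)  # step 3... pad by 1 cell
win = sliding_window_view(ap, (3, 3))              # step 4... shape (4, 5, 3, 3)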

 

 

Figure 1

Striding a simple array/raster
Original array...
-shape (1, 4, 5), ndim 3
  .  0  0  1  0  2 
  .  0  1  1  2  0 
  .  3  0  3  0  4 
  .  3  3  4  5  5 


Strided array...
-shape (4, 5, 3, 3), ndim 4
-------------------------
-(0, + (5, 3, 3)
  .  nan  nan  nan    nan  nan  nan    nan  nan  nan    nan  nan  nan    nan  nan  nan 
  .  nan  nan  nan    nan  nan    1    nan    1  nan      1  nan    2    nan    2  nan 
  .  nan  nan    1    nan    1    1      1    1    2      1    2  nan      2  nan  nan 
-------------------------
-(1, + (5, 3, 3)
  .  nan  nan  nan    nan  nan    1    nan    1  nan      1  nan    2    nan    2  nan 
  .  nan  nan    1    nan    1    1      1    1    2      1    2  nan      2  nan  nan 
  .  nan    3  nan      3  nan    3    nan    3  nan      3  nan    4    nan    4  nan 
-------------------------
-(2, + (5, 3, 3)
  .  nan  nan    1    nan    1    1      1    1    2      1    2  nan      2  nan  nan 
  .  nan    3  nan      3  nan    3    nan    3  nan      3  nan    4    nan    4  nan 
  .  nan    3    3      3    3    4      3    4    5      4    5    5      5    5  nan 
-------------------------
-(3, + (5, 3, 3)
  .  nan    3  nan      3  nan    3    nan    3  nan      3  nan    4    nan    4  nan 
  .  nan    3    3      3    3    4      3    4    5      4    5    5      5    5  nan 
  .  nan  nan  nan    nan  nan  nan    nan  nan  nan    nan  nan  nan    nan  nan  nan 

 

-----------

How about in pictorial form?

Some people don't work well with numbers, so have a gander.

Remember, we are simply sliding a 3x3 window over by 1 column until it hits the end of the row, then it drops one row and repeats.  nan in both situations is Not A Number.  There is NaN, NaT (not a time), but no NaI (not an integer).  Integers require you to temporarily upcast the data to floats, process, then downcast... or use masked arrays or masked operations.

 

----------->      Sliding over one column at a time    ---------->

Shift down a row

Again

And Again

 

---------

Some Examples of Focal Statistics

Here are some examples... see if you can do the mental moving math.

You can make 2 choices when doing focal statistics.

  • if the focal cell is nan, process the surrounding cells for the statistics anyway.
  • if the focal cell is nan... assign nan to the result.

Both options have their uses.  For example, to 'smooth' out data by getting rid of nodata speckles, the first option would be chosen.  In situations where you want to preserve local observations, you would use the second option.
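
Continuing the striding sketch from above (the win array), the two options might look like this... a sketch, not the toolbox code:

focal = win[..., 1, 1]                      # the focal/core cell of each window
f1 = np.nanmax(win, axis=(2, 3))            # option 1... process anyway (an all-nan window warns and yields nan)
f2 = np.where(np.isnan(focal), np.nan, f1)  # option 2... focal nan, result nan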

 

A hard one (sort of)

The difference of the surrounding cells from the core cell, accounting for nodata, and assigning nodata if the focal cell is nodata.  In the original array, nodata was 0; in the output, -1 is used.
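
A hedged sketch of that count, again using the win array from the striding sketch rather than the toolbox code:

focal = win[..., 1, 1]
diff = (win != focal[..., None, None]) & ~np.isnan(win)  # nan neighbours excluded
cnt = diff.sum(axis=(2, 3))                # the focal cell equals itself, so it never counts
cnt = np.where(np.isnan(focal), -1, cnt)   # nodata focal cell... assign -1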

 

An easy one (the maximum)

The sample code that does the focal maximum.  The padding and striding functions can be found in the toolbox on the ArcGIS Code Sharing site.

 

The link is ….. Raster Tools: Focal and Local Statistics 

 

If you have any other raster functionality that involves multidimensional arrays/rasters that you need implemented, send me an email and I will add them to the "Special Analyst" toolset.

The usual.... Another distance question.  This one is a little bit different.  Either the standard distance (a statistical measure in the Spatial Statistics toolset) or a distance matrix and its parameters were needed.

Why not do both, without the extension or an advanced license?

Totally supported, since python and numpy and the tools to work with them are provided for you.

Not code savvy?  You can always purchase what you need.

 

Let's start with the scenario....

You can see the points that are within the polygons.  Objective: either determine the distance matrix of the points contained within each polygon, AND/OR calculate the standard distance similarly.

 

: -----------------------------------------------------------------------------------------------------------------------------------------------------

In short... 

Here is the workflow.

  1. Make your imports.  Arraytools is stuff I have written (on GitHub in sortof organized form)
  2. Select your input featureclass, which is created by 'Intersect'ing your points and the polygons.  That way you can bring your attributes over into the resultant point file.
  3. Make a structured array from the featureclass, just bringing over the X, Y and an identifier field.
  4. Sort the result from step 3 by the identifier field... this just makes splitting it easier.
  5. Split the array into subarrays for processing.
  6. The frmt block is just for formatting the output.
  7. The loop at the end is the processing steps.
    1. for each subarray, get the X,Y coordinates
    2. calculate the center of the point distribution
    3. calculate the variance for X and Y
    4. Use the above to determine the Standard Distance (1 std level; multiply by 2 or 3 for 2 or 3 std)
    5. Determine the interpoint distance matrix (e_dist), returning an array with the diagonal set to zero (point to itself) and redundant (duplicate) calculations set to zero (np.tril)
    6. From the nonzero values (there should be no duplicate points anyway, BTW), calculate the distances' statistical parameters.
    7. Fancy print the results

 

import numpy as np
import arcpy
import arraytools as art

in_fc = r"C:\Path_to_Your\File_geodatabase.gdb\pnts_intersect_polygons"

a = arcpy.da.FeatureClassToNumPyArray(in_fc, ['SHAPE@X', 'SHAPE@Y', 'ID_poly'])
a_sort = np.sort(a, order='ID_poly')
a_split = art.split_array(a_sort, fld='ID_poly')

frmt = """
Group .... {}
center ... {}
standard distance ... {}
distance matrix...{}
distance results...  mean {}, std dev. {}  min  {}, max      {}
"""


# ---- let's roll ----
for i in range(len(a_split)):
    a0 = a_split[i][['SHAPE@X', 'SHAPE@Y']]
    a0_xy = art.geom._view_(a0)
    cent = np.mean(a0_xy, axis=0)
    var_x = np.var(a0_xy[:, 0])
    var_y = np.var(a0_xy[:, 1])
    stand_dist = np.sqrt(var_x + var_y)
    dm = art.geom.e_dist(a0_xy, a0_xy)
    dm_result = np.tril(dm, -1)
    dm_vals = dm_result[np.nonzero(dm_result)]
    args = [i,
            cent,
            stand_dist,
            art.form_(dm_result, deci=1, prn=False),
            dm_vals.mean(),
            dm_vals.std(),
            dm_vals.min(),
            dm_vals.max()]
    print(frmt.format(*args))

 

Here is the output for the first polygon.

Group .... 0
center ... [ 304932.447  5029991.887]
standard distance ... 795.1178530213251
distance matrix...

Array... ndim: 2  shape: (24, 24)
. .     0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0.....
. .   253.6    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0.....
. .   179.9  214.1    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0.....
. .   377.4  417.6  225.6    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0.....
. .   644.3  513.3  466.6  379.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0.....
. .   866.3  685.6  697.3  637.8  259.5    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0.....
. .   819.0  696.3  639.5  514.0  183.1  219.6    0.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0.....
. .  1289.9 1164.9 1110.1  960.7  654.6  527.5  473.0    0.0    0.0    0.0    0.0    0.0    0.0    0.0.....
. .  1431.1 1290.0 1251.8 1113.8  788.6  622.9  612.4  164.4    0.0    0.0    0.0    0.0    0.0    0.0.....
. .  1583.5 1385.9 1416.1 1340.3  965.7  718.9  836.0  567.6  446.2    0.0    0.0    0.0    0.0    0.0.....
. .  1504.4 1410.2 1326.3 1148.0  897.6  810.8  715.9  292.8  309.6  745.2    0.0    0.0    0.0    0.0.....
. .  1777.6 1566.1 1616.3 1556.9 1178.8  924.2 1061.2  801.3  669.2  237.4  953.7    0.0    0.0    0.0.....
. .  1587.4 1483.2 1408.4 1236.2  969.9  861.3  786.9  333.8  297.2  703.4  101.2  898.4    0.0    0.0.....
. .  1643.8 1489.1 1465.6 1336.7  999.5  807.5  829.1  391.9  228.1  368.6  429.3  540.1  361.5    0.0.....
. .  1825.6 1740.4 1649.0 1461.7 1228.7 1138.9 1047.5  613.0  571.6  932.6  332.2 1096.0  282.0  565.3.....
. .  2069.4 1859.9 1906.4 1838.6 1462.6 1211.4 1335.0 1018.9  866.6  499.1 1108.9  294.1 1033.9  680.2.....
. .  2218.6 2007.6 2056.0 1988.4 1612.5 1361.3 1484.1 1157.9 1002.1  648.5 1229.6  441.6 1149.2  804.7.....
. .  2004.9 1880.9 1825.1 1663.3 1371.9 1220.5 1190.2  717.4  597.6  791.3  533.6  889.5  435.4  454.5.....
. .  2202.6 2002.9 2034.8 1950.7 1580.4 1337.6 1439.5 1076.4  915.1  619.2 1116.6  455.6 1030.7  702.8.....
. .  2075.5 1916.9 1897.4 1766.0 1431.2 1232.4 1260.5  810.0  652.3  633.7  736.6  650.5  637.8  431.7.....
. .  2144.5 1974.1 1968.1 1848.3 1501.8 1288.7 1337.3  904.1  741.6  636.9  859.7  602.4  763.0  514.0.....
. .  2161.4 2055.3 1982.6 1806.5 1542.4 1413.2 1359.3  893.4  795.7 1021.5  658.6 1115.3  574.4  678.7.....
. .  2245.4 2089.5 2066.9 1931.0 1601.2 1405.5 1428.7  971.7  817.8  798.8  868.6  788.6  767.6  602.0.....
. .  2390.9 2267.3 2211.2 2046.4 1758.7 1603.1 1577.0 1104.2  981.0 1103.4  905.0 1140.0  811.6  813.9.....

I will put this in the Point Tools toolset at some point.  

If people need the code, email me at the link on my profile and I can direct you to the links on GitHub.

Make it happen

 

Making conda package installs more fun... 

 

 

 Install Spyder, Jupyter console and Jupyter notebook for ArcGIS Pro by default 

 

With ArcGIS PRO 2.2 there has been a slight change in the way packages can be installed. 

This will evolve but some pictures to get you going.

---------------------     ---------------------     ---------------------    ---------------------    

Updates....

---------------------     ---------------------     ---------------------    ---------------------    

Readings

 

Esri's Anaconda cloud repo... location of the install packages for Pro 2.2

Spyder .... a view of the IDE's capabilities

The Python Package Manager—ArcPy Get Started | ArcGIS Desktop 

 

Issues and workarounds

Mangled paths issue … package manager fails to read the _ in the _backup

comment on arcgis pro 2.2.0 and python package manager 

 

---------------------     ---------------------     ---------------------    ---------------------    

Current installation

Here is where I installed Pro 2.2 a while ago, and here it still resides today.

Now some of you don't have the luxury of being able to install your software where you want, but most of this should work if you are given the slightest bit of freedom in how you want to perform your work.  You will be able to perform most of the following in your User profile though (albeit, creating ridiculously long and convoluted paths).

 

Onward.... find out where ArcGIS Pro is installed....

 

 

---------------------     ---------------------     ---------------------    ---------------------    

Make some shortcuts

I covered making shortcuts previously, but I will reiterate here.  I like shortcuts on the desktop and I like to dock them on the Task Bar.  First... create your shortcut

 

Conda access shortcut

 

Follow the path to the proenv.bat file located as shown in the following figure.

My installation path is C:\ArcGISPro\.  You just need to change that portion in your searching, and the rest will be the same.

When I located the file, I created a shortcut and modified the contents of the shortcut as shown below.  I then copied that shortcut to my desktop.

I always keep a separate folder containing customizations like shortcuts so that I can share and/or modify as needed.

 

 

 

: ----------------------------------------------------------------------------

Installing Spyder and other packages

 

Yes... spyder is in the default package set, but since the ArcGISpro-py3 environment is locked down, you have to do installs using conda...

 

If you don't have a shortcut created... just fire up the python prompt

 

 

Or fire up your new shortcut and follow along

 

 

Navigate to  C:\Your_Install_Folder\bin\Python\envs\arcgispro-py3\Scripts\spyder-script.py

and make a shortcut like...

 

Target

C:\ArcGISPro\bin\Python\envs\arcgispro-py3\pythonw.exe "C:\ArcGISPro\bin\Python\envs\arcgispro-py3\Scripts\spyder-script.py"

 

NOTE...

!!! if your default installation path has flotsam (like spaces, dashes and dots),

you may have to enclose both portions of the Target in double quotes.  !!!!!

 

 

If you want to make a development environment, you now need to clone it to install packages.

Here is the Manage Environments dialog, found on the Project page.

 

---------------------     ---------------------     ---------------------    ---------------------    

Setup Spyder as your script IDE

You get to reuse the Target shortcut used before.  

Just remember 

  • the pythonw.exe and spyder-script.py paths below need to be on one line.
  • if the python path is in a horrid space or flotsam filled path... enclose it in double quotes.
  • always enclose the spyder-script.py path in double quotes
  • Remember to replace C:\ArcGISPro with your installation path... everything else is the same
  • This is your target path and it goes into the geoprocessing editor selection as below
  • environment:  C:\ArcGISPro\bin\Python\envs\arcgispro-py3

 

Put them together....

     env\pythonw.exe "env\Scripts\spyder-script.py"

Yields....

 

C:\ArcGISPro\bin\Python\envs\arcgispro-py3\pythonw.exe "C:\ArcGISPro\bin\Python\envs\arcgispro-py3\Scripts\spyder-script.py"

 

 

 

---------------------     ---------------------     ---------------------    ---------------------    

Alternative IDEs ...

Of course you can always install and use Pythonwin for that easy-to-use Python IDE

 

 

How about just a simple QtConsole for parallel IDEs

 

 

---------------------     ---------------------     ---------------------    ---------------------    

Jupyter Notebook... graphing, analysis and graphing...

Just create a folder structure to store your notebooks and it can be replicated within the jupyter setup making it easy to find your examples.

 

A graphing example... there are lots

 

---------------------     ---------------------     ---------------------    ---------------------    

To Come... Managing your packages...

 

I will fill these in over the next week or so... a few things have to be cleaned up since they may not be for everyone.  So I will delay for now.

 

I just used conda to upgrade jupyter notebook and install jupyter lab.  To stop it from trying to downgrade a bunch of packages, just add the default channel and --no-pin.  
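
Something along these lines (the package names are just examples):

conda update jupyter -c defaults --no-pin
conda install jupyterlab -c defaults --no-pin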

 

 

ArcGIS Pro's package manager

Currently locked down for editing the initial install... and there are issues 'cloning' the existing one to make changes in.  Mind you... you can do your package installations using conda after the initial install, skipping the package manager altogether.  BUT .... if things get totally fouled up, you may be left with a dysfunctional installation... so only proceed if you sortof know what you are doing

 

Anaconda Navigator

So very useful, since you can access other Anaconda packages through one interface... but currently not ArcGIS Pro, but many others including documentation and the like.

 

https://community.esri.com/ideas/14817-anaconda-navigator-to-arcgis-pro

 

 

So... Once Anaconda Navigator got installed, I decided to see whether I could clone an environment that Pro 2.2 could see ... which seems to be an issue.  It seems to have worked so far.  Have a look

 

 

Now to experiment....

 

IF you have comments and questions... email me through my profile link

Patterns....

 

UPDATE :  2018-10-30

Added a demo at the end showing how to turn results into a structured array which can be used as a summary table in ArcGIS Pro.

 

Wonder where the .... it has rained for 10 days straight, the longest stretch since 1954 ....  Do you picture some poor sod flipping through pages or reeling through spreadsheets?  Unlikely.  Many questions have to do with 'sequences' or 'patterns' in the data.

 

I have put together a toolset, and one of the tools in there allows you to identify a sequence: the value, the start and end locations of the sequence, and its count/frequency.  By dealing with a complete list of data, you can see whether the pattern is unique or has repeated over time.

 

The principle is fairly simple.  Provide the list, determine the sequential difference of your choice, split the input, and then summarize.  For numbers, you can use numpy's diff function as shown below.

import numpy as np

a = np.array([1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 5, 4, 4, 3, 3, 3, 2, 1])
stepsize = 0    # ---- a sequential difference of 0... runs of equal values

seqs = np.split(a, np.where(np.diff(a) != stepsize)[0] + 1)

seqs

[array([1, 1]),
array([2, 2]),
array([3, 3, 3]),
array([4, 4]),
array([5, 5, 5, 5]),
array([4, 4]),
array([3, 3, 3]),
array([2]),
array([1])]

 

For text data, the process is the same, but you compare each sequential element directly

a = np.array(['B', 'B', 'B', 'B', 'A', 'B', 'B', 'A', 'A', 'A', 'A', 'B', 'A', 'B', 'B'], dtype='<U5')

seqs = np.split(a, np.where(a[1:] != a[:-1])[0] + 1)

seqs

[array(['B', 'B', 'B', 'B'], dtype='<U5'),
array(['A'], dtype='<U5'),
array(['B', 'B'], dtype='<U5'),
array(['A', 'A', 'A', 'A'], dtype='<U5'),
array(['B'], dtype='<U5'),
array(['A'], dtype='<U5'),
array(['B', 'B'], dtype='<U5')]

 

From that point on it is simply a matter of summarizing the result.  That is the purpose of the rest of the tool.

 

The dialog is fairly simple.  Specify an input table, the field to query, and a 'step size', which is the spacing between the values that you want to use to identify the sequence.  The simplest case is to identify observations that are identical... that is, their sequential difference is zero.

 


An optional output table can be created to permit further analysis.

 

The input and output table for the Sequences field are shown to the right.  The first sequence is four values of 2 beginning at ID number 0 and extending up to, but not including ID number 4.

 

Subsequent lines in the output table represent the different sequences.  In this case, a sequence of 1 (value = 1) is followed by a sequence of 2 (value = 2) and another sequence of 4 (value = 1).

 

Some answers to questions.


1. What is the longest sequence of value = 1?
2. What is the median?

 

A simple Select By Attributes followed by a Statistics on the Count field and you are done.


Alternately, you can Summarize on the Count field based on the Value field.

 

 

So, examining sequences and patterns comes up a lot in analysis.  Hope these tools will get you thinking.

 

NOTE... the url to the toolset will appear here.... when ArcGIS PRO Beta 2.2 is complete.

Tools will also be available to analyze sequences (or duplicates) in text data.

 

# ----- Producing a summary table as a structured array for use in Pro.

 

Start with our initial sequence from the beginning.

a = [1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 5, 4, 4, 3, 3, 3, 2, 1]

Take a seemingly complex function... but it covers text and numbers, produces the counts for the sequence, and does a positional from-to key....

 

Here are the results

prn_rec(sequences(a, stepsize=0))

id ID Value Count From_ To_
---------------------------------------
000 0 1 2 0 2
001 1 2 2 2 4
002 2 3 3 4 7
003 3 4 2 7 9
004 4 5 4 9 13
005 5 4 2 13 15
006 6 3 3 15 18
007 7 2 1 18 19
008 8 1 1 19 20

 

Here is the 'sequences' code, sans the elaborate doc string.

def sequences(data, stepsize=0):
    """Return an array of sequence information denoted by stepsize
    data : List/array of values in 1D
    stepsize : Separation between the values.
        If stepsize=0, sequences of equal values will be searched. If stepsize is 1,
        then sequences incrementing by 1... etcetera. Stepsize can be both positive or
        negative .... snip ....
    """

    a = np.array(data)
    a_dt = a.dtype.kind
    dt = [('ID', '<i4'), ('Value', a.dtype.str), ('Count', '<i4'), ('From_', '<i4'),
          ('To_', '<i4')]
    if a_dt in ('U', 'S'):
        seqs = np.split(a, np.where(a[1:] != a[:-1])[0] + 1)
    elif a_dt in ('i', 'f'):
        seqs = np.split(a, np.where(np.diff(a) != stepsize)[0] + 1)
    vals = [i[0] for i in seqs]
    cnts = [len(i) for i in seqs]
    seq_num = np.arange(len(cnts))
    too = np.cumsum(cnts)
    frum = np.zeros_like(too)
    frum[1:] = too[:-1]
    out = np.array(list(zip(seq_num, vals, cnts, frum, too)), dtype=dt)
    return out

 

The prn_rec code will be available in the toolbox

 

If you don't mind a quick print like...

Array fields:
('ID', 'Value', 'Count', 'From_', 'To_')
[(0, 1, 2, 0, 2)
(1, 2, 2, 2, 4)
(2, 3, 3, 4, 7)
(3, 4, 2, 7, 9)
(4, 5, 4, 9, 13)
(5, 4, 2, 13, 15)
(6, 3, 3, 15, 18)
(7, 2, 1, 18, 19)
(8, 1, 1, 19, 20)]

You can use quick_prn

def quick_prn(a, edgeitems=3, max_lines=25, wdth=100, decimals=2, prn=True):
    """Quick print and format a structured array
    """

    wdth = min(len(str(a[0])), wdth) # try to force wrap
    with np.printoptions(precision=decimals, edgeitems=edgeitems, threshold=max_lines, linewidth=wdth):
        print("\nArray fields:\n{}\n{}".format(a.dtype.names, a))

That's all for now.

Help.... Documentation

 

Last thing written... if ever... First thing needed.

I have been messing with formatting my code using numpy doc strings

 

Code documentation.... the view on the left is within the script IDE, and the view on the right is what it will look like when viewing a script's help.  Fancy... fancy...

 

 

And! Not only that, your favorite Python Package Manager includes it in their python distribution so Sphinx can use it.

 

 

So here is a little 'dirr' function I wrote, since dir(object) returns messy, hard-to-read output in any form.

Here is the documentation for the function.

def dirr(obj, colwise=True, cols=3, sub=None, prn=True):
    """A formatted dir listing of a module or function.

    Source : arraytools module in tools.py, dirr def

    Return a directory listing of a module's namespace or a part of it if the
    `sub` option is specified.  If  `prn=True`, then it is simply printed. If
    False, then the string is returned.

    Parameters
    ----------
    - colwise : `True` or `1`, otherwise, `False` or `0`
    - cols : pick a size to suit
    - sub : sub='a' all modules beginning with `a`
    - prn : `True` for print or `False` to return output as string

    Notes
    -----

    See the `inspect` module for possible additions like `isfunction`,
    `ismethod`, `ismodule`

    **Examples**::

        dirr(art, colwise=True, cols=3, sub=None, prn=True)  # all columnwise
        dirr(art, colwise=True, cols=3, sub='arr', prn=True) # just the `arr`'s

          (001)    _arr_common     arr2xyz         arr_json
          (002)    arr_pnts        arr_polygon_fc  arr_polyline_fc
          (003)    array2raster    array_fc
          (004)    array_struct    arrays_cols
    """

 

The key (apparently) is indentation, back-ticks and little tricks. 

In Spyder's help documentation, it looks like this. ( More on Spyder as an IDE for your work )

 

 

You can produce whole documentation for your modules or even single 'scripts' to accompany your code.

 

And the function itself makes finding my own documentation easier

Sample output
# ---- by column

dirr(art, colwise=True, cols=4, sub='arr')

----------------------------------------------------------------------
| dir(arraytools) ...
|    <module 'arraytools' from 'C:\\Git_Dan\\arraytools\\__init__.py'>
-------
  (001)    _arr_common     arr_pnts        array2raster    array_struct   
  (002)    arr2xyz         arr_polygon_fc  array_fc        arrays_cols    
  (003)    arr_json        arr_polyline_fc                                

# ---- by row

dirr(art, colwise=False, cols=4, sub='arr')

----------------------------------------------------------------------
| dir(arraytools) ...
|    <module 'arraytools' from 'C:\\Git_Dan\\arraytools\\__init__.py'>
-------
  (001)    _arr_common     arr2xyz         arr_json        arr_pnts       
  (002)    arr_polygon_fc  arr_polyline_fc array2raster                   
  (003)    array_fc        array_struct    arrays_cols                    

# ---- standard method

dir(art)
['__all__', '__args', '__art_modules__', '__art_version__', '__builtins__',
'__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__',
'__spec__', '_arr_common', '_center', '_centroid', '_common', '_convert',
'_cursor_array', '_demo_frmt', '_demo_ma', '_demo_rec', '_densify_2D', '_describe',
'_even_odd', '_extent', '_flat_', '_func', '_geo_array', '_get_shapes', '_help',
'_id_geom_array', '_max', '_min', '_ndarray', '_new_view_', '_pad_', '_pad_even_odd',
'_pad_nan', '_pad_zero', '_props', '_reshape_', '_split_array', '_two_arrays',
'_unpack', '_view_', '_xy', '_xyID', '_xy_idx', 'a_filter', 'a_io', 'all_folders',
'analysis', 'angle_2pnts', 'angle_np', 'angle_seq', 'angles_poly', 'apt', 'arc_np',
'areas', 'arr2xyz', 'arr_json', 'arr_pnts', 'arr_polygon_fc', 'arr_polyline_fc',
'array2raster', 'array_fc', 'array_struct', 'arrays_cols', 'azim_np', 'block',
'block_arr', 'centers', 'centroids', 'change_arr', 'change_fld', 'circle',

.... snip

 

 

Producing links to web links

This may be a bit much, but your documentation can also provide links to http: sites to reference specific web pages from within your documentation.  Here is an example.  The key is to provide a

  • link number ... [1]
  • a name for the link... with trailing __
  • and the actual link itself.

 

 
Web links in script documentation
References:
----------

**creating meshgrids from x,y data and plotting**

[1]
`2D array of values based on coordinates`__:

__ http://stackoverflow.com/questions/30764955/python-numpy-create-2darray-of-values-based-on-coordinates

[2]
`2D histogram issues`__:

__ https://github.com/numpy/numpy/issues/7317


**distance calculations and related** .... (scipy, skikit-learn)

 

The **text** provides bold highlighting.  The [1] is the first link number, the name in back-ticks with the trailing __ is the link name, and the line with the leading __ is the actual link itself.  What is shown is a snip of the whole link list shown below.

 

 

Simpler option

You might need a simpler documentation example

    References:
    -----------

    `<https://stackoverflow.com/questions/7352684/how-to-find-the-groups-of-
    sequences-elements-from-an-array-in-numpy>`_.

    `<https://stackoverflow.com/questions/50551776/python-chunk-array-on-
    condition#50551924>`_.

 

This consists of `< at the start and >`_. at the end with the link in between.  This yields...

 

Finally

Remember, documentation is important. If not for yourself, for someone else.

Concave hulls...

 

Tons of examples, some suited to some point clouds, others not so much.

It has greatly amused me over the years that people spend so much time trying to find the 'corner cases' where a particular implementation fails.  For example, the ever popular C-shaped object.

 

I know... in optical fields, being able to identify the letters of the alphabet or curly snake-shaped patterns is important.... but how many times have you sent your field assistant to collect data points with weird shapes?

 

Toolbox links

 

Concave Hulls  this is a separate toolbox

Point Tools or it is contained in this toolbox as well

 

So, regardless of the implementation, they can be illustrative in exploring point patterns and generating containers to describe them.  I recently posted on the Minimum Area Bounding Ellipse (MABE) and I have a toolset that produces Minimum Area Bounding Circles (MABC), rectangles (MABR) and extent rectangles.  This implementation of the concave hull was proposed by Adriano Moreira and Maribel Yasmina Santos in about 2007 and there are several implementations in various languages on GitHub for those needing a trolling session.

 

The 'tightness' of the concave hull is controlled by changing the number of nearest neighbors to include when you are trying to decide which points on the perimeter to keep or dump.  This 'K' factor illustrates some of the possible outcomes.  The thing to watch out for is producing degenerate points which are outside the hull, but are just too much of an outsider to be allowed into the fold.  So, in pictorial form,

 

3 neighbors

7 neighbors

11 neighbors

And finally... with the convex hull surrounding it.

And if that isn't tight enough for you, radial sorting and concave hull generation is as tight as it is going to get

 

So if you need to 'contain' something, add the concave hull to your suite of tools.

I will add the implementation of concave and convex hulls to my Geometry toolbox.

Geometry... Perhaps its due is overlooked.

A pictorial of some of the things you can do with a set of points, with a basic license in ArcMap and PRO, a bit of python, and a toolbox to house the scripts in.

 

:---- (1)  -------------------------------------------------------------------------------------------------------------

Begin with some points

Random they are, no order.

:---- (2)  ------------------------------------------------------------------------------------------------------------------

Put order to the set

A radial sort was performed so that the mess was ordered by their angle relative to the x-axis.

Lines were drawn to the center of the cloud (just because)

 

:---- (3)  ------------------------------------------------------------------------------------------------------------------

What do we have now

Since the points are ordered, connect the dots.

:---- (4)  ------------------------------------------------------------------------------------------------------------------

Lets look at things conventionally

Everyone loves a convex hull.  It encompasses the points as a group.  There are other containers.

 

:---- (5)  ------------------------------------------------------------------------------------------------------------------

But some of the points are far from the center

A quick little negative buffer to keep only the points within 90% of the average distance to the center.
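
In numpy terms, that negative buffer is just a distance mask... pnts here is a hypothetical (N, 2) array of the point coordinates:

import numpy as np

cent = pnts.mean(axis=0)                       # center of the cloud
d = np.sqrt(((pnts - cent)**2).sum(axis=1))    # distance of each point to it
keep = pnts[d <= 0.9 * d.mean()]               # within 90% of the average distance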

 

:---- (6)  ------------------------------------------------------------------------------------------------------------------

Convexity.... A different view on the data

Switching the selection in the above, then connecting the dots, gives us a convex-ish hull...

Hmmm, I wonder if you can change the rules to get a different view?

 

 

:---- (7)  ------------------------------------------------------------------------------------------------------------------

Every cool map has texture

Fill in boundary, obliterate the dots and 'forest' fill.  Just because we can

 

 

:---- (8)  ------------------------------------------------------------------------------------------------------------------

Connecting each point to its closest

 

:---- (9)  ------------------------------------------------------------------------------------------------------------------

Connections to 3 closest

Starting to see groupings and spaces.

 

:---- (10)  ------------------------------------------------------------------------------------------------------------------

Trees are useful (MST)

 

:---- (11)  ------------------------------------------------------------------------------------------------------------------

Make patterns from a single point (Ulam spiral)

:---- (12)  ------------------------------------------------------------------------------------------------------------------

Archimedes only needed a single point

 

:---- (13)  ------------------------------------------------------------------------------------------------------------------

 

:---- (14)  ------------------------------------------------------------------------------------------------------------------

The original Reclassify Rasters Simply... 

 

Interesting if you have a function that would be a pain to implement in the spatial analyst, since it would take a number of steps... ufunc ... An example follows.  Multiple output formats are possible.

 

 

ufunc .... python user functions with numpy... np.frompyfunc

a = np.arange(1, 101).reshape(10, 10)  # ---- make a raster

a
array([[  1,   2,   3,   4,   5,   6,   7,   8,   9,  10],
       [ 11,  12,  13,  14,  15,  16,  17,  18,  19,  20],
       [ 21,  22,  23,  24,  25,  26,  27,  28,  29,  30],
       [ 31,  32,  33,  34,  35,  36,  37,  38,  39,  40],
       [ 41,  42,  43,  44,  45,  46,  47,  48,  49,  50],
       [ 51,  52,  53,  54,  55,  56,  57,  58,  59,  60],
       [ 61,  62,  63,  64,  65,  66,  67,  68,  69,  70],
       [ 71,  72,  73,  74,  75,  76,  77,  78,  79,  80],
       [ 81,  82,  83,  84,  85,  86,  87,  88,  89,  90],
       [ 91,  92,  93,  94,  95,  96,  97,  98,  99, 100]])

# ---- a weird request to reclass the data, expressed as a func ----

def reclass_2(a):
    '''reclassification using ufunc user functions'''
    if a > 80 and a != 91:             # ---- the first reclass option
        a = 8
    elif a > 60 and a <= 80:           # ---- another simple one
        a = 7
    elif (a <= 60) and (a % 2 == 0):   # ---- now this one is a kicker
        a = 2
    else:                              # ---- I could continue, but I won't
        a = 0
    return a


# ---- Now... make the ufunc using numpy's frompyfunc
# frompyfunc(name of the def, number of input parameters in, results out)

r = np.frompyfunc(reclass_2, 1, 1)     # ---- function made

r                                      # ---- not much info
<ufunc '? (vectorized)'>

# ---- Make it do some work by passing the array

val = r(a)                             # ---- pass the array... 1 input, 1 output
val

array([[0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
       [0, 8, 8, 8, 8, 8, 8, 8, 8, 8]], dtype=object)

# ---- return as integer
val.astype(np.int32)
Out[8]:
array([[0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
       [0, 8, 8, 8, 8, 8, 8, 8, 8, 8]])

# ---- how about float?
val.astype(np.float32)

array([[ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.],
       [ 7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.],
       [ 8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.],
       [ 0.,  8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.]], dtype=float32)

# ---- be the first on your block to have a string raster

val.astype(np.str)
Out[10]:
array([['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['7', '7', '7', '7', '7', '7', '7', '7', '7', '7'],
       ['7', '7', '7', '7', '7', '7', '7', '7', '7', '7'],
       ['8', '8', '8', '8', '8', '8', '8', '8', '8', '8'],
       ['0', '8', '8', '8', '8', '8', '8', '8', '8', '8']],
      dtype='<U1')

 

If you have any interesting examples that you try, pass them on

It isn't quite false advertising. 

I will attribute it to an omission based on the needs of the many rather than on the needs of the few.

 

The story is in code...

# ---- start with an array ------

# ---- any data, floats for a tiny place over 5 days -----

array([[[  0.,   1.,   2.,   3.],
        [  4.,   5.,   6.,   7.],
        [  8.,   9.,  10.,  11.]],

       [[ 12.,  13.,  14.,  15.],
        [ 16.,  17.,  18.,  19.],
        [ 20.,  21.,  22.,  23.]],

       [[ 24.,  25.,  26.,  27.],
        [ 28.,  29.,  30.,  31.],
        [ 32.,  33.,  34.,  35.]],

       [[ 36.,  37.,  38.,  39.],
        [ 40.,  41.,  42.,  43.],
        [ 44.,  45.,  46.,  47.]],

       [[ 48.,  49.,  50.,  51.],
        [ 52.,  53.,  54.,  55.],
        [ 56.,  57.,  58.,  59.]]])

# ---- save to a raster *.tif ----------------

r = arcpy.NumPyArrayToRaster(a)
r.save("c:/temp/np2rast.tif")

# ---- load it back in to check your work ----

b = arcpy.RasterToNumPyArray("c:/temp/np2rast.tif")

# ---- seems to only read the first 'day' ---
array([[  0.,   1.,   2.,   3.],
       [  4.,   5.,   6.,   7.],
       [  8.,   9.,  10.,  11.]])

# ---- well that isn't good, hmmmm  ---------------
# ---- the work-around... save each 'day' out separately, then composite ----
rasters = []
for i in range(a.shape[0]):
    ai = a[i]
    rast = arcpy.NumPyArrayToRaster(ai)
    r_name = "in_memory/a{:0>3}.tif".format(i)
    rasters.append(r_name)
    rast.save(r_name)
rasters = ";".join([i for i in rasters])

rasters
'in_memory/a000.tif;in_memory/a001.tif;in_memory/a002.tif;in_memory/a003.tif;in_memory/a004.tif'

fname = "c:/temp/np2rast_comp.tif"
arcpy.management.CompositeBands(rasters, fname)  # ---- let's give Arc* another chance
c = arcpy.RasterToNumPyArray(fname)

# ---- everything is back as it should be -----------

array([[[  0.,   1.,   2.,   3.],
        [  4.,   5.,   6.,   7.],
        [  8.,   9.,  10.,  11.]],

       [[ 12.,  13.,  14.,  15.],
        [ 16.,  17.,  18.,  19.],
        [ 20.,  21.,  22.,  23.]],

       [[ 24.,  25.,  26.,  27.],
        [ 28.,  29.,  30.,  31.],
        [ 32.,  33.,  34.,  35.]],

       [[ 36.,  37.,  38.,  39.],
        [ 40.,  41.,  42.,  43.],
        [ 44.,  45.,  46.,  47.]],

       [[ 48.,  49.,  50.,  51.],
        [ 52.,  53.,  54.,  55.],
        [ 56.,  57.,  58.,  59.]]])

 

Now if you have followed so far, I have made reference to working with numpy arrays to do a lot of work.  Rarely do I have the need to bring a 'raster' (ie *.tif) back in to ArcGIS PRO or ArcMap.  I did, and discovered that arcpy.RasterToNumPyArray seems to only like bringing back a limited amount of data, probably with the expectation that the user wants to map floating point values (ie 1 band) or integer values (ie 3 bands for a 'picture'/'map').

 

This is a work-around, built-in.

 

You can of course store anything numeric in a tiff.  How about some statistics for our array 'a'.

 

mins = np.min(a, axis=0)
maxs = np.max(a, axis=0)
avg = np.mean(a, axis=0)
rang = np.ptp(a, axis=0)
medn = np.median(a, axis=0)

stats = [mins, maxs, avg, rang, medn]
rasters = []
for i in range(5):
    ai = stats[i]
    rast = arcpy.NumPyArrayToRaster(ai)
    r_name = "in_memory/a{:0>3}.tif".format(i)
    rasters.append(r_name)
    rast.save(r_name)
rasters = ";".join([i for i in rasters])

arcpy.management.CompositeBands(rasters, "c:/temp/stats.tif")
<Result 'c:\\Temp\\stats.tif'>

s = arcpy.RasterToNumPyArray("c:/temp/stats.tif")

# ---- in order, min, max, avg, range, median ----
array([[[  0.,   1.,   2.,   3.],
        [  4.,   5.,   6.,   7.],
        [  8.,   9.,  10.,  11.]],

       [[ 48.,  49.,  50.,  51.],
        [ 52.,  53.,  54.,  55.],
        [ 56.,  57.,  58.,  59.]],

       [[ 24.,  25.,  26.,  27.],
        [ 28.,  29.,  30.,  31.],
        [ 32.,  33.,  34.,  35.]],

       [[ 48.,  48.,  48.,  48.],
        [ 48.,  48.,  48.,  48.],
        [ 48.,  48.,  48.,  48.]],

       [[ 24.,  25.,  26.,  27.],
        [ 28.,  29.,  30.,  31.],
        [ 32.,  33.,  34.,  35.]]])

# ---- It won't make for a nice map, but your data is there

Next up, explorations of the limitations of working with *.tif files for multidimensional raster/array data.

There are a number of options within the Esri suite of software, some of which rely in part on open source offerings.  Some of these (ie libtiff and tifffile) have even made it into other open source software (ie GDAL, scipy.ndimage and skimage etc etc)... kindof like open source-open source.

 

Enough for now.

Table Tools...

 

 

This is a somewhat unorganized pictoral supplement to the tools contained there.

New additions

  • Added nested means classification for classification of numeric data (2019-05-02)
  • Added text file output to Crosstabulate and moved it to the Analysis menu (2019-02-23)
  • Table to csv and table to text (2018-12-29)
  • Statistics (2018-08-28)
    • Running statistics added to the Statistics toolset (ie running means etc)
  • Field calculator  (2018-08-28)
    • Poly* field calculations :  smallest/largest angle, min, max, avg length/perimeter
  • Field calculator
    • Calculations on numeric fields, includes a load of 'difference from .... ' type options, z-scores for fields etc etc
    • Calculations for text fields..  currently the ability to identify text data in a field and provide a sequence number for it   ie   A  A B A B A   becomes  A_00  A_01  B_00  A_02  B_01  A_03
    • Calculations for geometry properties for  polygon/polyline featureclasses including max_length, min_length, max_angle etc etc.
  • Find sequences under Analysis

 

General Toolbox Layout

These images show the contents so far

 

 

 

:----------------------------------------------------------------------------------------------------------------------

Analysis Toolset

Frequency

Longed for the functionality of this tool with a Standard license?  Here you go.  

You can get the frequency counts by field pairing and a 'sum' on one or more fields.

 

With a sample output from the inputs.

 

 

Cross-Tabulation

The cross-tabulation tools does just that.  Take two fields and get a cross-tabulation of their values.  I will add multidimensional crosstabulation at some point when/if a demand surfaces.

 

:----------------------------------------------------------------------------------------------------------------------

Array_tools Toolset

 

A handy conversion set, enabling the user to work with numpy arrays.

 

This is pretty self-explanatory.  The reason that there isn't an 'All' option is that it takes fewer lines of code to offer field selection than to check whether people want All fields or just a selection exported to an *.npy file.

You can always pull out the line of code if you want everything exported; otherwise, select the fields that you want exported, in the order that you want.

 

When the Shape field is exported, the centroid (regardless of geometry type) is used so that you can retrieve the X and Y coordinates when and if you import the data.

 

To reverse the process, simply use the Numpy Array to Table tool.

The basic setup and the resultant are shown below.


:----------------------------------------------------------------------------------------------------------------------

Field Calculator Toolset

 

Some of the optional tools in the field calculator toolset.

 

:----------------------------------------------------------------------------------------------------------------------

Statistics Toolset

 

Column Statistics

The tool allows you to specify the fields to summarize.  Fields of 'Double' type are supported, since you shouldn't be calculating statistics for integer fields... do the conversion if you insist.

 

Here is a summary sample for 3 fields.  Null values are accounted for in the calculation, so there is no need to exclude them prior to doing calculations.

As you can see, the traditional statistics are calculated, and they include Skewness and Kurtosis, which the current Summary Statistics tools in PRO don't.  I will add the Case option when I feel the output looks nice.

 

 

I even got fancy and included an alternate for displaying the data in the 'View Details' window.  As it says... simply select all and copy and paste to your desired text editor.

 

 

 

Rank by Fields

 

Just does that... consider the following information in a 10K table on mass and size.  You need a quick ranking?  Give it a whirl.  You can not only get the biggest/smallest X, you can construct an ogive and derive other stats.

 

Here is a sample run and output.


 

 


 

:----------------------------------------------------------------------------------------------------------------------

This comes up time and again. ....I have 'X' rasters for an area and I need to determine Y over time....

Here is an example.... ....calculating percentiles....

 

Cell Statistics... is the obvious choice.  But you don't have the Spatial Analyst extension? Doesn't matter.  All the tools that you need are already provided if you are using current versions of ArcMap or ArcGIS PRO.

 

Do some work

 

(1)  Reading the raster files

import numpy as np
import arcpy

arrs = []
folder = r"C:\Data\rasters"  # ---- specify a folder containing all the rasters ----
arcpy.env.workspace = folder
rasters = arcpy.ListRasters("*", "TIF")
for raster in rasters:
    arrs.append(arcpy.RasterToNumPyArray(raster))

a = np.array(arrs)  # ---- the master array... assumes all rasters share rows and columns

 

Once the rasters are converted to arrays, they are appended to a list.  The master array, 'a', is created from that list in the last line above.

As an example, 31 rasters in tif format were read in.  These are small rasters, but they can be as large as you can process.  You are now ready to calculate values for them using numpy.

 

(2)  Getting the statistics

median = np.median(a, axis=0) # ---- pick a statistic ----

array([[ 59.,  54.,  36., ...,  57.,  46.,  46.],
       [ 43.,  45.,  59., ...,  38.,  51.,  51.],
       [ 35.,  50.,  57., ...,  47.,  55.,  65.],
       ...,
       [ 43.,  52.,  40., ...,  56.,  62.,  60.],
       [ 45.,  57.,  39., ...,  45.,  44.,  48.],
       [ 49.,  57.,  50., ...,  56.,  50.,  58.]])

mins = np.min(a, axis=0)

array([[1, 3, 0, ..., 6, 4, 5],
       [1, 5, 6, ..., 6, 7, 2],
       [3, 2, 4, ..., 2, 5, 4],
       ...,
       [4, 0, 1, ..., 3, 4, 6],
       [0, 0, 0, ..., 5, 1, 0],
       [4, 1, 1, ..., 0, 3, 7]])

# ---- name a statistic, with or without nodata values, it can be done ----
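Percentiles, since that was the question that prompted this... and a hedged sketch of the nodata case using the nan* variants, assuming -9999 is your nodata value.

pctl_90 = np.percentile(a, 90, axis=0)        # ---- 90th percentile for each cell

a_f = np.where(a == -9999, np.nan, a * 1.0)   # ---- float copy, nodata as nan
med_nd = np.nanmedian(a_f, axis=0)            # ---- the nan* variants skip nan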

 

Now, when your calculations are done, you will probably want to make a raster so you can bask in the beauty of your efforts.

 

(3)  Reading the raster information and saving the result

r01 = rasters[0]          # ---- the first raster we loaded... they all share
rast = arcpy.Raster(r01)  #      the same extent and cell size

dir(rast)                 # ---- get raster information... some snipping here ----

[..., 'bandCount', 'catalogPath', 'compressionType', 'extent', 'format', 'hasRAT',
'height', 'isInteger', 'isTemporary', 'maximum', 'mean', 'meanCellHeight',
'meanCellWidth', 'minimum', 'name', 'noDataValue', 'path', 'pixelType', 'save',
'spatialReference', 'standardDeviation', 'uncompressedSize', 'width']

# ---- Save the result out to a new raster ------

lower_left = rast.extent.lowerLeft  # ---- needed to produce output
cell_size = rast.meanCellHeight     # ---- we will use this for x and y

f = r"c:\temp\result.tif"
r = arcpy.NumPyArrayToRaster(median, lower_left, cell_size, cell_size)  # ---- 'median' from step (2)
r.save(f)

 

(4) Saving and reloading the arrays

Now, the nice thing about arrays is that you can save a 3D array to a file for reloading later (or an array of any dimension, for that matter).

If you have a 3D array like we have, this is kind of like a big 'zip'.  

np.save(r"c:\temp\myarray.npy", a)  # ---- 'a' is the multi dimensional array

a.shape  # ---- 31 days, 100 x 100 cells
(31, 100, 100)

# ---- reload it

im_back = np.load(r"c:\temp\myarray.npy")

im_back.shape
(31, 100, 100)

# ---- check to see if it is good

np.all(a == im_back)  # ---- some numpy magic ;)
True

 

That's about all.  Not covered here are nodata cells (easy, you can assign one) and statistics with nodata (easy, every stat has a nodata equivalent).
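On the 'assign one' point... RasterToNumPyArray will swap nodata for a value of your choosing as the arrays are read in; the -9999 below is just an example value.

arr = arcpy.RasterToNumPyArray(raster, nodata_to_value=-9999)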

 

Try your skills with the attached *.npy file.

The link to the ArcGIS Code Sharing site... pictures are worth more than words.

The toolset is easy to use.  There should be some expectation that your point sets can be enclosed with an ellipse or some other container.  

 

For non-degenerate ellipses, you need at least 3 points, since 3 non-collinear points are the minimum that esri's geometry engine requires to create polylines and polygons.

My Bounding Containers toolset covers other 'containers' to put stuff in.  Clean up your data and put it in its place.

 

So here is a pretty picture, showing some simple cases of point groupings and their generated ellipses.

 

This is not the same as the Standard Deviational Ellipse.  That one doesn't enclose the boundary... it is a stats thing.

 

 

Just an example of the details from above.

Then you get this... you can see it, can't you...

Large data clouds are no problem, and it is nice to see an elliptical cloud being delineated as an ellipse.

 

Enjoy

Send me emails if you have questions

Start at a point.  You are given a list of bearings and distances.  Where do you end up?

This comes up fairly often, and the COGO tools allow you to create such traverses from scratch.

Here is how you can do it using numpy and arcpy.

 

First, a picture... given a location, you are instructed to move 100 m between points in 10 degree increments.  You will obviously trace a circle.  The traverse point locations can be constructed from base geometry, then saved to a featureclass.  The point featureclass can be converted to a continuous line, which can be segmented if you desire.  Total time, not much.

The image above shows the results of such a venture.

  • I created my geometry using python/numpy.
  • I saved the results to a featureclass using NumPyArrayToFeatureClass (hence the points).
  • From those points, I used Points to Line, which is available at all license levels.
  • The continuous line was blown apart to segments using Split Line at Vertices, which, alas, is only available at the Advanced level.  (I will save this obstruction for another blog, where I will show you how to do it with a Standard license.  EDIT... from the second incarnation below, you can produce a featureclass and use the XY to Line tool, which is available at all license levels.)
  • Add Geometry Attributes enabled me to obtain the coordinates of the start, middle and end locations of the line.  This could have been done during the construction of the geometry as well.

 

And how many of us have taught a class where this happens....

 

More to come....

Oh yes... the code.  Note that the origin needs to be given in projected coordinates (e.g. UTM, State Plane, Web Mercator) since the calculations are done in planar units.  See my Vincenty implementation if you want to work with geodesic calculations.

import numpy as np

def dist_bearing(orig=(0, 0), bearings=None, dists=None):
    """point locations given distance and bearing
    :Notes:  for sample calculations
    :-----
    : - bearings = np.arange(0, 361, 10.)
    : - dists = np.random.randint(10, 500, 37) * 1.0
    : - dists = np.ones((len(bearings),))
    :   dists.fill(100.)
    """

    rads = np.deg2rad(bearings)
    sines = np.sin(rads)    # ---- bearings are clockwise from north,
    coses = np.cos(rads)    #      hence sin for dx and cos for dy
    dx = sines * dists
    dy = coses * dists
    x_n = np.cumsum(dx) + orig[0]   # ---- cumulate the moves from the origin
    y_n = np.cumsum(dy) + orig[1]
    data = np.vstack((bearings, dists, dx, dy, x_n, y_n)).T
    frmt = "Origin ({}, {})\n".format(*orig) + "{:>10s}"*6
    names = ["bearings", "dist", "dx", "dy", "Xn", "Yn"]
    print(frmt.format(*names))
    for i in data:
        print(("{: 10.2f}"*6).format(*i))
    return data
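A quick usage sketch for the circle in the picture; the origin coordinates are made-up planar values.

bearings = np.arange(0, 361, 10.)             # ---- 10 degree steps
dists = np.full(bearings.shape, 100.)         # ---- 100 m legs
data = dist_bearing(orig=(300000., 5025000.), bearings=bearings, dists=dists)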

 

Optionally create a structured array that you can produce a featureclass from using this incarnation.

def dist_bearing(orig=(0, 0), bearings=None, dists=None, prn=False):
    """point locations given distance and bearing
    """

    orig = np.array(orig)
    rads = np.deg2rad(bearings)
    dx = np.sin(rads) * dists
    dy = np.cos(rads) * dists
    x_t = np.cumsum(dx) + orig[0]
    y_t = np.cumsum(dy) + orig[1]
    xy_f = np.array(list(zip(x_t[:-1], y_t[:-1])))
    xy_f = np.vstack((orig, xy_f))
    stack = (xy_f[:, 0], xy_f[:, 1], x_t, y_t, dx, dy, dists, bearings)
    data = np.vstack(stack).T
    names = ['X_f', 'Y_f', 'X_t', 'Y_t', 'dx', 'dy', 'dist', 'bearing']
    N = len(names)
    if prn:  # ---- just print the results ----------------------------------
        frmt = "Origin ({}, {})\n".format(*orig) + "{:>10s}"*N
        print(frmt.format(*names))
        frmt = "{: 10.2f}"*N
        for i in data:
            print(frmt.format(*i))
        return data
    else:  # ---- produce a structured array from the output ----------------
        kind = ["<f8"]*N
        dt = list(zip(names, kind))
        out = np.zeros((data.shape[0],), dtype=dt)
        for i in range(N):
            nm = names[i]
            out[nm] = data[:, i]
        return out

# ---- from the structured array, use NumPyArrayToFeatureClass
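For completeness, a hedged sketch of that last step; the output path and the spatial reference code are hypothetical, and X_t/Y_t are the to-point fields defined above.

import arcpy

out = dist_bearing(orig=(300000., 5025000.), bearings=bearings, dists=dists)
sr = arcpy.SpatialReference(26917)            # ---- a hypothetical UTM zone
arcpy.da.NumPyArrayToFeatureClass(out, r"C:\Temp\demo.gdb\traverse",
                                  ['X_t', 'Y_t'], sr)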

 

Again... more later