
Help.... Documentation

 

Last thing written... if ever... First thing needed.

I have been messing with formatting my code using NumPy-style docstrings.

 

And! Not only that, your favorite Python package manager includes it in its Python distribution, so Sphinx can use it.

 

 

So here is a little 'dirr' function I wrote, since dir(object) returns a messy, hard-to-read output in any form.

Here is the documentation for the function.

def dirr(obj, colwise=True, cols=3, sub=None, prn=True):
    """A formatted dir listing of a module or function.

    Source : arraytools module in tools.py, dirr def

    Return a directory listing of a module's namespace or a part of it if the
    `sub` option is specified.  If  `prn=True`, then it is simply printed. If
    False, then the string is returned.

    Parameters
    ----------
    - colwise : `True` or `1`, otherwise, `False` or `0`
    - cols : pick a size to suit
    - sub : sub='a' all modules beginning with `a`
    - prn : `True` for print or `False` to return output as string

    Notes
    -----

    See the `inspect` module for possible additions like `isfunction`,
    'ismethod`, `ismodule`

    **Examples**::

        dirr(art, colwise=True, cols=3, sub=None, prn=True)  # all columnwise
        dirr(art, colwise=True, cols=3, sub='arr', prn=True) # just the `arr`'s

          (001)    _arr_common     arr2xyz         arr_json
          (002)    arr_pnts        arr_polygon_fc  arr_polyline_fc
          (003)    array2raster    array_fc
          (004)    array_struct    arrays_cols
    """

 

The key (apparently) is indentation, back-ticks and little tricks. 

In Spyder's help documentation, it looks like this. ( More on Spyder as an IDE for your work )

 

 

You can produce whole documentation for your modules or even single 'scripts' to accompany your code.

 

And the function itself makes finding my own documentation easier.

Sample output
# ---- by column

dirr(art, colwise=True, cols=4, sub='arr')

----------------------------------------------------------------------
| dir(arraytools) ...
|    <module 'arraytools' from 'C:\\Git_Dan\\arraytools\\__init__.py'>
-------
  (001)    _arr_common     arr_pnts        array2raster    array_struct   
  (002)    arr2xyz         arr_polygon_fc  array_fc        arrays_cols    
  (003)    arr_json        arr_polyline_fc                                

# ---- by row

dirr(art, colwise=False, cols=4, sub='arr')

----------------------------------------------------------------------
| dir(arraytools) ...
|    <module 'arraytools' from 'C:\\Git_Dan\\arraytools\\__init__.py'>
-------
  (001)    _arr_common     arr2xyz         arr_json        arr_pnts       
  (002)    arr_polygon_fc  arr_polyline_fc array2raster                   
  (003)    array_fc        array_struct    arrays_cols                    

# ---- standard method

dir(art)
['__all__', '__args', '__art_modules__', '__art_version__', '__builtins__',
'__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__',
'__spec__', '_arr_common', '_center', '_centroid', '_common', '_convert',
'_cursor_array', '_demo_frmt', '_demo_ma', '_demo_rec', '_densify_2D', '_describe',
'_even_odd', '_extent', '_flat_', '_func', '_geo_array', '_get_shapes', '_help',
'_id_geom_array', '_max', '_min', '_ndarray', '_new_view_', '_pad_', '_pad_even_odd',
'_pad_nan', '_pad_zero', '_props', '_reshape_', '_split_array', '_two_arrays',
'_unpack', '_view_', '_xy', '_xyID', '_xy_idx', 'a_filter', 'a_io', 'all_folders',
'analysis', 'angle_2pnts', 'angle_np', 'angle_seq', 'angles_poly', 'apt', 'arc_np',
'areas', 'arr2xyz', 'arr_json', 'arr_pnts', 'arr_polygon_fc', 'arr_polyline_fc',
'array2raster', 'array_fc', 'array_struct', 'arrays_cols', 'azim_np', 'block',
'block_arr', 'centers', 'centroids', 'change_arr', 'change_fld', 'circle',

.... snip

 

 

Producing links to web pages

This may be a bit much, but your documentation can also provide links to specific web pages.  Here is an example.  The key is to provide:

  • a link number ... [1]
  • a name for the link ... with a trailing __
  • and the actual link itself.

 

 
Web links in script documentation
References:
----------

**creating meshgrids from x,y data and plotting**

[1]
`2D array of values based on coordinates`__:

__ http://stackoverflow.com/questions/30764955/python-numpy-create-2darray-of-values-based-on-coordinates

[2]
`2D histogram issues`__:

__ https://github.com/numpy/numpy/issues/7317


**distance calculations and related** .... (scipy, skikit-learn)

 

The **text** provides bold highlighting.  The [1] on line 6 is the first link, with line 7 being the link name in back-ticks with a trailing __.  Line 9, with the leading __, is the actual link itself.  What is shown is a snip of the whole link list.

 

 

Remember, documentation is important. If not for yourself, for someone else.

Concave hulls... tons of examples, some suited to some point clouds, others not so much.

It has greatly amused me over the years that people spend so much time trying to find the 'corner cases' where a particular implementation fails.  For example, the ever-popular C-shaped object.

 

I know... in optical fields, being able to identify the letters of the alphabet or curly snake-shaped patterns is important... but how many times have you sent your field assistant to collect data points in weird shapes?

 

Toolbox links

Concave Hulls ... this is a separate toolbox

Point Tools ... or it is contained in this toolbox as well

 

So, regardless of the implementation, they can be illustrative in exploring point patterns and generating containers to describe them.  I recently posted on the Minimum Area Bounding Ellipse (MABE) and I have a toolset that produces Minimum Area Bounding Circles (MABC), rectangles (MABR) and extent rectangles.  This implementation of the concave hull was proposed by Adriano Moreira and Maribel Yasmina Santos in about 2007 and there are several implementations in various languages on GitHub for those needing a trolling session.

 

The 'tightness' of the concave hull is controlled by changing the number of nearest neighbors to include when deciding which points on the perimeter to keep or dump.  This 'K' factor illustrates some of the possible outcomes.  The thing to watch out for is producing degenerate points which are outside the hull, but are just too much of an outsider to be allowed into the fold.  So, in pictorial form,

 

3 neighbors

7 neighbors

11 neighbors

And finally... with the convex hull surrounding it.

And if that isn't tight enough for you, radial sorting followed by concave hull generation is as tight as it is going to get.

 

So if you need to 'contain' something, add the concave hull to your suite of tools.

I will add the implementation of concave and convex hulls to my toolsets.

Geometry... perhaps its due is overlooked.

A pictorial of some of the things you can do with a set of points, with a Basic license in ArcMap and PRO, a bit of Python, and a toolbox to house the scripts in.

 

:---- (1)  -------------------------------------------------------------------------------------------------------------

Begin with some points

Random they are, no order.

:---- (2)  ------------------------------------------------------------------------------------------------------------------

Put order to the set.  A radial sort was performed so that the mess was ordered by angle relative to the x-axis.

Lines were drawn to the center of the cloud (just because)

 

:---- (3)  ------------------------------------------------------------------------------------------------------------------

What do we have now?

Since the points are ordered, connect the dots.

:---- (4)  ------------------------------------------------------------------------------------------------------------------

Let's look at things conventionally.  Everyone loves a convex hull.  It encompasses the points as a group.  There are other containers.

 

:---- (5)  ------------------------------------------------------------------------------------------------------------------

But some of the points are far from the center.  A quick little negative buffer to keep only the points within 90% of the average distance to the center.

 

:---- (6)  ------------------------------------------------------------------------------------------------------------------

Convexity.... A different view on the data

Switching the selection in the above, then connecting the dots, gives us a convex-ish hull.

Hmmm I wonder if you can change the rules to get a different view?

 

 

:---- (7)  ------------------------------------------------------------------------------------------------------------------

Every cool map has texture.  Fill in the boundary, obliterate the dots and 'forest' fill.  Just because we can.

 

 

:---- (8)  ------------------------------------------------------------------------------------------------------------------

Connecting each point to its closest neighbor

 

:---- (9)  ------------------------------------------------------------------------------------------------------------------

Connections to 3 closest

Starting to see groupings and spaces.

 

:---- (10)  ------------------------------------------------------------------------------------------------------------------

Trees are useful (minimum spanning tree, MST)

 

:---- (11)  ------------------------------------------------------------------------------------------------------------------

Make patterns from a single point (Ulam spiral)

:---- (12)  ------------------------------------------------------------------------------------------------------------------

Archimedes only needed a single point

 

:---- (13)  ------------------------------------------------------------------------------------------------------------------

 

:---- (14)  ------------------------------------------------------------------------------------------------------------------

The original Reclassify Rasters Simply... 

 

Interesting if you have a function that would be a pain to implement in Spatial Analyst since it would take a number of steps... a ufunc.  An example follows, with multiple output formats possible.

 

 

ufunc .... python user functions with numpy... np.frompyfunc

a = np.arange(1, 101).reshape(10, 10)  # ---- make a raster

a
array([[  1,   2,   3,   4,   5,   6,   7,   8,   9,  10],
       [ 11,  12,  13,  14,  15,  16,  17,  18,  19,  20],
       [ 21,  22,  23,  24,  25,  26,  27,  28,  29,  30],
       [ 31,  32,  33,  34,  35,  36,  37,  38,  39,  40],
       [ 41,  42,  43,  44,  45,  46,  47,  48,  49,  50],
       [ 51,  52,  53,  54,  55,  56,  57,  58,  59,  60],
       [ 61,  62,  63,  64,  65,  66,  67,  68,  69,  70],
       [ 71,  72,  73,  74,  75,  76,  77,  78,  79,  80],
       [ 81,  82,  83,  84,  85,  86,  87,  88,  89,  90],
       [ 91,  92,  93,  94,  95,  96,  97,  98,  99, 100]])

# ---- a weird request to reclass the data, expressed as a func ----

def reclass_2(a):
    '''reclassification using ufunc user functions'''
    if a > 80 and a != 91:             # ---- the first reclass option
        a = 8
    elif a > 60 and a <= 80:           # ---- another simple one
        a = 7
    elif (a <= 60) and (a % 2 == 0):   # ---- now this one is a kicker
        a = 2
    else:                              # ---- I could continue, but I won't
        a = 0
    return a


# ---- Now... make the ufunc using numpy's frompyfunc
# frompyfunc(name of the def, number of input parameters in, results out)

r = np.frompyfunc(reclass_2, 1, 1)     # ---- function made

r                                      # ---- not much info
<ufunc '? (vectorized)'>

# ---- Make it do some work by passing the array

val = r(a)                             # ---- the array, 1 input, 1 output

array([[0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
       [0, 8, 8, 8, 8, 8, 8, 8, 8, 8]], dtype=object)

# ---- return as integer
val.astype(np.int32)
Out[8]:
array([[0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [0, 2, 0, 2, 0, 2, 0, 2, 0, 2],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
       [0, 8, 8, 8, 8, 8, 8, 8, 8, 8]])

# ---- how about float?
val.astype(np.float32)

array([[ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.,  0.,  2.],
       [ 7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.],
       [ 7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.,  7.],
       [ 8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.],
       [ 0.,  8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.,  8.]], dtype=float32)

# ---- be the first on your block to have a string raster

val.astype(np.str)
Out[10]:
array([['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['0', '2', '0', '2', '0', '2', '0', '2', '0', '2'],
       ['7', '7', '7', '7', '7', '7', '7', '7', '7', '7'],
       ['7', '7', '7', '7', '7', '7', '7', '7', '7', '7'],
       ['8', '8', '8', '8', '8', '8', '8', '8', '8', '8'],
       ['0', '8', '8', '8', '8', '8', '8', '8', '8', '8']],
      dtype='<U1')
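As an aside (mine, not part of the original post), the same reclass can be done without frompyfunc by using np.select, which evaluates all the conditions on the whole array at once and returns an integer result directly:

# ---- a vectorized equivalent of reclass_2 using np.select ----
conds = [(a > 80) & (a != 91),
         (a > 60) & (a <= 80),
         (a <= 60) & (a % 2 == 0)]
val2 = np.select(conds, [8, 7, 2], default=0)

np.all(val2 == val.astype(np.int32))  # ---- True... same classes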

 

If you have any interesting examples that you try, pass them on.

It isn't quite false advertising. 

I will attribute it to an omission based on the needs of the many rather than the needs of the few.

 

The story is in code...

# ---- start with an array ------

# ---- any data... floats for a tiny place over 5 days -----

a = np.arange(60.).reshape(5, 3, 4)  # ---- 5 'days' of a 3x4 grid
a
array([[[  0.,   1.,   2.,   3.],
        [  4.,   5.,   6.,   7.],
        [  8.,   9.,  10.,  11.]],

       [[ 12.,  13.,  14.,  15.],
        [ 16.,  17.,  18.,  19.],
        [ 20.,  21.,  22.,  23.]],

       [[ 24.,  25.,  26.,  27.],
        [ 28.,  29.,  30.,  31.],
        [ 32.,  33.,  34.,  35.]],

       [[ 36.,  37.,  38.,  39.],
        [ 40.,  41.,  42.,  43.],
        [ 44.,  45.,  46.,  47.]],

       [[ 48.,  49.,  50.,  51.],
        [ 52.,  53.,  54.,  55.],
        [ 56.,  57.,  58.,  59.]]])

# ---- save to a raster *.tif ----------------

r = arcpy.NumPyArrayToRaster(a)
r.save("c:/temp/np2rast.tif")

# ---- load it back in to check your work ----

b = arcpy.RasterToNumPyArray("c:/temp/np2rast.tif")

# ---- seems to only read the first 'day' ---
array([[  0.,   1.,   2.,   3.],
       [  4.,   5.,   6.,   7.],
       [  8.,   9.,  10.,  11.]])

# ---- well that isn't good, hmmmm  ---------------
# ---- the work-around... save each 'day' separately, then composite the bands
rasters = []
for i in range(a.shape[0]):
    ai = a[i]
    rast = arcpy.NumPyArrayToRaster(ai)
    r_name = "in_memory/a{:0>3}.tif".format(i)
    rasters.append(r_name)
    rast.save(r_name)
rasters = ";".join([i for i in rasters])

rasters
'in_memory/a000.tif;in_memory/a001.tif;in_memory/a002.tif;in_memory/a003.tif;in_memory/a004.tif'

fname = "c:/temp/np2rast_comp.tif"
arcpy.management.CompositeBands(rasters, fname)  # ---- let's give Arc* another chance
c = arcpy.RasterToNumPyArray(fname)

# ---- everything is back as it should be -----------

array([[[  0.,   1.,   2.,   3.],
        [  4.,   5.,   6.,   7.],
        [  8.,   9.,  10.,  11.]],

       [[ 12.,  13.,  14.,  15.],
        [ 16.,  17.,  18.,  19.],
        [ 20.,  21.,  22.,  23.]],

       [[ 24.,  25.,  26.,  27.],
        [ 28.,  29.,  30.,  31.],
        [ 32.,  33.,  34.,  35.]],

       [[ 36.,  37.,  38.,  39.],
        [ 40.,  41.,  42.,  43.],
        [ 44.,  45.,  46.,  47.]],

       [[ 48.,  49.,  50.,  51.],
        [ 52.,  53.,  54.,  55.],
        [ 56.,  57.,  58.,  59.]]])

 

Now, if you have followed so far, I have made reference to working with numpy arrays to do a lot of work.  Rarely do I have the need to bring a 'raster' (ie *.tif) back into ArcGIS PRO or ArcMap.  I did, and discovered that arcpy.RasterToNumPyArray seems to only like bringing back a limited amount of data, probably with the expectation that the user wants to map floating point values (ie 1 band) or integer values (ie 3 bands for a 'picture'/'map').

 

This is a work-around, built-in.
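To make the round trip repeatable, the loop above can be wrapped in a small helper.  A sketch, using the same arcpy calls as above (np3d_to_tif is just an illustrative name):

import numpy as np
import arcpy

def np3d_to_tif(a, fname):
    """Save a 3D array as a multiband *.tif, one band per 'day'."""
    rasters = []
    for i in range(a.shape[0]):
        rast = arcpy.NumPyArrayToRaster(a[i])
        r_name = "in_memory/a{:0>3}.tif".format(i)  # ---- temporary per-band rasters
        rast.save(r_name)
        rasters.append(r_name)
    arcpy.management.CompositeBands(";".join(rasters), fname)
    return arcpy.RasterToNumPyArray(fname)          # ---- reload to verify

# b = np3d_to_tif(a, "c:/temp/np2rast_comp.tif")
# b.shape  # ---- (5, 3, 4)... all 'days' back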

 

You can of course store anything numeric in a tiff.  How about some statistics for our array 'a'?

 

mins = np.min(a, axis=0)
maxs = np.max(a, axis=0)
avg = np.mean(a, axis=0)
rang = np.ptp(a, axis=0)
medn = np.median(a, axis=0)

stats = [mins, maxs, avg, rang, medn]
rasters = []
for i in range(5):
    ai = stats[i]
    rast = arcpy.NumPyArrayToRaster(ai)
    r_name = "in_memory/a{:0>3}.tif".format(i)
    rasters.append(r_name)
    rast.save(r_name)
rasters = ";".join([i for i in rasters])

arcpy.management.CompositeBands(rasters, "c:/temp/stats.tif")
<Result 'c:\\Temp\\stats.tif'>

s = arcpy.RasterToNumPyArray("c:/temp/stats.tif")

# ---- in order, min, max, avg, range, median ----
array([[[  0.,   1.,   2.,   3.],
        [  4.,   5.,   6.,   7.],
        [  8.,   9.,  10.,  11.]],

       [[ 48.,  49.,  50.,  51.],
        [ 52.,  53.,  54.,  55.],
        [ 56.,  57.,  58.,  59.]],

       [[ 24.,  25.,  26.,  27.],
        [ 28.,  29.,  30.,  31.],
        [ 32.,  33.,  34.,  35.]],

       [[ 48.,  48.,  48.,  48.],
        [ 48.,  48.,  48.,  48.],
        [ 48.,  48.,  48.,  48.]],

       [[ 24.,  25.,  26.,  27.],
        [ 28.,  29.,  30.,  31.],
        [ 32.,  33.,  34.,  35.]]])

# ---- It won't make for a nice map, but your data is there

Next up... explorations of the limitations of working with *.tif files for multidimensional raster/array data.

There are a number of options within the Esri suite of software, some of which rely in part on open source offerings.  Some of these (ie libtiff and tifffile) have made it into other open source software as well (ie GDAL, scipy.ndimage, skimage etc)... kind of like open source-open source.

 

Enough for now.

Table Tools... the link on ArcGIS Code Sharing...

This is a somewhat unorganized pictorial supplement to the tools contained there.

 

General Toolbox Layout

These images show the contents so far.

 

:----------------------------------------------------------------------------------------------------------------------

Analysis Toolset

Frequency

Longed for the functionality of this tool with a Standard license?  Here you go.  

You can get the frequency counts by field pairing and a 'sum' on one or more fields.
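Under the hood this is a numpy exercise.  A sketch of the core idea (my illustration with made-up field names, assuming a structured array pulled in with arcpy.da.TableToNumPyArray):

import numpy as np

# arr = arcpy.da.TableToNumPyArray("some_table", ["Class", "Type", "Value"])
arr = np.array([("A", 1, 10.), ("A", 1, 5.), ("B", 2, 3.)],
               dtype=[("Class", "U5"), ("Type", "i4"), ("Value", "f8")])

# ---- frequency by field pairing, with a 'sum' on a third field ----
uniq, inv, counts = np.unique(arr[["Class", "Type"]],
                              return_inverse=True, return_counts=True)
sums = np.bincount(inv, weights=arr["Value"])  # ---- Value totals per pairing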

 

With a sample output from the inputs.

 

 

:----------------------------------------------------------------------------------------------------------------------

Array_tools Toolset

 

A handy conversion set, enabling the user to work with numpy arrays.

 

This is pretty self-explanatory.  The reason that there isn't an 'All' option is that it takes fewer lines of code to provide field selection than to check whether people want all fields or just some of them exported to an *.npy file.

 

You can always pull out the line of code if you want everything exported; otherwise, select the fields that you want exported, in the order that you want.

 

When the Shape field is exported, the centroid (regardless of geometry type) is used so that you can retrieve the X and Y coordinates when and if you import the data.

 

To reverse the process, simply use the Numpy Array to Table tool.

The basic setup and the resultant are shown below.

:----------------------------------------------------------------------------------------------------------------------

Statistics Toolset

 

Column Statistics

The tool allows you to specify the fields to summarize.  Fields of 'Double' type are supported, since you shouldn't be calculating statistics for integer fields... do the conversion if you insist.

 

Here is a summary sample for 3 fields.  Null values are accounted for in the calculation, so there is no need to exclude them prior to doing calculations.

As you can see, the traditional statistics are calculated, and they include skewness and kurtosis, which the current Summary Statistics tool in PRO doesn't.  I will add the Case option when I feel the output looks nice.

 

 

I even got fancy and included an alternate for displaying the data in the 'View Details' window.  As it says... simply select all, then copy and paste into your desired text editor.

 

 

 

Cross-Tabulation

The cross-tabulation tool does just that... take two fields and get a cross-tabulation of their values.  I will add multidimensional crosstabulation at some point, when/if a demand surfaces.
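The numpy core of such a crosstab is compact.  A sketch (my illustration, not the tool's code):

import numpy as np

a = np.array(["A", "A", "B", "B", "A"])          # ---- field 1 values
b = np.array([1, 2, 1, 1, 1])                    # ---- field 2 values

r_u, r_inv = np.unique(a, return_inverse=True)   # ---- row classes
c_u, c_inv = np.unique(b, return_inverse=True)   # ---- column classes

ctab = np.zeros((len(r_u), len(c_u)), dtype=int)
np.add.at(ctab, (r_inv, c_inv), 1)               # ---- tally each pairing
# ctab ... array([[2, 1],
#                 [2, 0]])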

 

 

 

 

 

Rank by Fields

 

Just does that... consider the following information in a 10K table on mass and size.  You need a quick ranking?  Give it a whirl.  You can not only get the biggest/smallest X, you can construct an ogive and derive other stats.
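As a numpy aside (my illustration), a quick ranking is a couple of lines:

import numpy as np

mass = np.array([4.2, 1.1, 9.7, 3.3])
order = np.argsort(mass)[::-1]               # ---- indices, biggest first
ranks = np.empty_like(order)
ranks[order] = np.arange(1, len(mass) + 1)   # ---- rank 1 = biggest
# ranks ... array([2, 4, 1, 3])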

 

Here is a sample run and output.

 

 

 

:----------------------------------------------------------------------------------------------------------------------

This comes up time and again. ....I have 'X' rasters for an area and I need to determine Y over time....

Here is an example.... ....calculating percentiles....

 

Cell Statistics... is the obvious choice.  But you don't have the Spatial Analyst extension? Doesn't matter.  All the tools that you need are already provided if you are using current versions of ArcMap or ArcGIS PRO.

 

Do some work

 

(1)  Reading the raster files

import numpy as np
import arcpy

arrs = []
folder = r"C:\Data\rasters"  # ---- specify a folder containing all the rasters ----
arcpy.env.workspace = folder
rasters = arcpy.ListRasters("*", "TIF")
for raster in rasters:
    arrs.append(arcpy.RasterToNumPyArray(raster))

a = np.array(arrs) # ---- the master array ----

 

Once the rasters are all converted to arrays, they are appended to a list.  The master array is created in the last line above.

As an example, 31 rasters in tif format were read in.  These are small rasters, but they can be as large as you can process.  You are now ready to calculate values for them using numpy.

 

(2)  Getting the statistics

median = np.median(a, axis=0) # ---- pick a statistic ----

array([[ 59.,  54.,  36., ...,  57.,  46.,  46.],
       [ 43.,  45.,  59., ...,  38.,  51.,  51.],
       [ 35.,  50.,  57., ...,  47.,  55.,  65.],
       ...,
       [ 43.,  52.,  40., ...,  56.,  62.,  60.],
       [ 45.,  57.,  39., ...,  45.,  44.,  48.],
       [ 49.,  57.,  50., ...,  56.,  50.,  58.]])

mins = np.min(a, axis=0)

array([[1, 3, 0, ..., 6, 4, 5],
       [1, 5, 6, ..., 6, 7, 2],
       [3, 2, 4, ..., 2, 5, 4],
       ...,
       [4, 0, 1, ..., 3, 4, 6],
       [0, 0, 0, ..., 5, 1, 0],
       [4, 1, 1, ..., 0, 3, 7]])

# ---- name a statistic, with or without nodata values, it can be done ----
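The example at the top of this post mentioned percentiles... they follow the same pattern, with np.nanpercentile as the nodata-tolerant variant:

p90 = np.percentile(a, 90, axis=0)        # ---- 90th percentile, cell by cell
p90_nd = np.nanpercentile(a, 90, axis=0)  # ---- same, ignoring NaN nodata cells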

 

Now when you are done your calculations you will probably want to make a raster so you can bask in the beauty of your efforts.

 

(3)  Reading the raster information and saving the result

rast = arcpy.Raster(rasters[0])  # ---- read the first raster we loaded ----

dir(rast)                 # ---- get raster information... some snipping here ----

[..., 'bandCount', 'catalogPath', 'compressionType', 'extent', 'format', 'hasRAT',
'height', 'isInteger', 'isTemporary', 'maximum', 'mean', 'meanCellHeight',
'meanCellWidth', 'minimum', 'name', 'noDataValue', 'path', 'pixelType', 'save',
'spatialReference', 'standardDeviation', 'uncompressedSize', 'width']

# ---- Save the result out to a new raster ------

r01 = rasters[1]                    # --- rasters have the same extent and cell size
rast = arcpy.Raster(r01)

lower_left = rast.extent.lowerLeft  # ---- needed to produce output
cell_size = rast.meanCellHeight     # ---- we will use this for x and y

f = r"c:\temp\result.tif"
r = arcpy.NumPyArrayToRaster(a, lower_left, cell_size, cell_size)
r.save(f)

 

(4) Saving and reloading the arrays

Now, the nice thing about arrays is that you can save a 3D array to a file for reloading later (or any dimension for that matter).

If you have a 3D array like we have, this is kind of like a big 'zip'.  

np.save(r"c:\temp\myarray.npy", a)  # ---- 'a' is the multi dimensional array

a.shape  # ---- 31 days, 100 x 100 cells
(31, 100, 100)

# ---- reload it

im_back = np.load(r"c:\temp\myarray.npy")

im_back.shape
(31, 100, 100)

# ---- check to see if it is good

np.all(a == im_back)  # ---- some numpy magic ;)
True

 

That's about all.  Not covered here: nodata cells (easy, you can assign one) and statistics for nodata (easy, every stat has a nodata equivalent).

 

Try your skills with the attached *.npy file.

The link to the ArcGIS Code Sharing site... ... pictures are worth more than words. 

The toolset is easy to use.  There should be some expectation that your point sets can be enclosed with an ellipse or some other container.  

 

For non-degenerate ellipses, you need at least 3 points, since 3 non-collinear points is the minimum that esri's geometry engine will allow you to create polylines and polygons from.

My Bounding Containers toolset covers other 'containers' to put stuff in.  Clean up your data and put it in its place.

 

So here is a pretty picture, showing some simple cases of point groupings and their generated ellipses.

 

This is not the same as the Standard Deviational Ellipse.  That one doesn't enclose the boundary... it is a stats thing.

 

 

Just an example of the details from above

Then you get this... you can see it, can't you...

Large data clouds are no problem, and it is nice to see an elliptical cloud being delineated as an ellipse.

 

Enjoy

Send me emails if you have questions

Dan_Patterson

Spyder

Posted by Dan_Patterson, Jan 27, 2018

In pictures.

 

:--------- Some other links

Script documenting ... it is all in the docs: links to the Spyder help pane for user-created help.

 

:--------- The Icon 

:--------- The whole interface... which can be arranged to suit

 

:--------- Keep track of project files

 

:--------- Need a little help?

 

:--------- Check out your variables

 

:--------- Some graphs?  Yes, from within Spyder you can use Matplotlib or any other installed graphics pack (ie seaborn, orange etc)

:------------- See your scripts in outline mode, quickly navigate within them like a table of contents

:-------------- Don't like something?  Change it.

 

: --------  Set Spyder as your IDE for PRO

 

 

 

More later

: --------

Start at a point.  You are given a list of bearings and distances.  Where do you end up?

This comes up fairly often, and the COGO tools allow you to create these from scratch.

Here is how you can do it using numpy and arcpy.

 

First a picture... given a location, you are instructed to move 100 m in 10-degree increments between points.  You will obviously trace out a circle.  The traverse point locations can be constructed from base geometry, then saved to a featureclass.  The point featureclass can be converted to a continuous line, which can be segmented if you desire.  Total time, not much.

The image above, shows the results of such a venture.

  • I created my geometry using python/numpy
  • I save the results to a featureclass using Numpyarraytofeatureclass (hence points)
  • From those points I used Points to Line available at all license levels
  • The continuous line was blown apart to segments using Split Lines at Vertices, but alas, that is only available at the Advanced level.... (I will save this obstruction for another blog, where I will show you how to do it with a Standard license.  EDIT... From the second incarnation, you can produce a featureclass and use the XY to Line tool, which is available at all license levels)
  • Add Geometry Attributes enabled me to obtain the coordinates of the start, middle and end locations of the line.  This could have been done during the construction of the geometry as well.

 

And how many of us have taught a class where this happens....

 

More to come....

Oh yes... code...  Note: the origin needs to be given in projected coordinates (ie UTM, State Plane, Web Mercator etc) since the calculations are done in planar units.  See my Vincenty implementation if you want to work with geodesic calculations.

def dist_bearing(orig=(0, 0), bearings=None, dists=None):
    """point locations given distance and bearing
    :Notes:  for sample calculations
    :-----
    : - bearings = np.arange(0, 361, 10.)
    : - dists = np.random.randint(10, 500, 36) * 1.0
    : - dists = np.ones((len(bearings),))
    :   dists.fill(100.)
    """

    rads = np.deg2rad(bearings)
    sines = np.sin(rads)
    coses = np.cos(rads)
    dx = sines * dists
    dy = coses * dists
    x_n = np.cumsum(dx) + orig[0]
    y_n = np.cumsum(dy) + orig[1]
    data = np.vstack((bearings, dists, dx, dy, x_n, y_n)).T
    frmt = "Origin (0,0)\n" + "{:>10s}"*6
    names = ["bearings", "dist", "dx", "dy", "Xn", "Yn"]
    print(frmt.format(*names))
    for i in data:
        print(("{: 10.2f}"*6).format(*i))
    return data
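A quick run, using the sample values from the docstring... 36 legs of 100 m in 10-degree increments traces out the circle in the picture:

bearings = np.arange(0, 361, 10.)   # ---- 10 degree increments
dists = np.ones((len(bearings),))
dists.fill(100.)                    # ---- 100 m per leg
data = dist_bearing(orig=(0, 0), bearings=bearings, dists=dists)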

 

Optionally, create a structured array from which you can produce a featureclass, using this incarnation.

def dist_bearing(orig=(0, 0), bearings=None, dists=None, prn=False):
    """point locations given distance and bearing
    """

    orig = np.array(orig)
    rads = np.deg2rad(bearings)
    dx = np.sin(rads) * dists
    dy = np.cos(rads) * dists
    x_t = np.cumsum(dx) + orig[0]
    y_t = np.cumsum(dy) + orig[1]
    xy_f = np.array(list(zip(x_t[:-1], y_t[:-1])))
    xy_f = np.vstack((orig, xy_f))
    stack = (xy_f[:, 0], xy_f[:, 1], x_t, y_t, dx, dy, dists, bearings)
    data = np.vstack(stack).T
    names = ['X_f', 'Y_f', 'X_t', 'Y_t', 'dx', 'dy', 'dist', 'bearing']
    N = len(names)
    if prn:  # ---- just print the results ----------------------------------
        frmt = "Origin (0,0)\n" + "{:>10s}"*N
        print(frmt.format(*names))
        frmt = "{: 10.2f}"*N
        for i in data:
            print(frmt.format(*i))
        return data
    else:  # ---- produce a structured array from the output ----------------
        kind = ["<f8"]*N
        dt = list(zip(names, kind))
        out = np.zeros((data.shape[0],), dtype=dt)
        for i in range(N):
            nm = names[i]
            out[nm] = data[:, i]
        return out

# ---- from the structured array, use arcpy.da.NumPyArrayToFeatureClass

 

Again... more later

I rarely get a great idea.  This is another one of those cases.  But I do recognize a great idea when I see it.  Sadly I usually forget it, then rediscover it many months or years later.  I spotted a fantastic idea today and I had to drop everything (between answering questions) and jot it down.

 

The premise that twigged my interest was about how to 'combine' values of elevation, slope and aspect on Stack Overflow.  I did a wowwser!  Terrain derivatives on Stack!  I was enthralled.  The author.... unutbu ... of mysterious persona, put together a short missive using built-in numpy tools.  The code is short and sweet, works with floating point or integer rasters (aka, arrays) and is fast (at least in my testing so far).

 

Before I ruin the rest of my day with some droll corner case I hadn't thought of, I will jot this down now for posterity.

 

The large print

  • This doesn't require a Spatial Analyst License
  • A complete implementation (coming soon) uses arcpy's RasterToNumPyArray and NumPyArrayToRaster to communicate across the thin veil separating Arc* from the array world
  • Rasters/arrays with nodata values will be implemented
  • This is in the initial stages of development of a full-fledged tool, so hold your ...but!... comments

 

Start with 3 small rasters/arrays.  This way, you can see how the system works.

 

Array 'a'

array([[0, 0, 0, 4, 4, 4, 1, 1, 1],
       [0, 0, 0, 4, 4, 4, 1, 1, 1],
       [0, 0, 0, 4, 4, 4, 1, 1, 1],
       [2, 2, 2, 1, 1, 1, 2, 2, 2],
       [2, 2, 2, 1, 1, 1, 2, 2, 2],
       [2, 2, 2, 1, 1, 1, 2, 2, 2],
       [1, 1, 1, 4, 4, 4, 0, 0, 0],
       [1, 1, 1, 4, 4, 4, 0, 0, 0],
       [1, 1, 1, 4, 4, 4, 0, 0, 0]])

Array 'b'

array([[0, 0, 0, 1, 1, 1, 2, 2, 2],
       [0, 0, 0, 1, 1, 1, 2, 2, 2],
       [0, 0, 0, 1, 1, 1, 2, 2, 2],
       [3, 3, 3, 4, 4, 4, 5, 5, 5],
       [3, 3, 3, 4, 4, 4, 5, 5, 5],
       [3, 3, 3, 4, 4, 4, 5, 5, 5],
       [0, 0, 0, 1, 1, 1, 2, 2, 2],
       [0, 0, 0, 1, 1, 1, 2, 2, 2],
       [0, 0, 0, 1, 1, 1, 2, 2, 2]])

 

Array 'c'

array([[0, 0, 0, 3, 3, 3, 0, 0, 0],
       [0, 0, 0, 3, 3, 3, 0, 0, 0],
       [0, 0, 0, 3, 3, 3, 0, 0, 0],
       [1, 1, 1, 4, 4, 4, 1, 1, 1],
       [1, 1, 1, 4, 4, 4, 1, 1, 1],
       [1, 1, 1, 4, 4, 4, 1, 1, 1],
       [2, 2, 2, 5, 5, 5, 2, 2, 2],
       [2, 2, 2, 5, 5, 5, 2, 2, 2],
       [2, 2, 2, 5, 5, 5, 2, 2, 2]])

unique combinations

array([[0, 0, 0, 6, 6, 6, 1, 1, 1],
       [0, 0, 0, 6, 6, 6, 1, 1, 1],
       [0, 0, 0, 6, 6, 6, 1, 1, 1],
       [2, 2, 2, 7, 7, 7, 3, 3, 3],
       [2, 2, 2, 7, 7, 7, 3, 3, 3],
       [2, 2, 2, 7, 7, 7, 3, 3, 3],
       [4, 4, 4, 8, 8, 8, 5, 5, 5],
       [4, 4, 4, 8, 8, 8, 5, 5, 5],
       [4, 4, 4, 8, 8, 8, 5, 5, 5]])

 

So here it is.

def combine_(*arrs):
    """Combine arrays to produce a unique classification scheme
    :
    :References:
    :-----------
    : https://stackoverflow.com/questions/48035246/
    :       intersect-multiple-2d-np-arrays-for-determining-zones
    : original def find_labels(*arrs):
    """

    indices = [np.unique(arr, return_inverse=True)[1] for arr in arrs]
    M = np.array([item.max()+1 for item in indices])
    M = np.r_[1, M[:-1]]
    strides = M.cumprod()
    indices = np.stack(indices, axis=-1)
    vals = (indices * strides).sum(axis=-1)
    uniqs, labels = np.unique(vals, return_inverse=True)
    labels = labels.reshape(arrs[0].shape)
    return labels, uniqs
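Applied to the arrays 'a', 'b' and 'c' above, it returns the labelled raster shown as 'unique combinations':

labels, uniqs = combine_(a, b, c)  # ---- labels is the 9x9 classification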

 

Now stop reading here if you aren't interested in the details but have a quasi-firm grasp on the process.  Read on if you must, but suffice to say unutbu was on to something.

 

-----------------------------  details .... details .... details... read at your peril  ------------------------------------------------------------

In essence, what the code does is find the unique values of each array shoved into the function.  In our case, the list of unique values is short (see below).  Not too onerous a combination slate.

The first line of the function can be modified to return the unique values themselves (index [0] from np.unique instead of [1])... useful if they aren't sequential.  This is shown below.

induni = [np.unique(arr, return_inverse=True)[0] for arr in arrs]  # arrays a, b, c

[array([0, 1, 2, 4]), array([0, 1, 2, 3, 4, 5]), array([0, 1, 2, 3, 4, 5])]

 

The nifty bit is how unutbu constructs the 'indices' in two steps... he uses the maximum value in each array to determine the strides, applies them to the arrays to get a unique value ('vals') for each combination, and then reassigns a class ('labels') to it.

 

This process, not documented in the code, is expanded upon here.  Below, an array, combos, is constructed from the index values for each array plus the 'vals' derived from the above process.

idx = [np.unique(arr, return_inverse=True)[1] for arr in arrs]
vals = (np.stack(idx, axis=-1) * strides).sum(axis=-1)  # ---- strides from combine_
dt = [('a', '<i8'), ('b', '<i8'), ('c', '<i8'), ('vals', '<i8')]
combos = np.array(list(zip(*idx, vals)), dtype=dt)

 

If the 'combos' array is sorted, the unique values can be extracted from it.  The values in the 'vals' column are used to assign a 'label' to each unique combination.

srt_order = np.argsort(combos, order=['a','b', 'c'])  # sort the combos

srted = combos[srt_order]  # ---- extract them in sorted order

np.unique(srted)  # ---- determine the unique combinations

array([(0, 0, 0,   0),
       (0, 2, 2,  56),
       (1, 0, 2,  49),
       (1, 2, 0,   9),
       (1, 4, 4, 113),
       (2, 3, 1,  38),
       (2, 5, 1,  46),
       (3, 1, 3,  79),
       (3, 1, 5, 127)],
      dtype=[('a', '<i8'), ('b', '<i8'),
             ('c', '<i8'), ('vals', '<i8')])

 

Hope it is clear as mud.  I have some experimenting to do, but I just thought I would throw it out there.

Yup... adding points to poly* features

Already offered... Densify at all license levels?  NO!

My missive .... Densification by Factor  YES!

 

 

There is also a Geodesic Densify, but I don't do Geographic in any event, so use esri tools

Why did I reinvent the wheel?  Because the densification basis is simple... provide a sequence of locations for X and Y, then subdivide the intervening space by some factor (hence the ... by Factor).

 

Here is an example....

a = np.array([[ 20.,  20.],
              [ 20.,  30.],
              [ 30.,  30.],
              [ 30.,  20.],
              [ 20.,  20.]])

 

Then you run some code

def _densify_2D(a, fact=2):
    """Densify a 2D array using np.interp.
    """

    a = np.squeeze(a)              # ---- ensure a 2D array of (x, y) pairs
    n_fact = len(a) * fact
    b = np.arange(0, n_fact, fact)
    b_new = np.arange(n_fact - 1)
    c0 = np.interp(b_new, b, a[:, 0])
    c1 = np.interp(b_new, b, a[:, 1])
    n = c0.shape[0]
    c = np.zeros((n, 2))
    c[:, 0] = c0
    c[:, 1] = c1
    return c

And you get this.

a_dens = _densify_2D(a, fact=2)

a_dens

array([[ 20.,  20.],
       [ 20.,  25.],
       [ 20.,  30.],
       [ 25.,  30.],
       [ 30.,  30.],
       [ 30.,  25.],
       [ 30.,  20.],
       [ 25.,  20.],
       [ 20.,  20.]])
# ---- timing, on a much larger array ----

a.shape  # (49874, 2)

%timeit _densify_2D(a, fact=2)
5.25 ms ± 323 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

 

 

Where it really shines is for long features that just need to be densified without any particular requirement for a set spacing.  It is fairly speedy as well.

Hope someone finds this useful, if you don't have the Standard or Advanced license in particular.

This is the current state of the Point and PointGeometry .... not the best....

pt2 = arcpy.Point()

pt2
<Point (0.0, 0.0, #, #)>        # ---- make a point

pg2 = arcpy.PointGeometry(pt2)  # ---- make a point geometry

pg2.WKT                         # ---- why does it have valid X, Y values????
'POINT (0 0)'

So there is no such thing as a 'null' point (but a 'nominal' point as one of our esteemed participants noted). 

'None' isn't good enough.  You can create an 'empty' geometry recognized by other representations with a little trickery.

import arcpy

n = np.nan

a = np.array([n, n, n, n])

pt = arcpy.Point()

pt.X, pt.Y, pt.Z, pt.M = a

pg = arcpy.PointGeometry(pt)

pg
<PointGeometry object at 0x200c6ed6ac8[0x200bf9a39e0]>

pg.WKT
'POINT EMPTY'

pt
<Point (nan, nan, nan, nan)>

 

And when you look at the PointGeometry, you decide which is best.

# ---- by using NaN (not a number) for point properties ----
pt        # <Point (nan, nan, nan, nan)>

pg = arcpy.PointGeometry(pt)

pg.JSON   # '{"x":"NaN","y":"NaN","spatialReference":{"wkid":null}}'

pg.WKT    #  'POINT EMPTY'

# ---- The current state of affairs ----

pt2       # <Point (0.0, 0.0, #, #)>

pg2 = arcpy.PointGeometry(pt2)

pg2.JSON  # '{"x":0,"y":0,"spatialReference":{"wkid":null}}'

pg2.WKT   # 'POINT (0 0)'

 

I should point out that you can use math.nan in place of np.nan

import math

m = math.nan
b = np.array([m, m, m, m])
pt3 = arcpy.Point()
pt3.X, pt3.Y, pt3.Z, pt3.M = b
pt3  # ---- <Point (nan, nan, nan, nan)>

pg3 = arcpy.PointGeometry(pt3)

pg3.JSON  # ---- '{"x":"NaN","y":"NaN","spatialReference":{"wkid":null}}'

pg3.WKT   # ---- 'POINT EMPTY'

Natural terrain is so fickle.  Wouldn't it be nice to have a sink-less terrain so that your flow direction, flow accumulation and stream generation would go as nature planned?

 

If you have a need to test algorithms, natural surfaces aren't the best, but generating artificial terrain can be a pain.  Artificial surfaces tend to have repetitive patterns and lack the finesse that a real surface has.

 

This is a brief look at one of the many algorithms that have been employed in early gaming software and largely ignored by the 'derivative' people.  The algorithm is pretty simple... provide random seeds to the corners of a square array/raster, then divide it into triangles and squares in a predictable fashion, calculating the averages of the intervening space and assigning the value to the central location of each pattern.  Now, to mix it up, one can add some randomness to the calculated values to 'roughen' the surface, so to speak.

 

The following is the sequence of the pattern generation... the code is on my GitHub account and the Code Sharing site for those who want to explore on their own.  I repurposed an implementation, adding a few twists, like leaving out the random jitters, and the ability to replicate in both rows and columns should you want to generate a surface with some repetition.

 

More importantly, this is for fun and is a good way to learn how to use numpy arrays to work with array data that forms images or rasters in the GIS sense of the term.

 

Input template array with 4 corners defined...  The empty space is represented by 'nan', aka 'not a number'.  You could use zero, but 'not a number' is a raster null, without having to get into the issues of masks and the like.

Step 1... throw in 4 corners

[[ 0.250  nan  nan  nan  nan  nan  nan  nan  0.750]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.500  nan  nan  nan  nan  nan  nan  nan  1.000]]

Step 2... calculate the central location...

[[ 0.250  nan  nan  nan  nan  nan  nan  nan  0.750]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  0.625  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.500  nan  nan  nan  nan  nan  nan  nan  1.000]]

Step 3...
Split into the diamond structure and calculate the compass points

[[ 0.250  nan  nan  nan  0.542  nan  nan  nan  0.750]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.458  nan  nan  nan  0.625  nan  nan  nan  0.792]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.500  nan  nan  nan  0.708  nan  nan  nan  1.000]]

Step 4....
Calculate the central values for the squares

[[ 0.250  nan  nan  nan  0.542  nan  nan  nan  0.750]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  0.469  nan  nan  nan  0.677  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.458  nan  nan  nan  0.625  nan  nan  nan  0.792]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ nan  nan  0.573  nan  nan  nan  0.781  nan  nan]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.500  nan  nan  nan  0.708  nan  nan  nan  1.000]]

Step ....
Repeat......
[[ 0.250  nan  0.420  nan  0.542  nan  0.656  nan  0.750]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.392  nan  0.469  nan  0.578  nan  0.677  nan  0.740]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.458  nan  0.531  nan  0.625  nan  0.719  nan  0.792]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.510  nan  0.573  nan  0.672  nan  0.781  nan  0.858]
[ nan  nan  nan  nan  nan  nan  nan  nan  nan]
[ 0.500  nan  0.594  nan  0.708  nan  0.830  nan  1.000]]

Repeat
[[ 0.250  nan  0.420  nan  0.542  nan  0.656  nan  0.750]
[ nan  0.383  nan  0.502  nan  0.613  nan  0.706  nan]
[ 0.392  nan  0.469  nan  0.578  nan  0.677  nan  0.740]
[ nan  0.463  nan  0.551  nan  0.650  nan  0.732  nan]
[ 0.458  nan  0.531  nan  0.625  nan  0.719  nan  0.792]
[ nan  0.518  nan  0.600  nan  0.699  nan  0.787  nan]
[ 0.510  nan  0.573  nan  0.672  nan  0.781  nan  0.858]
[ nan  0.544  nan  0.637  nan  0.748  nan  0.867  nan]
[ 0.500  nan  0.594  nan  0.708  nan  0.830  nan  1.000]]


Step.... finally... all the squares and triangles are filled in

[[ 0.250  0.351  0.420  0.488  0.542  0.604  0.656  0.704  0.750]
[ 0.342  0.383  0.443  0.502  0.559  0.613  0.663  0.706  0.732]
[ 0.392  0.427  0.469  0.525  0.578  0.630  0.677  0.714  0.740]
[ 0.438  0.463  0.503  0.551  0.601  0.650  0.694  0.732  0.754]
[ 0.458  0.493  0.531  0.577  0.625  0.673  0.719  0.757  0.792]
[ 0.496  0.518  0.556  0.600  0.649  0.699  0.747  0.787  0.812]
[ 0.510  0.536  0.573  0.620  0.672  0.725  0.781  0.823  0.858]
[ 0.518  0.544  0.587  0.637  0.691  0.748  0.807  0.867  0.908]
[ 0.500  0.546  0.594  0.646  0.708  0.762  0.830  0.899  1.000]]
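For anyone who wants to follow along without the GitHub code, here is a minimal sketch of the fill sequence above (no random jitter, my simplification... not the posted implementation).  It reproduces the step values shown for the 9x9 case:

import numpy as np

def diamond_square(n=3, corners=(0.25, 0.75, 0.5, 1.0)):
    """Fill a (2**n + 1) square grid from 4 corner seeds... no roughening."""
    size = 2**n + 1
    a = np.full((size, size), np.nan)
    a[0, 0], a[0, -1], a[-1, 0], a[-1, -1] = corners
    step = size - 1
    while step > 1:
        half = step // 2
        # ---- centre of each square = mean of its 4 corners
        for r in range(half, size, step):
            for c in range(half, size, step):
                a[r, c] = (a[r - half, c - half] + a[r - half, c + half] +
                           a[r + half, c - half] + a[r + half, c + half]) / 4.0
        # ---- edge midpoints = mean of their available neighbours
        for r in range(0, size, half):
            for c in range((r + half) % step, size, step):
                vals = [a[r + dr, c + dc]
                        for dr, dc in ((-half, 0), (half, 0), (0, -half), (0, half))
                        if 0 <= r + dr < size and 0 <= c + dc < size]
                a[r, c] = sum(vals) / len(vals)
        step = half
    return a

print(np.round(diamond_square(3), 3))  # ---- matches the final 9x9 array above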

 

Of course, the image that follows uses a much larger array (1025x1025).  The generated array was sink-less, and flow properties and a stream network were generated from the result.  Pretty smooth, eh?

 

Next up... terrain generation from polygons and other vector geometry

Filter

Started this with rolling stats. 

 

Rolling Statistics for grid data... an alternative

 

Once you have a filter... kernel, whatever you want to call it... all you need is a moving window of some form and shape.  I won't go into those details since my previous work covers this.  I will just report a few simple ones and show you some of the possible kernels that you can create.

 

# ---- my input array ----

Input array.......
-shape (1, 10, 10), ndim 3
.... 1 1 1 1 1 2 2 2 2 2 
.... 1 1 1 1 1 2 2 2 2 2 
.... 1 1 1 1 1 2 2 2 2 2 
.... 1 1 1 1 1 2 2 2 2 2 
.... 1 1 1 1 1 2 2 2 2 2 
.... 3 3 3 3 3 4 4 4 4 4 
.... 3 3 3 3 3 4 4 4 4 4 
.... 3 3 3 3 3 4 4 4 4 4 
.... 3 3 3 3 3 4 4 4 4 4 
.... 3 3 3 3 3 4 4 4 4 4

From the above, you can see that I kept it simple... 4 blocks of values, the perfect study area.

 

Let's see if we can pick up some lines that divide the raster/array.

Sobel horizontal filter...
-shape (1, 3, 3), ndim 3

  . -1 -2 -1 
  .  0  0  0 
  .  1  2  1 


Sobel horizontal result.......
-shape (1, 10, 10), ndim 3, masked array, fill value 999999

.... -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 
.... -1  0  0  0  0  0  0  0  0 -1 
.... -1  0  0  0  0  0  0  0  0 -1 
.... -1  0  0  0  0  0  0  0  0 -1 
.... -1  8  8  8  8  8  8  8  8 -1 
.... -1  8  8  8  8  8  8  8  8 -1 
.... -1  0  0  0  0  0  0  0  0 -1 
.... -1  0  0  0  0  0  0  0  0 -1 
.... -1  0  0  0  0  0  0  0  0 -1 
.... -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 

See those 8's! nailed the horizontal divide.

 

Sobel vertical filter...

  . -1  0  1 
  . -2  0  2 
  . -1  0  1 

Sobel vertical result.......
-shape (1, 10, 10), ndim 3, masked array, fill value 999999
.... -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 
.... -1  0  0  0  4  4  0  0  0 -1 
.... -1  0  0  0  4  4  0  0  0 -1 
.... -1  0  0  0  4  4  0  0  0 -1 
.... -1  0  0  0  4  4  0  0  0 -1 
.... -1  0  0  0  4  4  0  0  0 -1 
.... -1  0  0  0  4  4  0  0  0 -1 
.... -1  0  0  0  4  4  0  0  0 -1 
.... -1  0  0  0  4  4  0  0  0 -1 
.... -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 

 

So easy... vertical nailed as well.

 

One more...

Emboss filter...
-shape (1, 3, 3), ndim 3
  . -1 -1  0 
  . -1  0  1 
  .  0  1  1 

Emboss vertical result.......

-shape (1, 10, 10), ndim 3, masked array, fill value 999999
.... -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 
.... -1  0  0  0  2  2  0  0  0 -1 
.... -1  0  0  0  2  2  0  0  0 -1 
.... -1  0  0  0  2  2  0  0  0 -1 
.... -1  4  4  4  6  6  4  4  4 -1 
.... -1  4  4  4  6  6  4  4  4 -1 
.... -1  0  0  0  2  2  0  0  0 -1 
.... -1  0  0  0  2  2  0  0  0 -1 
.... -1  0  0  0  2  2  0  0  0 -1 
.... -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 

 

How about some of the filters that I have found?

    n = np.nan
    all_f = [1, 1, 1, 1, 1, 1, 1, 1, 1]
    no_cnt = [1, 1, 1, 1, n, 1, 1, 1, 1]
    cross_f = [1, 0, 1, 0, 0, 0, 1, 0, 1]
    plus_f = [0, 1, 0, 1, 0, 1, 0, 1, 0]
    grad_e = [1, 0, -1, 2, 0, -2, 1, 0, -1]
    grad_n = [-1, -2, -1, 0, 0, 0, 1, 2, 1]
    grad_ne = [0, -1, -2, 1, 0, -1, 2, 1, 0]
    grad_nw = [2, -1, 0, -1, 0, 1, 0, 1, 2]
    grad_s = [1, 2, 1, 0, 0, 0, -1, -2, -1]
    grad_w = [-1, 0, 1, -2, 0, 2, -1, 0, 1]
    lap_33 = [0, -1, 0, -1, 4, -1, 0, -1, 0]
    line_h = [-1, -1, -1, 2, 2, 2, -1, -1, -1]
    line_ld = [2, -1, -1, -1, 2, -1, -1, -1, 2]
    line_rd = [-1, -1, 2, -1, 2, -1, 2, -1, -1]
    line_v = [-1, 0, -1, -1, 2, -1, -1, 2, -1]
    high = [-0.7, -1.0, -0.7, -1.0, 6.8, -1.0, -0.7, -1.0, -0.7]
    sob_hor = [-1, -2, -1, 0, 0, 0, 1, 2, 1]   # sobel y
    sob_vert = [-1, 0, 1, -2, 0, 2, -1, 0, 1]  # sobel x
    emboss = [-1, -1, 0, -1, 0, 1, 0, 1, 1]
    sharp1 = [0., -0.25, 0., -0.25, 2.0, -0.25, 0., -0.25, 0.]
    sharp2 = [-0.25, -0.25, -0.25, -0.25, 3.0, -0.25, -0.25, -0.25, -0.25]
    sharp3 = [-1, -1, -1, -1, 9, -1, -1, -1, -1]
    lowpass = [1, 2, 1, 2, 4, 2, 1, 2, 1]

    # ---- assemble the dictionary ----

    d = {1: all_f, 2: no_cnt, 3: cross_f, 4: plus_f, 5: grad_e,
         6: grad_n, 7: grad_ne, 8: grad_nw, 9: grad_s, 10: grad_w,
         11: lap_33, 12: line_h, 13: line_ld, 14: line_rd, 15: line_v,
         16: high, 17: sob_hor, 18: sob_vert, 19: emboss, 20: sharp1,
         21: sharp2, 23: sharp3, 24: lowpass}

    filter_ = np.array(d[mode]).reshape(3, 3)

You call the filter using the mode number (ie d[23] is sharp3).

import numpy as np
from numpy.lib.stride_tricks import as_strided

def stride(a, win=(3, 3), stepby=(1, 1)):
    """Provide a 2D sliding/moving view of an array.
    :  There is no edge correction for outputs. Use _pad first.
    """

    err = """Array shape, window and/or step size error.
    Use win=(3,3) with stepby=(1,1) for 2D array
    or win=(1,3,3) with stepby=(1,1,1) for 3D
    ----    a.ndim != len(win) != len(stepby) ----
    """

    assert (a.ndim == len(win)) and (len(win) == len(stepby)), err
    shape = np.array(a.shape)  # array shape (r, c) or (d, r, c)
    win_shp = np.array(win)    # window      (3, 3) or (1, 3, 3)
    ss = np.array(stepby)      # step by     (1, 1) or (1, 1, 1)
    newshape = tuple(((shape - win_shp) // ss) + 1)
    newshape += tuple(win_shp)
    newstrides = tuple(np.array(a.strides) * ss) + a.strides
    a_s = as_strided(a, shape=newshape, strides=newstrides).squeeze()
    return a_s


def a_filter(a, mode=1, ignore_nodata=True, nodata=None):
    """Apply the filter selected by `mode`... see the dictionary `d` above."""
    #
    # ---- pick the kernel, then stride the input array ----
    filter_ = np.array(d[mode]).reshape(3, 3)
    a_strided = stride(a)
    if ignore_nodata:  # ---- np.nansum treats NaN nodata cells as 0
        c = (np.nansum(a_strided * filter_, axis=(2, 3)))
    else:
        c = (np.sum(a_strided * filter_, axis=(2, 3)))
    pad_ = nodata
    if nodata is None:
        if c.dtype.name in ('int', 'int32', 'int64'):
            pad_ = min([0, -1, a.min() - 1])
        else:
            pad_ = min([0.0, -1.0, a.min() - 1])
    c = np.lib.pad(c, (1, 1), "constant", constant_values=(pad_, pad_))
    m = np.where(c == pad_, 1, 0)
    c = np.ma.array(c, mask=m, fill_value=None)
    return filter_, c

# ----------------------------------------------------------------------
# __main__ .... code section
if __name__ == "__main__":
    """Optionally...
    : - print the script source name.
    : - run the _demo
    """

#    you can put your stuff here... use RasterToNumPyArray to get an array
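A quick run, assuming the kernel dictionary and the defs above are loaded (a 2D array here for brevity):

a = np.array([[1, 1, 1, 2, 2, 2],
              [1, 1, 1, 2, 2, 2],
              [3, 3, 3, 4, 4, 4],
              [3, 3, 3, 4, 4, 4]], dtype=float)

filter_, result = a_filter(a, mode=17)  # ---- 17... the Sobel horizontal kernel
print(result)                           # ---- masked edges, 8's along the divide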

 

So that is pretty well it.

I should point out that the same process can be used for statistical calculations, terrain derivatives (slope, aspect, hillshade, curvature etc ) and I have reported on these before.

 

None of the above requires the Spatial Analyst extension... RasterToNumPyArray and NumPyArrayToRaster are provided with the arcpy.da module.  Lots of possibilities, lots of capability built-in.

 

More later.