
Masks ... nodata ... nulls

The attached pdf will serve for now.  I will add more documentation here soon on how to work with rasters that contain nodata areas.

 

A simple example here, using Matplotlib to do the mapping.  The -- cells are nodata values

[image:  FullSizeRender-1 1.jpg]

Raster/array values and sample properties
>>> print(c.filled())  # -9 is nodata
[[ 3  1  3 -9  3  2 -9 -9 -9  2  2  2  3 -9  1 -9  3  3 -9  1]
 [ 1  3  2  3 -9 -9 -9  3  1  3  3 -9  3  2  3  3  3  3  1  1]
 [ 2  2  2 -9  1  2  2 -9  2  2  3 -9  3 -9  3 -9  2  2  1  3]
 [ 3  1  3  2 -9 -9  2 -9  2 -9 -9  1  2  1  1  2  1  3  3 -9]
 [-9  2 -9  2 -9 -9  1  2  1 -9 -9  2  2  1  1  1  3  1  3  3]
 [-9  2  3  3  1  2  2  3 -9  1  1  3  1 -9 -9 -9  2  1  3 -9]
 [-9 -9 -9  2  1  1 -9  2  2  1  1  2  2  3  2  3  2  2  2 -9]
 [ 3 -9  3 -9 -9  2  3 -9  3  2  2  2  1 -9  3  2 -9  2  2  1]
 [ 1 -9  2  2  1 -9  2  1  2  2 -9 -9  3 -9  2  2 -9  1 -9  1]
 [ 1  1 -9 -9 -9  2 -9  2  3  2 -9  1  2  1  3  1 -9 -9  1  3]
 [ 1 -9 -9  1  2  1 -9  1 -9 -9 -9 -9  1 -9 -9 -9 -9  2 -9  3]
 [-9  3 -9 -9 -9  2  3 -9 -9  1  2  1  1  2  1  1  3  2  3  2]
 [ 3  3  3 -9  3  1  3 -9 -9 -9  3  2 -9 -9  3  2  3 -9  1 -9]
 [-9  2  2  3  3  1  3  1 -9 -9  2  3  3  1  1  1  1  1 -9  1]
 [-9  1 -9  3  1  1 -9  2 -9  1  1  2  2  1 -9  2  2  3 -9  3]
 [ 3  2 -9  2  2 -9 -9  2  1  2  1  2 -9  3  2  1  1  1  3  1]
 [-9 -9  3  2  2 -9  2  2  1  2 -9  3  1  2  2 -9  3  3  2  1]
 [ 1 -9  1 -9  2  2  3 -9  3  2  2 -9  1  2  3 -9 -9 -9  3  3]
 [ 3  1  1  1  2  1  2 -9 -9  2  2  1  2 -9 -9 -9  2 -9  3  2]
 [-9  2  3  1 -9  1  1 -9  2 -9  1  1  1  2  1  2  3 -9 -9  3]]

>>> c.mean()
1.96028880866426
>>> c.min()
1
>>> c.max()
3
>>> np.histogram(c, bins=[1, 2, 3, 4])
(array([ 92, 104,  81]), array([1, 2, 3, 4]))
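If you want to experiment before the fuller documentation arrives, here is a minimal sketch of the idea using numpy's masked arrays.  The small random raster and the variable names are just placeholders; only the -9 nodata value and the name c mirror the example above.

import numpy as np

# a small integer "raster" with -9 as the nodata value (placeholder data)
np.random.seed(0)
a = np.random.randint(1, 4, size=(5, 5))      # class values 1, 2, 3
a[np.random.rand(5, 5) < 0.2] = -9            # sprinkle in some nodata cells

c = np.ma.masked_equal(a, -9)                 # mask the nodata cells
c.set_fill_value(-9)                          # so filled() restores the -9s

print(c)                                      # masked cells print as --
print(c.filled())                             # the underlying -9 values
print(c.mean(), c.min(), c.max())             # statistics ignore the masked cells
print(np.histogram(c.compressed(), bins=[1, 2, 3, 4]))  # counts per class, nodata excluded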

There are a large number of questions that deal with some aspect of distance.  If you are interested in quick distance calculations, then download and read the attachment.

The tools needed to get the data into this form are built in to arcpy, which makes the calculations straightforward.

You can determine distances using different metrics... Euclidean distance is shown here.

 

If you need to find the nearest feature, produce a list of the closest objects, or calculate the perimeter of a polygon or the length of a line represented by a sequence of points, there are other options available to you.

 

Associated references:

 

Python Near Analysis (No Advanced Licence)

 

Single origin to multiple destinations
Multiple origins to multiple destinations

 

The example here uses a single origin in an origin list.  You will note that the coordinates are given as a list of lists of coordinates.

Example 1...

Single origin...
[[ 0.  0.]]
Multiple destinations
[[ 4.  0.]
 [ 0.  2.]
 [ 2.  2.]
 [-2. -3.]
 [ 4.  4.]]
Distances: [ 4.    2.    2.83  3.61  5.66]
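Those distances can be reproduced with plain NumPy broadcasting.  This is not the code from the attachment, just a minimal equivalent sketch assuming the origin and destination arrays shown above.

import numpy as np

orig = np.array([[0., 0.]])                  # single origin, as a list of lists
dest = np.array([[4., 0.], [0., 2.], [2., 2.], [-2., -3.], [4., 4.]])

diff = dest - orig                           # broadcasting: (5, 2) - (1, 2) -> (5, 2)
dists = np.sqrt((diff**2).sum(axis=-1))      # Euclidean distance to each destination

print(np.round(dists, 2))                    # -> 4., 2., 2.83, 3.61, 5.66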

 

An example for 3D length/distance

The X, Y and Z values array, their successive differences, the resulting segment distances, and the total length if the points are traversed in sequence

e_leng(d3d,verbose=True)
Input array....(shape=(1, 8, 3))
[[[ 0.  0.  0.]
  [ 1.  1.  1.]
  [ 0.  1.  0.]
  [ 1.  0.  1.]
  [ 0.  1.  1.]
  [ 1.  1.  0.]
  [ 1.  0.  0.]
  [ 0.  0.  1.]]]
differences...


[[[-1. -1. -1.]
  [ 1.  0.  1.]
  [-1.  1. -1.]
  [ 1. -1.  0.]
  [-1.  0.  1.]
  [ 0.  1.  0.]
  [ 1.  0. -1.]]]
distances...
[[ 1.73  1.41  1.73  1.41  1.41  1.    1.41]]
length...[ 10.12]
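The e_leng function itself is in the attachment.  A bare-bones sketch of the same idea, using np.diff on the array shown above (the sign of the differences doesn't matter for the distances):

import numpy as np

d3d = np.array([[[0., 0., 0.], [1., 1., 1.], [0., 1., 0.], [1., 0., 1.],
                 [0., 1., 1.], [1., 1., 0.], [1., 0., 0.], [0., 0., 1.]]])

diffs = np.diff(d3d, axis=1)                 # successive X, Y, Z differences
dists = np.sqrt((diffs**2).sum(axis=-1))     # per-segment 3D distances
length = dists.sum(axis=-1)                  # total length of the traverse

print(np.round(dists, 2))                    # -> 1.73, 1.41, 1.73, 1.41, 1.41, 1., 1.41
print(np.round(length, 2))                   # -> 10.12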

The next example uses the same destinations, this time with multiple origins.

 

Example 2...
Multiple Origins...
[[[ 0.  0.]]
 [[ 1.  0.]]
 [[ 0.  1.]]
 [[ 1.  1.]]]

Destinations...
[[ 4.  0.]
 [ 0.  2.]
 [ 2.  2.]
 [-2. -3.]
 [ 4.  4.]]

Distances...
[[ 4.    2.    2.83  3.61  5.66]
 [ 3.    2.24  2.24  4.24  5.  ]
 [ 4.12  1.    2.24  4.47  5.  ]
 [ 3.16  1.41  1.41  5.    4.24]]

Origin-Destination, distance matrix
dests->:     0     1     2     3     4
origins
      0: [ 4.    2.    2.83  3.61  5.66]
      1: [ 3.    2.24  2.24  4.24  5.  ]
      2: [ 4.12  1.    2.24  4.47  5.  ]
      3: [ 3.16  1.41  1.41  5.    4.24]
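The full origin-destination matrix is what NumPy broadcasting gives you almost for free.  A minimal sketch, assuming plain (N, 2) origin and destination arrays rather than the nested form printed above:

import numpy as np

origs = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dests = np.array([[4., 0.], [0., 2.], [2., 2.], [-2., -3.], [4., 4.]])

# add axes so every origin is differenced against every destination
diff = origs[:, np.newaxis, :] - dests[np.newaxis, :, :]   # shape (4, 5, 2)
od = np.sqrt((diff**2).sum(axis=-1))                       # shape (4, 5)

print(np.round(od, 2))    # each row is an origin, each column a destination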

 

That's all for now...

I always forget.  I read about something pythonic, then only quasi-remember what it was I read.  So this is what is coming to Python... a look into the future, and a way to prepare for it.

 

Python 3.0 released on December 3rd, 2008.

Python 3.6 expected final December 12th, 2016 ... 8 years old!

Python 3.7 first alpha released September, 2017

 

New Additions:     Update: 2017-06-28

Exiting 2.7... supporters and timeline 2020.... set your clocks

Python 3.7.0a math.remainder ...

Numpy 1.13 release notes  updated

Apparently the wait will continue... the ArcMap 10.5 Python version??

Recent blog... not using Python 3?....

Python 3.7 beta...  Overview — Python 3.7.0a0 documentation 

Conda tips and tools...  Conda and ArcGIS Pro | ArcPy Café 

Python 3.6 final coming soon... sorted dictionary keys and more  Release information and changes

Anaconda, Spyder and PRO Anaconda, Spyder and ArcGIS PRO

Python & Pro  Python and ArcGIS Pro 1.3 : Conda

Anaconda  Download Anaconda now! | Continuum

SciPy.org    SciPy.org — SciPy.org latest release for numpy 1.11.1 and SciPy 0.17.1

Learning resources   Data Science in Python

        GitHub - Data Science Python: common data analysis and machine learning tasks using python

 

A classic example... those of you who have avoided reading:

  Python Mini Formatting Language.

 

Oh wait!! Many of you are still using Python 2.7.  Good news! Install ArcGIS PRO and enter the realm of Python 3.4.  Yes, I did say 3.4, but it gets worse.... Python 3.6 is already in beta as of Sept 2016.

 

Python 2.7.x is the last of the 2.x series.  It will remain supported for some time (perhaps indefinitely), but a lot has happened since Python 3.0 was introduced (homework... when was Python 3.0 introduced?).

 

So this is not going to be a retrospective going back to Python 3.0 and what it introduced.  Instead, I will highlight what is going on in the Python world, since Python 3.6 is set for final release soon... and you will be using Python 3.4 IF you install ArcGIS PRO, or dabble in the netherworld of alternate installs.  So, in reverse order, here is what I think will be useful to look forward to, and what you may have missed.

 


 

-------------------------------------------------------------------------------------------------

Contents : Python  NumPy  SciPy  Matplotlib  Pandas       Update: 2017-02-18

Last Update: numpy 1.12 and scipy 0.17.1; added the Pandas 0.18.1 release  What’s New — pandas 0.18.1 documentation

--------------------------------------------------------------------------------------------------

GitHub section

Esri             GitHub - Esri/esri.github.com: Esri on Github

Matplotlib   GitHub - matplotlib/matplotlib: matplotlib: plotting with Python

Numpy        GitHub - numpy/numpy: Numpy main repository

Pandas       GitHub - pydata/pandas: Flexible and powerful data analysis ...

SciPy          GitHub - scipy/scipy: Scipy library main repository

Sympy        GitHub - sympy/sympy: A computer algebra system written in pure Python

--------------------------------------------------------------------------------------------------

Python section

The main link  What’s New in Python ... this goes back in history

Pre-existing functionality  What is in version 3 that existed in version 2.6

with statement, print as a function, io module

 

----- python 3.7 ---------------------------------------------------------------------------------------
What’s New In Python 3.7 ... main page
Highlights

----- python 3.6 ---------------------------------------------------------------------------------------

What’s New In Python 3.6 ... main page

Highlights

>>> name = "Fred"

>>> f"He said his name is {name}."

'He said his name is Fred.'

---- python 3.5 -------------------------------------------------------------------------------------------

What’s New In Python 3.5 ...main page

  • Highlights

>>> *range(4), 4
(0, 1, 2, 3, 4)
>>> [*range(4), 4]
[0, 1, 2, 3, 4]
>>> {*range(4), 4, *(5, 6, 7)}
{0, 1, 2, 3, 4, 5, 6, 7}
>>> {'x': 1, **{'y': 2}}
{'x': 1, 'y': 2}

 

  • Improved Modules
    • link includes, but not limited to:
      • collections, csv, distutils, doctest, enum, inspect, io, json, locale, logging, math, os, pathlib, re, readline, sys, time, timeit, traceback, types, zipfile
      • collections.OrderedDict is now implemented in C, which makes it 4 to 100 times faster
      • csv.writer.writerows now supports any iterable
      • enum.Enum now supports a start number for the enumeration values
      • math ... added nan and inf constants (finally!!!) and gcd (greatest common divisor)... a quick sketch follows this list
      • os.walk  significant speed improvements
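A quick sketch of a few of those 3.5 additions... nothing here is specific to arcpy or numpy, it is all standard library, so treat it as a starting point.

>>> import math
>>> math.inf, math.nan                # the new constants
(inf, nan)
>>> math.gcd(36, 120)                 # greatest common divisor
12
>>> from enum import Enum
>>> Colour = Enum('Colour', ['RED', 'GREEN', 'BLUE'], start=0)   # the new start parameter
>>> list(Colour)
[<Colour.RED: 0>, <Colour.GREEN: 1>, <Colour.BLUE: 2>]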

 

---- python 3.4 -----------------------------------------------------------------------------------------

What's New in Python 3.4 ... main page

  • Highlights

>>> import statistics

>>> dir(statistics)

[ .... snip ... 'mean', 'median', 'median_grouped', 'median_high', 'median_low', 'mode', 'pstdev', 'pvariance', 'stdev', 'variance']

  • Improved Modules
    • link includes, but not limited to:  collections, inspect, multiprocessing, operator, os, re, sys, textwrap, zipfile
      • textwrap adds max_lines, placeholder and shorten (a quick sketch follows)
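The sketch below uses the shorten examples from the module documentation; max_lines and placeholder do the analogous truncation for multi-line wrapped output.

>>> import textwrap
>>> textwrap.shorten("Hello  world!", width=12)      # collapses the double space
'Hello world!'
>>> textwrap.shorten("Hello  world!", width=11)      # truncates and adds the default placeholder
'Hello [...]'
>>> textwrap.shorten("Hello world", width=10, placeholder="...")
'Hello...'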

---- python 3.3 and before  -------------------------------------------------------------------------------

Previous stuff (aka pre-3.4)

  • porting to 2to3            2to3 - Automated Python 2 to 3 code translation
    • changes to or replacement of :  apply, asserts, dict (dict.items(), dict.keys(), dict.values()), exec, has_key, isinstance, itertools, long, map, next, nonzero, operator, print, raise, range, raw_input, unicode, xrange, zip
  • virtual environments    pep 405
  • python launcher          pep 397
  • ordered dictionaries    pep 372
  • print as a function       pep 3105
  • sysconfig                    Python’s configuration information
  • argparse
  • division ... / is float (true) division, // is integer (floor) division

way more ..... you will just have to read the original migration docs.

Significant changes to text handling and the migration to full unicode support.

Some of the changes have been back-ported to Python 2.7 and/or can be accessed via ... from __future__ import ....
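For example, in a Python 2.7 session you can borrow two of the headline Python 3 behaviours this way... a minimal sketch, nothing ArcGIS-specific about it.

# Python 2.7
from __future__ import print_function, division

print("print is now a function")   # no longer a statement
print(7 / 2)     # 3.5 ... true division, as in Python 3
print(7 // 2)    # 3   ... floor division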

 

-------------------------------------------------------------------------------------------------

Anaconda section

Anaconda, Spyder and ArcGIS PRO

Python and ArcGIS Pro 1.3 : Conda

Download Anaconda now! | Continuum

-------------------------------------------------------------------------------------------------

NumPy section

The reverse chronological list of changes to numpy Release Notes — NumPy v1.12 Manual

numpy 1.12

- einsum optimized for speed
- keepdims added to many functions
- axis keyword for rot90
- flipud and fliplr now have an axis generalization
- nancumsum and nancumprod added to the nan line of functions
- too many more to list

numpy 1.10

- np.rollaxis, np.swapaxes  now return views
- np.ravel, np.diagonal, np.diag  now preserve subtypes
- recarray field and view ... changes in how data are treated, see the documentation
- the matrix multiplication @ operator implemented, in keeping with its introduction in Python 3.5 (a quick sketch follows)
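A quick sketch of those last points, assuming Python 3.5+ and numpy 1.10+; the arrays are throwaway examples.

>>> import numpy as np
>>> a = np.arange(6).reshape(2, 3)
>>> b = np.arange(6).reshape(3, 2)
>>> a @ b                                       # the new matrix multiplication operator
array([[10, 13],
       [28, 40]])
>>> np.allclose(a @ b, a.dot(b))                # same result as the old dot()
True
>>> np.shares_memory(np.swapaxes(a, 0, 1), a)   # a view on the same data, not a copy
True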

-------------------------------------------------------------------------------------------------

SciPy section

Too many to list, but the change logs are given here Release Notes — SciPy v0.17.0 Reference Guide

SciPy central  http://central.scipy.org/

-------------------------------------------------------------------------------------------------

Matplotlib section

users guide        User’s Guide — Matplotlib 1.5.1 documentation

chronologic change history   What’s new in matplotlib — Matplotlib 1.5.1 documentation

-------------------------------------------------------------------------------------------------

Pandas section

Pandas 0.18.1         What’s New — pandas 0.18.1 documentation

The full list is here   pandas main page and release notes

Have you ever come across a situation like one of these:

 

  • you need to test out something but don't have the data
  • you are sick of trying to get a function to work in the field calculator
  • you want to test out one of ArcMap's functions but none of your data are suitable
  • all you need are some points with a particular distribution
  • someone forgot to post a sample of their data on GeoNet for testing and you don't have a match
  • you forgot to collect something in the field

 

Well, this lesson is for you.  It is a culmination of a number of the previous lessons and a few
NumPy Snippets and Before I Forget posts.  I have attached a script to this post below.

 

There is also a GitHub repository that takes this one step further, providing more output options... see Silly on GitHub

 

The following provides the basic requirements to run a function, should you choose not to
incorporate the whole script.  Obviously, the header section enclosed in triple quotes
isn't needed, but the import section is.

 

# -*- coding: UTF-8 -*-
"""
:Script:   random_data_demo.py
:Author:   Dan.Patterson AT carleton.ca
:Modified: 2015-08-29
:Purpose:
:  Generate an array containing random data.  Optional fields include:
:  ID, Shape, text, integer and float fields
:Notes:
:  The numpy imports are required for all functions
"""

#-----------------------------------------------------------------------------
# Required imports

from functools import wraps
import numpy as np
import numpy.lib.recfunctions as rfn
np.set_printoptions(edgeitems=5, linewidth=75, precision=2,
                    suppress=True, threshold=5,
                    formatter={'bool': lambda x: repr(x.astype('int32')),
                               'float': '{: 0.2f}'.format})
#-----------------------------------------------------------------------------
# Required constants  ... see string module for others
str_opt = ['0123456789',
           '!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~',
           'abcdefghijklmnopqrstuvwxyz',
           'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
           'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
           ]
#-----------------------------------------------------------------------------
# decorator
def func_run(func):
    """Prints basic function information and the results of a run.
    :Required:  from functools import wraps
    """

    @wraps(func)
    def wrapper(*args,**kwargs):
        print("\nFunction... {}".format(func.__name__))
        print("  args.... {}\n  kwargs.. {}".format(args, kwargs))
        print("  docs.... \n{}".format(func.__doc__))
        result = func(*args, **kwargs)
        print("{!r:}\n".format(result))  # comment out if results not needed
        return result                    # for optional use outside.
    return wrapper
#-----------------------------------------------------------------------------
# functions

 

Before I go any further, let's have a look at the above code.

  • line 14          - functools wraps -  I will be using decorators to control output, and wraps handles all the fiddly stuff in decorators (see Before I Forget # 14)
  • line 16          - numpy.lib.recfunctions is a useful module for working with ndarrays and recarrays in particular... it is imported as rfn
  • lines 17-20   - np.set_printoptions allows you to control how arrays are formatted when printing or working from the command line.  Most of the parameters are self-explanatory or you will soon get the drift
  • lines 30-43   - the decorator function presented in BIF # 14 (a quick usage sketch follows this list)
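As a quick illustration of the decorator in use... demo_sum is a throwaway function, not part of the script, and it assumes the imports and func_run above have been run.

@func_run
def demo_sum(n=5):
    """Sum the first n integers ... a stand-in to exercise func_run."""
    return sum(range(n))

demo_sum(5)

# prints (roughly):
# Function... demo_sum
#   args.... (5,)
#   kwargs.. {}
#   docs....
# Sum the first n integers ... a stand-in to exercise func_run.
# 10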

 

Now back to the main point... generating data with some control over the output.

The following presents some functions to do so and puts them together into a standalone table or feature class.

An example follows:

 

Array generated....
array([(0, (7.0, 1.0), 'E0', '(0,0)', 'A', 'ARXYPJ', 'cat', 'Bb', 0, 9.380410289877375),
       (1, (2.0, 9.0), 'D0', '(4,0)', 'B', 'RAMKH', 'cat', 'Aa', 9, 1.0263298179133362),
       (2, (5.0, 8.0), 'C0', '(1,0)', 'B', 'EGWSC', 'cat', 'Aa', 3, 2.644448491753841),
       (3, (9.0, 7.0), 'A0', '(1,0)', 'A', 'TMXZSGHAKJ', 'dog', 'Aa', 8, 6.814471938888746),
       (4, (10.0, 3.0), 'E0', '(1,0)', 'B', 'FQZCTDEY', '-1', 'Aa', 10, 2.438467639965038)],
       ............. < snip >
      dtype=[('ID', '<i4'), ('Shape', [('X', '<f8'), ('Y', '<f8')]),
             ('Colrow', '<U2'), ('Rowcol', '<U5'), ('txt_fld', '<U1'),
             ('str_fld', '<U10'), ('case1_fld', '<U3'), ('case2_fld', '<U2'),
             ('int_fld', '<i4'), ('float_fld', '<f8')])

 

Here are the code snippets...

 

Code snippets
def pnts_IdShape(N=10, x_min=0, x_max=10, y_min=0, y_max=10, simple=True):
    """  Create an array with a nested dtype which emulates a shapefile's
    : data structure.  This array is used to append other arrays to enable
    :  import of the resultant into ArcMap.  Array construction, after hpaulj
    :  http://stackoverflow.com/questions/32224220/
    :    methods-of-creating-a-structured-array
    :simple - True for a plain (2,) float Shape field, False for named X/Y subfields
    """

    Xs = np.random.random_integers(x_min, x_max, size=N)
    Ys = np.random.random_integers(y_min, y_max, size=N)
    IDs = np.arange(0, N)
    c_stack = np.column_stack((IDs, Xs, Ys))
    if simple:     # version 1
        dt = [('ID', '<i4'),('Shape', '<f8', (2,))]  # short version, optional form
        a = np.ones(N, dtype=dt)
        a['ID'] = c_stack[:, 0]
        a['Shape'] = c_stack[:, 1:]                  # this line too
    else:          # version 2
        dt = [('ID', '<i4'), ('Shape', ([('X', '<f8'),('Y', '<f8')]))]
        a = np.ones(N, dtype=dt)
        a['Shape']['X'] = c_stack[:, 1]
        a['Shape']['Y'] = c_stack[:, 2]
        a['ID'] = c_stack[:, 0]
    return a



def colrow_txt(N=10, cols=2, rows=2, zero_based=True):
    """  Produce spreadsheet like labels either 0- or 1-based.
    :N  - number of records/rows to produce.
    :cols/rows - this combination will control the output of the values
    :cols=2, rows=2 - yields (A0, A1, B0, B1)
    :  as optional classes regardless of the number of records being produced
    :zero-based - True for conventional array structure,
    :             False for spreadsheet-style classes
    """


    if zero_based:
        start = 0
    else:
        start = 1; rows = rows + 1
    UC = (list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"))[:cols]  # see constants
    dig = (list('0123456789'))[start:rows]
    cr_vals = [c + r for r in dig for c in UC]
    colrow = np.random.choice(cr_vals,N)
    return colrow


Yields

array(['D0', 'E0', 'C0', 'E0', 'C0', 'C0', 'D0', 'D0', 'E0', 'D0'],
      dtype='<U2')
def rowcol_txt(N=10,rows=2,cols=2):
    """  Produce array-like labels in a tuple format.
    """

    rc_vals = ["({},{})".format(r, c) for c in range(cols) for r in range(rows)]
    rowcol = np.random.choice(rc_vals, N)
    return rowcol


Yields

array(['(2,0)', '(2,0)', '(4,0)', '(0,0)', '(4,0)', '(2,0)', '(4,0)',
       '(0,0)', '(2,0)', '(0,0)'],
      dtype='<U5')


def rand_text(N=10,cases=3,vals=str_opt[3]):
    """  Generate N samples from the letters of the alphabet denoted by the
    :  number of cases.  If you want greater control on the text and
    :  probability, see rand_case or rand_str.
    :
    : vals:  see str_opt in required constants section
    """

    vals = list(vals)
    txt_vals = np.random.choice(vals[:cases],N)
    return txt_vals

Yields
array(['C', 'C', 'C', 'B', 'A', 'B', 'A', 'C', 'C', 'C'],
      dtype='<U1')
def rand_str(N=10,low=1,high=10,vals=str_opt[3]):
    """  Returns N strings constructed from 'size' random letters to form a string
    : - create the cases as a list:  string.ascii_lowercase or ascii_uppercase etc
    : - determine how many letters. Ensure min <= max. Add 1 to max to alleviate low == high
    : - shuffle the case list each time through loop
    """

    vals = list(vals)
    letts = np.arange(min([low,high]),max([low,high])+1)  # number of letters
    result = []
    for i in range(N):
        np.random.shuffle(vals)  
        size = np.random.choice(letts)     # a single integer, usable as a slice bound
        result.append("".join(vals[:size]))
    result = np.array(result)
    return result

Yields
array(['ZDULHYJSB', 'LOSZJNB', 'PKECZOIJ', 'ZV', 'DENCBP', 'XRNITEJ',
       'HJMDLBNSEF', 'DWLYPQF', 'HZOUTBSLN', 'MOEXR'],
      dtype='<U10')
def rand_case(N=10,cases=["Aa","Bb"],p_vals=[0.8,0.2]):
    """  Generate N samples from a list of classes with an associated probability
    :  ensure: len(cases)==len(p_vals) and  sum(p_values) == 1
    :  small sample sizes will probably not yield the desired p-values
    """

    p = np.around(np.array(p_vals) * N).astype('int')   # convert the proportions to whole counts
    kludge = [np.repeat(cases[i], p[i]).tolist() for i in range(len(cases))]
    case_vals = np.array([val for i in range(len(kludge)) for val in kludge[i]])
    np.random.shuffle(case_vals)
    return case_vals

Yields
array(['cat', 'cat', 'cat', 'cat', 'dog', 'dog', 'cat', 'dog', 'cat',
       'fish'],
      dtype='<U4')
# or
array(['Aa', 'Bb', 'Aa', 'Aa', 'Aa', 'Aa', 'Bb', 'Aa', 'Aa', 'Aa'],
      dtype='<U2')
def rand_int(N=10,begin=0,end=10):
    """  Generate N random integers within the range begin - end
    """

    int_vals = np.random.random_integers(begin,end,size=(N))
    return int_vals

Yields
array([7, 1, 4, 1, 6, 4, 5, 2, 2, 2])
def rand_float(N=10,begin=0,end=10):
    """  Generate N random floats within the range begin - end
    Technically, N random integers are produced then a random
    amount within 0-1 is added to the value
    """

    float_vals = np.random.random_integers(begin,end-1,size=(N))
    float_vals = float_vals + np.random.rand(N)
    return float_vals

Yields
array([ 8.40,  9.09,  0.90,  9.64,  8.63,  5.05,  2.07,  8.13,  9.91,  0.22])

 

The above functions can be used with the main portion of the script and your own function.

 

Sample function
# required imports
# required constants
# pnts_IdShape  function
# rand_case  function
# rand_int  function
def blog_post():
    """sample run"""
    N = 10
    id_shape = pnts_IdShape(N,x_min=300000,x_max=300500,y_min=5000000,y_max=5000500)
    case1_fld = rand_case(N,cases=['cat','dog','fish'],p_vals=[0.6,0.3,0.1])
    int_fld = rand_int(N,begin=0,end=10)
    fld_names = ['Pets','Number']
    fld_data = [case1_fld,int_fld]
    arr = rfn.append_fields(id_shape,fld_names,fld_data,usemask=False)
    return arr

if __name__ == '__main__':
    """create ID,Shape,{txt_fld,int_fld...of any number}
    """

    returned = blog_post()

Yields
array([(0, (300412.0, 5000473.0), 'dog', 4),
       (1, (300308.0, 5000043.0), 'cat', 4),
       (2, (300443.0, 5000170.0), 'dog', 5),
       (3, (300219.0, 5000240.0), 'cat', 0),
       (4, (300444.0, 5000067.0), 'cat', 9),
       (5, (300486.0, 5000106.0), 'cat', 3),
       (6, (300242.0, 5000145.0), 'cat', 5),
       (7, (300038.0, 5000341.0), 'dog', 7),
       (8, (300335.0, 5000495.0), 'cat', 9),
       (9, (300345.0, 5000108.0), 'fish', 7)],
      dtype=[('ID', '<i4'), ('Shape', [('X', '<f8'), ('Y', '<f8')]),
                  ('Pets', '<U4'), ('Number', '<i4')])

 

You will notice in the above example that the rand_case function was used to determine the type of pet, based upon p-values of 0.6, 0.3 and 0.1, with cats being favored, as they should be, and this is reflected in the data.  The coordinates in this example were left as integers, reflecting a 1 m resolution.

It is possible to add a random perturbation of floating point values in the +/- 0.99 range to give centimetre-level values if you desire.

A quick sketch of that idea follows.
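A minimal sketch of that perturbation, assuming the script above (blog_post and friends) has been run; the +/- 0.99 range is the one mentioned, everything else is just illustration.

a = blog_post()                     # the structured array from the sample run
shp = a['Shape']                    # a view into the nested X/Y coordinates
shp['X'] = shp['X'] + np.random.uniform(-0.99, 0.99, a.shape[0])
shp['Y'] = shp['Y'] + np.random.uniform(-0.99, 0.99, a.shape[0])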

The 'Number' field in this example simply reflects the number of pets per household.

 

Homework...

Using NumPyArrayToFeatureClass, create a shapefile using the NAD_1983_CSRS_MTM_9 projection

(Projected, National Grids, Canada, NAD83 CSRS_MTM_9)

 

Answer...

>>> import arcpy
>>> a = blog_post()  # do the run if it isn't done
>>> # ..... snip ..... the output
>>> # ..... snip ..... now create the featureclass
>>> SR_name = 32189  # u'NAD_1983_CSRS_MTM_9'
>>> SR = arcpy.SpatialReference(SR_name)
>>> output_shp = 'F:/Writing_Projects/NumPy_Lessons/Shapefiles/out.shp'
>>> arcpy.da.NumPyArrayToFeatureClass(a, output_shp, 'Shape', SR)

 

Result

 

[image:  NumPy_Lessons_06_1.png]

That's all...