
### Py... blog

# Origin, distances and bearings... geometry wanderings

Posted by Dan_Patterson Jan 20, 2018

Start at a point.  You are given a list of bearings and distances.  Where do you end up?

This comes up fairly often, and the COGO tools allow you to create such traverses from scratch.

Here is how you can do it using numpy and arcpy.

First, a picture... given a location, you are instructed to move 100 m between points, turning in 10 degree increments.  You will obviously trace a circle.  The traverse point locations can be constructed from base geometry, and the locations can then be saved to a featureclass.  The point featureclass can be converted to a continuous line, which can be segmented if you desire.  Total time, not much.  The image above shows the results of such a venture.

• I created my geometry using python/numpy
• I saved the results to a featureclass using NumPyArrayToFeatureClass (hence the points)
• From those points, I used Points to Line, available at all license levels
• The continuous line was blown apart to segments using Split Line at Vertices, but alas, that is only available at the Advanced level.... (I will save this obstruction for another blog, where I will show you how to do it with a Standard license.  EDIT... from the second incarnation, you can produce a featureclass and use the XY to Line tool, which is available at all license levels)
• Add Geometry Attributes enabled me to obtain the coordinates of the start, middle and end locations of the line.  This could have been done during the construction of the geometry as well.

And how many of us have taught a class where this happens....  More to come....

Oh yes... code... Now, the origin needs to be given in projected coordinates (i.e. UTM, State Plane, Web Mercator, etc.) since the calculations are done in planar units.  See my Vincenty implementation if you want to work with geodesic calculations.

```python
import numpy as np

def dist_bearing(orig=(0, 0), bearings=None, dists=None):
    """Point locations given distances and bearings.

    :Notes:  for sample calculations
    :-----
    : - bearings = np.arange(0, 361, 10.)
    : - dists = np.random.randint(10, 500, 36) * 1.0
    : - dists = np.ones((len(bearings),))
    :   dists.fill(100.)
    """
    rads = np.deg2rad(bearings)
    sines = np.sin(rads)
    coses = np.cos(rads)
    dx = sines * dists
    dy = coses * dists
    x_n = np.cumsum(dx) + orig[0]   # cumulative locations from the origin
    y_n = np.cumsum(dy) + orig[1]
    data = np.vstack((bearings, dists, dx, dy, x_n, y_n)).T
    frmt = "Origin (0,0)\n" + "{:>10s}"*6
    names = ["bearings", "dist", "dx", "dy", "Xn", "Yn"]
    print(frmt.format(*names))
    for i in data:
        print(("{: 10.2f}"*6).format(*i))
    return data
```
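As a quick sanity check of the circle idea, here is a minimal standalone sketch (it repeats just the sine/cosine cumulative-sum step rather than calling the function): after 36 legs of 100 m at 10 degree increments, the traverse should close back on the origin.

```python
import numpy as np

# 36 bearings of 10 degree increments, each leg 100 m, origin at (0, 0)
bearings = np.arange(10, 361, 10.)
dists = np.full(bearings.shape, 100.)
rads = np.deg2rad(bearings)
dx = np.sin(rads) * dists    # bearings measured from north, clockwise
dy = np.cos(rads) * dists
x_n = np.cumsum(dx)          # cumulative point locations
y_n = np.cumsum(dy)
# the final traverse point lands back at the origin (within float error)
print(np.allclose([x_n[-1], y_n[-1]], [0., 0.]))
```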

Optionally, create a structured array from which you can produce a featureclass, using this incarnation.

```python
import numpy as np

def dist_bearing(orig=(0, 0), bearings=None, dists=None, prn=False):
    """Point locations given distances and bearings."""
    orig = np.array(orig)
    rads = np.deg2rad(bearings)
    dx = np.sin(rads) * dists
    dy = np.cos(rads) * dists
    x_t = np.cumsum(dx) + orig[0]
    y_t = np.cumsum(dy) + orig[1]
    xy_f = np.array(list(zip(x_t[:-1], y_t[:-1])))   # from-points...
    xy_f = np.vstack((orig, xy_f))                   # ...led by the origin
    stack = (xy_f[:, 0], xy_f[:, 1], x_t, y_t, dx, dy, dists, bearings)
    data = np.vstack(stack).T
    names = ['X_f', 'Y_f', 'X_t', 'Y_t', 'dx', 'dy', 'dist', 'bearing']
    N = len(names)
    if prn:  # ---- just print the results ----------------------------------
        frmt = "Origin (0,0)\n" + "{:>10s}"*N
        print(frmt.format(*names))
        frmt = "{: 10.2f}"*N
        for i in data:
            print(frmt.format(*i))
        return data
    else:  # ---- produce a structured array from the output ----------------
        kind = ["<f8"]*N
        dt = list(zip(names, kind))
        out = np.zeros((data.shape[0],), dtype=dt)
        for i in range(N):
            nm = names[i]
            out[nm] = data[:, i]
        return out

# ---- from the structured array, use NumPyArrayToFeatureClass
```
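The structured-array step can be illustrated on its own, without arcpy.  A minimal sketch using stand-in data (the field names match the function above; the numbers are made up purely for demonstration):

```python
import numpy as np

# pack an (n, N) float array into a structured array, one field per column
names = ['X_f', 'Y_f', 'X_t', 'Y_t', 'dx', 'dy', 'dist', 'bearing']
data = np.arange(24., dtype='<f8').reshape(3, 8)   # stand-in data
dt = list(zip(names, ['<f8'] * len(names)))
out = np.zeros((data.shape[0],), dtype=dt)
for i, nm in enumerate(names):
    out[nm] = data[:, i]
print(out['bearing'])   # the last column -> [ 7. 15. 23.]
```

A structured array like this is exactly what NumPyArrayToFeatureClass expects: one record per point, named fields for the coordinates and attributes.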

Again... more later

# Combine... data classification from raster combinations

Posted by Dan_Patterson Jan 19, 2018

I rarely get a great idea.  This is another one of those cases.  But I do recognize a great idea when I see it.  Sadly I usually forget it, then rediscover it many months or years later.  I spotted a fantastic idea today and I had to drop everything (between answering questions) and jot it down.

The premise that twigged my interest was about how to 'combine' values of elevation, slope and aspect on Stack Overflow.  I did a wowwser!  Terrain derivatives on Stack!  I was enthralled.  The author.... unutbu ... of mysterious persona, put together a short missive using built-in numpy tools.  The code is short and sweet, works with floating point or integer rasters (aka, arrays) and is fast (at least in my testing so far).

Before I ruin the rest of my day with some droll corner case I hadn't thought of, I will jot this down now for posterity.

The large print

• This doesn't require a Spatial Analyst License
• A complete implementation (coming soon) uses arcpy's RasterToNumPyArray and NumPyArrayToRaster to communicate between the thin veil separating Arc* from the array world
• Rasters/arrays with nodata values will be implemented
• This is in the initial stages of development of a full-fledged tool, so hold your ...but!... comments

Start with 3 small rasters/arrays.  This way, you can see how the system works.

Array 'a'
```python
array([[0, 0, 0, 4, 4, 4, 1, 1, 1],
       [0, 0, 0, 4, 4, 4, 1, 1, 1],
       [0, 0, 0, 4, 4, 4, 1, 1, 1],
       [2, 2, 2, 1, 1, 1, 2, 2, 2],
       [2, 2, 2, 1, 1, 1, 2, 2, 2],
       [2, 2, 2, 1, 1, 1, 2, 2, 2],
       [1, 1, 1, 4, 4, 4, 0, 0, 0],
       [1, 1, 1, 4, 4, 4, 0, 0, 0],
       [1, 1, 1, 4, 4, 4, 0, 0, 0]])
```

Array 'b'
```python
array([[0, 0, 0, 1, 1, 1, 2, 2, 2],
       [0, 0, 0, 1, 1, 1, 2, 2, 2],
       [0, 0, 0, 1, 1, 1, 2, 2, 2],
       [3, 3, 3, 4, 4, 4, 5, 5, 5],
       [3, 3, 3, 4, 4, 4, 5, 5, 5],
       [3, 3, 3, 4, 4, 4, 5, 5, 5],
       [0, 0, 0, 1, 1, 1, 2, 2, 2],
       [0, 0, 0, 1, 1, 1, 2, 2, 2],
       [0, 0, 0, 1, 1, 1, 2, 2, 2]])
```

Array 'c'
```python
array([[0, 0, 0, 3, 3, 3, 0, 0, 0],
       [0, 0, 0, 3, 3, 3, 0, 0, 0],
       [0, 0, 0, 3, 3, 3, 0, 0, 0],
       [1, 1, 1, 4, 4, 4, 1, 1, 1],
       [1, 1, 1, 4, 4, 4, 1, 1, 1],
       [1, 1, 1, 4, 4, 4, 1, 1, 1],
       [2, 2, 2, 5, 5, 5, 2, 2, 2],
       [2, 2, 2, 5, 5, 5, 2, 2, 2],
       [2, 2, 2, 5, 5, 5, 2, 2, 2]])
```

Unique combinations
```python
array([[0, 0, 0, 6, 6, 6, 1, 1, 1],
       [0, 0, 0, 6, 6, 6, 1, 1, 1],
       [0, 0, 0, 6, 6, 6, 1, 1, 1],
       [2, 2, 2, 7, 7, 7, 3, 3, 3],
       [2, 2, 2, 7, 7, 7, 3, 3, 3],
       [2, 2, 2, 7, 7, 7, 3, 3, 3],
       [4, 4, 4, 8, 8, 8, 5, 5, 5],
       [4, 4, 4, 8, 8, 8, 5, 5, 5],
       [4, 4, 4, 8, 8, 8, 5, 5, 5]])
```

So here it is.

```python
import numpy as np

def combine_(*arrs):
    """Combine arrays to produce a unique classification scheme.

    :References:
    :-----------
    : https://stackoverflow.com/questions/48035246/
    :       intersect-multiple-2d-np-arrays-for-determining-zones
    : original def find_labels(*arrs):
    """
    indices = [np.unique(arr, return_inverse=True)[1] for arr in arrs]
    M = np.array([item.max() + 1 for item in indices])
    M = np.r_[1, M[:-1]]
    strides = M.cumprod()
    indices = np.stack(indices, axis=-1)
    vals = (indices * strides).sum(axis=-1)
    uniqs, labels = np.unique(vals, return_inverse=True)
    labels = labels.reshape(arrs[0].shape)
    return labels, uniqs
```
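To see it work end to end on something tiny, here is a standalone sketch (repeating the function so the snippet runs on its own) applied to two made-up 2x2 'rasters' rather than the 9x9 arrays above:

```python
import numpy as np

def combine_(*arrs):
    """Unique class labels for each combination of input cell values."""
    indices = [np.unique(a, return_inverse=True)[1] for a in arrs]
    M = np.array([idx.max() + 1 for idx in indices])
    strides = np.r_[1, M[:-1]].cumprod()
    vals = (np.stack(indices, axis=-1) * strides).sum(axis=-1)
    uniqs, labels = np.unique(vals, return_inverse=True)
    return labels.reshape(arrs[0].shape), uniqs

# two toy rasters with three distinct (a, b) value pairs among four cells
a = np.array([[0, 4], [0, 4]])
b = np.array([[1, 1], [1, 2]])
labels, uniqs = combine_(a, b)
print(labels)   # [[0 1]
                #  [0 2]]
```

The two cells holding the pair (0, 1) get the same label, and the two distinct pairs involving 4 get their own labels.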

Now, stop reading here if you aren't interested in the details but have a quasi-firm grasp on the process.  Read on if you must, but suffice to say unutbu was on to something.

In essence, what the code does is find the unique values of each array shoved into the function.  In our case, the list of unique values is short (see the snippet below).  Not too onerous a combination slate.

The first line of the function body can be modified to show the unique values in each array... useful if they aren't sequential.  This is shown below.

```python
induni = [np.unique(arr) for arr in arrs]  # arrays a, b, c

[array([0, 1, 2, 4]), array([0, 1, 2, 3, 4, 5]), array([0, 1, 2, 3, 4, 5])]
```

The nifty bit is how unutbu constructs the 'indices' in two steps: using the maximum value in each array, he determines a unique stride for each one, applies the strides to the arrays to get a unique value ('vals') for each combination, and then reassigns a class ('labels') to it.
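A worked example of that stride step, using the arrays above: 'a' has 4 unique values, while 'b' and 'c' have 6 each.

```python
import numpy as np

# unique-value counts for arrays a, b and c above: 4, 6 and 6
M = np.array([4, 6, 6])
strides = np.r_[1, M[:-1]].cumprod()
print(strides)   # [ 1  4 24]

# a cell with index 3 in 'a', 1 in 'b' and 3 in 'c' gets a unique value
val = (np.array([3, 1, 3]) * strides).sum()
print(val)       # 3 + 4 + 72 = 79
```

Because each stride exceeds the largest value the previous arrays can contribute, no two different combinations can collide on the same value.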

This process, not documented in the code, is expanded upon here.  Below, an array, combos, is constructed from the index values in each array plus the 'vals' derived from the above process.

```python
idx = [np.unique(arr, return_inverse=True)[1] for arr in arrs]
vals = (np.stack(idx, axis=-1) * strides).sum(axis=-1)
dt = [('a', '<i8'), ('b', '<i8'), ('c', '<i8'), ('vals', '<i8')]
combos = np.array(list(zip(*idx, vals)), dtype=dt)
```

If the 'combos' array is sorted, the unique combinations can be extracted from it.  The values in the 'vals' column are used to assign a 'label' to each unique combination.

```python
srt_order = np.argsort(combos, order=['a', 'b', 'c'])  # sort the combos
srted = combos[srt_order]  # ---- extract them in sorted order
np.unique(srted)           # ---- determine the unique combinations

array([(0, 0, 0,   0),
       (0, 2, 2,  56),
       (1, 0, 2,  49),
       (1, 2, 0,   9),
       (1, 4, 4, 113),
       (2, 3, 1,  38),
       (2, 5, 1,  46),
       (3, 1, 3,  79),
       (3, 1, 5, 127)],
      dtype=[('a', '<i8'), ('b', '<i8'),
             ('c', '<i8'), ('vals', '<i8')])
```

Hope it is clear as mud.  I have some experimenting to do, but I just thought I would throw it out there.

# Densification.... sometimes being dense is a good thing

Posted by Dan_Patterson Jan 18, 2018

Yup... adding points to poly* features

My missive .... Densification by Factor  YES!

There is also a Geodesic Densify, but I don't do Geographic in any event; if you do, use the esri tools.

Why did I reinvent the wheel?  Because the basis of densification is simply to take the sequence of X and Y locations, then subdivide each intervening interval by some factor (hence the ... by Factor).
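That subdivision is just linear interpolation.  A minimal one-axis sketch of the idea (not the full function below):

```python
import numpy as np

# three x-coordinates, densified by a factor of 2: the original vertices
# sit at positions 0, 2, 4 and np.interp fills in positions 1 and 3
x = np.array([20., 30., 30.])
fact = 2
b = np.arange(0, len(x) * fact, fact)   # [0 2 4] original positions
b_new = np.arange(len(x) * fact - 1)    # [0 1 2 3 4] densified positions
x_dens = np.interp(b_new, b, x)
print(x_dens)   # [20. 25. 30. 30. 30.]
```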

Here is an example....

```python
a = np.array([[ 20.,  20.],
              [ 20.,  30.],
              [ 30.,  30.],
              [ 30.,  20.],
              [ 20.,  20.]])
```

Then you run some code

```python
import numpy as np

def _densify_2D(a, fact=2):
    """Densify a 2D array using np.interp.
    """
    a = np.squeeze(a)
    n_fact = len(a) * fact
    b = np.arange(0, n_fact, fact)   # original vertex positions
    b_new = np.arange(n_fact - 1)    # positions to interpolate at
    c0 = np.interp(b_new, b, a[:, 0])
    c1 = np.interp(b_new, b, a[:, 1])
    n = c0.shape[0]
    c = np.zeros((n, 2))
    c[:, 0] = c0
    c[:, 1] = c1
    return c
```

And you get this.

```python
a_dens = _densify_2D(a, fact=2)
a_dens
array([[ 20.,  20.],
       [ 20.,  25.],
       [ 20.,  30.],
       [ 25.,  30.],
       [ 30.,  30.],
       [ 30.,  25.],
       [ 30.,  20.],
       [ 25.,  20.],
       [ 20.,  20.]])
```
```python
a.shape  # (49874, 2)
%timeit _densify_2D(a, fact=2)
5.25 ms ± 323 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```

Where it really shines is for long features that just need to be densified, without any particular requirement for a set spacing.  It is fairly speedy as well.

Hope someone finds this useful, if you don't have the Standard or Advanced license in particular.

# Point of view... on points

Posted by Dan_Patterson Jan 12, 2018

This is the current state of the Point and PointGeometry .... not the best....

```python
pt2 = arcpy.Point()
pt2
<Point (0.0, 0.0, #, #)>        # ---- make a point
pg2 = arcpy.PointGeometry(pt2)  # ---- make a point geometry
pg2.WKT                         # ---- why does it have valid X, Y values????
'POINT (0 0)'
```

So there is no such thing as a 'null' point (but a 'nominal' point as one of our esteemed participants noted).

'None' isn't good enough.  You can create an 'empty' geometry recognized by other representations with a little trickery.

```python
import arcpy
import numpy as np

n = np.nan
a = np.array([n, n, n, n])
pt = arcpy.Point()
pt.X, pt.Y, pt.Z, pt.M = a
pg = arcpy.PointGeometry(pt)
pg
<PointGeometry object at 0x200c6ed6ac8[0x200bf9a39e0]>
pg.WKT
'POINT EMPTY'
pt
<Point (nan, nan, nan, nan)>
```

And when you look at the PointGeometry, you decide which is best.

```python
# ---- by using NaN (not a number) for point properties ----
pt        # <Point (nan, nan, nan, nan)>
pg = arcpy.PointGeometry(pt)
pg.JSON   # '{"x":"NaN","y":"NaN","spatialReference":{"wkid":null}}'
pg.WKT    # 'POINT EMPTY'

# ---- The current state of affairs ----
pt2       # <Point (0.0, 0.0, #, #)>
pg2 = arcpy.PointGeometry(pt2)
pg2.JSON  # '{"x":0,"y":0,"spatialReference":{"wkid":null}}'
pg2.WKT   # 'POINT (0 0)'
```

I should point out that you can use math.nan in place of np.nan.
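Both are plain IEEE 754 NaN floats, so they behave identically.  A quick check:

```python
import math
import numpy as np

# math.nan and np.nan are interchangeable NaN floats
print(math.isnan(np.nan))    # True
print(np.isnan(math.nan))    # True
print(math.nan == np.nan)    # False... NaN never equals anything, even itself
```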

```python
import math

m = math.nan
b = np.array([m, m, m, m])
pt3 = arcpy.Point()
pt3.X, pt3.Y, pt3.Z, pt3.M = b
pt3       # ---- <Point (nan, nan, nan, nan)>
pg3 = arcpy.PointGeometry(pt3)
pg3.JSON  # ---- '{"x":"NaN","y":"NaN","spatialReference":{"wkid":null}}'
pg3.WKT   # ---- 'POINT EMPTY'
```
