
Point Tools ....

Posted by Dan_Patterson Champion Dec 29, 2018



The foundation of most geometry.  Lots of tools for working with them, sort of.

This is a collection of tools for working with point data sets, creating point data sets or using points for other purposes.

I have lots of blogs on individual aspects of points.  This is just a link and an update notice for any additions to the point tools toolbox.



A fairly simple list.

The code access...

code sharing site

point tools on my github


Simple to use.  Just unzip the zip file from the code sharing site.  Keep the toolbox and the scripts folder in the same location so that the tbx and the scripts are kept 'relative' to one another.

GitHub is more for the terminally curious who just want to examine the code for their own purposes.


Some blog links as I think of them....

Distance Explorations... Trees are cool 

Concave Hulls ... the elusive container 

Standard Distance, inter-point distance:  ... the "Special Analyst" to the rescue 

Geometry... Stuff to do with points 


Anything you think should be added, fire me off an email.

Clone or not to clone


NOTE:     I will be updating with a new guide for ArcGIS Pro 2.4 when it is released (Beta 2 is now complete).

  I will provide the link here.


Why Clone?

  • If you want to install a package or upgrade an existing package.
    • You should be able to update or install packages distributed by esri within the existing environment... but you can't. I know it is confusing, but it may mean that the distributed packages haven't been fully tested (ie packaged for AI or machine learning)
  • If you aren't the guardian of your machine... you need to clone.
    • You are in the Star Wars clone category.

Why Not Clone?

  • You are the master of your machine and are familiar with software installation and 'conda' stuff.  If this is you, then you can install packages into the base ArcGIS Pro environment because you will have full admin rights.
  • TIP
    • Keep the *.exe download and/or the *.msi and *.cab files in case you 'toast' something and need to do a reinstall.  This hasn't happened to anyone in my cohort yet.


My approach

  • I completely uninstalled previous versions of Arc-Anything-and-Everything.
    •  Alternately, good time to buy a really good upgraded machine
  • Do a fresh new install of ArcGIS Pro.
  • Don't let the installation software decide where it wants to install.  Make a folder ahead of time (ie arc_pro or similar) and install there... C:\Program Files.... is a really long path with spaces and I hate installing anything there that doesn't need to be.
  • Make a clone as described below the dashed ===== line if you aren't in control of your machine.


Result.... I couldn't install any packages through Pro's package manager, and when I installed Spyder via conda in my clone, it couldn't import arcpy




  • I installed spyder via conda into the arcgispro-py3 env and now I have spyder working.
  •  I also installed other packages into that environment without issue.
  •  When you download, use save as to download the *.exe to a folder ... you want to keep this.
  • Run the *.exe you downloaded so you have the installer *.msi and *.cab files
  • Double-click on the *.msi file to begin the installation
  • Specify the folder where you want Pro to be installed
  • Run 'conda' via proenv.bat ( the python command prompt) and make sure your arcgispro-py3 is active and install away
  • Alternately, create your clone and try to get it working with your packages and arcpy


Packages updates....

So far I have made upgrades to

  • python 3.6.8 the last of the 3.6 line
  • numpy 1.16.2 the latest of the 1.16 line
  • I upgraded other packages as well


Testing without installing

You can check the effect of package updates without actually installing them. I recommend doing this first, then checking the list for possible conflict issues.  Launch conda, then ...


conda install <your package name>  --dry-run


This is the dashed line... below is for cloning... the package installation is the same for both


Clone... If you have to do it, here is a guide.  This guide is only for people who have actual control over their computers.

The Clone Guide


Access proenv.bat


You can launch proenv.bat via your windows start options under the guise of the Python Command Prompt.


I prefer to make a desktop shortcut as shown below.


Your environments can be controlled within ArcGIS Pro's package manager or via 'conda' accessed through proenv.bat.

Cloning from within Pro

It is slower and you don't get a lot of information, but they are improving it as they go along.  Activate the environment, close Pro, then restart with the new environment.



Working with conda

The shortcut brings up the command prompt in your active environment.  To obtain information on your environments, just run conda info --envs
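For reference, a command-line clone usually looks something like the following, run from the Python Command Prompt.  The clone name arcgispro-clone is just an example; your paths and names will differ.

```shell
conda info --envs
conda create --name arcgispro-clone --clone arcgispro-py3
activate arcgispro-clone
```

The --clone option copies the packages from the existing arcgispro-py3 environment into the new one, so you can install or upgrade there without touching the original.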


Installing packages

You can add a package from within the package manager or via conda.  Since I prefer the --dry-run option in conda, I will illustrate it here.  You can leave out the --dry-run option to perform the actual install once you are sure you won't cause any unforeseen issues.



Upgrading packages

You can upgrade a package either from the package manager in ArcGIS Pro or via conda.  The package manager seems to take longer and you don't get much feedback during the process.

Again, I prefer to examine an upgrade using the --dry-run option first, prior to committing.



You don't need this section


Proenv.bat window

Ok... love that blue?  Making conda package installs more fun... 


Anaconda Navigator

Now not everyone needs this nor can everyone do this, but with a patch on a single file, you can add an alternate package manager and access to a load of documentation links.



application launcher



the catch

In order to get the above, you have to edit a few lines in the '' which will be located in your clone path



The patch given by 

Patch Anaconda Navigator to use conda executable instead of root python wrapper to conda · GitHub 

entails altering a couple of lines in the file.  I made a copy of the original and made fixes to the other in case I needed to undo the changes quickly.  Not ideal, but worth it if you need to provide documentation and application shortcuts to users with diverse computing backgrounds.

Like I said... you don't need it, but it is a definite 'nice'.



You need to create a Point object


import arcpy

pnt = arcpy.Point(300000, 5025000)


300000 5025000 NaN NaN


Simple... But what does that... import arcpy ...line do?

Let's see


(1) ---- Direct import

| dir(arcpy) ...
|    <module 'arcpy' from 'C:\\ArcGISPro\\Resources\\ArcPy\\arcpy\\'>
  (001)    ASCII3DToFeatureClass_3d   ASCIIToRaster_conversion AcceptConnections                                       
  (004)    AddAngleDirectedLayout_un  . . . SNIP . . .

  (1174)    stats                     stpm                     sys                                                     
  (1177)    td                        time                     toolbox                                                 
  (1180)    topographic               un                       utils                                                   
  (1183)    warnings                  winreg                   wmx   


That's correct... about 1200 names added to python's namespace.


(2) ---- Alternatives?



# ---- arcgisscripting is located in the folder
# C:\{your_install_path}\bin\Python\envs\arcgispro-py3\Lib\site-packages\arcgisscripting
# In there, there is an arcgisscripting.pyd file

import arcgisscripting as ags

['ClearCredentials', 'ExecuteAbort', 'ExecuteError', 'ExecuteWarning', 'Extent',
'ImportCredentials', 'NumPyArrayToRaster', 'Raster', 'RasterToNumPyArray',
'SignInToPortal', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__',
'__name__', '__package__', '__path__', '__spec__', '_addTimeInterval', '_analyzeForSD',
'_arcgisscripting', '_attachLocator', '_chart', '_convertWebMapToMapDocument',
'_createGISServerConnectionFile', '_createGeocodeSDDraft', '_createMapSDDraft',
'_createimageservicesddraft', '_getImageEXIFProperties', '_getRasterKeyMetadata', '_getUTMFromLocation', '_hasLocalFunctionRasterImplementation', '_hgp', '_ia',
'_listDateTimeStringFormats', '_listStyleItems', '_listTimeZones', '_mapping',
'_reserved', '_setRasterKeyMetadata', '_sharing', '_ss', '_wrapLocalFunctionRaster',
'_wrapToolRaster', 'create', 'da', 'getmytoolboxespath', 'getsystemtoolboxespath',
'getsystemtoolboxespaths', 'na', 'un']

# ---- how about the *data access* stuff??

['ContingentFieldValue', 'ContingentValue', 'DatabaseSequence', 'Describe', 'Domain',
'Editor', 'ExtendTable', 'FeatureClassToNumPyArray', 'InsertCursor',
'ListContingentValues', 'ListDatabaseSequences', 'ListDomains',
'ListFieldConflictFilters', 'ListReplicas', 'ListSubtypes', 'ListVersions',
'NumPyArrayToFeatureClass', 'NumPyArrayToTable', 'Replica', 'SearchCursor',
'TableToNumPyArray', 'UpdateCursor', 'Version', 'Walk', '__doc__', '__loader__',
'__name__', '__package__', '__spec__', '_internal_eq', '_internal_sd', '_internal_vb']



from arcpy.arcobjects import Point


| dir(Point) ...
|    <class 'arcpy.arcobjects.arcobjects.Point'>
  (001)    ID                M                 X                 Y                
  (005)    Z                 __class__         __cmp__           __delattr__      
  (009)    __dict__          __dir__           __doc__           __eq__           
  (013)    __format__        __ge__            __getattribute__  __gt__           
  (017)    __hash__          __init__          __init_subclass__ __le__           
  (021)    __lt__            __module__        __ne__            __new__          
  (025)    __reduce__        __reduce_ex__     __repr__          __setattr__      
  (029)    __sizeof__        __str__           __subclasshook__  __weakref__      
  (033)    _arc_object       _go               clone             contains         
  (037)    crosses           disjoint          equals                             
  (041)    overlaps          touches           within  

(3) ---- The data access module

Get less fluff when working with tables and featureclasses.  Compare to the arcgisscripting import... any differences?
dirr(arcpy.da, cols=3)
| dir(arcpy.da) ...
|    <module 'arcpy.da' from 'C:\\ArcGISPro\\Resources\\ArcPy\\arcpy\\'>
  (001)    Describe                 Domain                   Editor                  
  (004)    ExtendTable              FeatureClassToNumPyArray InsertCursor            
  (007)    ListDomains              ListFieldConflictFilters ListReplicas            
  (010)    ListSubtypes             ListVersions             NumPyArrayToFeatureClass
  (013)    NumPyArrayToTable        Replica                  SearchCursor            
  (016)    TableToNumPyArray        UpdateCursor             Version                 
  (019)    Walk                     __all__                  __builtins__            
  (022)    __cached__               __doc__                  __file__                
  (025)    __loader__               __name__                 __package__             
  (028)    __spec__                 _internal_eq                                     
  (031)    _internal_sd             _internal_vb   

(4) ---- Arcobjects geometry

| dir(arcpy.arcobjects.geometries) ...
|    <module 'arcpy.arcobjects.geometries' from 'C:\\ArcGISPro\\Resources\\ArcPy\\arcpy\\arcobjects\\'>
  (001)    Annotation    AsShape       Dimension     Geometry     
  (005)    Multipatch    Multipoint    PointGeometry Polygon      
  (009)    Polyline      __all__       __builtins__  __cached__   
  (013)    __doc__       __file__      __loader__    __name__     
  (017)    __package__   __spec__      basestring                 
  (021)    gp            operator      xrange    
but it imports gp as well.

(5) ---- Arcobjects

import arcpy.arcobjects as arco
dirr(arco, cols=3)
| dir(arcpy.arcobjects.arcobjects) ...
|    <module 'arcpy.arcobjects.arcobjects' from 'C:\\ArcGISPro\\Resources\\ArcPy\\arcpy\\arcobjects\\'>
  (001)    ArcSDESQLExecute               Array                          Cursor                        
  (004)    Extent                         FeatureSet                     Field                         
  (007)    FieldInfo                      FieldMap                       FieldMappings                 
  (010)    Filter                         GeoProcessor                   Geometry                      
  (013)    Index                          NetCDFFileProperties           Parameter                     
  (016)    Point                          RandomNumberGenerator          RecordSet                     
  (019)    Result                         Row                            Schema                        
  (022)    SpatialReference               Value                          ValueTable                    
  (025)    _BaseArcObject                 __builtins__                   __cached__                    
  (028)    __doc__                        __file__                       __loader__                    
  (031)    __name__                       __package__                    __spec__                      
  (034)    convertArcObjectToPythonObject mixins                         passthrough_attr 

(6) ---- environments perhaps?

from arcpy.geoprocessing import env
dirr(env, cols=3)
| dir(<class 'arcpy.geoprocessing._base.GPEnvironments.<locals>.GPEnvironment'>) ...
|    np version
  (001)    MDomain                        MResolution                    MTolerance                    
  (004)    S100FeatureCatalogueFile       XYDomain                       XYResolution                  
  (007)    XYTolerance                    ZDomain                        ZResolution                   
  (010)    ZTolerance                     __class__                      __delattr__                   
  (013)    __delitem__                    __dict__                       __dir__                       
  (016)    __doc__                        __eq__                         __format__                    
  (019)    __ge__                         __getattribute__               __getitem__                   
  (022)    __gt__                         __hash__                       __init__                      
  (025)    __init_subclass__              __iter__                       __le__                        
  (028)    __lt__                         __module__                     __ne__                        
  (031)    __new__                        __reduce__                     __reduce_ex__                 
  (034)    __repr__                       __setattr__                    __setitem__                   
  (037)    __sizeof__                     __str__                        __subclasshook__              
  (040)    __weakref__                    _environments                  _gp                           
  (043)    _refresh                       addOutputsToMap                autoCancelling                
  (046)    autoCommit                     baDataSource                   buildStatsAndRATForTempRaster 
  (049)    cartographicCoordinateSystem   cartographicPartitions         cellSize                      
  (052)    coincidentPoints               compression                    configKeyword                 
  (055)    extent                         geographicTransformations      isCancelled                   
  (058)    items                          iteritems                      keys                          
  (061)    maintainAttachments            maintainSpatialIndex           mask                          
  (064)    nodata                         outputCoordinateSystem         outputMFlag                   
  (067)    outputZFlag                    outputZValue                   overwriteOutput               
  (070)    packageWorkspace               parallelProcessingFactor       preserveGlobalIds             
  (073)    processingServer               processingServerPassword       processingServerUser          
  (076)    pyramid                        qualifiedFieldNames            randomGenerator               
  (079)    rasterStatistics               referenceScale                 resamplingMethod              
  (082)    scratchFolder                  scratchGDB                     scratchWorkspace              
  (085)    scriptWorkspace                snapRaster                     terrainMemoryUsage            
  (088)    tileSize                       tinSaveVersion                 transferDomains               
  (091)    transferGDBAttributeProperties values                         workspace   

(7) ---- Know your imports.

More to come

The xlrd and openpyxl modules packaged with ArcGIS Pro are pretty cool for working with Excel files... if you are stuck and have to use them.  Pandas depends on them, so you could just use pandas to do the data conversion, but numpy can be called into play to do a pretty good and quick conversion, while at the same time cleaning up the data on its way into a table in ArcGIS Pro.


(1) ---- Spreadsheets gone bad

Here is a spreadsheet with some flubs built in.


Column A contains integers, but Excel treats them as just floating point numbers without a decimal place.


Column D is just text but with leading/trailing spaces, cells with spaces, empty cells and just about anything that can go wrong when working with text.


Column E has floats but two of the cells are empty or worse... a space.


All these conditions need to be fixed.


As for fixing blank rows, missing column headers, data not in the upper left quadrant of a sheet, or data that share the page with other stuff (like graphs etc)… it's not going to happen here.  This discussion assumes that you have an understanding of how you should work with spreadsheet data if you have to.



(2) ---- Spreadsheets to array

So with a tad of code, the spreadsheet can be converted to a numpy structured/recarray.

During the process, numeric fields which are obviously integer get cast to the correct format.


Malformed text fields/columns are cleaned up.  Leading/trailing spaces are removed and empty cells and/or those with nothing but spaces in them are replaced by 'None'.


Empty cells in numeric floating point fields are replaced with 'nan' (not a number).  Sadly there isn't an equivalent for integers, so you will either have to upcast your integer data or provide a null value yourself.

Best approach... provide your own null/nodata values
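A minimal numpy sketch of the text clean-up described above (the sample cell values are made up): strip the whitespace, then substitute your own null for anything empty.

```python
import numpy as np

# Hypothetical text column as read from a worksheet: leading/trailing
# spaces, a cell of only spaces, and an empty cell.
col = [' a leading space', 'b  ', '   ', '', 'done']

# Strip whitespace; replace empty or all-space cells with 'None'.
cleaned = [c.strip() if c.strip() else 'None' for c in col]
arr = np.asarray(cleaned)
# arr -> array(['a leading space', 'b', 'None', 'None', 'done'], dtype='<U15')
```

The same pattern works for the numeric columns: test each cell, then substitute np.nan (floats) or your chosen nodata value (integers).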



(3) ---- Spreadsheets to array to geodatabase table

Now... The array can be converted to a geodatabase table using NumPyArrayToTable.


arcpy.da.NumPyArrayToTable(array, path)



`array` is from the previous step 

`path` is the full path and name of the geodatabase table.


It comes in as expected: the dtypes are correct and the text column widths are as specified.  Note that text field lengths are twice the Unicode dtype width (ie U20 becomes a 40 character field length).



(4) ---- Spreadsheets to geodatabase table via arcpy

Excel to Table does a good job on the data types, but it takes some liberty with the field length.  This may be by design, useful, or a pain, depending on what you intend to do with the data subsequently.


You can even combine xlrd with arcpy to batch read multiple sheets at once using the code snippet in the reference link below.








(5) ---- Spreadsheets to Pandas to table

Yes, pandas does it via xlrd: the data goes to numpy arrays, then to series and dataframes, then out to geodatabase tables.  So you can skip pandas altogether if you want.


The data types can be a bit unexpected, however, and the cleaning up of text fields isn't carried out completely: blank/empty cells are translated to 'nan' (not a number?) but a space in a cell remains as such.

The data type for the text column is an 'object' dtype which is usually reserved for ragged arrays (ie mixed length or data type).


[' a leading space', 'b', 'a trailing space ', nan, ' ', 'back_to_good', '    four_leading', 'b', 'a', 'done']
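A small pandas sketch of that behaviour (the values mirror the list above; assume the blank cell arrives as None/NaN):

```python
import pandas as pd

# Hypothetical column: a truly blank cell and a cell holding a single space.
s = pd.Series([' a leading space', 'b', None, ' ', 'done'])

print(s.dtype)             # object ... the catch-all dtype noted above
print(s.isna().tolist())   # only the blank cell registers as missing
```

The cell holding a lone space is not flagged as missing, which is why the extra clean-up pass described earlier is worth doing.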



(6) ---- The code

I put the code in the link to my `gist` on GitHub in case code formatting on this site isn't fixed .... convert excel to a structured array


There are some things you can do to ensure a proper data type.  The following demonstrates how one little blank cell can foul up a whole column or row of data.

import numpy as np

def isfloat(a):
    """float check"""
    try:
        i = float(a)
        return i
    except ValueError:
        return np.nan
# ---- Take some numbers... but you forgot a value so the cell is empty ie ''
vals = [6, 9, 1, 3, '', 2, 7, 6, 6, 9]

# ---- convert it to an array... we will keep it a surprise for now
ar = np.asarray(vals)

# ---- use the helper function `isfloat` to see if there are numbers there
ar = np.array([isfloat(i) for i in ar])

# ---- yup! they were all numbers except for the blank
array([ 6.,  9.,  1.,  3., nan,  2.,  7.,  6.,  6.,  9.])

# ---- if we hadn't checked we would have ended up with strings
array(['6', '9', '1', '3', '', '2', '7', '6', '6', '9'], dtype='<U11')

If you really need to conserve the integer data type, then you will have to do some hurdle jumping to check for `nan` (aka not-a-number)

# ---- our list of integers with a blank resulted in a float array
np.isnan(ar)  # --- conversion resulted in one `nan`
array([False, False, False, False,  True, False, False, False, False, False])

# ---- assign an appropriate integer nodata value
ar[np.isnan(ar)] = -999

# ---- cast the array to integer and you are now done
ar = ar.astype('int')
array([   6,    9,    1,    3, -999,    2,    7,    6,    6,    9])


(7) ---- End notes...

So the next time you need to work with spreadsheets, hoping that the magic of xlrd, openpyxl or pandas (which uses both) can solve all your problems.... take the time to look at your data carefully and decide if it is truly in the format you want BEFORE you bring it into ArcGIS Pro as a table.


arr = excel_np("c:/temp/x.xlsx")


array([('aaa', '01', 1), ('bbb', '02', 2)],
      dtype=[('field1', '<U4'), ('field2', '<U4'), ('field3', '<i4')])

import arcpy
arcpy.da.NumPyArrayToTable(arr, r"C:\Your_spaceless_path_to\a.gdb\x_arr")

An example for a very simple table



If you have any use cases where the standard conversion methods aren't good let me know.





excel to table ...

xlrd on GitHub …

openpyxl on bitbucket... and openpyxl docs page...

Recently stumbled upon…          Using conda in spyder


I will just leave this as a thought.  It is one of those IPython %magic things


Update...       Spyder 3.3.6 is out  2019-07-16


Battle cry.... Install Spyder, Jupyter console and Jupyter notebook for ArcGIS Pro by default 

                      Make it so.... now on to the content


 Spyder... install once, use in multiple environments New... 2018-08-30


Table of contents:  (use browser back to return here)



:--------- Latest Version

    Version 3.3.6: installed 2019-07-16

    Used proenv.bat and just ran..... conda update spyder


:--------- Installing Spyder in ArcGIS Pro


Clone... ArcGIS Pro ... for non administrators 


:--------- Some other links

Script documenting ... It is all in the docs - links to spyder help pane for user-created help.

Spyder on GitHub ... If you want to keep abreast of issues and/or enhancement requests

Spyder Documentation …. On GitHub or Spyder Documentation

Spyder-notebook ... Jupyter notebook inside Spyder...



Spyder in pictures

:---- The Icon 

:----- The whole interface

... which can be arranged to suit


:---- Keep track of project files


:---- Need a little help?


:---- Fancy documentation with minimal effort



:---- Help for IPython?



:---- Help for other Modules?



:---- Check out your variables



:---- Some graphs? 

Yes from within Spyder, you can use Matplotlib or any other installed graphics pack (ie seaborn, orange etc)



















:---- See your scripts in outline mode,

Quickly navigate within them like a table of contents or use outline to get a basic shell listing of your script


The trick to outline is to use # ---- Your comment here ... four dashes seem to be the key for some reason



:---- Don't like something? 

Change it


: ----  Set Spyder as your IDE for PRO




: ---- Fear not...

You can even pretty up the interface


Making conda package installs more fun... 




More later

: --------

KD trees


Table of contents:  (use browser back to return here)

Distance stuff, lots of tree types.  Your distribution of Python comes with scipy which has a couple of implementations.

This is just a quick demo of its potential use.  And an homage to Canadian university students, for whom KD will always have a special meaning as a primary food group (Kraft Dinner, for the non-student).


Start with some points and you then want to calculate the closest 2 points to form point origin-destination pairs... because it can be done.


  • Just deal with the coordinates first, leave the attribute baggage to the side for now.
  • Decide on the number of points you want to find the 'closest' of.  Don't get ridiculous and ask for an origin-destination matrix with a couple of thousand points.  Go back to the project design stage or look at scipy.spatial.distance.cdist and a new computer.
  • Sorting the points by X, then Y coordinate is useful in some situations.  An option to do so is provided.
  • Building the KDTree is fairly straightforward using scipy.
    • decide on the number of points to find
    • the returned list of indices will include the origin point itself, so if you want the closest 2 points, then set your query to N = 3.  This can be exploited to suck up the x,y values to form origin-destination pairs if you want to form lines, and/or polygons.
  • Decide if you want to just pull out the indices of the closest pairs with their distance.
  • Optionally, you can produce a structured array, which you can then bring into ArcGIS Pro as a table for use with a couple of ArcToolbox tools to create geometry
  • You are done.  Do the join thing if you really need the attributes.
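The query step in the list above can be sketched with a handful of made-up points; with N=3 each point returns itself (distance 0) plus its two nearest neighbours.

```python
import numpy as np
from scipy.spatial import cKDTree

# Five hypothetical x,y pairs.
pnts = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.], [6., 5.]])

t = cKDTree(pnts)
dists, indices = t.query(pnts, 3)  # N=3 ... self plus the closest 2

# Column 0 of each row is the point itself with distance 0.0, so
# slice it off (dists[:, 1:]) when forming origin-destination pairs.
```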


The picture:


The code:

So this function just requires a point array of x,y pairs, the number of closest points (N), whether you want to do an x,y sort first and finally, whether you want an output table suitable for use in ArcGIS Pro.

From there, you simply use arcpy.da.NumPyArrayToTable to produce a geodatabase table.

You can then use... XY to Line … to produce line segments, connecting the various origins and destinations as you see fit, or just bask in the creation of an... XY event layer.


Note:  cKDTree can be used in place of... KDTree ..., if you just need speed for Euclidean calculations.

import numpy as np

def nn_kdtree(a, N=3, sorted=True, to_tbl=True):
    """Produce the N closest neighbours array with their distances using
    scipy.spatial.KDTree as an alternative to einsum.

    a : array
        Assumed to be an array of point objects for which `nearest` is needed.
    N : integer
        Number of neighbors to return.  Note: the point counts as 1, so N=3
        returns the closest 2 points, plus itself.
    sorted : boolean
        A nice option to facilitate things.  See `xy_sort`.  Its mini-version
        is included in this function.
    """
    def _xy_sort_(a):
        """mini xy_sort"""
        a_view = a.view(a.dtype.descr * a.shape[1])
        idx = np.argsort(a_view, axis=0, order=(a_view.dtype.names)).ravel()
        a = np.ascontiguousarray(a[idx])
        return a, idx

    def xy_dist_headers(N):
        """Construct headers for the optional table output"""
        names = ['X_{}', 'Y_{}']*N + ['d_{}']*(N-1)
        vals = (np.repeat(np.arange(N), 2)).tolist() + [i for i in range(1, N)]
        n = [names[i].format(vals[i]) for i in range(len(vals))]
        f = ['<f8']*N*2 + ['<f8']*(N-1)
        return list(zip(n, f))

    from scipy.spatial import cKDTree
    idx_orig = []
    if sorted:
        a, idx_orig = _xy_sort_(a)
    # ---- query the tree for the N nearest neighbors and their distances
    t = cKDTree(a)
    dists, indices = t.query(a, N)
    if to_tbl:
        dt = xy_dist_headers(N)  # ---- format a structured array header
        xys = a[indices]         # ---- shape (n, N, 2)
        new_shp = (xys.shape[0],[1:]))
        xys = xys.reshape(new_shp)
        ds = dists[:, 1:]        # ---- drop the 0.0 self-distance column
        arr = np.concatenate((xys, ds), axis=1)
        arr = arr.view(dtype=dt).squeeze()
        return arr
    return np.array(indices), idx_orig


The output

Just slightly better formatting, which you can get with one of my numpy functions... obviating the need for pandas for table niceness.

 id  X_0    Y_0    X_1    Y_1    X_2    Y_2    d_1     d_2    
000   3.00  98.00  10.00  94.00  23.00  94.00    8.06   20.40
001  10.00  94.00   3.00  98.00  23.00  94.00    8.06   13.00
002  13.00  18.00  19.00  22.00  34.00  16.00    7.21   21.10
003  19.00  22.00  13.00  18.00  34.00  16.00    7.21   16.16
004  23.00  94.00  10.00  94.00   3.00  98.00   13.00   20.40
005  34.00  16.00  19.00  22.00  43.00   1.00   16.16   17.49
006  37.00  64.00  43.00  89.00  56.00  84.00   25.71   27.59
007  43.00   1.00  34.00  16.00  66.00   6.00   17.49   23.54
008  43.00  89.00  56.00  84.00  61.00  87.00   13.93   18.11
009  56.00  84.00  61.00  87.00  43.00  89.00    5.83   13.93
010  61.00  87.00  56.00  84.00  43.00  89.00    5.83   18.11
011  66.00   6.00  76.00  20.00  43.00   1.00   17.20   23.54
012  67.00  41.00  78.00  50.00  76.00  20.00   14.21   22.85
013  76.00  20.00  66.00   6.00  67.00  41.00   17.20   22.85
014  78.00  50.00  67.00  41.00  80.00  67.00   14.21   17.12
015  80.00  67.00  91.00  66.00  78.00  50.00   11.05   17.12
016  82.00  91.00  94.00  95.00  61.00  87.00   12.65   21.38
017  91.00  66.00  80.00  67.00  78.00  50.00   11.05   20.62
018  94.00  95.00  82.00  91.00  91.00  66.00   12.65   29.15
019  96.00  40.00  78.00  50.00  91.00  66.00   20.59   26.48



Behind the scenes, there should be some 'spatial cleanup'.  Specifically, if you look at the image you have point pairs connected by a single segment, that is because they are the closest to one another.  Rather than duplicating the segment with opposing directions, you can 'prune' the indices and remove those prior to producing the geometry.
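That pruning can be sketched with numpy: sort each index pair so reciprocal segments collapse to the same row, then keep the unique rows (the pair array here is made up).

```python
import numpy as np

# Hypothetical origin-destination index pairs; rows [0, 1] and [1, 0]
# describe the same segment in opposite directions.
pairs = np.array([[0, 1], [1, 0], [2, 3], [3, 2], [4, 2]])

# Sort within each row, then deduplicate the rows.
pruned = np.unique(np.sort(pairs, axis=1), axis=0)
# pruned -> [[0, 1], [2, 3], [2, 4]]
```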


There are lots of tools with which you can produce/show geometric relationships.  Use them to provide answers to your questions.  This implementation will appear soon on the code sharing site.  I will provide a link here when it does.