POST
Thanks for sharing! None of the below is criticism, just observations; I think both of our approaches are good for different scenarios, and your version has shown me some errors and improvements in my own version. I like your approach because it may be more applicable than mine for some real-world situations. My computer would catch fire (or just take forever) if I tried to use my code to process a real-world island polygon, because there would likely be hundreds of thousands (more?) of vertex pairs--a problem that you solve by "generalizing" with "Generate Points Along Line." So, I will implement this change in my own tool based on a total vertex count or something like that.

Reviewing your approach has also shown me that while I still stand behind my edge transect approach, my internal line approach is INCORRECT. So, thanks for offering a different perspective. My longest line (measurements all in US Feet) is shown in Black, yours is in Red, and my (I am now realizing) incorrect "entirely internal" line is shown in Blue. So, disregard Blue; I need to remove that feature.

For absolute accuracy, I think my two-vertex algorithm is the best approach, but for realistic use on complex polygons (islands, parcels, etc.) your approach could likely get pretty darn close, and probably requires far less processing time. Another example--again, mine in Black, yours in Red:

Another thing to watch out for with your toolbox would be holes in polygons, as well as concave polygons. As you can see, your line is "longer" here, but passes through a hole. The second image shows your line going out of bounds in a concave polygon. Thanks for the alternate approach, very helpful in thinking about this problem further.
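For context on how quickly the brute-force approach blows up: a polygon with n vertices has C(n, 2) vertex-to-vertex pairs, so the pair count grows quadratically. A quick back-of-the-envelope check (just an illustration, not part of either toolbox):

```python
from math import comb

# Number of vertex-to-vertex pairs for polygons of increasing vertex counts.
for n in (100, 1_000, 10_000):
    print(f"{n:>6} vertices -> {comb(n, 2):>12,} pairs")
```

At roughly a thousand vertices you are already near half a million pairs, which matches the "hundreds of thousands" estimate above and shows why generalizing the input first pays off.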
04-14-2023 02:36 PM

POST
Curious about this approach--any chance you would be able to attach a copy of your toolbox here?
04-14-2023 09:42 AM

POST
@JohannesLindner one more question for you--I think I've managed to break this using Field Calculator (as Field Calculator often has the power to do...). Here is the feature class table correctly reporting on the "uniqueness" of each of the IDs. As the screenshot shows, I have a pending calculation for the entire IntegerField1 column. And here we are after the calculation. Note there are no pending edits here, and the table has been refreshed. In my testing, it seems like every record processes correctly except the last row processed, which is typically (or always?) the row with the last ObjectID. So, for some reason, Field Calculator processes the given calculation for each row, and then kicks out at the last row without activating the Attribute Rule, maybe? Or perhaps Field Calculator is iterating over a temporary copy of the rows in such a way that at the very last row, it's the only row "left" in memory, and since there are no other rows to compare the value to, it will always report "Unique"? Any other ideas you have would be fantastic. Here's another one with the @OID shown (the sort in this case is based on the GlobalID).
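The "only row left in memory" hypothesis can be sketched in plain Python. This is purely illustrative--it is not what Field Calculator actually does internally--but it shows the symptom: if each row's check only sees the rows not yet processed, the last occurrence of a duplicate always looks unique.

```python
def check_remaining_only(values):
    """Flag duplicates, but (wrongly) compare each row only against the
    rows that come after it -- mimicking the hypothesized behavior."""
    flags = []
    for i, v in enumerate(values):
        remaining = values[i + 1:]  # rows "left" after this one
        flags.append("NOT UNIQUE" if v in remaining else "UNIQUE")
    return flags

# Both "9" rows are duplicates, but the second one reports UNIQUE.
print(check_remaining_only(["9", "9", "7"]))
# -> ['NOT UNIQUE', 'UNIQUE', 'UNIQUE']
```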
04-12-2023 10:19 AM

POST
I investigated your speculation about SQL and nulls, and it seems like you were correct! I added a "null catch" into my original script, and it now works as a Calculation Attribute Rule without throwing the SQL error upon saving the rule. Snippet of the catch (with the problematic SQL on line 5) is below:

if (fieldVal == null) {
    return "Null!"
}

var idCountStats = First(Filter(groupedFeatSet, `DOM1 = ${fieldVal}`))

This works, but my script was not advanced enough to update the other fields that would also need updating, in the event that those rows became duplicates after an update to a different row. So, you solved a problem I wasn't aware I had yet when I originally posted. Thanks again!
04-11-2023 01:03 PM

POST
I hadn't thought to "Refresh," works perfectly as expected! Your speculation above is very interesting as well, I appreciate it! I'll play around with that. Regardless, it's helpful in thinking about what's going on under the hood. Thanks again!
04-11-2023 12:12 PM

POST
Thanks for the reply! If you have any idea why my version works via Field Calculator but not as an Attribute Rule, I would love to hear it. Must be something about how a script executes in one environment vs. the other? This script sort of works, but it still doesn't address updating both rows once an ID becomes a duplicate. In other words, if I update row 1 to be "9", then row 2, which is already "9", should also be marked as a duplicate. Currently, that doesn't seem to happen. Below, I have changed the first row from "8" to "9", and the "UNQ" field in that row correctly updates to "Not Unique." However, the second row, which is now ALSO a duplicate (value = "9"), does not receive the same update in the "UNQ" field. To be clear, I have also saved all edits. Perhaps this type of update is not possible? I tried with a fresh feature class using your field names as well, just to be sure. BUT, if I manually change the second row to some other value, then back to "9", I get the correct reporting in the uniqueness field for that row as well, as expected. THEN, if I delete the second row, the first row (the one that remains, undeleted) will correctly update from "Not Unique" to the correct "Unique": Thanks again for the help, and anything additional you can provide would be appreciated!
04-11-2023 10:09 AM

POST
I am trying to write an Attribute Rule to report on whether a value in a field is unique within the entire field (unique IDs for features, for example). I am aware of things like database sequences, GlobalIDs, and GUIDs--that's not what this question is about. I'm using ArcGIS Pro 3.1.1, and my data is in a local File Geodatabase. The uniqueness "reporting" will be done in a separate field. The image below shows this. The code below achieves the result above; however, this is done using Field Calculator:

var fieldName = "DOM1"
var fieldVal = $feature["DOM1"]
// Create FeatureSet using field name above. Do not include geometry.
var featSet = FeatureSetByName($datastore, "ATTR_RULE_TEST", [fieldName], false)
Console(`ROWS IN FEATURE SET: ${Count(featSet)}\n`)
// Create a new FeatureSet, dissolving by given field.
var groupedFeatSet = GroupBy(featSet, [fieldName], {name: "COUNT", expression: "0", statistic: "COUNT"})
// Filter FeatureSet to where FeatureSet ID matches ID of the row being calculated.
var idCountStats = First(Filter(groupedFeatSet, `DOM1 = ${fieldVal}`))
// Get the number of times the current ID appears in the FeatureSet. If > 1, it's not unique.
var idCount = idCountStats["COUNT"]
var result = IIf(idCount > 1, "NOT UNIQUE", "UNIQUE")
Console(`FeatureSet Row (w/ Count Info): ${idCountStats}\nCOUNT: ${idCount}\nRESULT: ${result}`)
return result

My goal would be to implement this as a Calculation-style Attribute Rule, updating the "UNQ" column as IDs change (Updates and Insertions). The error that is reported to me when attempting to save the same code in an Attribute Rule is: "ERROR 002717: Invalid Arcade expression, Arcade error: General SQL error, Script line 12," which corresponds to the SQL query on that line. But... if a modification is made to a row resulting in that row becoming a duplicate, there must also be another row that requires updating simultaneously (because it, too, will have become a duplicate), which may not be possible. As an alternative, it seems like this should be implementable as a Constraint-style Attribute Rule; however, I'm getting the same error message when attempting to save the rule. Any advice would be appreciated.
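Outside of the Arcade runtime, the core of the expression above is just a grouped count over the whole column. A minimal Python sketch of that logic (hypothetical field values, standing in for the DOM1 column):

```python
from collections import Counter

def uniqueness_flags(values):
    """Return "UNIQUE"/"NOT UNIQUE" for each value, based on how many
    times that value appears in the entire column."""
    counts = Counter(values)
    return ["NOT UNIQUE" if counts[v] > 1 else "UNIQUE" for v in values]

# "9" appears twice, so BOTH rows are flagged -- the behavior the
# attribute rule needs but a single-row calculation can't deliver.
print(uniqueness_flags(["8", "9", "9", "12"]))
# -> ['UNIQUE', 'NOT UNIQUE', 'NOT UNIQUE', 'UNIQUE']
```

Note that flagging is a whole-column operation: changing one row can flip the flag on a *different* row, which is exactly the simultaneous-update difficulty described above.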
04-10-2023 04:36 PM

POST
EDIT: Thanks to @RobertEisler for providing an example showing my transect_type = "PROPER" to be incorrect (the dotted black lines in all the pictures). As far as I am aware, the "BOUNDARY" option is still the optimal algorithm.

As @KimOllivier already noted, only checking vertex-to-vertex is not a sufficient algorithm to solve this problem. Here is my approach. I am working under the assumption that the longest line will contain two vertices, but does not necessarily end at those two vertices. The user has the option to include the edges of shapes, or require the transect to be entirely internal [this is wrong, see EDIT at the top and replies down below] (red vs. dotted, respectively; sometimes these are the same). Multipart polygons only get one transect generated, so that's why a few shapes appear to be missing lines. Shapes involving true curves have transects estimated using "densify" to generate actual vertices, so those are only close approximations. See the second screenshot, where the line should presumably be right up against the hole. Would be happy to hear suggestions on this.

These are test polygons, but on a real-world parcel dataset, where the number of vertex-to-vertex combinations within a polygon can get into the hundreds of thousands, my version can take a while. The bottleneck seemed to be converting every vertex coordinate combination to a Polyline object, so I would be very interested in optimizations if anyone has suggestions.

To summarize--the script generates lines from all vertex-to-vertex pairs, extends those lines a bit (to cover cases like the castle shape at the top of the first image below), clips them back to the polygon boundary, and then finds the longest in the list of lines within the given geometry.

EDIT: The "stopwatch as sw" module in this code is a custom time/message reporting module. Delete it, or replace those calls with standard "print()", if anyone ends up using this.

Transect line outputs.
Longest transect through a feature composed of curved boundaries.

"""LONGEST TRANSECT"""
#---------------------------------------------------------------------------------------#
# IMPORTS
import collections
import itertools
import math
import os
import sys
import traceback
import uuid

import arcpy

import stopwatch as sw
#---------------------------------------------------------------------------------------#
# SETTINGS
arcpy.env.overwriteOutput = True
arcpy.env.workspace = r"**YOUR GEODATABASE PATH**"
#---------------------------------------------------------------------------------------#
# INPUTS
polygons = "POLYGON"
polygon_id_fld = "POLY_GUID"
transects = "TRANSECT_BOUNDARY_TEST"
# Boundary = transect can be entire edge. Proper = transect must be within polygon.
transect_type = "BOUNDARY"  # ["BOUNDARY", "PROPER"]
# Only save the longest line for each polygon?
discard_nonmax_transects = False
#---------------------------------------------------------------------------------------#
# FUNCTIONS
def create_geom_dict(poly_fc, poly_id_fld, generalize=False):
    """Read poly geoms to information dict.
    returns {GUID: {POLYGON: <poly_obj>, POINTS: [<pnt_obj>, <pnt_obj>]}, ...}"""
    poly_geom_dict = collections.defaultdict(dict)
    with arcpy.da.SearchCursor(poly_fc, [poly_id_fld, "SHAPE@"]) as scurs:
        # For each row in the FC table (could be single, multi, have holes, etc.).
        for guid, geom in scurs:
            poly_geom_dict[guid]["POLYGON"] = geom
            # Curves to segments; generalize segments to speed process (?).
            if geom.hasCurves:
                geom = geom.generalize(1)
            elif generalize:
                geom = geom.generalize(0.5)
            # For each part of the geometry:
            for i, part in enumerate(geom, start=1):
                print(f"  Part {i}")
                # For each point in this part of the geometry:
                for j, point in enumerate(part, start=1):
                    if point:
                        poly_geom_dict[guid].setdefault("POINTS", []).append(point)
                    else:
                        print(f"    Point {j:>2}--Null point. Start interior ring.")
            stopwatch.stamptime(msg=f"READ {guid} GEOMETRY")
    return poly_geom_dict

def line_within_polygon(line, polygon, relation):
    """If the line is outside the polygon, or of zero length, return False."""
    if not line.within(polygon, relation=relation) or line.length == 0:
        return False
    else:
        return True
#---------------------------------------------------------------------------------------#
# MAIN
stopwatch = sw.Stopwatch()
stopwatch.starttime(msg="PROCESS STARTING")
spat_ref = arcpy.da.Describe(polygons)["spatialReference"].factoryCode
poly_geom_dict = create_geom_dict(poly_fc=polygons, poly_id_fld=polygon_id_fld)
# Remove duplicate points; added due to closing geoms, maybe rings?
for guid in poly_geom_dict:
    unique_coors = set((p.X, p.Y) for p in poly_geom_dict[guid]["POINTS"])
    poly_geom_dict[guid]["POINTS"] = [arcpy.Point(*coor) for coor in unique_coors]
# Create transect FC. Add Fields.
trans_fc = arcpy.management.CreateFeatureclass(out_path=arcpy.env.workspace,
                                               out_name=transects,
                                               geometry_type="POLYLINE",
                                               spatial_reference=spat_ref)
flds = [("POLY_GUID", "GUID"), ("LINE_GUID", "GUID"), ("LONGEST", "TEXT")]
for fld_name, fld_type in flds:
    arcpy.management.AddField(in_table=trans_fc, field_name=fld_name,
                              field_type=fld_type, field_length=1)
stopwatch.stamptime(msg="CREATED TRANSECT FC\n")
# Walk through the dictionary making lines:
# These lists house intermediate geometries; review during testing.
test_multiparts = []
test_extensions = []
for guid, geom_dict in poly_geom_dict.items():
    stopwatch.stamptime(msg=f"PROCESSING STARTED ON {guid}")
    vert_pairs = len(list(itertools.combinations(iterable=geom_dict["POINTS"], r=2)))
    print(f"VERTEX PAIRS TO PROCESS: {vert_pairs}")
    line_geoms = []
    for p1, p2 in itertools.combinations(iterable=geom_dict["POINTS"], r=2):
        line = arcpy.Polyline(inputs=arcpy.Array([p1, p2]))
        # WITHIN POLYGON
        poly_geom = geom_dict["POLYGON"]
        if not line_within_polygon(line, poly_geom, relation=transect_type):
            continue
        # EXTEND LINE
        new_pnts = arcpy.Array()
        # Extension distance is somewhat arbitrary. Extend line 1/4 perimeter
        # of the Extent object. Seems to be enough (?).
        extend_distance = poly_geom.extent.polygon.boundary().length / 4
        angle = math.atan2(p2.Y - p1.Y, p2.X - p1.X)
        new_pnts.add(arcpy.Point(p1.X - extend_distance * math.cos(angle),
                                 p1.Y - extend_distance * math.sin(angle)))
        new_pnts.add(arcpy.Point(p2.X + extend_distance * math.cos(angle),
                                 p2.Y + extend_distance * math.sin(angle)))
        # CLIP TO POLYGON
        extended_line = arcpy.Polyline(new_pnts)
        clipped_line = extended_line.intersect(poly_geom, 2)
        test_extensions.append(extended_line)
        # Get all points on line. >2 possible from edge intersections.
        merged_array = arcpy.Array()
        for part in clipped_line:
            for point in part:
                merged_array.add(point)
        # Sort points along X, then Y. Since the line is straight, we don't
        # need to do linear referencing or anything. Keep first/last only.
        # Removes needless vertices and reorders points on line.
        sort_arr = arcpy.Array(sorted(merged_array, key=lambda p: (p.X, p.Y)))
        full_polyline = arcpy.Polyline(arcpy.Array([sort_arr[0], sort_arr[-1]]))
        test_multiparts.append(full_polyline)
        # If line extends over white space between multipart polygons.
        # isclose() is used here for float precision issues.
        if not math.isclose(full_polyline.length, clipped_line.length):
            # Retain parts of extended lines that are still within polygons.
            # Append each part as a separate line for accurate GDB lengths.
            for edge_part in full_polyline.intersect(poly_geom, 2):
                line_geoms.append(arcpy.Polyline(edge_part))
        else:
            # Otherwise, append the new line as-is.
            line_geoms.append(full_polyline)
    # Export intermediate geometries for testing review.
    for geoms, fc_name in [(test_multiparts, "MULTIS"), (test_extensions, "EXTENSIONS")]:
        arcpy.management.CopyFeatures(geoms, fc_name)
    stopwatch.stamptime(msg=f"{guid} POLYLINES CREATED")
    # Sort by length; option to retain only the max length lines, discard the rest.
    line_geoms_by_length = sorted(line_geoms, key=lambda x: x.length, reverse=True)
    stopwatch.stamptime(msg=f"{guid} TRANSECTS SORTED BY LENGTH")
    if discard_nonmax_transects:
        line_geoms_by_length = line_geoms_by_length[:1]
    # Write lines for this polygon to the feature class. Create UUIDs; mark the longest.
    flds = ["SHAPE@", "POLY_GUID", "LINE_GUID", "LONGEST"]
    with arcpy.da.InsertCursor(trans_fc, flds) as icurs:
        for i, line_geom in enumerate(line_geoms_by_length):
            if i == 0:
                icurs.insertRow([line_geom, guid, uuid.uuid4(), "X"])
            else:
                icurs.insertRow([line_geom, guid, uuid.uuid4(), None])
    stopwatch.stamptime(msg=f"WROTE {guid} TRANSECTS\n")
# Remove records where the geometry is identical.
# This should only happen when an edge is extended through a polygon, and ends
# up being perfectly coincident with an internal line.
before_lines = int(arcpy.management.GetCount(transects)[0])
arcpy.management.DeleteIdentical(in_dataset=transects, fields="Shape")
after_lines = int(arcpy.management.GetCount(transects)[0])
stopwatch.stamptime(msg=f"{before_lines - after_lines} IDENTICAL LINE(S) REMOVED")
stopwatch.stamptime(msg="PROCESS COMPLETE")
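The extend-then-clip step in the script is plain trigonometry. A standalone sketch of just the extension math, with no arcpy dependency and hypothetical coordinates, for anyone who wants to test it in isolation:

```python
import math

def extend_segment(p1, p2, dist):
    """Extend the segment p1->p2 by `dist` beyond BOTH endpoints,
    along the segment's own bearing (same atan2/cos/sin math as above)."""
    angle = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    dx, dy = dist * math.cos(angle), dist * math.sin(angle)
    return ((p1[0] - dx, p1[1] - dy), (p2[0] + dx, p2[1] + dy))

# A horizontal unit segment extended by 2.0 on each side.
start, end = extend_segment((0.0, 0.0), (1.0, 0.0), 2.0)
print(start, end)  # -> (-2.0, 0.0) (3.0, 0.0)
```

In the full script, `dist` is set to a quarter of the extent polygon's perimeter, which is generous enough that the extended line always reaches past the polygon boundary before being clipped back.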
03-18-2023 09:25 PM

POST
Thanks for the response, I stand corrected! This is very interesting. I have not explored Esri's CIM nearly enough, clearly. To visualize this for myself, I took some derived bookmark extents (from your code, and using my own scratch APRX) and built them as polygons in a new feature class. Sure enough, the geometry of the new polygons matches the original camera (viewport? I might be confusing the terminology) at the time the bookmark was saved, taking into account the limitations on map size (on screen) based on surrounding windows like Table of Contents and/or docked Catalog panes. As an example, the polygon on the left represents the extent of a bookmark saved with a normal, more wide-screen view of the map. The polygon on the right represents a bookmark that was saved when I expanded my docked toolbars way out, limiting the actual view of the map. Thanks again for the reply.
11-29-2022 04:11 PM

POST
Thanks for looking into this and confirming that the functionality is as expected. I can appreciate the use case of row-at-a-time editing, but I do think extending this functionality to a full quality-assurance strategy could be worthwhile. A use-case example: as it stands, when I develop schemas to hand off to users to populate, CVs are a nice user-friendly enhancement. But they are apt to fail as a quality-assurance measure--a user can still break them pretty easily with Field Calculator. Any user populating a large number of records, then realizing they've made a simple mistake, will be likely to resort to Field Calculator, opening this can of worms. Perhaps quality controls via Data Reviewer or custom scripts post-editing are more appropriate for this situation. Again, thanks for the input.
11-23-2022 07:49 PM

POST
I responded to your idea about this with an option that might work for you: https://community.esri.com/t5/arcgis-pro-ideas/collapse-2-lines-into-1-centerline/idi-p/1233859#M22245
11-23-2022 03:57 PM

POST
Your code is hard to read because it is not formatted correctly. I suggest using "Insert/Edit code sample" in the future, which will likely attract more responses. Can you expand on what you're trying to accomplish? At face value, I don't think it is possible, or I don't understand what you're after. I'm not 100% on this, but I don't think a bookmark can have an extent--by itself. If it could, I assume that would be a readable property of the object. I also don't think you can get extents that are related to bookmarks, UNLESS they are read from Map Frames in Layout objects (which is very possible), OR in the situation discussed below this code:

import arcpy

# Get APRX.
aprx = arcpy.mp.ArcGISProject(PATH)
layouts = aprx.listLayouts()
for layout in layouts:
    map_frames = layout.listElements("MAPFRAME_ELEMENT")
    for mf in map_frames:
        print(f"Map Frame: {mf.name}")
        map_obj = mf.map
        bookmarks = map_obj.listBookmarks()
        for bookmark in bookmarks:
            print(f"Bookmark: {bookmark.name}")
            mf.zoomToBookmark(bookmark)
            print(mf.camera.getExtent())

But it seems like the only way to iterate through bookmarks on Map objects would be by using the "activeView"... which I don't think is accessible outside of Pro; from the MapView docs: "The MapView returned from the activeView property is the only way to change the extent of the Camera associated with a map view." If we can't change the Camera, I don't think updating the bookmarks would be possible, and therefore we can't get an extent that changes along with bookmark updates. My guess for why this might be is that while the application is running, there can be an activeView, the dimensions of which are partly determined by how big your TOC, Catalog Pane, etc. are. When the software is not open, I don't think each APRX stores that information regarding window/sidebar size. So, a "Map" object alone (you need the "MapView") can't have an extent, and therefore, neither can a bookmark--not without it currently being open and in use. Unless that map is embedded on a map frame that does have specific boundaries (the map frame box on the layout sheet). Just my thoughts based on the documentation.
11-23-2022 03:01 PM

IDEA
The way a lot of these "centerline" algorithms work is via Thiessen Polygons. I might try doing this with your problem here. Lots of third-party workarounds, but as described, you'll need an Advanced license, I believe. This is all in Pro, by the way.

1. Create a copy of your rail lines, because this will destroy them during densification.
2. Densify the copy of the rail lines. Do some testing--smaller densification values will increase "accuracy," but can really hinder processing time.
3. Convert all (now dense) vertices to a point feature class.
4. Run Thiessen Polygons on those points.
5. Do some simple Dissolving, Merging, Feature to Line, etc.--whatever you need to do to isolate that middle line.
6. Ideally, script the above.

You can clearly see the center between the "rail" lines after the new polygons are created. And closer... I have no idea what algorithm was used in ArcMap, but I never had much luck with it. BUT, if you don't densify your input lines enough, you can get similarly choppy outputs using the method described above, so perhaps that is what is happening with their implementation (and your shown outputs). I use a similar algorithm to get the centerlines for streams. See below. The second image, on the far left, shows an example where it doesn't always get things right. Another good resource, created by user "tomdilts": Polygon to Centerline Tool for ArcGIS - Overview
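As a toy illustration of why densification matters (pure Python, not the Thiessen tool itself, and it pairs points instead of building Voronoi cells): with matched points densified along two parallel "rails," midpoints trace the centerline, and more points give a smoother trace.

```python
def densify(p1, p2, n):
    """Return n evenly spaced points along the segment p1->p2 (inclusive)."""
    return [(p1[0] + (p2[0] - p1[0]) * i / (n - 1),
             p1[1] + (p2[1] - p1[1]) * i / (n - 1)) for i in range(n)]

# Two parallel "rails" at y=0 and y=2; the centerline should sit at y=1.
rail_a = densify((0.0, 0.0), (10.0, 0.0), 5)
rail_b = densify((0.0, 2.0), (10.0, 2.0), 5)
centerline = [((ax + bx) / 2, (ay + by) / 2)
              for (ax, ay), (bx, by) in zip(rail_a, rail_b)]
print(centerline[0], centerline[-1])  # -> (0.0, 1.0) (10.0, 1.0)
```

With only a handful of points the traced centerline is choppy on curved rails; a finer densification interval fixes that, at the cost of processing time--the same trade-off described in step 2 above.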
11-23-2022 12:53 PM

POST
Using Join and Field Calculator works, but it would be far too tedious and error-prone in my opinion, though that might depend on your jurisdiction's parcel dataset complexity. I would use Python and Search/Update Cursors. More flexible long-term, and it avoids having to manually use Field Calculator on a ton of fields. Let's assume "ACCOUNT" is the common field name between the Geometry shapefile and the attribute update table. "PARCEL" is the original geometry table, "PARCEL_TBL" is the published CSV of tabular updates, and "PARCEL_COPY" is the updated version. Note that only the intended rows are modified, and any geometry-specific fields can be left alone during the update. This will not ADD new records--you'd have to write that into the code, or first manually draw in the geometry with an ACCOUNT ID into the geometry shapefile. This script reads through the update table and makes updates where appropriate in a COPY of the geometry table.

import arcpy

# Overwrite the output copy. Set the working folder (table and shapefiles).
arcpy.env.overwriteOutput = True
arcpy.env.workspace = r"YOUR_DIRECTORY"
# Input datasets.
polygon = "PARCEL.shp"
table = "PARCEL_TBL.csv"
# Unique field. For parcels, assuming ACCOUNT ID or similar.
account_fld = 'ACCOUNT'
# Read relevant fields from each. Ignore various ObjectID/Geometry-type fields.
# Also remove the ACCOUNT ID field, will be added back in for Cursors.
poly_flds = [f.name for f in arcpy.ListFields(polygon)
             if f.type not in ['Geometry', 'OID']
             and f.name.lower() not in ['shape_area', 'shape_length',
                                        'shape.starea()', 'shape.stlength()']]
tbl_flds = [f.name for f in arcpy.ListFields(table)
            if f.name != account_fld]
# Get fields that are common to both the Geometry and the Update Table.
common_flds = list(set(poly_flds) & set(tbl_flds))
# Read the update table. Use the ACCOUNT ID as a dictionary key.
with arcpy.da.SearchCursor(table, [account_fld] + common_flds) as scurs:
    fc_dict = {row[0]: row[1:] for row in scurs}
# Make a copy of the original Geometry shapefile.
# This could be modified so that the existing is archived as a backup, and then
# the current is updated.
poly_copy = f"{polygon.split('.')[0]}_COPY.shp"
# If the Account in the Geom shapefile requires updates (based on ACCOUNT ID),
# update the row.
arcpy.management.CopyFeatures(polygon, poly_copy)
with arcpy.da.UpdateCursor(poly_copy, [account_fld] + common_flds) as ucurs:
    for acct, *row in ucurs:
        if acct in fc_dict:
            print(f"OLD ROW: {[acct, *row]}")
            ucurs.updateRow([acct, *fc_dict[acct]])
            print(f"UPD ROW: {[acct, *fc_dict[acct]]}\n")
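Stripped of arcpy, the cursor pattern above boils down to a keyed dictionary update: rows with a matching ACCOUNT take the tabular values, everything else is left alone. A minimal sketch with hypothetical rows:

```python
# Hypothetical parcel attribute rows keyed by ACCOUNT ID.
parcels = {"A1": ("Smith", 1.5), "A2": ("Jones", 2.0), "A3": ("Lee", 0.75)}
# Published tabular updates -- only A2 changed.
updates = {"A2": ("Jones Trust", 2.0)}

# Updates override matching rows; unmatched rows pass through untouched.
# Note this never ADDS records, same as the cursor script above.
merged = {acct: updates.get(acct, attrs) for acct, attrs in parcels.items()}
print(merged["A2"])  # -> ('Jones Trust', 2.0)
print(merged["A1"])  # -> ('Smith', 1.5)
```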
11-23-2022 12:01 PM