POST
Your point about data types (String vs. NoneType) is an interesting one I hadn't considered--thanks for that. I can't replicate Pro (3.x) throwing an error when "nulling" a non-nullable field, though, unless you're talking about using Field Calculator to set a field to "&lt;Null&gt;", in which case it does yield an error. Thanks again for the detailed response.
03-04-2024 03:39 PM | 0 | 0 | 736
POST
I am curious if/when other users are implementing "non-nullable" fields in their feature class/table design? If so, why? What does it achieve/what are you trying to achieve? I ask because I don't understand the utility of this setting.

If I have a TEXT field that is "non-nullable", ArcGIS Pro will just throw in an empty string (maybe a space " ", my point is the same) as I am digitizing features. If I have a non-nullable numeric field, ArcGIS Pro will populate it with a "0". Date field, 12/31/1899. So, while the field is technically "non-nullable", this doesn't really help me from a data-integrity standpoint. If Pro would throw an error during editing that would prevent saving a record (similar to an Attribute Rule that fails, or a correct domain value not being chosen), I would understand the purpose. But it doesn't throw an error--it just fills in a generic value. The worst offenders are numeric fields, where Pro will populate a "0", which to me is much harder to detect as an issue than simply allowing a &lt;NULL&gt; in that field, indicating the record should be reviewed. At least 12/31/1899 stands out as being a weird value that might require further review.

Is the point simply to prevent NULL values because... they are not acceptable in a particular database for some reason (why not? why is "" and "0" filler data better?)? Does the nullability setting result in a different experience for Field Maps users, where it actually makes more sense?

For whatever it's worth, I am speaking mostly of File Geodatabases here, but I do work regularly in Enterprise Geodatabases (SQL Server). If the behavior is different or makes more sense in an EGDB, I would be happy to hear about that too. Thanks, genuinely curious how folks are using the "nullability" setting.
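Since Pro fills rather than fails, one fallback is a QA pass over exported rows after the fact. Here's a hedged, arcpy-free sketch: the filler values are exactly the ones described above, and the function and field names are mine for illustration, not any Esri API.

```python
import datetime

# Filler values ArcGIS Pro appears to write into non-nullable fields
# (per the behavior described above) -- treat these as suspect.
SUSPECT_FILLERS = {
    "TEXT": {"", " "},
    "DOUBLE": {0},
    "LONG": {0},
    "DATE": {datetime.datetime(1899, 12, 31)},
}

def flag_suspect_values(rows, field_types):
    """Yield (row_index, field_name) for values matching a known filler.

    rows: list of dicts {field_name: value}
    field_types: dict {field_name: esri_type_string}
    """
    for i, row in enumerate(rows):
        for name, value in row.items():
            fillers = SUSPECT_FILLERS.get(field_types.get(name), set())
            if value in fillers:
                yield (i, name)

rows = [{"NAME": "", "ACRES": 0}, {"NAME": "Oak Stand", "ACRES": 12.5}]
flags = list(flag_suspect_values(rows, {"NAME": "TEXT", "ACRES": "DOUBLE"}))
# The first row is flagged for both fields; the second row passes.
```

This obviously doesn't replace a real constraint, which is the point of the question: a flagged "0" is still indistinguishable from a legitimate zero.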
03-04-2024 11:37 AM | 0 | 3 | 815
POST
I think I understand what you're after. I reworked some of the code internals to simplify things, so I'd try copying this version rather than updating anything you're currently using (unless you're comfortable doing that). The outputs should be the same, with the addition of the new label field. I'm using a label expression to show the "point_id" + "quadrant" labels. This takes the angle of the line (say, 45 degrees) and then assigns the quadrant (NE). It then inverts that to get the opposing quadrant (SW). The code is set up so that if the line is perfectly north-south or east-west (90 degrees, 180 degrees), those labels will be "N" and "S" or "W" and "E", without the secondary direction. Let me know if something doesn't work! Here's a W-E example, which I suppose would only occur... ~1% of the time?
"""
RANDOM ANGLE TRANSECT LINES CENTERED ON SAMPLING POINTS
"""
#-----------------------------------------------------------------------------#
# IMPORTS
import math
import os
import random
import sys
from typing import Literal, Union
import arcpy
#-----------------------------------------------------------------------------#
# INPUTS
# Full path to the geodatabase where the sampling points are, and where outputs will go.
workspace = r"your_full_path\geodatabase_name.gdb"
# Name of the point feature class in the above geodatabase that houses the
# sampling points.
in_pnt_fc = "_SAMPLE_POINT"
# Distance between sampling points (along x and y, not diagonals).
# Units match coordinate system.
point_spacing = 400 # feet, in my case
box_width = 80
# NORTH_BOX: squares will be oriented to grid-north
# NORTH_DIAMOND: diamonds will be oriented to grid-north
# ANGLE_BOX: square sides oriented to the transect lines
# ANGLE_DIAMOND: diamonds angled to the transect lines
box_rotation = 'ANGLE_BOX' # OPTIONS: (NORTH_BOX, NORTH_DIAMOND, ANGLE_BOX, ANGLE_DIAMOND)
# Name of the output feature class to be created.
fc_line = "TRANSECT_LINE"
fc_endpoints = "TRANSECT_LINE_ENDPOINTS"
fc_box = f"TRANSECT_{box_rotation}"
# -----------------------------------------------------------------------------#
# SETTINGS
# This just overwrites previous outputs to help with testing.
arcpy.env.overwriteOutput = True
#-----------------------------------------------------------------------------#
# FUNCTIONS
def rotate_point(coordinate: tuple[int, int],
pivot_point: tuple[int, int],
angle: Union[int, float],
angle_format: Literal['DEGREES', 'RADIANS']='DEGREES'
) -> tuple[float, float]:
"""Rotates coordinate values around another point.
:coordinate: (x, y) coordinate pair to be rotated
:pivot_point: (x, y) coordinate pair around which point will be rotated
:angle: angle to rotate the coordinate
:angle_format: angle format, RADIANS or DEGREES
|(x_prime, y_prime)| rotated point coordinates"""
if angle_format == 'DEGREES':
angle = math.radians(angle)
x, y = coordinate
X, Y = pivot_point
x_trans = x - X
y_trans = y - Y
sin = math.sin(angle)
cos = math.cos(angle)
# Counter-clockwise rotation:
x_transprime = cos * x_trans - sin * y_trans
y_transprime = sin * x_trans + cos * y_trans
x_prime = x_transprime + X
y_prime = y_transprime + Y
# print(f"XY TRANS: {x_trans:,.0f}, {y_trans:,.0f}\n"
# f"XY TRANSPRIME: {x_transprime:,.0f}, {y_transprime:,.0f}\n"
# f"XY PRIME: {x_prime:,.0f}, {y_prime:,.0f}\n")
return (x_prime, y_prime)
def create_box(method: Literal['NORTH_BOX', 'NORTH_DIAMOND', 'ANGLE_BOX', 'ANGLE_DIAMOND'],
center_pnt: arcpy.Point,
width: Union[int, float],
angle: Union[int, float],
spat_ref: int) -> arcpy.Polygon:
"""Create a square box around a given point. Optionally rotate it.
:center_pnt: centroid of the square box
:width: width of side of box
:angle: angle the box should be rotated
|box_geom| geometry object representing the box"""
angle_dict = {'NORTH_BOX': 45,
'NORTH_DIAMOND': 0,
'ANGLE_BOX': angle + 45,
'ANGLE_DIAMOND': angle}
# Distance to move point based on known hypotenuse (1/2 width of box).
offset = math.sqrt(2 * ((width / 2) ** 2))
x, y = center_pnt.X, center_pnt.Y
Nx, Ny = x, y + offset
Ex, Ey = x + offset, y
Sx, Sy = x, y - offset
Wx, Wy = x - offset, y
box_corners = []
for coordinate in [(Nx, Ny), (Ex, Ey), (Sx, Sy), (Wx, Wy), (Nx, Ny)]:
box_corners.append(rotate_point(coordinate=coordinate,
pivot_point=(x, y),
angle=angle_dict[method],
angle_format='DEGREES'))
pnt_array = arcpy.Array([arcpy.Point(*corner) for corner in box_corners])
box_geom = arcpy.Polygon(pnt_array, spatial_reference=spat_ref)
return box_geom
def get_angle_quadrant(a):
"""As previously defined, angle resctricted [1-360]."""
# Check for 90 degree cardinal directions.
if a == 90:
return "N"
elif a == 180:
return "W"
elif a == 270:
return "S"
elif a == 360:
return "E"
# Determine non-90 quadrants.
if 0 < a < 90:
return "NE"
elif 90 < a < 180:
return "NW"
elif 180 < a < 270:
return "SW"
elif 270 < a < 360:
return "SE"
else:
raise ValueError("Angle value must be numeric and between 1-360, inclusive.")
#-----------------------------------------------------------------------------#
# MAIN
in_points = os.path.join(workspace, in_pnt_fc)
transect_lines = []
sample_box_dict = {}
pnt_desc = arcpy.da.Describe(in_points)
pnt_oid_fld = pnt_desc['OIDFieldName']
pnt_spat_ref = pnt_desc['spatialReference'].factoryCode
with arcpy.da.SearchCursor(in_points, [pnt_oid_fld, 'SHAPE@XY']) as scurs:
for oid, (x, y) in scurs:
# Random angle in degrees, 1-360; convert to radians.
angle_deg = random.randint(1, 360)
angle_rad = math.radians(angle_deg)
print(f"POINT {oid}, ANGLE: {angle_deg}")
# Get x/y offset of random point on the imaginary circle around each point.
start_x = (point_spacing/2) * math.cos(angle_rad)
start_y = (point_spacing/2) * math.sin(angle_rad)
# Using the sample point coordinate as the starting point,
# calculate real-world x and y of both start and end of transect line.
start_point = arcpy.Point(x + start_x, y + start_y)
end_point = arcpy.Point(x - start_x, y - start_y)
# -S/-E suffixes differentiate start/end boxes. Not currently used, but
# documented just in case (also, unique keys are required).
sample_box_dict[f'{oid}-S'] = create_box(box_rotation, start_point,
box_width, angle_deg, pnt_spat_ref)
sample_box_dict[f'{oid}-E'] = create_box(box_rotation, end_point,
box_width, angle_deg, pnt_spat_ref)
# Create a Polyline Geometry Object. Append dict entry to list.
        new_line = arcpy.Polyline(arcpy.Array([start_point, end_point]), spatial_reference=pnt_spat_ref)
transect_lines.append({'OID': oid, 'GEOM': new_line, 'ANGLE': angle_deg})
# Create three feature classes.
for fc, geom in [(fc_line, 'POLYLINE'), (fc_endpoints, 'POINT'), (fc_box, 'POLYGON')]:
arcpy.management.CreateFeatureclass(out_path=workspace, out_name=fc,
geometry_type=geom, spatial_reference=pnt_spat_ref)
print(f"CREATED FC {fc}")
# FC paths.
ln_fc = os.path.join(workspace, fc_line)
pt_fc = os.path.join(workspace, fc_endpoints)
box_fc = os.path.join(workspace, fc_box)
# Add an ID field to the line, point, and box FCs.
for fc in (ln_fc, pt_fc, box_fc):
arcpy.management.AddField(fc, 'POINT_OID', 'SHORT')
# Add angle field to the line FC.
arcpy.management.AddField(ln_fc, 'LINE_ANGLE', 'SHORT')
# Add the point type and the point angle to the point FC.
arcpy.management.AddField(pt_fc, 'POINT_TYPE', 'TEXT', field_length=5)
arcpy.management.AddField(pt_fc, 'POINT_QUAD', 'TEXT', field_length=5)
print("ALL FIELDS ADDED")
# Write transect line features----------------------------------#
with arcpy.da.InsertCursor(ln_fc, ['SHAPE@', 'POINT_OID', 'LINE_ANGLE']) as icurs:
for feature_dict in transect_lines:
icurs.insertRow([feature_dict['GEOM'],
feature_dict['OID'],
feature_dict['ANGLE']])
print("TRANSECTS WRITTEN")
# Write endpoint features----------------------------------#
invert_quad = {"NE": "SW", "SE": "NW", "SW": "NE", "NW": "SE",
"N": "S", "S": "N", "E": "W", "W": "E", }
with arcpy.da.InsertCursor(pt_fc, ['SHAPE@', 'POINT_OID', 'POINT_TYPE', 'POINT_QUAD']) as icurs:
for feature_dict in transect_lines:
start_pnt = feature_dict['GEOM'].firstPoint
end_pnt = feature_dict['GEOM'].lastPoint
angle = feature_dict['ANGLE']
quadrant_1 = get_angle_quadrant(angle)
quadrant_2 = invert_quad[quadrant_1]
icurs.insertRow([start_pnt, feature_dict['OID'], "START", quadrant_1])
icurs.insertRow([end_pnt, feature_dict['OID'], "END", quadrant_2])
print("ENDPOINTS WRITTEN")
# Create Sampling Zones----------------------------------#
# Transect Line Oriented
with arcpy.da.InsertCursor(box_fc, ['SHAPE@', 'POINT_OID']) as icurs:
for oid, geom in sample_box_dict.items():
        icurs.insertRow([geom, int(oid.split('-')[0])])
print("BOXES WRITTEN")
11-27-2023 12:27 PM | 0 | 1 | 873
POST
I tested several methods of adding fields to a blank feature class in an attempt to overcome the sluggish process that is "AddField" in a for loop. Posting all the code might make this unreadable, so here is a summary. All tests result in the same output**--field names are the same, domains are the same, etc. In this example, I am adding 500 fields to a newly created feature class. There is no data involved here; this is schema setup only.

Methods & times (500 fields):
- Add fields in a for loop to a feature class already saved to file – 6:19
- Add fields in a for loop to a memory feature class, then use CreateFeatureclass with the memory copy as the template – 0:56
- Using AddFields (batch version with the "s") – 4:56
- FieldMappings() to create fields, ExportFeatures to file, AssignDomainToField in a loop – 2:09
- FieldMappings() to create fields, ExportFeatures to memory, AssignDomainToField, CreateFeatureclass to file using memory as template – 0:19

So, in my experiments anyway, using a FieldMappings object and working in memory for as long as possible is the absolute fastest. BUT, simply adding fields to a memory feature class and using that as a template during the export to file is probably the simplest solution in terms of lines of code and maintenance.

Results from a more reasonable 40 fields, in the same order as above: 0:25, 0:02, 0:14, 0:09, 0:02. So even when generating a handful of feature classes with a more modest number of fields, it might be best to avoid using AddField in a for loop on a feature class that is already saved to file.

**AddFields does not allow the user to specify Scale and Precision, and I didn't see a quick way to modify those attributes after the fact. In a FGDB it is irrelevant, but this might matter in an enterprise environment. Here's an example of the randomly generated fields for three of the five output methods (all three should be the same).
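For context on how numbers like "6:19" vs. "0:19" can be collected, here's a minimal stopwatch pattern (a generic sketch--the workload below is a stand-in, not the actual field-adding code from this post):

```python
import time

def time_method(label, func, *args, **kwargs):
    """Run func once, print elapsed wall-clock time as M:SS, and return its result."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    minutes, seconds = divmod(int(round(elapsed)), 60)
    print(f"{label}: {minutes}:{seconds:02d}")
    return result

# Stand-in workload; in practice this would wrap one of the field-adding methods.
total = time_method("baseline", lambda: sum(range(1000)))
```

Wrapping each of the five methods in a helper like this keeps the comparison apples-to-apples.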
08-08-2023 04:12 PM | 1 | 0 | 1169
POST
I suppose it is basically the same thing as manipulating fields as one normally would using FieldMappings in FeatureClassToFeatureClass (arcpy or the GUI in Pro)--but in this case, there aren't any fields to start with, just a blank feature class in memory. A bit unintuitive, but it's fast to process! Especially if you don't need to worry about defaults or domains, apparently. As you said, I'll have to just implement the standard AssignDomainToField() and AssignDefaultToField() geoprocessing tools after the initial FieldMapping setup. It's frustrating to not be able to access those properties of the Field object in a writeable way during setup though. Thanks again for all the input!
08-03-2023 05:46 AM | 0 | 0 | 1217
POST
That's correct, that is what I'm doing. The FieldMappings object also contains the names of the fields, the lengths of the fields, the types of the fields, etc. (in this example, it's just one field, but it could be more). I am not changing any fields (they don't exist yet); I am creating a new FieldMappings object to be used during the creation of a new feature class. Those other properties are correctly transferred from the FieldMappings object to the new feature class with FeatureClassToFeatureClass and the field_mappings= parameter. Why wouldn't a writeable "domain" attribute function the same way? I am explicitly trying to get away from AddField (the GP tool you have linked). As I stated, it is much slower than using field maps as above. For example, when adding the same 100 fields using two different methods:
- A for loop using AddField takes 334 seconds.
- Using field mappings in the FeatureClassToFeatureClass parameter, followed by AssignDomainToField, takes 109 seconds--104 of those seconds are spent looping over AssignDomainToField.

AddFields (with the "s") was mentioned below by @DavidSolari, which was helpful, but doesn't work for me without access to Precision, Scale, and Nullability. So, ideally, I would be able to get the writeable ".domain" attribute to work, along with the "defaultValue" property. Maybe I need to modify these during a second pass over the feature class? As it is, I'm getting the exception listed above thrown when trying to simply access "defaultValue". I presume it's usable without error somehow... At this point, it has become a curiosity thing more than an optimization thing. I appreciate your time, and thanks for the input!
08-02-2023 03:09 PM | 0 | 2 | 1236
POST
I'll certainly take a look, but that's essentially what I'm doing with the FieldMappings--creating a template set of fields in memory, and then using FeatureClassToFeatureClass's field_mappings as the template. If I was constantly generating the same outputs over and over, using the template parameter in CreateFeatureclass would indeed be a good solution, but this is sort of for one-off scenarios (in which case I shouldn't get bogged down in over-optimization anyway...) Again, thanks for the input!
08-02-2023 02:59 PM | 1 | 0 | 1240
POST
"AddFields" has its place, however the inability to set Precision, Scale, and Nullability is a deal breaker for my purposes. If iterating back over the fields to change those values is possible, then that could work. In looking at "AlterField," it would appear that only Nullability can be modified using that tool, not Scale or Precision. So then I'm back to FieldMappings. I appreciate the input anyway.
08-02-2023 02:41 PM | 1 | 2 | 1250
POST
I am attempting to use Field/FieldMap/FieldMappings objects to add fields to a new feature class. In testing, it appears that the approach below is much faster than looping over a set of field settings and using the "AddField" geoprocessing tool, particularly for a large number of fields. Here I'm just testing one field. Creating the geodatabase, the feature class, the domain (and the code) all work fine. So does adding the field (with some of the properties). What does not work is assigning the domain to the field--line 23, where I am assigning the domain using the "domain" property, appears to have no effect. Additionally, trying to set a "defaultValue" throws an exception. Both of these properties appear to be writable according to the Field object documentation: Field—ArcGIS Pro | Documentation. Anyone know what's going on here, or what a workaround might be?
fldr = r"full\folder\path"
gdb = arcpy.management.CreateFileGDB(fldr, "TEST")
fc = arcpy.management.CreateFeatureclass("memory", "TEST_FC_MEMORY", "POLYGON")
arcpy.management.CreateDomain(in_workspace=gdb, domain_name="TEST_DOM",
domain_description="TEST", field_type="TEXT",
domain_type="CODED")
arcpy.management.AddCodedValueToDomain(in_workspace=gdb, domain_name="TEST_DOM",
code=1, code_description="A")
# Create Field, FieldMap, FieldMappings.
new_fld = arcpy.Field()
field_mapping = arcpy.FieldMap()
field_mappings = arcpy.FieldMappings()
new_fld.name = "TEST_FIELD"
new_fld.type = "String"
new_fld.length = 10
new_fld.aliasName = "ALIAS!"
new_fld.isNullable = False
# This doesn't seem to have any effect.
new_fld.domain = "TEST_DOM"
# This appears to be fine, but of course has no effect.
new_fld.madeUpAttribute = "X"
# Exception: NameError: The attribute 'defaultValue' is not
# supported on this instance of Field.
# new_fld.defaultValue = "X"
field_mapping.outputField = new_fld
field_mappings.addFieldMap(field_mapping)
arcpy.conversion.FeatureClassToFeatureClass(in_features=fc, out_path=gdb,
out_name="TEST_FC", field_mapping=field_mappings) Feature class is created, domain is created with test code/description, and field is added (without the domain being assigned).
08-02-2023 12:50 PM | 1 | 9 | 1284
POST
Thanks for the assistance @Anonymous User, I was able to get this working. I have a follow-up question that I'm hoping you may have some insight into. My modified code is below. The extent string is computed beforehand, but it seems to work as expected, based on the polyline geometry being read via a SearchCursor.
import pathlib
from string import ascii_letters

import arcpy
import requests

def save_raster(name, output_directory, ext_string):
""""""
url = r"https://elevation.nationalmap.gov/arcgis/rest/services/3DEPElevation/ImageServer/exportImage?"
params = {"f": "image",
"bbox": ext_string,
"bboxSR": 102005,
"imageSR": 102005,
"format": "tiff",
"pixelType": "F32",
"size": "600,600",
"noDataInterpretation": "esriNoDataMatchAny",
"interpolation": "RSP_BilinearInterpolation"}
response = requests.get(url=url, params=params, allow_redirects=True)
name = "".join([char if char in ascii_letters else "" for char in name])
fullpath = pathlib.Path(output_directory, f"{name}.tif")
with open(fullpath, mode="wb") as image_file:
image_file.write(response.content)
    arcpy.management.CalculateStatistics(str(fullpath))
The question I have is about specifying pixel size, which oddly enough does not seem to be an option based on the documentation here: https://developers.arcgis.com/rest/services-reference/enterprise/export-image.htm Below, my pixels are roughly 10x10 (meters) because I've specified a size of 600,600. Here, my pixels are much, much larger because the size I've specified is 30x30, width/height. Using the dimensions of the polyline extent (extent.XMax - extent.XMin) and the intended pixel resolution (30 meters, for example), some math can be performed to get the "size=&lt;width&gt;,&lt;height&gt;" that yields approximately the intended pixel resolution, but it's pretty cumbersome compared to just passing the resolution as a parameter. It's very odd to me that the call takes a pixel count (when is this property relevant?) instead of the actual pixel resolution, which is one of the more important properties of a raster dataset...
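The extent-to-size arithmetic described above can be wrapped in a small helper (plain Python; the function name and signature are mine, not part of the REST API):

```python
def size_for_resolution(xmin, ymin, xmax, ymax, cell_size):
    """Return the "size=<width>,<height>" value string that yields approximately
    the requested pixel resolution for a bbox (all values in the same units)."""
    width = max(1, round((xmax - xmin) / cell_size))
    height = max(1, round((ymax - ymin) / cell_size))
    return f"{width},{height}"

# A 6000 x 3000 ft extent at 10 ft pixels -> a 600 x 300 image request.
assert size_for_resolution(0, 0, 6000, 3000, 10) == "600,300"
```

The result only approximates the target resolution, since the service divides the bbox evenly into the requested pixel counts.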
07-24-2023 04:45 PM | 0 | 0 | 736
POST
I'm not totally sure I follow what you're after, but here's an attempt based on my understanding. I have some points and polygons, both with fields "FIELD1" and "FIELD2". For visual purposes, FIELD1 holds colors, and FIELD2 holds a numeric value that (as I understand your instructions) is only filled out in the polygon layer. OIDs are shown below for additional clarity, the white lettering with black halos being the polygon layer. My approach is to perform a Spatial Join, and then perform comparisons between the FIELD1 values--if there is a spatial match, and FIELD1 also matches, then FIELD2 should be copied from the polygon to the point. I don't know what your experience with Python is, but you should be able to update the values in the "INPUTS" section, and either run this externally in the IDE of your choice, or simply paste the below directly into the Python window in ArcGIS Pro and hit "enter."
import arcpy
arcpy.env.overwriteOutput = True
# INPUTS----------------------------------------------------------------------#
# All of these "INPUTS" must be updated by the user.
# Leave the leading "r" and the quotations.
input_point = r"THIS IS A FILE PATH TO YOUR POINT FEATURE CLASS"
input_polygon = r"THIS IS A FILE PATH TO YOUR POLYGON FEATURE CLASS"
point_flds = ["FIELD1", "FIELD2"]
polygon_flds = ["FIELD1", "FIELD2"]
output_point = r"THIS IS A FILE PATH TO YOUR OUTPUT POINTS"
# MAIN----------------------------------------------------------------------#
# This block appends "_1" if input fields are the same between feature classes.
# Spatial join output cannot have fields of the same name.
updated_polygon_flds = []
for pnt_fld, poly_fld in zip(point_flds, polygon_flds):
if pnt_fld == poly_fld:
updated_polygon_flds.append(f"{poly_fld}_1")
else:
updated_polygon_flds.append(poly_fld)
# Join the polygon info to the point geometry. Output will be points.
arcpy.analysis.SpatialJoin(
target_features=input_point,
join_features=input_polygon,
out_feature_class=output_point,
join_operation="JOIN_ONE_TO_MANY",
join_type="KEEP_ALL",
match_option="INTERSECT", )
# Read the output feature class; specifically, field lists from both inputs.
with arcpy.da.UpdateCursor(output_point, point_flds + updated_polygon_flds) as ucurs:
for pnt_f1, pnt_f2, poly_f1, poly_f2 in ucurs:
# If spatially related, and Field1's are equivalent, copy over Field2
# from the polygon feature class into the point feature class.
if pnt_f1 == poly_f1:
            ucurs.updateRow([pnt_f1, poly_f2, poly_f1, poly_f2])
Here's the output. To be clear, the first FIELD1 and FIELD2 are from the point, and the second FIELD1 and FIELD2 are from the polygons. These are aliases, of course. For example, the original point #1 (TARGET_FID = 1 in the output) is RED. It's within a RED polygon, therefore we should calculate FIELD2 in the points to be 3, because that's FIELD2 in the polygon. OID 3 in the below table has a "JOIN_FID" of -1, signifying it's not within a polygon at all. So the point-version of FIELD2 is not calculated, and both FIELD1 and FIELD2 (at the end of the table, from the polygon) are null. TARGET_FID 4 is within a polygon, but the colors don't match--FIELD2 is not calculated. TARGET_FID 5 is represented twice, because it overlaps two polygons. One of the copies of this point matches the underlying polygon, and is calculated. The other doesn't, and is not calculated.
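The "_1" renaming logic from the script can be exercised on its own, without arcpy:

```python
def resolve_joined_names(point_flds, polygon_flds):
    """Mirror the "_1" suffixing Spatial Join applies when both inputs
    share a field name (a standalone copy of the loop used above)."""
    updated = []
    for pnt_fld, poly_fld in zip(point_flds, polygon_flds):
        updated.append(f"{poly_fld}_1" if pnt_fld == poly_fld else poly_fld)
    return updated

# Matching names get the suffix; distinct names pass through unchanged.
assert resolve_joined_names(["FIELD1", "FIELD2"], ["FIELD1", "OTHER"]) == ["FIELD1_1", "OTHER"]
```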
07-11-2023 08:24 PM | 0 | 0 | 1052
POST
I am not aware of a way to do this by adding the CSV and running a built-in geoprocessing tool (someone else certainly might be), but if I understand you correctly, I believe the below will achieve what you're looking for using Python.
import arcpy
import csv
arcpy.env.overwriteOutput = True
# INPUTS---------------------------------------------#
input_csv = r"FULL PATH TO YOUR CSV"
# Output Geometry Information - type and the integer EPSG code that refers to the spatial reference.
geom_type = "POLYGON" # This could also be POINT, POLYLINE, etc.
geom_sr = 3857 # you can Google your spatial reference name and "EPSG" to get this number
# Output Location and Name
out_gdb = r"FULL PATH TO YOUR GEODATABASE"
out_name = "GEOMETRY_FROM_WKT" # Leave if you want, this is the output feature class name
#-------------------------------------------------------#
# Create a blank output FC using the provided geometry type and spatial reference.
out_fc = arcpy.management.CreateFeatureclass(out_path=out_gdb,
out_name=out_name,
geometry_type=geom_type,
spatial_reference=geom_sr, )
# Open CSV. Create Reader object.
with open(input_csv, "r", newline="") as csvfile:
csvreader = csv.reader(csvfile, delimiter=",")
# Create InsertCursor.
# For each row in CSV, attempt to convert WKT in column 1 to Esri Geometry object.
# All other columns are read, but ignored.
with arcpy.da.InsertCursor(out_fc, "SHAPE@") as icurs:
for wkt_geom, *other_columns in csvreader:
# Try to convert the WKT to a geom object.
try:
print(f"CONVERTING WKT GEOM...\n{wkt_geom}")
                geom_from_wkt = arcpy.FromWKT(wkt_geom, arcpy.SpatialReference(geom_sr))
# Skip if conversion fails, could be bad formatting or a spreadsheet header.
except TypeError:
print(f"SKIPPING {wkt_geom}; IS THIS A HEADER?")
continue
# Attempt to write the Geometry object to the new feature class.
icurs.insertRow([geom_from_wkt])
If you update the five items in the "INPUTS" section, you should be able to then copy and paste this whole thing into the Python window in ArcGIS Pro, and hit "enter" to run it. You want to leave the quotes, and also the preceding "r" where applicable. Here is what my input CSV looks like. Note that headers are optionally used, and other fields (columns) can be included, but they will be ignored in the output: And the outputs: Note there is a single part, a multipart (two triangles in the middle), and a polygon with internal rings (a hole). This script will ignore headers in the CSV, and it essentially ignores fields other than column 1, which is where the WKT geometry must be stored.
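The row unpacking and header skipping can be previewed without ArcGIS at all. This sketch substitutes a simple WKT-prefix check for the TypeError catch above, purely so it runs without arcpy.FromWKT:

```python
import csv
import io

# A stand-in for the CSV on disk: a header row, then WKT in column 1
# plus an extra column that the script above would ignore.
sample = io.StringIO(
    "wkt,name\n"
    '"POLYGON ((0 0, 0 1, 1 1, 0 0))",triangle\n'
)
reader = csv.reader(sample, delimiter=",")
geoms = []
for wkt_geom, *other_columns in reader:
    # The real script hands wkt_geom to arcpy.FromWKT and skips rows that
    # fail conversion (which is how the header gets dropped); here we just
    # check for a WKT keyword prefix instead.
    if not wkt_geom.upper().startswith(("POINT", "LINESTRING", "POLYGON", "MULTI")):
        continue
    geoms.append(wkt_geom)
```

The quoted WKT survives the commas inside it because the csv module handles the quoting, which is why column 1 arrives as a single string.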
06-29-2023 11:16 AM | 3 | 0 | 3691
POST
I'm not super familiar with WKT representations of geometry objects, but would something like this work for you?
import arcpy
import csv
# Leave the quotes and the leading "r" in both lines below. Replace my text with your paths.
input_fc = r"THE FULL PATH TO YOUR FEATURE CLASS"
out_csv = r"THE FULL PATH OF A CSV TO BE CREATED"
# Get all the field names. Explicitly specify "SHAPE@" for the full geometry object.
fields_curs = ["SHAPE@"] + [f.name for f in arcpy.ListFields(input_fc) if f.type != "Geometry"]
# Create a writeable CSV file. Create the writer object.
with open(out_csv, "w", newline="") as csvfile:
csvwriter = csv.writer(csvfile, delimiter=",")
csvwriter.writerow(fields_curs) # Remove this line if you don't want headers.
# Write each row in the feature class, including the WKT geometry, to the CSV.
with arcpy.da.SearchCursor(input_fc, fields_curs) as scurs:
for geom, *flds in scurs:
            csvwriter.writerow([geom.WKT, *flds])
Assuming you replace the two variables at the top with your actual paths, this should give you an output that contains those geometries as WKT. After updating those variables, you should be able to copy/paste this directly into the Python window within ArcGIS Pro. OID 2 is a multipart polygon. Output: Assuming I have understood what you're after, this would allow you to stay in the Esri environment. But, if all it takes is a couple clicks in QGIS, then that sounds like a pretty good option too.
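If you want to see the CSV side of this without ArcGIS, a stand-in object with a `.WKT` attribute is enough (`FakeGeometry` is mine, mimicking only the one property the script reads from arcpy geometries):

```python
import csv
import io

class FakeGeometry:
    """Stand-in for an arcpy geometry: exposes only the .WKT attribute used above."""
    def __init__(self, wkt):
        self.WKT = wkt

rows = [(FakeGeometry("POINT (1 2)"), 101, "A"),
        (FakeGeometry("POINT (3 4)"), 102, "B")]

buffer = io.StringIO()
csvwriter = csv.writer(buffer, delimiter=",")
csvwriter.writerow(["SHAPE@", "OID", "NAME"])  # header row
# Same unpacking pattern as the SearchCursor loop above.
for geom, *flds in rows:
    csvwriter.writerow([geom.WKT, *flds])

lines = buffer.getvalue().splitlines()
```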
06-28-2023 11:14 AM | 0 | 0 | 2077
POST
I am trying to download/export a raster based on a query of USGS's 3DEP Image Service. The endpoint (if that's the correct terminology) is here: https://elevation.nationalmap.gov/arcgis/rest/services/3DEPElevation/ImageServer/exportImage?bbox=-2.00375070672E7,-1689391.823360186,2.0037507532527123E7,1.88090019719E7 Here is what this looks like using the HTML interface. Clicking "Export Image (GET)" initiates a download of the raster as a .tif file, exactly as I need. I would like to be able to do this using Python/ArcPy exclusively (updating the parameters as needed), as part of a larger script. I have had a shockingly difficult time Googling this or finding any resources about how to do this, so I'm missing some very basic concepts here. The code below returns... something, but I don't know how to interact with whatever it is. I just need the actual raster data (a tif? a numpy array? anything?) to do some other geoprocessing with.
import requests
url = \
r"https://elevation.nationalmap.gov/arcgis/rest/services/3DEPElevation/ImageServer/exportImage?"
params = {"f": "html",
"bbox": [-2.00375070672E7, -1689391.823360186, 2.0037507532527123E7, 1.88090019719E7],
"imageSR": 102005,
"format": "tiff",
"pixelType": "S16",
"noDataInterpretation": "esriNoDataMatchAny",
"interpolation": "RSP_BilinearInterpolation"}
response = requests.get(url=url, params=params, allow_redirects=True)
print(response.text)
# Something like this...
# queried_DigitalElevationModel = response.getData()?
# arcpy.Raster(queried_DigitalElevationModel).save("localfilepath or memory")
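One detail that might bite when moving from the HTML form to requests: passing "bbox" as a Python list makes requests encode it as four repeated bbox= parameters, while the form sends a single comma-separated value. A quick offline check (no request is made; this only inspects the encoding):

```python
from urllib.parse import urlencode

bbox = [-2.00375070672E7, -1689391.823360186, 2.0037507532527123E7, 1.88090019719E7]

# A list value becomes repeated parameters -- likely not what the service expects.
repeated = urlencode({"bbox": bbox}, doseq=True)

# Joining into one comma-separated string matches the HTML form's query.
joined = urlencode({"bbox": ",".join(str(v) for v in bbox)})
```

With the comma-joined form (and "f": "image" instead of "f": "html"), `response.content` is the raw .tif bytes, which can be written to disk with `open(path, "wb")`.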
05-27-2023 03:09 PM | 0 | 2 | 878
POST
Thanks for sharing! None of the below is criticism, just observations; I think both of our approaches are good for different scenarios, and your version has shown me some errors and improvements in my own version.

I like your approach because it may be more applicable than mine for some real-world situations. My computer would catch fire (or just take forever) if I tried to use my code to process a real-world island polygon, because there would likely be hundreds of thousands (more?) of vertex pairs--a problem that you solve by "generalizing" with "Generate Points Along Line." So, I will implement this change in my own tool based on a total vertex count or something like that. Reviewing your approach has also shown me that while I still stand behind my edge transect approach, my internal line approach is INCORRECT. So, thanks for offering a different perspective. My longest line (measurements all in US Feet) is shown in Black, yours is in Red, and my (I am now realizing) incorrect "entirely internal" line is shown in Blue. So, disregard Blue; I need to remove that feature.

For absolute accuracy, I think my two-vertex algorithm is the best approach, but for realistic use on complex polygons (islands, parcels, etc.) your approach could likely get pretty darn close, and probably requires far less processing time. Another example--again, mine in Black, yours in Red. Another thing to watch out for with your toolbox would be holes in polygons, as well as concave polygons. As you can see, your line is "longer" here, but passes through a hole. The second image shows your line going out of bounds in a concave polygon. Thanks for the alternate approach, very helpful in thinking about this problem further.
04-14-2023 02:36 PM | 0 | 0 | 263