POST | 3 weeks ago
Can't replicate. Can you be more specific about the tool you are using or what the issue is? I'm on Pro 3.3.1 and can't comment on ArcMap. Here are two domains, one numeric, one text, and here they are applied to a feature class. Exporting to Excel, the export shows the descriptions, not the codes. This doesn't work for you?
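For reference, which of the two you get is controlled by the "Use domain and subtype description" parameter on Table To Excel. A minimal sketch, with a hypothetical layer name and output path; the keyword values ("NAME", "DESCRIPTION"/"CODE") are from memory, so verify them against the tool documentation:

import arcpy

# Hypothetical layer name and output path, for illustration only.
# Third argument: use field names rather than aliases for column headers.
# Fourth argument: export domain descriptions; pass "CODE" to export the raw codes instead.
arcpy.conversion.TableToExcel("MyFeatureClass",
                              r"C:\temp\export_check.xlsx",
                              "NAME",
                              "DESCRIPTION")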
POST | 3 weeks ago
Sure, happy to help you work through anything further if you have additional questions!
POST | 4 weeks ago
For posterity, this seems like a much more efficient option (working with pandas Spatial Data Frames instead of cursors and Select By Location/Attribute):

import os
from typing import Union

import arcpy
import pandas as pd
from arcgis.features import GeoAccessor, GeoSeriesAccessor  # registers the DataFrame .spatial accessor


def new_method(pnt: Union[str, os.PathLike],    # full path to points
               box: Union[str, os.PathLike],    # full path to boxes
               box_id: str,                     # field name containing Box ID
               sample_count: int,               # how many points per box?
               output: Union[str, os.PathLike]  # full path to output FC
               ) -> None:                       # function has no return
    """Spatially join points with boxes and convert to an SDF. Group points by the
    joined Box ID, sample N points per group, and export the sampled Spatial Data
    Frame back to a feature class."""
    pnt_box_join = arcpy.analysis.SpatialJoin(pnt, box, r"memory\pnt_box_join").getOutput(0)
    # Convert FC to SDF. Drop unused columns. GroupBy the Box ID column.
    # Without replacement, sample N rows from each group. Export SDF back to FC.
    sdf = pd.DataFrame.spatial.from_featureclass(pnt_box_join)
    sdf = sdf.drop(columns=["Join_Count", "TARGET_FID", "Shape_Length", "Shape_Area"])
    sampled_sdf = sdf.groupby(box_id, group_keys=False).sample(n=sample_count)
    sampled_sdf.spatial.to_featureclass(location=output, sanitize_columns=False)
    print("SAMPLED SDF -> FC EXPORTED")
|
POST | 10-09-2024 07:08 PM
Here's an attempt. Hexagons have "populations" between 1 and 5 (light to dark). Black points are the child care centers, and their capacities are shown in the adjacent black labels. Regions made up of hexagons that have aggregated enough "population" to reach each center's capacity are shown in the colors, with the "actual" population gathered labeled in the callouts.

The caveats:
- Regions can overlap if centers are too close (see red/blue). There are lots of fixes for this, depending on what you're after.
- Child care centers can end up "over" capacity. See the green region, where the max capacity is 80 and 81 have been "gathered" from adjacent hexagons. Again, lots of fixes, depending on needs.

Code below, in simple terms: for each center, add the closest hexagon to a region, followed by the next closest, etc., until enough hexagons have been added that the summed population reaches the capacity of the child care center. This runs extremely quickly with ~1,300 hexagons, but is definitely not the most optimized. You'll have to edit this code if you want to use it with your own feature classes. Happy to provide more details if you need them.

import arcpy
import math
import os

arcpy.env.workspace = r"YOUR_GEODATABASE_PATH"
arcpy.env.overwriteOutput = True

# Names of the hexagon bins and the associated points.
hex_bin = "HexBin"
hex_pnt = "HexAggPoint"

# Create an info dictionary about the hexagons:
# {oid: {HEX_GEOM: geometry, POP: integer}}
with arcpy.da.SearchCursor(hex_bin, ["OID@", "SHAPE@", "POP"]) as scurs:
    hex_pop_dict = {oid: {"HEX_GEOM": geom, "POP": pop} for oid, geom, pop in scurs}

# Empty dictionary to store info about our output regions.
clusters = {}

# Loop over points, logging assembled region information in the 'clusters' dict.
with arcpy.da.SearchCursor(hex_pnt, ["OID@", "SHAPE@XY", "CAPACITY"]) as scurs:
    for oid, centroid, capacity in scurs:
        # Get a list of (hex_id, distance_to_point) tuples.
        distances = []
        for hex_id, hex_record in hex_pop_dict.items():
            hex_centroid_coor = hex_record["HEX_GEOM"].centroid
            dist = math.dist(centroid, (hex_centroid_coor.X, hex_centroid_coor.Y))
            distances.append((hex_id, dist))
        # Sort tuples by ascending distance from the point in question.
        distances.sort(key=lambda x: x[1])

        # Loop over the distance tuples, using the hex ID to accumulate population per hex.
        # Log the hex IDs that are used. Stop adding more hexes once the capacity
        # of the point is reached.
        gathered_hexes = []
        accumulated_pop = 0
        for hex_id, _ in distances:
            gathered_hexes.append(hex_id)
            accumulated_pop += hex_pop_dict[hex_id]["POP"]
            if accumulated_pop >= capacity:
                break

        # For this point ID, associate the list of hexes that will create the region.
        clusters[oid] = gathered_hexes
        print(f"POINT ID: {oid}\nCAPACITY: {capacity}"
              f"\nACTUAL ACCUM: {accumulated_pop}"
              f"\nHEX IDS: {gathered_hexes}\n")

# CREATE OUTPUT FC, ADD FIELDS
region_fc = arcpy.management.CreateFeatureclass(out_path=arcpy.env.workspace,
                                                out_name="HEX_REGION",
                                                geometry_type="POLYGON",
                                                spatial_reference=2232)
for fld_name in ["POINT_ID", "HEX_ID", "POP"]:
    arcpy.management.AddField(region_fc, fld_name, "SHORT")

# WRITE OUTPUT RECORDS
with arcpy.da.InsertCursor(region_fc, ["SHAPE@", "POINT_ID", "HEX_ID", "POP"]) as icurs:
    # For each point and its list of hexes that should create the region...
    for point_id, hex_id_list in clusters.items():
        # ...and for each hex in that list...
        for hex_id in hex_id_list:
            # Using the hex's ID, insert the geometry and population,
            # plus the ID of both the parent point and the hex.
            icurs.insertRow([hex_pop_dict[hex_id]["HEX_GEOM"],
                             point_id,
                             hex_id,
                             hex_pop_dict[hex_id]["POP"]])

# Dissolve the hex polygons based on the associated point. Sum the pop of each region.
dissolve_fc_path = os.path.join(arcpy.env.workspace, "HEX_REGION_DISSOLVE")
arcpy.management.Dissolve(in_features="HEX_REGION", out_feature_class=dissolve_fc_path,
                          dissolve_field="POINT_ID", statistics_fields=[["POP", "SUM"]])
|
POST | 10-09-2024 02:24 PM
I would explore using Generate Rectangles Along Lines or Strip Map Index Features:
Generate Rectangles Along Lines (Data Management)—ArcGIS Pro | Documentation
Strip Map Index Features (Cartography)—ArcGIS Pro | Documentation
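As a starting point, here is a minimal sketch of the first tool; the layer name, output path, and dimensions are hypothetical, and the parameter names are from memory, so double-check them against the documentation linked above:

import arcpy

# Hypothetical line layer, output location, and rectangle dimensions.
arcpy.management.GenerateRectanglesAlongLines(
    in_features="CenterlineLayer",
    out_feature_class=r"C:\Data\Example.gdb\rectangles",
    length_along_line="5000 Meters",
    length_perpendicular_to_line="2000 Meters")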
|
POST | 10-09-2024 01:54 PM
Not sure if I understand your exact needs, but you can try something like below. I think a more efficient method would be doing a spatial join in memory and then working with a pandas DataFrame to select three points from each subgroup, but this may get you started. The code below will randomly select exactly three points within each "box".

import arcpy
import numpy as np

# A list of point OIDs to eventually export.
point_oids_to_export = []

# Iterate over your sampling boxes.
with arcpy.da.SearchCursor('SamplingPointBox', "OID@") as scurs:
    for oid, in scurs:
        # Select the current box, then select all the points within that box.
        box_selection = arcpy.management.SelectLayerByAttribute('SamplingPointBox', 'NEW_SELECTION', f"OBJECTID = {oid}")
        point_selection = arcpy.management.SelectLayerByLocation('SamplingPoint', 'INTERSECT', box_selection)
        # Get a list of the OIDs of the points within the box.
        oid_value_list = [r[0] for r in arcpy.da.SearchCursor(point_selection, ["OID@"])]
        # Choose three of those OIDs, never picking the same one twice.
        rng = np.random.default_rng()
        chosen_oids = rng.choice(oid_value_list, 3, replace=False)
        # Add those three points to the running list (as plain ints for the query below).
        point_oids_to_export.extend(int(o) for o in chosen_oids)

# Select all the collected points based on the full list of OIDs.
query = f"OBJECTID IN ({', '.join(str(o) for o in point_oids_to_export)})"
arcpy.management.SelectLayerByAttribute('SamplingPoint', 'NEW_SELECTION', query)
## export your selection from here, or add "arcpy.conversion.ExportFeatures()"
|
IDEA | 10-09-2024 11:55 AM
Not entirely sure I follow, but have you looked into Attribute Rules (specifically, "Calculation"-style rules)? They can do a lot of automatic calculations for you in your tables. https://pro.arcgis.com/en/pro-app/latest/help/data/geodatabases/overview/an-overview-of-attribute-rules.htm
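If a calculation rule does fit, one can also be added with arcpy. This is a minimal sketch with a hypothetical table, hypothetical fields, and an assumed Arcade expression; the parameter keywords are worth verifying against the Add Attribute Rule documentation:

import arcpy

# Hypothetical table, fields, and expression, for illustration only:
# copy the value of FIELD_A into FIELD_B whenever a row is inserted or updated.
arcpy.management.AddAttributeRule(
    in_table=r"C:\Data\Example.gdb\MyTable",
    name="CopyFieldA",
    type="CALCULATION",
    script_expression="$feature.FIELD_A",
    triggering_events=["INSERT", "UPDATE"],
    field="FIELD_B")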
|
POST | 10-09-2024 09:18 AM
Can you clarify this? Where is "1234" coming from out of "Site1_UAV_042018"? What parts of the "Name" field would you like to calculate over to the "Site_ID" field? It's not really clear what you're asking.
|
POST | 10-09-2024 08:46 AM
Is this a one-time calculation? Or does this need to be dynamic as your table undergoes changes? What would happen if two rows got deleted for some reason? Say BY0001 and BY0002 were removed (accidentally, maybe); how would those IDs be assigned if those rows were added back in? Is there any reason that once an ID is assigned, it must remain that same ID? Or can the IDs be recalculated any time changes are made to the table, so they are always guaranteed unique? I think if this is a one-time Field Calculate, it is straightforward (see the sketch below), but perhaps less straightforward if this table will be changing a lot and all those IDs need to be managed/updated on the fly.
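For the one-time case, a minimal sketch; the table path, field name, "BY" prefix, and zero-padding are all hypothetical assumptions:

import arcpy

# Hypothetical table and field; "BY" prefix and 4-digit padding are assumptions.
table = r"C:\Data\Example.gdb\MyTable"
counter = 0
with arcpy.da.UpdateCursor(table, ["ID_FIELD"]) as ucurs:
    for row in ucurs:
        counter += 1
        row[0] = f"BY{counter:04d}"  # BY0001, BY0002, ...
        ucurs.updateRow(row)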
|
POST | 10-09-2024 08:20 AM
I think @BarryNorthey is right. See below, where my third point, OBJECTID 7, does not show because the whole field is just spaces (so it is labeling, but the label is unseen because it's spaces!). An alternative to the Python suggestion above would be to improve your label expression to trim the white space.

// Use Trim() to reduce a field full of spaces down to "".
DefaultValue(Trim($feature.Label_Main), $feature.Label_Backup)

And now here is what we have: the "Label_Main" value is reduced from " " down to "", is therefore considered empty, and so defaults to Label_Backup.
|
POST | 10-09-2024 08:05 AM
Can you clarify this a bit, perhaps with a diagram or picture? Do you want to dynamically count the number of points that fall within a given polygon? Or simply count the number of points that have the same "id" as a polygon? I think both Polygons and Points have an "id" field, but where is the "No" field? In the Polygon feature class?
|
POST | 10-09-2024 07:47 AM
Arcade uses "==", written exactly as shown there, to test for null specifically. IsEmpty() is used frequently as well, but be aware that it will also return true for empty strings.
|
POST | 10-09-2024 07:22 AM
The below works for me when I recreate your situation above, assuming you only want to delete the first point found with an ID matching the polygon, and not any subsequent ones. Other than some line formatting, I think the only difference between mine and yours is that I am explicitly creating the "delete" dictionary using the "Dictionary()" constructor (marked below). I have had a similar problem when loading things into these return keyword dictionaries the way you are doing it; no idea why, but this has fixed my issue. At this point I pretty much always try to use the explicit constructor when I am able to.

// Get teig_id from the Tilknytningspunkter (attachment point) record being deleted.
var teig_id = $feature.Teig_ID
var fs_tilknytningspunkter = FeatureSetByName($datastore, "Tilknytningspunkter", ["GlobalID", "TEIG_ID"])
var tilknytningspunkt = First(Filter(fs_tilknytningspunkter, "TEIG_ID=@teig_id"))
Console(tilknytningspunkt)

var deleteList = []
if (tilknytningspunkt == null) {
    return
} else {
    // ***ONLY DIFFERENCE?***
    var delete = Dictionary('globalID', tilknytningspunkt.GlobalID)
    Push(deleteList, delete)
}
Console(deleteList)

return {
    'edit': [{
        'className': 'Tilknytningspunkter',
        'deletes': deleteList
    }]
}
|
POST | 10-08-2024 04:26 PM
Your code is not checking whether the VALUES in those fields are empty; it is checking whether those specific strings/text values are empty. It would be like writing !IsEmpty("here is a big long text string"). You want to be checking "$feature.M_InstallYear", where "$feature" is a token representing the row (or a subset thereof, depending), and you're asking for the value in the M_InstallYear field. Also, you're trying to return three results, so you need to specify those fields in your return results dictionary, I believe. You also don't need to return the blank brackets; just don't return anything if you don't need to perform a calculation. Alternatively, you could always set these three values to null without even doing the "if" statement. I can't imagine you're saving that much processing overhead by checking first, but someone else might know more about that.

// Specifically list the fields to be used in this AR.
// Not always clear on when this is necessary (don't think it is here), but useful to know about.
Expects($feature, "M_AssetNum", "M_InstallYear", "M_Material")

if (!IsEmpty($feature.M_AssetNum) || !IsEmpty($feature.M_InstallYear) || !IsEmpty($feature.M_Material)) {
    return {
        "result": {
            "attributes": {
                "M_AssetNum": null,
                "M_InstallYear": null,
                "M_Material": null
            }
        }
    }
}
|
POST | 08-29-2024 02:37 PM
I am confused by behavior I am experiencing when using a Composite Relationship Class (with forward messaging) in conjunction with a Topology.

Below I have a "parent" polygon (black) that has relationship classes with a "child" polygon (blue) and a "child" point (black plus sign). Both are composite relationships, and both have FORWARD messaging turned on. Things work as expected, meaning when I rotate ONLY the parent polygon, the children rotate with it. There are no Feature Datasets, Subtypes, or Topologies involved in this example.

The example below is more complex, and it is where I am having problems. There are subtypes and a topology involved. There are again 3 feature classes:
- wotus_poly [subtypes: WETLAND, WATER]: a "WETLAND" type shown in green
- pool: no subtypes, shown in blue
- test_plot_pt [subtypes: WETLAND, UPLAND]: two "WETLAND" and one "UPLAND", shown in black and red, respectively

Relationships (2): wotus_poly one-to-many composite with pool, forward messaging turned on, plus a second relationship class with the same properties with test_plot_pt.

Topology: there is also a topology rule set up that requires all WETLAND subtype test plots to be within a WETLAND subtype polygon.

Here is what the structure in Catalog View looks like (this is within a Feature Dataset for the topology). Here is a visual of a possible arrangement, one where the topology rule would validate correctly.

The issue is that the "FORWARD" messaging aspect of the relationships does NOT work as expected. Rotating the parent has no impact at all on the location/arrangement of the child pools or points. Is this because these features are involved in a Feature Dataset? Is it because there's a topology involved in at least two of these feature classes? Are subtypes problematic here? I have verified that DELETING the parent polygon, in green, will also delete the related children, as expected in a COMPOSITE relationship. So there is some interference happening that prevents the geometric updates via forward messaging. Any help would be appreciated!
|