POST
Here's an admittedly over-verbose version of this, but I use this basic template structure a lot for these "constraint" or domain-type on-the-fly validation checks. You can incorporate several different checks and error messages, depending on what you're validating.

/*
SCRIPT SUMMARY:
AUTHOR:
DATE:
*/
//---------------------------------------------------------------------//
// INPUTS
var enteredDate = $feature.A_Date
// OUTPUT FIELDS
var dateFldName = "A_Date"
//---------------------------------------------------------------------//
// FUNCTIONS
function dateValidate(dateVal) {
    // Initialize results dictionary
    var result = Dictionary("VALUE", dateVal, "ERROR", null)
    // Null is fine, return value as-is
    if (IsEmpty(dateVal)) {
        return result
    }
    // CHECK 1: if entered date is in the future, return error
    var curDate = Now()
    if (dateVal > curDate) {
        result["ERROR"] = "This date is in the future, and therefore invalid."
        return result
    }
    // Otherwise return value as-is
    return result
}
//---------------------------------------------------------------------//
// MAIN
var fldUpdates = Dictionary()
// Validate using custom function.
var result = dateValidate(enteredDate)
// If error found, return message and block the edit
if (!IsEmpty(result["ERROR"])) {
    var msg = `\n\n• SUBMITTED VALUE: '${Text(result["VALUE"], "MM/DD/YYYY")}'`
        + `\n\n• ERROR: ${result["ERROR"]}`
        + `\n\n• GENERAL NOTE: Date must be today or earlier.`
    return {"errorMessage": msg}
}
// Else, update field value (in this case, return the value as-is)
else {
    fldUpdates[dateFldName] = result["VALUE"]
}
return {"result": {"attributes": fldUpdates}}
Posted 08-13-2025, 11:22 AM
POST
I like to use "Calculation"-style attribute rules as constraints and/or validators (rather than using either of those specifically named flavors of Attribute Rules, but your opinion may vary). Here is a "Date" field called "A_Date" where I've entered a date in the past, today's date, a NULL value, and then finally a date in the future (8/15/2025 at the time of writing). This rule accepts the first three and fails with a specific error message when a future date is entered. This is sort of a hacked-together "dynamic" domain, which seems like what you want. Using similar logic, I think you could also set something up where you only allow "yesterday," "today," or "tomorrow," that sort of dynamic date range scenario (see the sketch below the code). I believe there are time zone options available in Arcade too, if you want to go down that road.

// My date field.
var enteredDate = $feature.A_Date
// Get current date using built-in function.
var curDate = Now()
// If entered date is in the future, block it from being entered. Return message to user.
if (enteredDate > curDate) {
    return {
        "errorMessage": "This value is in the future, and therefore invalid."
    }
}
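As a rough sketch of that "yesterday / today / tomorrow" idea (the field name and the exact window are assumptions on my part, not something from the rule above):

// Hypothetical dynamic date range: only allow yesterday, today, or tomorrow.
var enteredDate = $feature.A_Date
var today = Today()                              // midnight of the current day
var windowStart = DateAdd(today, -1, "days")     // start of yesterday
var windowEnd = DateAdd(today, 2, "days")        // start of the day after tomorrow
if (!IsEmpty(enteredDate) && (enteredDate < windowStart || enteredDate >= windowEnd)) {
    return {
        "errorMessage": "Date must fall on yesterday, today, or tomorrow."
    }
}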
Posted 08-13-2025, 11:10 AM
POST
I can't replicate this. Can you be more specific about the tool you are using or what the issue is? I'm on Pro 3.3.1 and can't comment on ArcMap. Here are two domains, one numeric and one text, applied to a FeatureClass. Exporting to Excel shows the descriptions, not the codes. Does this not work for you?
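If the export is being driven from Python, the Table To Excel geoprocessing tool is the one I'd point at; a hedged sketch (the paths are placeholders, and the keyword values for the last two parameters are worth double-checking against the tool documentation):

import arcpy

# Hypothetical paths. The last two parameters request field aliases as column
# headers and domain/subtype descriptions instead of raw codes.
arcpy.conversion.TableToExcel(r"C:\Data\Project.gdb\MyFeatureClass",
                              r"C:\Data\Export.xlsx",
                              "ALIAS",
                              "DESCRIPTION")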
Posted 10-24-2024, 08:11 AM
POST
Sure, happy to help you work through anything further if you have additional questions!
Posted 10-24-2024, 07:51 AM
POST
For posterity, this seems like a much more efficient option (working with pandas Spatially Enabled DataFrames instead of cursors and SelectLayerByLocation/Attribute):

import os
from typing import Union

import arcpy
import pandas as pd
from arcgis.features import GeoAccessor, GeoSeriesAccessor  # registers the .spatial accessor

def new_method(pnt: Union[str, os.PathLike],     # full path to points
               box: Union[str, os.PathLike],     # full path to boxes
               box_id: str,                      # field name containing Box ID
               sample_count: int,                # how many points per box?
               output: Union[str, os.PathLike]   # full path to output FC
               ) -> None:                        # function has no return
    """SpatialJoin points with boxes. Convert to SDF. Group points by joined Box ID.
    Sample N points per group. Export the sampled Spatially Enabled DataFrame back to a FC."""
    pnt_box_join = arcpy.analysis.SpatialJoin(pnt, box, r"memory\pnt_box_join").getOutput(0)
    # Convert FC to SDF. Drop unused columns. GroupBy the Box ID column.
    # Without replacement, sample N rows from each group. Export SDF back to FC.
    sdf = pd.DataFrame.spatial.from_featureclass(pnt_box_join)
    sdf = sdf.drop(columns=["Join_Count", "TARGET_FID", "Shape_Length", "Shape_Area"])
    sampled_sdf = sdf.groupby(box_id, group_keys=False).sample(n=sample_count)
    sampled_sdf.spatial.to_featureclass(location=output, sanitize_columns=False)
    print("SAMPLED SDF -> FC EXPORTED")
Posted 10-18-2024, 08:27 AM
POST
Here's an attempt. Hexagons have "populations" between 1 and 5 (light to dark). Black points are the child care centers, with their capacities shown in the adjacent black labels. Regions made up of hexagons that have aggregated enough "population" to reach each center's capacity are shown in color, with the "actual" population gathered labeled in the callouts.

The caveats: regions can overlap if centers are too close (see red/blue); there are lots of fixes for this, depending on what you're after. Child care centers can also end up "over" capacity: see the green region, where the max capacity is 80 and 81 have been "gathered" from adjacent hexagons. Again, lots of fixes, depending on needs.

The code below, in simple terms: for each center, add the closest hexagon to a region, followed by the next closest, and so on, until enough hexagons have been added that the summed population reaches the capacity of the child care center. It runs extremely quickly with ~1,300 hexagons, but is definitely not the most optimized. You'll have to edit this code to use it with your own feature classes. Happy to provide more details if you need them.

import arcpy
import math
import os
import sys
arcpy.env.workspace = r"YOUR_GEODATABASE_PATH"
arcpy.env.overwriteOutput = True
# Names of hexagon bins and the associated points.
hex_bin = "HexBin"
hex_pnt = "HexAggPoint"
# Create an info dictionary about the hexagons
# {oid: {HEX_GEOM: geometry, POP: integer}}
with arcpy.da.SearchCursor(hex_bin, ["OID@", "SHAPE@", "POP"]) as scurs:
    hex_pop_dict = {oid: {"HEX_GEOM": geom, "POP": pop} for oid, geom, pop in scurs}
# Empty dictionary to store info about our output regions.
clusters = {}
# Loop over points, logging assembled region information in 'clusters' dict.
with arcpy.da.SearchCursor(hex_pnt, ["OID@", "SHAPE@XY", "CAPACITY"]) as scurs:
    for oid, centroid, capacity in scurs:
        # Get list of (hex_id, distance_to_point) tuples.
        distances = []
        for hex_id, hex_record in hex_pop_dict.items():
            hex_centroid_coor = hex_record["HEX_GEOM"].centroid
            dist = math.dist(centroid, (hex_centroid_coor.X, hex_centroid_coor.Y))
            distances.append((hex_id, dist))
        # Sort tuples by ascending distance from the point in question.
        distances.sort(key=lambda x: x[1])
        # Loop distance tuples, using the hex ID to accumulate population per hex.
        # Log the hex IDs that are used. Stop adding more hexes once the capacity
        # of the point is reached.
        gathered_hexes = []
        accumulated_pop = 0
        for hex_id, _ in distances:
            gathered_hexes.append(hex_id)
            accumulated_pop += hex_pop_dict[hex_id]["POP"]
            if accumulated_pop >= capacity:
                break
        # For this point ID, associate the list of hexes that will create the region.
        clusters[oid] = gathered_hexes
        print(f"POINT ID: {oid}\nCAPACITY: {capacity}"
              f"\nACTUAL ACCUM: {accumulated_pop}"
              f"\nHEX IDS: {gathered_hexes}\n")
# CREATE OUTPUT FC, ADD FIELDS
region_fc = arcpy.management.CreateFeatureclass(out_path=arcpy.env.workspace,
                                                out_name="HEX_REGION",
                                                geometry_type="POLYGON",
                                                spatial_reference=2232)
for fld_name in ["POINT_ID", "HEX_ID", "POP"]:
    arcpy.management.AddField(region_fc, fld_name, "SHORT")
# WRITE OUTPUT RECORDS
with arcpy.da.InsertCursor(region_fc, ["SHAPE@", "POINT_ID", "HEX_ID", "POP"]) as icurs:
    # For each point and its list of hexes that should create the region...
    for point_id, hex_id_list in clusters.items():
        # ...and for each hex in the list...
        for hex_id in hex_id_list:
            # Using the hex's ID, insert its geometry and population,
            # plus the ID of both the parent point and the hex.
            icurs.insertRow([hex_pop_dict[hex_id]["HEX_GEOM"],
                             point_id,
                             hex_id,
                             hex_pop_dict[hex_id]["POP"]])
# Dissolve the hex polygons based on the associated point. Sum the pop of the region.
dissolve_fc_path = os.path.join(arcpy.env.workspace, "HEX_REGION_DISSOLVE")
arcpy.management.Dissolve(in_features="HEX_REGION", out_feature_class=dissolve_fc_path,
                          dissolve_field="POINT_ID", statistics_fields=[["POP", "SUM"]])
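On the "over capacity" caveat, here is a hedged add-on that could run after the script above to quantify how far over each region went. It assumes the Dissolve names the summed field SUM_POP, which is the usual convention, so check the output fields first.

# Compare each region's gathered population (from the Dissolve output)
# against the capacity of its parent point.
capacities = {oid: cap for oid, cap in arcpy.da.SearchCursor(hex_pnt, ["OID@", "CAPACITY"])}
with arcpy.da.SearchCursor(dissolve_fc_path, ["POINT_ID", "SUM_POP"]) as scurs:
    for point_id, summed_pop in scurs:
        over = max(summed_pop - capacities[point_id], 0)
        print(f"POINT {point_id}: capacity {capacities[point_id]}, gathered {summed_pop}, over by {over}")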
Posted 10-09-2024, 07:08 PM
POST
I would explore using Generate Rectangles Along Lines or Strip Map Index Features:
Generate Rectangles Along Lines (Data Management)—ArcGIS Pro | Documentation
Strip Map Index Features (Cartography)—ArcGIS Pro | Documentation
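If it helps, here is a minimal, hedged sketch of the first tool from Python; the layer name, output path, and dimensions are placeholders, and the parameter order should be checked against the documentation linked above:

import arcpy

# Hypothetical inputs: tile a line layer with 500 m x 200 m rectangles.
arcpy.management.GenerateRectanglesAlongLines(
    "FlightLines",
    r"C:\Data\Project.gdb\LineRectangles",
    "500 Meters",    # length along the line
    "200 Meters")    # length perpendicular to the line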
Posted 10-09-2024, 02:24 PM
POST
Not sure if I understand your exact needs, but you can try something like the below. I think a more efficient method would be doing a spatial join in memory and then working with a pandas DataFrame to select three points from each subgroup, but this may get you started. The code below will randomly select exactly three points within each "box".

import arcpy
import sys
import numpy as np
# a list of point IDs to eventually export
point_oids_to_export = []
# Iterate over your sampling boxes
with arcpy.da.SearchCursor('SamplingPointBox', "OID@") as scurs:
    for oid, in scurs:
        # Select the current box. Select all the points within that box.
        box_selection = arcpy.management.SelectLayerByAttribute('SamplingPointBox', 'NEW_SELECTION', f"OID = {oid}")
        point_selection = arcpy.management.SelectLayerByLocation('SamplingPoint', 'INTERSECT', 'SamplingPointBox')
        # Get a list of the OIDs of the points within the box.
        oidValueList = [r[0] for r in arcpy.da.SearchCursor(point_selection, ["OID@"])]
        # Choose three of those OIDs, never choosing the same one twice.
        rng = np.random.default_rng()
        chosen_oids = rng.choice(oidValueList, 3, False, None, 0, False)
        # Add those three new points to the running list (as plain ints so the
        # SQL query below formats cleanly).
        point_oids_to_export.extend(int(i) for i in chosen_oids)
# Select all the collected points based on the full list of OIDs.
query = f"OBJECTID IN {tuple(point_oids_to_export)}"
arcpy.management.SelectLayerByAttribute('SamplingPoint', 'NEW_SELECTION', query)
## Export your selection from here, or add arcpy.conversion.ExportFeatures().
Posted 10-09-2024, 01:54 PM
IDEA
Not entirely sure I follow, but have you looked into Attribute Rules (specifically, "Calculation"-style rules)? They can do a lot of automatic calculations for you in your tables. https://pro.arcgis.com/en/pro-app/latest/help/data/geodatabases/overview/an-overview-of-attribute-rules.htm
Posted 10-09-2024, 11:55 AM
POST
Can you clarify this? Where is "1234" coming from out of "Site1_UAV_042018"? What parts of the "Name" field would you like to calculate over to the "Site_ID" field? It's not really clear what you're asking.
Posted 10-09-2024, 09:18 AM
POST
Is this a one-time calculation? Or does this need to be dynamic as your table undergoes changes? What would happen if two rows were deleted for some reason? Say BY0001 and BY0002 were removed (accidentally, maybe); how would those IDs be assigned if those rows were added back in? Is there any reason that, once an ID is assigned, it must remain that same ID? Or can these IDs be recalculated any time changes are made to the table, so that they are always guaranteed unique? I think if this is a one-time Field Calculate, it's straightforward, but it's less straightforward if this table will be changing a lot and all those IDs need to be managed/updated on the fly.
Posted 10-09-2024, 08:46 AM
POST
I think @BarryNorthey is right. See below, where my third point, OBJECTID 7, does not show a label because the whole field is just spaces (so it is labeling, but the label is unseen because it's all spaces!). An alternative to the Python suggestion above would be to improve your label expression to trim the white space:

// Use Trim() to reduce a field full of spaces down to "".
DefaultValue(Trim($feature.Label_Main), $feature.Label_Backup)

And now the backup label shows, because the "Label_Main" value is reduced from " " down to "", is therefore considered empty, and defaults to Label_Backup.
Posted 10-09-2024, 08:20 AM
POST
Can you clarify this a bit, perhaps with a diagram or picture? Do you want to dynamically count the number of points that fall within a given polygon? Or simply count the number of points that have the same "id" as a polygon? I think both Polygons and Points have an "id" field, but where is the "No" field? In the Polygon feature class?
Posted 10-09-2024, 08:05 AM
POST
Arcade uses "==" as written to test for null specifically. IsEmpty() is used frequently as well, but note that it will also return true for empty strings.
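A quick illustration of the difference (the field name here is hypothetical):

// Assume $feature.Notes holds an empty string "".
var notes = $feature.Notes
Console(notes == null)   // false: an empty string is not null
Console(IsEmpty(notes))  // true: IsEmpty also treats "" as empty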
Posted 10-09-2024, 07:47 AM
POST
The below works for me when I recreate your situation above, assuming you only want to delete the first point found with an ID matching the polygon, and not any subsequent ones. Other than some line formatting, I think the only difference between mine and yours is that I explicitly create the "delete" dictionary using the Dictionary() constructor (flagged in the code below). I have had a similar problem when loading things into these return-keyword dictionaries the way you are doing; no idea why, but this fixed my issue. At this point I pretty much always use the explicit constructor when I am able to.

// Get the teig_id from the tilknytningspunkt (connection point) being deleted
var teig_id = $feature.Teig_ID
var fs_tilknytningspunkter = FeatureSetByName($datastore, "Tilknytningspunkter", ["GlobalID", "TEIG_ID"])
var tilknytningspunkt = First(Filter(fs_tilknytningspunkter, "TEIG_ID=@teig_id"))
Console(tilknytningspunkt)
var deleteList = []
if (tilknytningspunkt == null) {
    return
} else {
    // ***ONLY DIFFERENCE?***
    var delete = Dictionary('globalID', tilknytningspunkt.GlobalID)
    Push(deleteList, delete)
}
Console(deleteList)
return {
    'edit': [{
        'className': 'Tilknytningspunkter',
        'deletes': deleteList
    }]
}
Posted 10-09-2024, 07:22 AM