POST
You missed a few lines that still refer to the birth and death dates. In my code example, you should also comment out or remove the references in lines 88-92:

# born = fieldValues[1]
# died = fieldValues[2]
# print(xmin, ymin, row[1], labelKey, name)  # if printing, remove: born, died
insertCursor.insertRow([xmin, ymin, row[1], labelKey, name])  # remove: born, died

If you have already created the new feature, you will need to either delete it or comment out the code section that creates it; otherwise the code will error out saying the feature already exists. It helps to use the syntax highlighter when sharing code. The best help is found here: Code Formatting... the basics++.
10-26-2020 11:17 AM

POST
Here is some test code that will give you an idea how to read the data from a feature and related table and create a point feature for use on AGO. I have made some assumptions, so you will probably need to do some tweaking.

import arcpy  # if not in the Python window
import os

# Prep work: set up some parameters
# feature with geometry - specifically polygons
fc = r"R:\Jeff\City_Projects\Cemetery\CemeteryMgmt.gdb\BSACemeteriesCopy"
# related table with data
relateFC = r"R:\Jeff\City_Projects\Cemetery\CemeteryMgmt.gdb\DBOPlotOccJoin"
relateFieldsList = ["UserField4", "Name", "Born", "Died"]  # Born and Died are optional; others can be added
# new point feature we will create
outPath = r"R:\Jeff\City_Projects\Cemetery\CemeteryMgmt.gdb"
outName = "AGO_cemetery"
outFields = ["SHAPE@X", "SHAPE@Y", "Cemetery", "Plot", "Name", "Born", "Died"]
# full path to new feature
outFC = os.path.join(outPath, outName)

# Step 1: create the new feature
# same spatial reference as the source feature
sr = arcpy.Describe(fc).spatialReference
arcpy.CreateFeatureclass_management(out_path=outPath, out_name=outName,
                                    geometry_type="POINT", template="#",
                                    has_m="DISABLED", has_z="DISABLED",
                                    spatial_reference=sr)
# create fields (all text): Cemetery, Plot, Name, Born, Died
for fieldName, fieldLength in [("Cemetery", 50), ("Plot", 50), ("Name", 50),
                               ("Born", 20), ("Died", 20)]:
    arcpy.AddField_management(in_table=outFC, field_name=fieldName,
                              field_type="STRING", field_length=fieldLength,
                              field_alias=fieldName,
                              field_is_nullable="NULLABLE",
                              field_is_required="NON_REQUIRED")

# Step 2: build the related dictionary
# the key is the first field in relateFieldsList;
# the value is a list of tuples of the remaining fields
relateDict = {}
with arcpy.da.SearchCursor(relateFC, relateFieldsList) as relateRows:
    for relateRow in relateRows:
        relateKey = relateRow[0]
        if relateKey not in relateDict:
            relateDict[relateKey] = [relateRow[1:]]
        else:
            relateDict[relateKey].append(relateRow[1:])
del relateRows, relateRow  # clean up

# Step 3: populate the new feature
# ready an insert cursor and loop through the source feature and dictionary
insertCursor = arcpy.da.InsertCursor(outFC, outFields)
with arcpy.da.SearchCursor(fc, ['SHAPE@', 'Cemetery', 'GIS_ID']) as cursor:
    for row in cursor:
        labelKey = row[2]  # the link to the related table
        if labelKey in relateDict:
            sortedList = sorted(relateDict[labelKey])
            listCount = len(sortedList)
            # calculate values for the point geometry
            xstep = (row[0].extent.XMax - row[0].extent.XMin) / listCount
            ystep = (row[0].extent.YMax - row[0].extent.YMin) / listCount
            xmin = row[0].extent.XMin + (xstep / 2)  # x coord for first point
            ymin = row[0].extent.YMin + (ystep / 2)  # y coord for first point
            # final data
            for fieldValues in sortedList:
                name = fieldValues[0]
                # assuming the dates are type text/string and not type date,
                # otherwise some conversion will be required
                born = fieldValues[1]
                died = fieldValues[2]
                # print(xmin, ymin, row[1], labelKey, name, born, died)
                insertCursor.insertRow([xmin, ymin, row[1], labelKey, name, born, died])
                xmin += xstep  # add step to x coord
                ymin += ystep  # add step to y coord
        else:
            pass  # not in dictionary - substitute with error code if necessary
del insertCursor

In the prep section, the features and table are named along with the specific fields that are being accessed; modify as needed. Step 1 creates a new point feature to store the combined data from the source feature and the related table. All fields have been set as type text. You may wish to omit the Born and Died fields or set them as dates; in my explorations of cemeteries and genealogy records, you don't often have a complete date for some of these events, and a text field can handle a variety of situations at the cost of sorting, etc. I assumed the source feature is a polygon layer, and the new point feature will use the same spatial reference as the polygon layer. If the feature already exists, an error will occur.

Step 2 builds the relationship dictionary. It follows Richard Fairhurst's code for labels with few changes.

Step 3 starts an insert cursor and combines the data from the source feature and related table. I selected the geometry data, the link to the related table, and any other fields of interest. If a link is found, points are added to the new feature; if not, some error code can run to handle the issue. The point geometry is calculated using the x-y min and max of the polygon's extent, so the polygon is expected to be a simple rectangle or square. The number of expected points is the count of values in the dictionary (sortedList), and a proportion of the distance from x-y min to x-y max is added to each previous point. This results in a series of points created in a lower-left to upper-right pattern; the points may not fall precisely on an individual grave. You may need to make modifications for handling the birth and death dates in this section, and further modifications if adding or dropping other fields. The goal is to create a new point feature that can be published on AGO.
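The lower-left to upper-right placement can be sketched without arcpy; the extent numbers and function name below are invented for illustration:

```python
# Place count points along the diagonal of a rectangular extent,
# offset by half a step so the points fall inside the polygon.
def diagonal_points(xmin, ymin, xmax, ymax, count):
    xstep = (xmax - xmin) / count
    ystep = (ymax - ymin) / count
    x = xmin + xstep / 2  # x coord for first point
    y = ymin + ystep / 2  # y coord for first point
    points = []
    for _ in range(count):
        points.append((x, y))
        x += xstep  # add step to x coord
        y += ystep  # add step to y coord
    return points

# Example: a 10 x 10 extent holding 5 names
print(diagonal_points(0, 0, 10, 10, 5))
# [(1.0, 1.0), (3.0, 3.0), (5.0, 5.0), (7.0, 7.0), (9.0, 9.0)]
```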
The new feature should have the desired attributes so that someone could search for a name, etc. The name could be used for a label. If you do not wish to use individual points for each name, another option would be to use a single text field and concatenate the desired information into that field; some HTML line break codes (<BR/>) can be added to assist in the final formatting of a label. Hope this helps.
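As a quick sketch of that single-field option (the helper name and sample values are invented; it assumes the dates are already text):

```python
# Combine a name and the optional dates into one label string,
# separated by HTML line break codes; empty dates are skipped.
def make_label(name, born, died):
    parts = [name]
    if born:
        parts.append('Born: {}'.format(born))
    if died:
        parts.append('Died: {}'.format(died))
    return '<BR/>'.join(parts)

print(make_label('Jane Doe', '1850', '1912'))
# Jane Doe<BR/>Born: 1850<BR/>Died: 1912
print(make_label('John Doe', '', '1899'))
# John Doe<BR/>Died: 1899
```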
10-23-2020 11:03 AM

POST
If it is helpful, I can post some code later that will read data from the feature and related table to create a new feature with one point per name that might meet your needs on AGO. You could also add dates or other information if it is in the related table.
10-22-2020 09:56 AM

POST
Glad to help. Regarding your follow-up question, I would not expect this procedure to work on AGO. I suggest asking the question "can Arcade create labels using data from related tables" in the ArcGIS Online forum. Hopefully Xander Bakker can provide an answer for you.
10-22-2020 09:06 AM

POST
Thanks for the screen capture. I am going to make one assumption: the field "Name" is in the "DBOPlotOccJoin" table. If that is the case, then here is a simplified example of the labels and data I used for my testing. The following code gets pasted in the expression section of the Label Expression window. Be sure to check 'Advanced' and select Python as the 'Parser' before pasting.

# Initialize a global dictionary for a related feature class/table
relateDict = {}

def FindLabel ( [GIS_ID] ):
    # declare the dictionary global so it can be built once and used for all labels
    global relateDict
    # only populate the dictionary if it has no keys
    if len(relateDict) == 0:
        # provide the path and table name to the related feature class/table
        relateFC = r"R:\Jeff\City_Projects\Cemetery\CemeteryMgmt.gdb\DBOPlotOccJoin"
        # create a field list with the relate field first (UserField4),
        # followed by sort field(s) and label field(s) (Name)
        relateFieldsList = ["UserField4", "Name"]
        # process a da search cursor to transfer the data to the dictionary
        with arcpy.da.SearchCursor(relateFC, relateFieldsList) as relateRows:
            for relateRow in relateRows:
                # store the key value in a variable so the relate value
                # is only read from the row once, improving speed
                relateKey = relateRow[0]
                # if the relate key of the current row isn't found,
                # create the key and make its value a list of a list of field values
                if relateKey not in relateDict:
                    # [relateRow[1:]] is a list containing
                    # a list of the field values after the key
                    relateDict[relateKey] = [relateRow[1:]]
                else:
                    # if the relate key is already in the dictionary,
                    # append the next list of field values to the
                    # existing list associated with the key
                    relateDict[relateKey].append(relateRow[1:])
        # delete the cursor and row to make sure all locks release
        del relateRows, relateRow
    # store the current label feature's relate key field value
    # so that it is only read once, improving speed
    labelKey = [GIS_ID]
    # start building a label expression.
    # My label has a bold key value header in a larger font
    expression = '<FNT name="Arial" size="12"><BOL>{}</BOL></FNT>'.format(labelKey)
    # determine if the label key is in the dictionary
    if labelKey in relateDict:
        # sort the list of the lists of fields
        sortedList = sorted(relateDict[labelKey])
        # add a record count to the label header in bold regular font
        # expression += '\n<FNT name="Arial" size="10"><BOL>Name Count = {}</BOL></FNT>'.format(len(sortedList))
        # process the sorted list
        for fieldValues in sortedList:
            # append related data to the label expression
            # expression += '\n{0} - {1} - {2} - {3}'.format(fieldValues[0], fieldValues[1], fieldValues[2], fieldValues[3])
            expression += '\n{0}'.format(fieldValues[0])
        # clean up the list variables after completing the for loop
        del sortedList, fieldValues
    else:
        # expression += '\n<FNT name="Arial" size="10"><BOL>Name Count = 0</BOL></FNT>'
        pass
    # return the label expression to display
    return expression

GIS_ID, in the FindLabel definition, is the field in the feature that we are using for labeling and linking to the related table. relateFieldsList contains the linking field in the related table along with the related data used for the labels: UserField4 and Name. The sorted() call sorts the dictionary values, in this case the names; you may wish to skip the sort and use relateDict[labelKey] directly if you do not want the names sorted. I commented out the line that adds a name count to the label header, which indicates how many names are associated with the feature. The multi-field expression line was also commented out, as there is only one field value being used, in this case "Name". The else block was left in, but the name count is not added to the label. Hope this helps.
10-21-2020 09:58 PM

POST
I'm still confused with your tables. Is 'DBOPlotOccJoin' the related table? And, is 'BSACemeteriesCopy' the feature layer you are symbolizing? Perhaps a few sample rows of the feature layer and the related table would clear up my confusion. If you have already joined the two tables into one that you are symbolizing, you won't need the steps in Richard's code. You would just pass multiple fields to the 'FindLabel' function and add some formatting. Again, some sample rows would be helpful; it can be made-up data as it is the structure that I'm trying to understand.
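To illustrate the joined case: if the tables are already joined, no dictionary step is needed, and a minimal label expression could look like the sketch below. This is the Esri label-expression syntax (bracketed field names become variables); GIS_ID and Name are assumed field names, so adjust to your actual fields.

```
def FindLabel ( [GIS_ID], [Name] ):
    # with a joined table, the fields come straight into the function -
    # just combine and format them
    return '<FNT name="Arial" size="12"><BOL>{}</BOL></FNT>\n{}'.format( [GIS_ID], [Name] )
```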
10-20-2020 09:10 PM

POST
Since the OID from the CSV is text, try:

if str(myOID) in csvdict:  # convert myOID to string; note has_key() is Python 2 only
    print('found myOID for {}'.format(myOID))
    row[1] = csvdict[str(myOID)][0]  # convert myOID to string here too
    cursor.updateRow(row)
#...

This section starts at line 69 in Joe's code; use the correct indentation for this line. Also, the values in csvdict are text, so you need to convert them to the correct type. If your OIDs start at 321972, you may wish to use a where clause so you are not checking unnecessary rows.
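To make the text conversions concrete, here is a plain-Python sketch; the dictionary contents are invented sample data:

```python
# csvdict keys and values are text because they came from a CSV
csvdict = {'321972': ['145'], '321973': ['201']}  # invented sample data

myOID = 321972  # OIDs from the cursor are integers
if str(myOID) in csvdict:                # convert the OID to text for the lookup
    value = int(csvdict[str(myOID)][0])  # convert the stored text back to a number
    print('found myOID for {}: {}'.format(myOID, value))
# found myOID for 321972: 145
```

With arcpy, a where clause passed as the third argument, e.g. arcpy.da.UpdateCursor(fc, fields, "OBJECTID >= 321972"), keeps the cursor from reading the earlier rows at all.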
10-20-2020 08:34 PM

POST
I noticed that your origin table may be a join, so it is possible that the field names have been modified because of the join. I would also confirm that UserField4 and GIS_ID have some matching values. Are you using Desktop or Pro? Since this is based on Richard Fairhurst's code, I'll tag him into the conversation.
10-18-2020 01:49 PM

POST
Perhaps something like this, assuming a search cursor or some such loop contains your geometry, address range, and attributes. (You could add a small amount to your x,y coordinates as you do the loop if you didn't want the points on top of each other.)

searchCursor = [['geo1', '8909-8927', 'attr1'],
                ['geo2', '8908-8926', 'attr2']]
# start an insert cursor here
# then continue with the search cursor or other loop for address ranges
for row in searchCursor:
    begin = int(row[1].split('-')[0])
    end = int(row[1].split('-')[1]) + 1  # 1 past address range
    for address in range(begin, end, 2):  # step by 2
        print(row[0], address, row[2])  # this is your data to insert
''' results:
('geo1', 8909, 'attr1')
('geo1', 8911, 'attr1')
('geo1', 8913, 'attr1')
('geo1', 8915, 'attr1')
('geo1', 8917, 'attr1')
('geo1', 8919, 'attr1')
('geo1', 8921, 'attr1')
('geo1', 8923, 'attr1')
('geo1', 8925, 'attr1')
('geo1', 8927, 'attr1')
('geo2', 8908, 'attr2')
('geo2', 8910, 'attr2')
('geo2', 8912, 'attr2')
('geo2', 8914, 'attr2')
('geo2', 8916, 'attr2')
('geo2', 8918, 'attr2')
('geo2', 8920, 'attr2')
('geo2', 8922, 'attr2')
('geo2', 8924, 'attr2')
('geo2', 8926, 'attr2')
'''
10-17-2020 12:53 PM

POST
I need a bit more information on the calculation, but the idea is to add a field to the targetFields list and then do the calculation just before the insert row. Something like this; the code is untested and needs modification to do the actual calculation.

fSource = ''  # some source
fTarget = ''  # the target
sourceFields = ['SHAPE@', 'Country', 'Product', 'CNSTR_YEAR']
targetFields = ['SHAPE@', 'Country', 'Product', 'CNSTR_YEAR', 'CalcField']  # add calculated field
with arcpy.da.InsertCursor(fTarget, targetFields) as insertCursor:
    with arcpy.da.SearchCursor(fSource, sourceFields) as cursor:
        for row in cursor:
            newRow = list(row)  # get sourceField values
            x = 5 + 2  # some calculation
            newRow.append(x)  # append calculation to end
            insertCursor.insertRow(newRow)  # insert newRow into fTarget
10-16-2020 06:20 PM

POST
As Joshua Bixby states, using a wildcard for the fields is a bad idea; in your case, you cannot be sure the order in which you are assembling the row matches the order the data was selected. Also, for testing you should avoid a try/except block so the error messages will be clear and you can see which exceptions you may need to guard against. Can we assume that the field names (as well as field type and length) are identical between the source and target features? If so, you might test the following with copies of your data:

fSource = ''  # some source
fTarget = ''  # the target
fields = ['SHAPE@', 'Country', 'Product', 'CNSTR_YEAR']  # assuming the same names are used in both target/source
with arcpy.da.InsertCursor(fTarget, fields) as insertCursor:
    with arcpy.da.SearchCursor(fSource, fields) as cursor:
        for row in cursor:
            insertCursor.insertRow(row)
10-16-2020 10:26 AM

POST
Android or iOS? I don't think the GPS metadata functionality has been added to the new Collector for Android. As you noticed, it works with "classic". The What's New page makes no mention of whether GPS metadata works with the new Collector for Android, and there are some features that do not work in the Android version. Frustrating.
10-15-2020 12:35 PM

POST
The "CHECK" is causing some of the problem. I also noticed on GitHub there is an unanswered question about updating the module for Python 3; I haven't done any testing with that version. However, I have some code that might give you some ideas for the "RepeatedLabelError" and how to deal with it.

import usaddress

def address_exception(a, b):
    d = {}
    print("Bad address: {}".format(b))
    print(a)
    for row in a:
        if row[1] not in d.keys():
            d[row[1]] = 1
        else:
            d[row[1]] += 1
    for k, v in d.items():
        if v > 1:
            print(" {} (repeated {} times)".format(k, v))

cursor = [
    ("CHECK S ST LOUIS ST LOT 34 ELWOOD, IL 60421", "addrNum", "stNm", "zip", 1),
    ("S ST LOUIS ST LOT 34 ELWOOD, IL 60421", "addrNum", "stNm", "zip", 2),
    ("5318 S 86 CT APT 3 APT 412, OMAHA, NE 68137", "addrNum", "stNm", "zip", 3),
    ("2765 HAZEL ST, OMAHA, NE 68105", "addrNum", "stNm", "zip", 4)
]

for addr, addrNum, stNm, zip, oid in cursor:  # simulate an UpdateCursor
    try:
        parse = usaddress.tag(addr)[0]
        addrNum = parse.get("AddressNumber", "")
        stNm = parse.get("StreetName", "")
        zip = parse.get("ZipCode", "")
        print('ok', [addr, addrNum, stNm, zip, oid])
    except usaddress.RepeatedLabelError as e:
        address_exception(e.parsed_string, e.original_string)
    except Exception as e:
        # print('ERROR: {}'.format(type(e)))  # may provide some additional info
        print("Unknown Error: oid={}".format(oid))
        print('ERROR', [addr, addrNum, stNm, zip, oid])
''' output:
Bad address: CHECK S ST LOUIS ST LOT 34 ELWOOD, IL 60421
[(u'CHECK', 'StreetName'), (u'S', 'StreetNamePostDirectional'), (u'ST', 'PlaceName'), (u'LOUIS', 'StreetName'), (u'ST', 'StreetNamePostType'), (u'LOT', 'OccupancyType'), (u'34', 'OccupancyIdentifier'), (u'ELWOOD,', 'PlaceName'), (u'IL', 'StateName'), (u'60421', 'ZipCode')]
PlaceName (repeated 2 times)
StreetName (repeated 2 times)
('ok', ['S ST LOUIS ST LOT 34 ELWOOD, IL 60421', '', u'ST LOUIS', u'60421', 2])
Bad address: 5318 S 86 CT APT 3 APT 412, OMAHA, NE 68137
[(u'5318', 'AddressNumber'), (u'S', 'StreetNamePreDirectional'), (u'86', 'StreetName'), (u'CT', 'StreetNamePostType'), (u'APT', 'OccupancyType'), (u'3', 'OccupancyIdentifier'), (u'APT', 'OccupancyType'), (u'412,', 'OccupancyIdentifier'), (u'OMAHA,', 'PlaceName'), (u'NE', 'StateName'), (u'68137', 'ZipCode')]
OccupancyIdentifier (repeated 2 times)
OccupancyType (repeated 2 times)
('ok', ['2765 HAZEL ST, OMAHA, NE 68105', u'2765', u'HAZEL', u'68105', 4])
'''

I have also used the regular expressions module along with dictionaries to do some pre-cleanup before passing the address to the usaddress module. These are mostly things like substituting 'STE' for 'Suite' or 'N' for 'North'. This basically helps standardize the address to USPS guidelines. The usaddress module isn't perfect, but it can be very helpful.
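The pre-cleanup I mentioned can be as simple as a dictionary of substitutions applied with the re module. The abbreviation list and function name here are only a small invented sample; extend the dictionary toward the full USPS abbreviation list as needed:

```python
import re

# Sample USPS-style substitutions (word boundaries keep 'NORTHWOOD' intact)
subs = {
    r'\bSUITE\b': 'STE',
    r'\bNORTH\b': 'N',
    r'\bAVENUE\b': 'AVE',
}

def clean_address(addr):
    # standardize case first, then apply each substitution
    addr = addr.upper()
    for pattern, repl in subs.items():
        addr = re.sub(pattern, repl, addr)
    return addr

print(clean_address('5318 North 86 Ct Suite 3, Omaha, NE'))
# 5318 N 86 CT STE 3, OMAHA, NE
```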
10-13-2020 10:28 PM

POST
I also tested the code snippet and was able to generate the same error message only when the path was incomplete or contained an error. With a correct path to the shapefile, the file loaded fine.
10-12-2020 10:08 AM

POST
I created a quick feature for illustration. To copy the value from Field2 to Field1 using a dictionary, I modified your code and added some explanation:

fc1 = 'DictionaryTest'
flds = ['OBJECTID', 'Field1', 'Field2']
# for clarity, put f[1:] in parentheses - this will be a tuple
search_feats = {f[0]: (f[1:]) for f in arcpy.da.SearchCursor(fc1, flds)}
# produces this dictionary: {1: (u'', u'AA2'), 2: (u'bb1', u'BB2'), 3: (None, u'CC2')}
# key = 1, value[0] = empty string, value[1] = 'AA2'
# key = 2, value[0] = 'bb1',        value[1] = 'BB2'
# key = 3, value[0] = None/Null,    value[1] = 'CC2'
with arcpy.da.UpdateCursor(fc1, flds) as upd_cur:
    for upd_row in upd_cur:
        # no need to test upd_row[0] for None - the OBJECTID always has a value
        # is the OBJECTID in the search_feats dictionary?
        if upd_row[0] in search_feats.keys():
            # upd_row[1] is 'Field1' and upd_row[2] is 'Field2';
            # search_feats[upd_row[0]][1] is the second stored value, i.e. 'Field2'
            upd_row[1] = search_feats[upd_row[0]][1]
            upd_cur.updateRow(upd_row)
        else:
            # OBJECTID not in dictionary
            pass

The result: Field1 now contains the same values as Field2. For something this simple, I wouldn't use a dictionary. It works best in place of joining two features on a common field to copy values from one feature to the other, and it may also be helpful in some other cases. Hope this helps.
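Stripped of arcpy, the join-style use I have in mind follows this pattern; the keys and values here are invented:

```python
# "Source feature": common-field key -> value to transfer
source = {'A1': 'Maple Grove', 'B2': 'Oak Hill'}

# "Target feature" rows: [key, field_to_fill]
target = [['A1', None], ['B2', None], ['C3', None]]

for row in target:
    if row[0] in source:         # match on the common field, like a table join
        row[1] = source[row[0]]  # copy the value across

print(target)
# [['A1', 'Maple Grove'], ['B2', 'Oak Hill'], ['C3', None]]
```

Rows with no match (C3 above) are simply left alone, which is the same behavior as the else/pass branch in the cursor version.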
10-09-2020 02:57 PM