
Insert Cursor into enterprise GDB doesn't like tuple for shape field?

10-18-2023 04:06 PM
AlfredBaldenweck
MVP Regular Contributor

So, I'm trying to populate a point feature class using features from another feature class with the same schema.

The general workflow is:

 

inputList = []
# tempTop is the source table.
# It is in memory, if that makes a difference.
with arcpy.da.SearchCursor(tempTop, '*') as cursor:
    for row in cursor:
        inputList.append(row)

'''code'''

# nadFC is the destination table.
with arcpy.da.InsertCursor(nadFC, '*') as cursor:
    for r in inputList:
        cursor.insertRow(r)

 

 

This worked great when writing to a file geodatabase.

Writing to a non-versioned enterprise geodatabase that I have full permissions for, I get the following error:

Traceback (most recent call last):
File "<string>", line 93, in <module>
File "<string>", line 78, in <module>
TypeError: value #1 - unsupported type: tuple

The value in question is supposed to be the shape field. 

So, I investigate, and value #1 is in fact a tuple instead of a geometry object.

 

print(inputList[0])
# Yields:
#    (1, (-105.88538999999997, 42.19521000000003), ...)

 

So, I've tried the following:

 

inputList = []
# tempTop is the source table.
with arcpy.da.SearchCursor(tempTop, '*') as cursor:
    for row in cursor:
        inputList.append(list(row))

'''code'''

# nadFC is the destination table.
with arcpy.da.InsertCursor(nadFC, '*') as cursor:
    for r in inputList:
        # Rebuild the geometry from the (x, y) tuple before inserting.
        r[1] = arcpy.PointGeometry(arcpy.Point(r[1][0], r[1][1]))
        cursor.insertRow(r)

 

I get the same error, just with a different reported type:

Traceback (most recent call last):
File "<string>", line 93, in <module>
File "<string>", line 78, in <module>
TypeError: value #1 - unsupported type: PointGeometry

I tried the same thing using just arcpy.Point(), without wrapping it in arcpy.PointGeometry(), but that also raised an error.

So, what changed between the file GDB and the eGDB? Why does the file GDB accept a pair of coordinates for the shape field when the eGDB doesn't? How can I get around this?

Thanks in advance.

 

7 Replies
AlfredBaldenweck
MVP Regular Contributor

Well, I figured it out.

Basically, for this workflow, when creating the feature class in the file geodatabase, the geometry field moves to the front.

However, when creating the feature class in the enterprise geodatabase, the geometry field moves to the back of the field list.

To get around this, I did the following:

    with arcpy.da.SearchCursor(tempTop, ['*', 'SHAPE@']) as cursor:
        for row in cursor:
            lRow = list(row)
            del lRow[1]  # drop the raw shape tuple at index 1; the SHAPE@ geometry stays at the end
            inputList.append(lRow)

'''code'''

    with arcpy.da.InsertCursor(nadFC, '*') as cursor:
        for r in inputList:
            cursor.insertRow(r)

This, of course, breaks the exact same way when I try to run this code in the file GDB now, so I have to figure out how to check for that, but at least I have a working product again.
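For what it's worth, one way to check for that difference (a sketch only, reusing nadFC as the destination from the post above) is to ask the destination where its geometry column sits before deciding how to reorder the row:

import arcpy

# Where did the geometry column land in the destination's field order?
# (Index 0 in the file GDB case, last in the enterprise GDB case.)
shapeName = arcpy.Describe(nadFC).shapeFieldName
fieldNames = [f.name for f in arcpy.ListFields(nadFC)]
shapeIndex = fieldNames.index(shapeName)

# With shapeIndex known, the SHAPE@ value pulled from the source can be
# moved into the matching position before calling insertRow().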

 

0 Kudos
VinceAngelo
Esri Esteemed Contributor

Best practice would be to capture the source column names (arcpy.ListFields), replacing the 'Geometry' column with 'Shape@', then using explicit column names for both cursors. Using a wildcard is likely to cause trouble if the tables are altered with a new column.

- V
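A minimal sketch of that approach, reusing tempTop and nadFC from the original post (the field-type checks are an assumption about the schema, not something from the reply above):

import arcpy

# Capture the source column names, swapping the geometry column for the
# SHAPE@ token and skipping the ObjectID, which the destination assigns itself.
fields = []
for f in arcpy.ListFields(tempTop):
    if f.type == 'Geometry':
        fields.append('SHAPE@')
    elif f.type == 'OID':
        continue
    else:
        fields.append(f.name)

# Using the same explicit field list for both cursors keeps the row order
# consistent no matter where each geodatabase puts the shape column.
rows = [row for row in arcpy.da.SearchCursor(tempTop, fields)]
with arcpy.da.InsertCursor(nadFC, fields) as icur:
    for row in rows:
        icur.insertRow(row)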

clt_cabq
Occasional Contributor III

I do this using a nested insert and search cursor; I wonder if that will work for you? Your '''code''' line suggests you are doing some data processing between the search and insert operations, so I'm not sure how that fits this scenario, though you can manipulate field values before the insert in my method below. The way I read your approach, you're essentially building a list of lists, where each item is itself a list representing a record to insert into the new table, and that feels cumbersome.

Here's some code that I use and that runs well (it processes ~30K records in less than a minute), though I will say I'm not pushing data into an EGDB and I'm not sure how much difference that makes. The number of input fields has to match the number of fields in the output, and they have to match the output's field order. I'm also able to manipulate incoming data before the insert, such as applying the current date to a given field or standardizing some values.

# Insert cursor populated from the results of a search cursor.
new_fc = <path to fc>
origin_fc = <path to fc>
# These two field lists have to match in position relative to the output dataset.
outfields = ['SHAPE@', 'field1', 'field2'...'Pgm']
in_fields = ['SHAPE@', 'field1', 'field2'...]
with arcpy.da.InsertCursor(new_fc, outfields) as insertcursor:
    with arcpy.da.SearchCursor(origin_fc, in_fields) as cursor:
        for row in cursor:
            new_row = list(row)
            new_row[12] = <value>  # example of updating a value to something current
            pgm = ''  # empty value for the 'Pgm' field, which exists in the output but not the input
            new_row.append(pgm)  # append the blank value so the row matches outfields
            insertcursor.insertRow(new_row)  # insert the row into the new feature class

 

VinceAngelo
Esri Esteemed Contributor

If the data is small enough*, I prefer to cache the array and not use nested cursors. This is especially true with EGDB feature classes, which do not play well with nesting.

- V

*Nowadays, "small enough" is "less than 10-20 million rows".

BlakeTerhune
MVP Regular Contributor

I might be missing something, but can't you use Append()? If you need to modify values, just run calculate field (or UpdateCursor) after you append the data.
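For reference, a bare-bones sketch of the Append route (the schema_type choice is an assumption; tempTop and nadFC are the variables from the original post):

import arcpy

# Append the source rows into the enterprise feature class in one call.
# "TEST" requires the schemas to match; "NO_TEST" relaxes that check.
arcpy.management.Append(inputs=tempTop, target=nadFC, schema_type="TEST")

# Any per-row tweaks could then be applied with Calculate Field or an UpdateCursor on nadFC.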

AlfredBaldenweck
MVP Regular Contributor

So, I hate hate hate Append() because it always adds in that nasty "originalID" field, so I never use it if I can avoid it.

Unfortunately, while my test data was unversioned, I found out that the final product will be versioned/replicated, so I have to change my approach. I was just overwriting the output file every time, but now I'll have to actually go through and look for differences.

In any case, I think @VinceAngelo's suggestion of getting an actual hard-coded list of fields is worth pursuing. This data is consistent enough that I don't think I need to worry too much about its fields changing in the future, but it never hurts to be safe on purpose. Perhaps something like [f.name for f in arcpy.ListFields(fc) if f.name != "OBJECTID"], then sorted to ensure they're all in the same order?

My process for the output table was to create it using the input as a template, so I had (justifiably) assumed that the field order would be the same. And it was, in a file gdb. But the enterprise GDB popped the Shape field from the front to the very end. I wish I had been able to predict that.

 

VinceAngelo
Esri Esteemed Contributor

Be careful. The polygon area or line length can be in the list of fields. You only need to query one table's columns, so sorting shouldn't be necessary, but you can cross-check to make sure they're all present (again, as a safety thing).

- V
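A hedged sketch of that cross-check (fc stands in for either feature class; areaFieldName and lengthFieldName come back empty for point data, so points pass through untouched):

import arcpy

desc = arcpy.Describe(fc)  # fc is a placeholder for either feature class path

# Geometry, ObjectID, and the auto-maintained area/length columns should not be
# passed to the cursors by name; the SHAPE@ token covers geometry instead.
skip = {desc.shapeFieldName, desc.OIDFieldName}
for prop in ('areaFieldName', 'lengthFieldName'):
    name = getattr(desc, prop, '')
    if name:
        skip.add(name)

fields = [f.name for f in arcpy.ListFields(fc) if f.name not in skip]
fields.append('SHAPE@')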