Runtime error
Traceback (most recent call last):
  File "<string>", line 23, in <module>
TypeError: sequence size must match size of the row
import arcpy, collections
from arcpy import env

env.workspace = r"C:\Users\cc1\Desktop\NEW.gdb\WAYNE"
table = "WAYNE"
table2 = arcpy.CreateTable_management("in_memory", "WAYNE")
arcpy.AddField_management(table2, "SOS_VOTERID", "TEXT", field_length=25)
arcpy.AddField_management(table2, "FEAT_SEQ", "LONG")
newList = {row[0]: row[1] for row in arcpy.da.SearchCursor(table, ["SOS_VOTERID", "FEAT_SEQ"])}
tbl = arcpy.ListTables("*")
for table in tbl:
    fieldList = arcpy.ListFields(table)
    for field in fieldList:
        newList.append([table, field.name])  # this populates the new list with table and field to directly insert to new table
with arcpy.da.InsertCursor(table2, ['SOS_VOTERID', 'FEAT_SEQ']) as insert:
    for f in newList:
        insert.insertRow(f)
del insert
import arcpy

inputTbl = r"C:\Users\cc1\Desktop\NEW.gdb\WAYNE"
outputTbl = str(arcpy.CreateTable_management("in_memory", "WAYNE").getOutput(0))
arcpy.AddField_management(outputTbl, "SOS_VOTERID", "TEXT", field_length=25)
arcpy.AddField_management(outputTbl, "FEAT_SEQ", "LONG")
insertRows = arcpy.da.InsertCursor(outputTbl, ["SOS_VOTERID", "FEAT_SEQ"])
searchRows = arcpy.da.SearchCursor(inputTbl, ["SOS_VOTERID", "FEAT_SEQ"])
for searchRow in searchRows:
    insertRows.insertRow(searchRow)
del searchRow, searchRows, insertRows
I think you would probably want some code that looked more like this:
http://forums.arcgis.com/threads/66434-A-better-way-to-run-large-Append-Merge-jobs?p=230850&viewfull...
No need to store the data in a list or dictionary. Just read it via the search cursor and then write it directly to the in_memory table.
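The same read-one-row, write-one-row idea can be sketched with Python's built-in sqlite3 module (the table names and sample rows here are made up purely to illustrate the pattern; in arcpy the SearchCursor and InsertCursor play the same two roles):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wayne_src (SOS_VOTERID TEXT, FEAT_SEQ INTEGER)")
conn.execute("CREATE TABLE wayne_dst (SOS_VOTERID TEXT, FEAT_SEQ INTEGER)")
conn.executemany("INSERT INTO wayne_src VALUES (?, ?)",
                 [("OH0001", 1), ("OH0002", 2), ("OH0003", 3)])

# Stream rows straight from the "search cursor" into the "insert cursor":
# each row is already a tuple of the two fields, so it is written as-is,
# with no intermediate list or dictionary ever built.
src = conn.execute("SELECT SOS_VOTERID, FEAT_SEQ FROM wayne_src")
for row in src:
    conn.execute("INSERT INTO wayne_dst VALUES (?, ?)", row)
```

The key point is that the row tuple's length matches the field list of the destination, which is exactly what the "sequence size must match size of the row" error is complaining about when it doesn't.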
Nice dictionary comprehension, BTW! I'd forgotten that was supported now in Python 2.7... I learned something today.
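For anyone else who missed it: dict comprehensions landed in Python 2.7, so the `{key: value for ...}` form used above works there. A minimal standalone example, using made-up (id, seq) pairs shaped like what the SearchCursor yields:

```python
# Build a lookup of voter ID -> sequence number from (id, seq) pairs.
rows = [("OH0001", 1), ("OH0002", 2), ("OH0003", 3)]
lookup = {voter_id: seq for voter_id, seq in rows}
print(lookup["OH0002"])  # -> 2
```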
It'd look something like this, I think. See how, on each loop of the search cursor, the data (the searchRow tuple) gets fed directly to insertRow?

import arcpy

inputTbl = r"C:\Users\cc1\Desktop\NEW.gdb\WAYNE"
outputTbl = str(arcpy.CreateTable_management("in_memory", "WAYNE").getOutput(0))
arcpy.AddField_management(outputTbl, "SOS_VOTERID", "TEXT", field_length=25)
arcpy.AddField_management(outputTbl, "FEAT_SEQ", "LONG")
insertRows = arcpy.da.InsertCursor(outputTbl, ["SOS_VOTERID", "FEAT_SEQ"])
searchRows = arcpy.da.SearchCursor(inputTbl, ["SOS_VOTERID", "FEAT_SEQ"])
for searchRow in searchRows:
    insertRows.insertRow(searchRow)
del searchRow, searchRows, insertRows
outputTbl = str(arcpy.CreateTable_management("in_memory", "WAYNE").getOutput(0))
I have plenty of memory, but I am not seeing any increase in performance 😞 My workflow consists of creating a new table in memory, adding the fields and indexes, appending the data in, and then performing about 7 field calculations using data access cursors. I have run the process both in memory and on disk, and both take around 23 hours to complete. I have read that once your dataset reaches a certain size, you lose the in-memory performance gains; I guess I am seeing that with my data set. Thanks for all your help!
Clinton