Hello, I'm currently working with one feature class (FC) containing 400 000 objects and one database table containing 700 objects, in ArcMap 10.4 with a file geodatabase and Python 2.7. What I'm trying to do is match the 700 table values against the FC: treat each of the 700 values as a wildcard pattern (%value%) and find the FC rows whose text contains it. Each of the two tables has a single column that I'm trying to match on. I have not yet found a good way to treat all rows in one column as wildcards to search against the other column.
This is my code so far. It only prints the database table's values; next I want to get the FC values and compare them. I've read that a dictionary might make it easier to extract values, but I've yet to figure out how.
import arcpy, unicodedata, datetime, openpyxl

arcpy.env.workspace = r"C:\Users\kaaper\Documents\DataDrivenPages\DataDrivenPages.gdb"

def rows_as_dicts(cursor):
    """Yield each cursor row as a {field name: value} dict."""
    colnames = cursor.fields
    for row in cursor:
        yield dict(zip(colnames, row))

with arcpy.da.SearchCursor(r"C:\Users\kaaper\Documents\DataDrivenPages\DataDrivenPages.gdb\Ordbok", '*') as sc:
    for row in rows_as_dicts(sc):
        print row['field']
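A minimal sketch of the wildcard-style matching itself, using plain Python lists in place of the two cursors (the sample values below are invented for illustration; in practice each list would be filled from a SearchCursor):

```python
# Hypothetical sample data standing in for the two cursors:
# 'patterns' plays the role of the 700-row table, 'fc_values' the FC column.
patterns = [u"Bergen", u"Oslo", u"Troms"]
fc_values = [u"Bergen sentrum", u"Stavanger havn", u"Tromso bru"]

# Build a dict mapping each pattern to the FC values that contain it,
# i.e. the equivalent of matching '%pattern%' against the FC column.
matches = {}
for p in patterns:
    matches[p] = [v for v in fc_values if p in v]
```

Because the `in` test runs 700 x 400 000 times, it helps to load both columns into memory once (as above) rather than nesting cursors.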
Any input will be appreciated.
Thank you very much, that did exactly what I needed. And I see the new code had to go where I thought it would, but I must admit I haven't used validate much with other tools in Python. Is it just to check whether the results from the tool are readable, or does it get data from the process?
I'm glad it worked for you. In the code I provided, "validate" is just the loop variable holding each string from the dictionary of strings to match against the annotation. It is different from, for instance, a validation class used for script tools in a toolbox; in that case it executes a validation before the tool runs.
In the case above validate could be replaced by any other variable name and this is what it does:
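The original snippet isn't reproduced in this thread excerpt, but the role of the variable can be sketched like this (names and data are illustrative, not the exact code from the answer):

```python
# Illustrative reconstruction: 'validate' is simply the loop variable
# holding each key of the lookup dictionary -- nothing tool-related.
lookup = {u"Bergen": None, u"Oslo": None}   # hypothetical strings to match
annotations = [u"Oslo S", u"Bergenhus"]

for validate in lookup:
    # collect every annotation containing the current string
    lookup[validate] = [a for a in annotations if validate in a]
```

Renaming `validate` to anything else would change nothing about how the code runs.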
While there are many ways to do this, it is important here to consider which perform best.
So in this case it is recommended to either:
i) run the matching as SQL in the database, e.g. through an ORM like SQLAlchemy or Peewee, or
ii) convert the tables to numpy arrays (which internally works somewhat like SQL) and match there.
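To illustrate option i), here is a sketch of pushing the wildcard join into the database using the standard library's sqlite3 module (an ORM such as SQLAlchemy or Peewee would issue an equivalent query; table and column names are invented for illustration):

```python
import sqlite3

# In-memory database standing in for the geodatabase tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE patterns (term TEXT)")
cur.execute("CREATE TABLE fc (name TEXT)")
cur.executemany("INSERT INTO patterns VALUES (?)",
                [(u"Bergen",), (u"Oslo",)])
cur.executemany("INSERT INTO fc VALUES (?)",
                [(u"Bergen sentrum",), (u"Oslo S",), (u"Stavanger",)])

# One query performs every '%term%' comparison inside the engine,
# instead of looping over 700 x 400 000 rows in Python.
cur.execute(
    "SELECT p.term, f.name FROM patterns p "
    "JOIN fc f ON f.name LIKE '%' || p.term || '%'"
)
pairs = cur.fetchall()
```

The same LIKE-join idea carries over to whatever database backs the geodatabase, letting the engine do the heavy lifting.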