In a post earlier today (Another da.SearchCursor Issue), through an oversight on my part I couldn't execute a Select Layer By Attribute function. Silly me.... At any rate, I've moved past that issue, but now I'm faced with constructing a where clause that involves membership in a dictionary.
Essentially, I've constructed a dictionary of street types that looks something like this:
typeDict = {"AVE":1,"BLVD":2,"CIR":3} #etc, etc
As best I can tell, to check whether a given string is a key in a dictionary, one can do:
>>> typeDict
{'BLVD': 2, 'AVE': 1, 'CIR': 3}
>>> "AVE" in typeDict
True
>>> "ST" in typeDict
False
>>> "ST" not in typeDict
True
>>> if "somevalue" not in typeDict:
...     print "nope"
...
nope
Okay, easy enough. The problem is constructing a valid where clause for SelectLayerByAttribute() when membership in a dictionary is involved. I've tried dozens of {}.format() approaches, but the red font on my screen looks something like the U.S. spending deficit. I need some advice on how to successfully select a record or records based on the dictionary:
import arcpy

intable = r"C:\Path\To\File.gdb\Table"
arcpy.MakeTableView_management(intable, "view")
typeDict = {"AVE": 1, "BLVD": 2, "CIR": 3}  # etc, etc
fields = ["Type", "StreetName"]
where = "StreetName >= 'A'"  # just concerned with named streets, no numbers
with arcpy.da.SearchCursor("view", fields, where) as cursor:
    for row in cursor:
        Selection = ""  # need to format a where clause that
                        # evaluates row[0] not in typeDict
        arcpy.SelectLayerByAttribute_management("view", "ADD_TO_SELECTION", Selection)
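(For context, something along these lines is what I've been fumbling toward: join the dictionary keys into a SQL NOT IN list and select once over the whole view rather than row by row. Sketch only, using the typeDict, "view", and field names above; I don't know yet if this is the right approach.)

# Sketch only: turn the dictionary keys into a SQL NOT IN list.
keys = ", ".join("'{}'".format(k) for k in typeDict)   # "'AVE', 'BLVD', 'CIR'"
select = "Type NOT IN ({})".format(keys)
arcpy.SelectLayerByAttribute_management("view", "ADD_TO_SELECTION", select)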
(I should just create a signature block with you guys in it; maybe I can convince my boss to put you on retainer....)
Joe... for the back burner... but using FeatureClassToNumPyArray or TableToNumPyArray, even for one column, conversions can be quasi-vectorized easily once you have a couple of steps. Shown verbosely, but in essence: if you already have a dictionary of key/value pairs, it can be converted to an array, and then the whole field can be queried by case rather than by row... a small demo to give you ideas for the future.
import numpy as np

# ---- Start with the beginning fluff of taking the data and making a temporary
# storage array for the output.
# Convert your dictionary... then run the 'for' loop, which queries by case
# rather than by row (this of course can be done all at once, but loops are
# good for now).
#
a = ['AVE', 'Not', 'BLVD', 'Nope', 'CIR', 'AVE', 'AVE', 'BLVD', 'CIR', 'nada']
a = np.array(a)
result = np.full(a.shape, -9, dtype='int32')
typeDict = {"AVE": 1, "BLVD": 2, "CIR": 3}  # ad nauseam
dt = [('Lookfor', '<U5'), ('ReplaceWith', '<i4')]
arr = np.array(list(typeDict.items()), dtype=dt)
ins = arr['Lookfor']
outs = arr['ReplaceWith']
frmt = """
Input data array
{}
Temporary holding array
{}\n
Dictionary to array conversion
{}
lookfor ..... {}
replace with. {}
-----
"""
print(frmt.format(a, result, arr, ins, outs))
for i in range(len(arr)):
    # result[np.where(a == arr['Lookfor'])[0]] = arr['ReplaceWith']
    result[np.where(a == ins[i])[0]] = outs[i]
frmt = """
magic done
In again, then out
{}
{}
"""
print(frmt.format(a, result))
Then the results of the demo are
input data
['AVE' 'Not' 'BLVD' 'Nope' 'CIR' 'AVE' 'AVE' 'BLVD' 'CIR' 'nada']
output result
[-9 -9 -9 -9 -9 -9 -9 -9 -9 -9]
dictionary to array
[('AVE', 1) ('BLVD', 2) ('CIR', 3)]
lookfor ..... ['AVE' 'BLVD' 'CIR']
replace with. [1 2 3]
-----
magic done
In again, then out
['AVE' 'Not' 'BLVD' 'Nope' 'CIR' 'AVE' 'AVE' 'BLVD' 'CIR' 'nada']
[ 1 -9 2 -9 3 1 1 2 3 -9]
# ---- Use arcpy.da.ExtendTable to attach the output result to the original
# table... Works best when the number of cases is obviously smaller than the number of records.
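As a footnote, the 'for' loop can also be collapsed entirely. This is a sketch of my own rather than part of the demo above, and it assumes the lookup keys are already sorted ('AVE' < 'BLVD' < 'CIR'), which np.searchsorted requires.

# Fully vectorized variant of the loop above (a sketch; assumes `ins` is sorted).
import numpy as np

a = np.array(['AVE', 'Not', 'BLVD', 'Nope', 'CIR', 'AVE', 'AVE', 'BLVD', 'CIR', 'nada'])
ins = np.array(['AVE', 'BLVD', 'CIR'])
outs = np.array([1, 2, 3], dtype='int32')

result = np.full(a.shape, -9, dtype='int32')   # sentinel for "no match"
hit = np.isin(a, ins)                          # True where the value is a known street type
result[hit] = outs[np.searchsorted(ins, a[hit])]
# result -> [ 1 -9  2 -9  3  1  1  2  3 -9]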
I might have it now (I think)
Basically you want to check a street address to see if a street type occurs and if it does, you want to produce a new column which reclasses the street type.
Let's consider the following:
from_s = ['Road', 'Street','Lane', 'Court']
to_s = [10, 20, 30, 40]
So in the above, if the address name contains any of the values in from_s, you want to reclass that address to a new type for whatever purpose.
It isn't fully vectorized, but here is what I did in verbose form
# ---- Read in your street index and query fields, shortcutting the searchcursor stuff
#
import arcpy
import numpy as np

in_fc = r'C:\Your_spaceless\path_to_your\address.gdb\streets'
flds = ['OID@', 'Address_']
args = [in_fc, flds, None, None, False, (None, None)]
cur = arcpy.da.SearchCursor(*args)
a = cur._as_narray()
# ----
# ---- Establish your 'from' and 'to' classes
# Make a temporary array for filling and ultimately joining
# ----
from_s = ['Road', 'Street','Lane', 'Court']
to_s = [10, 20, 30, 40]
dt = [('IDs', '<i4'), ('Str_class', '<i4')]
out = np.zeros((len(a),), dtype=dt)
out['IDs'] = np.arange(1, len(a) + 1, dtype='int32')  # one sequential ID per record
#
# ---- Let's rock and roll
cnt = 0
for f in from_s:
    idx = np.array([i for i, item in enumerate(a['Address_'])
                    if f in item])
    out['Str_class'][idx] = to_s[cnt]
    cnt += 1
arcpy.da.ExtendTable(in_fc, 'OBJECTID', out, 'IDs')
The reveal... Is this what you want to do?
I should point out that it would be prudent to use a null value for the new field; this could be None for text fields or a sentinel value for integer fields. For example, right after the out['IDs'] assignment above you could add
out['Str_class'].fill(-9)
Thanks Dan, but that's way more than I need. Boiled down to extreme basics: I want to examine a table of records and determine whether each record has a valid value in a given field. Validity is confirmed by comparing the field value against a list. If the record's field value is confirmed as valid, move on to the next record. If it is confirmed as not valid, write that record to another table, then move on to the next record in the original table for examination....
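(In cursor terms, a minimal sketch of that workflow might look like the following; the paths, the Rejects table, and the field names are placeholders, and the output table is assumed to already exist with the same fields.)

import arcpy

typeDict = {"AVE": 1, "BLVD": 2, "CIR": 3}    # the list/dict of valid values
fields = ["Type", "StreetName"]

in_table = r"C:\Path\To\File.gdb\Table"       # table to examine (assumed path)
out_table = r"C:\Path\To\File.gdb\Rejects"    # pre-existing table for invalid records (assumed)

with arcpy.da.SearchCursor(in_table, fields) as s_cur, \
        arcpy.da.InsertCursor(out_table, fields) as i_cur:
    for row in s_cur:
        if row[0] not in typeDict:            # field value failed the validity check
            i_cur.insertRow(row)              # copy the offending record to the other table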
darn... so I guess you don't want to see my next incarnation... I will leave that to a blog
I'll read it there!
Thanks!