Delete all table records with special characters # or '

05-18-2020 09:56 AM
2Quiker
Occasional Contributor II

I have a feature class table from which I need to delete the rows whose field values contain special characters such as # or ' (and certain other special characters). I have tried the following, but nothing happens, so I am not coding it correctly. I am in need of some help, please.

A2 = "ArcSDE Personal Server/featureclass"

# sql SubName LIKE '%#'
with arcpy.da.UpdateCursor(A2 , ["OBJECTID","Field"]) as cursor:  
    for row in cursor:      
        if row[1] in ["'", "#"]:
            print ("Deleting record: 'OBJECTID' = {}".format(row[0])) 
            cursor.deleteRow()

7 Replies
JoeBorgione
MVP Emeritus

Does Field contain only a special character, or are there other characters there as well? Your line #6 is written such that it only matches when the content of the field is exactly a single quote or a hash sign. If you have other characters in that field, you may want to consider some regular expression searching. Take a look here: Program to check if a string contains any special character - GeeksforGeeks
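
For example, a rough sketch of that kind of check (the pattern below is only an illustration, not taken from the linked article):

import re

# Matches any character that is not a letter, digit, underscore, or whitespace.
special = re.compile(r"[^\w\s]")

for value in ["Main St", "Lot #12", "O'Brien"]:
    if special.search(value):
        print("Contains a special character: {}".format(value))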

That should just about do it....
2Quiker
Occasional Contributor II

It does, but I am only looking to delete rows that contain a hash sign or an apostrophe.

JoeBorgione
MVP Emeritus

Then your line 6 is backwards.  See if this works:

if "'" or "#" in row[1]:
   do something
That should just about do it....
JoshuaBixby
MVP Esteemed Contributor

Joe, that won't work.  It would have to be written

if "'" in row[1] or "#" in row[1]:
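
Dropped back into the cursor loop from the question, that would look roughly like this (a sketch assuming the same feature class path and field names as in the original post):

import arcpy

A2 = "ArcSDE Personal Server/featureclass"

with arcpy.da.UpdateCursor(A2, ["OBJECTID", "Field"]) as cursor:
    for row in cursor:
        # Delete the row if the field value contains an apostrophe or a hash sign.
        if "'" in row[1] or "#" in row[1]:
            print("Deleting record: 'OBJECTID' = {}".format(row[0]))
            cursor.deleteRow()
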
JoeBorgione
MVP Emeritus

yeah.  that sounds more better!

That should just about do it....
JoshuaBixby
MVP Esteemed Contributor

A different approach would be to use a SQL WHERE clause on your update cursor so that every record returned already meets the criteria; then you can just delete them without having to check each one in Python.

sql = "FIELD LIKE '%''%' OR FIELD LIKE '%#%'"
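
Used on the cursor, that might look roughly like this (FIELD is a placeholder for the actual field name, and the exact quote-escaping rules depend on the underlying database):

import arcpy

A2 = "ArcSDE Personal Server/featureclass"
# A single quote inside a SQL string literal is escaped by doubling it.
sql = "FIELD LIKE '%''%' OR FIELD LIKE '%#%'"

with arcpy.da.UpdateCursor(A2, ["OID@"], where_clause=sql) as cursor:
    for row in cursor:
        # Every row returned already matches the filter, so just delete it.
        print("Deleting record: 'OBJECTID' = {}".format(row[0]))
        cursor.deleteRow()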

BlakeTerhune
MVP Regular Contributor

I know your question has already been answered, but I couldn't resist sharing a couple of little tricks I use. First, consider using the @ field name tokens to access Esri fields more reliably.

# OID@ resolves to the ObjectID field regardless of its actual name.
with arcpy.da.UpdateCursor(A2, ["OID@", "Field"]) as cursor:
    for oid, field in cursor:
        if "'" in field or "#" in field:
            print("Deleting record: 'OBJECTID' = {}".format(oid))
            cursor.deleteRow()

Also used in the sample above is a technique called sequence unpacking, which assigns a variable name to each field value in the cursor row instead of using numeric indexes. This works best with only a few fields.

If you have many fields in your cursor, you can use this helper function to make a dictionary of the values so it's cleaner to access them instead of using the row index like row[1].

from collections import OrderedDict

def row_as_dict(data, fields):
    # Pair each field name with its value so the row can be accessed by name.
    for row in data:
        yield OrderedDict(zip(fields, row))

with arcpy.da.UpdateCursor(A2, ["aField", "anotherField", "moreField"]) as cursor:
    for row in row_as_dict(cursor, cursor.fields):
        if "'" in row["aField"] or "#" in row["anotherField"]:
            print("Deleting record: 'moreField' = {}".format(row["moreField"]))
            cursor.deleteRow()