
ascii codec can't encode character u'\xe1' in position 6: ordinal not in range(128)

Question asked by cbrannin on Apr 20, 2015

I am trying to convert a DBF to a CSV, but I run into this error when it hits certain characters (in this case á, I think), and I know there are many more. Is there a way to force the output to UTF-8? Is that the right way to handle it? Any help is greatly appreciated!


import arcpy, os, csv

masterTable = r"M:\Spelling_Project\Master_Table\FeaturesTable.dbf"


CSVFile = r"M:\Spelling_Project\Master_Table\FeaturesTable.csv"
fields = arcpy.ListFields(masterTable)
fieldNames = [field.name for field in fields]


with open(CSVFile,'w') as f:
    dw = csv.DictWriter(f,fieldNames)
    dw.writeheader()


    with arcpy.da.SearchCursor(masterTable,fieldNames) as cursor:
        for row in cursor:
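            # UnicodeEncodeError is raised here when a field value contains non-ASCII text such as á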
            dw.writerow(dict(zip(fieldNames,row)))
            print row
            print "converted " +  masterTable + " to a CSV file!"
del row, cursor
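
What is going on: in Python 2 the csv module writes byte strings, so when writerow() gets a unicode value such as u'\xe1' (á) it is implicitly encoded with the default ascii codec and the call fails. Encoding the values to UTF-8 first avoids that. A minimal sketch of the failure and the fix (the sample value is made up, not from my table):


sample = u'Pe\xe1k'  # hypothetical value containing á

try:
    str(sample)  # roughly what the csv writer does, using the default ascii codec
except UnicodeEncodeError as err:
    print err  # 'ascii' codec can't encode character u'\xe1' in position 2 ...

print sample.encode('utf-8')  # explicit UTF-8 encoding yields a byte string csv can write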

*EDIT*

For anyone who finds this helpful, this is what I used to solve it.

import arcpy, os, csv, codecs


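# Encode unicode values to UTF-8 byte strings; the Python 2 csv module only writes byte strings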
def Utf8EncodeArray( oldArray ):
    newArray = []
    for element in oldArray:
        if isinstance(element, unicode):
            newArray.append(element.encode("utf-8"))
        else:
            newArray.append(element)
    return newArray


masterTable = r"M:\Spelling_Project\Master_Table\FeaturesTable.dbf"


CSVFile = r"M:\Spelling_Project\Master_Table\FeaturesTable.csv"
fields = arcpy.ListFields(masterTable)
fieldNames = [field.name for field in fields]


with open(CSVFile, 'wb') as f:  # 'b' mode keeps the csv module from inserting blank rows on Windows (Python 2)
    dw = csv.DictWriter(f,fieldNames)
    dw.writeheader()


    with arcpy.da.SearchCursor(masterTable,fieldNames) as cursor:
        for row in cursor:
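            # encode each value before handing the row to DictWriter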
            dw.writerow(dict(zip(fieldNames,Utf8EncodeArray(row))))
        print "converted " +  masterTable + " to a CSV file!"
del row, cursor
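
A quick sanity check of the helper with a made-up row (not from the table): numbers pass through untouched and only unicode text gets encoded.

sample_row = (1, u'Pe\xe1k', 3.5)  # hypothetical (OID, name containing á, elevation)
print Utf8EncodeArray(sample_row)  # [1, 'Pe\xc3\xa1k', 3.5] -- safe to hand to DictWriter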

Message was edited by: Chris Brannin
