Hi everyone, I am trying to convert DBF files to CSV using the following code:
```python
import arcpy
import os
import csv

def DBFtoCSV(path):
    '''Convert every DBF table into a CSV table.'''
    arcpy.env.workspace = path
    tablelist = arcpy.ListTables('*', 'dBASE')
    for table in tablelist:
        outputFile = '{}.csv'.format(table.split('.dbf')[0])
        # Get the fields in the dbf to use for the cursor and csv header row.
        fields = []
        for field in arcpy.ListFields(table):
            fields.append(str(field.name))
        # Make the csv.
        with open(os.path.join(path, outputFile), 'wb') as output:
            dataWriter = csv.writer(output, delimiter=',', quotechar='"',
                                    quoting=csv.QUOTE_MINIMAL)
            # Write header row.
            dataWriter.writerow(fields)
            # Write each row of data to the csv.
            with arcpy.da.SearchCursor(table, fields) as cursor:
                for row in cursor:
                    dataWriter.writerow(row)
        print('Finished creating {}'.format(outputFile))

if __name__ == '__main__':
    path = r'F:\DataLocation'
    DBFtoCSV(path)
```
I am getting an error at the SearchCursor: it says field names must be a string or non-empty. I added a print statement, and it turns out my list of fields is empty. Can anyone tell me why this is?
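Since `arcpy.ListFields` is returning nothing, one way to double-check the table independently of arcpy is to read the field descriptors straight out of the DBF header, which has a documented layout (field descriptors start at byte 32, one 32-byte record per field, terminated by `0x0D`). This is a minimal sketch; the demo bytes at the bottom are a hand-built two-field header for illustration, not one of your files:

```python
import struct

def dbf_field_info(dbf_bytes):
    """Parse field names, types, and lengths from a DBF file header."""
    # Bytes 8-9 of the main header hold the total header length (little-endian).
    header_len = struct.unpack('<H', dbf_bytes[8:10])[0]
    fields = []
    pos = 32
    # Field descriptors are 32 bytes each, terminated by a 0x0D byte.
    while pos < header_len - 1 and dbf_bytes[pos:pos + 1] != b'\r':
        desc = dbf_bytes[pos:pos + 32]
        name = desc[:11].split(b'\x00')[0].decode('ascii')   # null-padded name
        ftype = desc[11:12].decode('ascii')                  # C, N, D, L, F, M, ...
        length = desc[16]                                    # field length in bytes
        fields.append((name, ftype, length))
        pos += 32
    return fields

# Build a minimal dBASE III header with two fields, purely for demonstration.
def make_descriptor(name, ftype, length):
    return (name.encode('ascii').ljust(11, b'\x00') + ftype.encode('ascii')
            + b'\x00' * 4 + bytes([length]) + b'\x00' * 15)

descs = make_descriptor('ID', 'N', 8) + make_descriptor('NAME', 'C', 20)
header_len = 32 + len(descs) + 1
main = bytes([0x03]) + bytes(7) + struct.pack('<H', header_len) + bytes(22)
dbf = main + descs + b'\r'
print(dbf_field_info(dbf))  # [('ID', 'N', 8), ('NAME', 'C', 20)]
```

For a real file you would call `dbf_field_info(open(r'F:\DataLocation\some_table.dbf', 'rb').read())`. If this prints field names but arcpy's `ListFields` returns nothing, that points at arcpy not recognizing one of the field types rather than at an empty table.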
To be honest, I am not sure. I am going to speak with the contact next week, but all I know now is that the agency that collects this data uses a really old database package; I don't know the specifics, however. Can ArcGIS open a subset? And if not, am I limited to non-ArcGIS methods?
As far as I understand the messages, there are probably two fields with a field type that ArcGIS considers incompatible. If the system is old, it may have an option to export to an ASCII (plain-text) file, which would make the data easier to parse (using Python if necessary). The size of the file or the number of records is surely not the problem.
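If you do get an ASCII export, turning it into CSV with plain Python is straightforward. A minimal sketch, assuming a fixed-width export with a header line; the sample text and the column boundaries in `colspecs` are made up for illustration and would come from the real export's layout:

```python
import csv
import io

# Hypothetical fixed-width ASCII export (the real one would be read from a file).
ascii_export = """\
ID      NAME                STATUS
1       Station A           OK
2       Station B           FAIL
"""

# (start, end) slice positions for each column, taken from the export's spec.
colspecs = [(0, 8), (8, 28), (28, 34)]

out = io.StringIO()  # stands in for open('export.csv', 'w', newline='')
writer = csv.writer(out)
for line in ascii_export.splitlines():
    # Slice each column out of the line and strip the padding spaces.
    writer.writerow([line[a:b].strip() for a, b in colspecs])

print(out.getvalue())
```

If the export is delimiter-separated instead of fixed-width, you can skip `colspecs` and just split each line on the delimiter before handing the rows to `csv.writer`.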
I think it's coming from the cursor, but I could be wrong.