
ListFields returning blank list

07-26-2016 02:26 PM
BlakeBarr
Deactivated User

Hi Everyone, I am trying to convert dbf to csv using the following code:

import arcpy
import os
import csv


def DBFtoCSV(path):
    '''Convert every DBF table into CSV table.
    '''
    arcpy.env.workspace = path
    tablelist=arcpy.ListTables('*', 'dBASE')
    for table in tablelist:
        outputFile = '{}.csv'.format(table.split('.dbf')[0])
        # Get the fields in the dbf to use for the cursor and csv header row.
        fields = []
        for field in arcpy.ListFields(table):
            fields.append(str(field.name))


        # Make the csv.
        with open(os.path.join(path, outputFile), 'wb') as output:
            dataWriter = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)

            # Write header row.
            dataWriter.writerow(fields)

            # Write each row of data to the csv.
            with arcpy.da.SearchCursor(table, fields) as cursor:
                for row in cursor:
                    dataWriter.writerow(row)

        print('Finished creating {}'.format(outputFile))
   


if __name__ == '__main__': 
    path=r'F:\DataLocation'
    DBFtoCSV(path)

I am getting an error at the SearchCursor that says field names must be a string or a non-empty sequence of strings. I added a print statement, and it turns out my list of fields is empty. Can anyone tell me why this is?

42 Replies
BlakeBarr
Deactivated User

Okay, thanks. Right now I can only open them in OpenOffice. Once I figure out how to access ArcMap at my institution (new job) I will get back to you, but that is very helpful.

BlakeBarr
Deactivated User

Hey Dan,

I just loaded the file into ArcMap. I was able to view it all right, but two fields with dates got wiped and every observation was <NULL> instead of the date.

XanderBakker
Esri Esteemed Contributor

Are the .dbf files part of shapefiles? If that is the case, they will not be found by arcpy.ListTables.

I just did a test on my local machine with this slightly altered code:

import arcpy
import os
import csv

def DBFtoCSV(path):
    '''Convert every DBF table into CSV table.
    '''
    arcpy.env.workspace = path
    tablelist = arcpy.ListTables('*', 'dBASE')
    for table_name in tablelist:
        outputFile = '{}.csv'.format(table_name.split('.dbf')[0])
        table = os.path.join(path, table_name)
        print table

        # Get the fields in the dbf to use for the cursor and csv header row.
        fields = [fld.name for fld in arcpy.ListFields(table)]
        print fields

        # Make the csv.
        with open((os.path.join(path, outputFile)), 'wb') as output:
            dataWriter = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)

            # Write header row.
            dataWriter.writerow(fields)

            # Write each row of data to the csv.
            with arcpy.da.SearchCursor(table, fields) as cursor:
                for row in cursor:
                    dataWriter.writerow(list(row))

        print('Finished creating {}'.format(outputFile))

if __name__ == '__main__':
    # path = r'F:\DataLocation'
    path = r'D:\Temp\test'
    DBFtoCSV(path)

and it printed:

D:\Temp\test\myTable1.dbf
[u'OID', u'OBJECTID', u'COD_DIV_TE', u'COD_PAIS', u'NOM_DIV_TE', u'SHAPE_Leng', u'SHAPE_Area']
Finished creating myTable1.csv
D:\Temp\test\myTable2.dbf
[u'OID', u'OBJECTID', u'COD_DIV_TE', u'NOM_DIV_TE', u'COD_DIV_SU', u'AREA_KM', u'SHAPE_Leng', u'SHAPE_Area']
Finished creating myTable2.csv

Conversion worked, but my shapefile in that folder was not converted. In that case you should also list the feature classes in that folder.
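
If you want to check whether a given .dbf belongs to a shapefile, a quick way is to look for a companion .shp with the same base name. A rough sketch (the path is just a placeholder for your own folder):

import os

path = r'D:\Temp\test'  # placeholder; point this at your own folder
for name in os.listdir(path):
    if name.lower().endswith('.dbf'):
        base = os.path.splitext(name)[0]
        has_shp = os.path.exists(os.path.join(path, base + '.shp'))
        print('{0} -> part of a shapefile: {1}'.format(name, has_shp))

A .dbf with a matching .shp next to it is the attribute table of that shapefile, so it will show up under ListFeatureClasses rather than ListTables.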

BlakeBarr
Deactivated User

Thanks Xander,

I am not sure how to check whether they are part of shapefiles, to be honest. Is there a way to check that?

BlakeBarr
Deactivated User
import dbf
import arcpy
import os
import csv


def DBFtoCSV(path):
    '''Convert every DBF table into CSV table.
    '''
    print path
    arcpy.env.workspace = path
    #tablelist=arcpy.ListTables('*', 'dBASE')
    tables = arcpy.ListFeatureClasses('*.shp')
    tables.extend(arcpy.ListTables('*', 'dBASE'))
    print tables
    for table in tables:
        print table
        outputFile = '{}.csv'.format(table.split('.dbf')[0])
        # Get the fields in the dbf to use for the cursor and csv header row.
        fields = []
        for field in arcpy.ListFields(os.path.join(path,table)):
            fields.append(field.name)
        print fields


        # Make the csv.
        with open(os.path.join(path, outputFile), 'wb') as output:
            dataWriter = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)

            # Write header row.
            dataWriter.writerow(fields)

            # Write each row of data to the csv.
            with arcpy.da.SearchCursor(table, fields) as cursor:
                for row in cursor:
                    dataWriter.writerow(row)

        print('Finished creating {}'.format(outputFile))
   


if __name__ == '__main__': 
    path='F:/Datalocation'
    DBFtoCSV(path)

I added the ListFeatureClasses call using code I found, but I am getting the same problem.

BlakeBarr
Deactivated User

Not sure if this sheds any light on the issue, but when I try to use

import arcpy

def DBFtoCSV(path):
    '''Convert every DBF table into CSV table. 
    '''
    arcpy.env.workspace = path
    #tablelist=arcpy.ListTables('*', 'dBASE')
    tables = arcpy.ListFeatureClasses('*.shp')
    print tables
    tables.extend(arcpy.ListTables('*', 'dBASE'))
    print tables
    for table in tables:
        print table
        outputFile = '{}.xls'.format(table.split('.dbf')[0])
        arcpy.TableToExcel_conversion(table, outputFile)
        
      


if __name__ == '__main__':  
    path='F:/DataLocation'
    DBFtoCSV(path)

I get an error that says the dbf file cannot be opened, which I did not get before. Just trying different approaches in the hope that I can find one that works, haha.
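
In case it matters, I might also check whether arcpy can even see the tables at their full paths before converting. Just a guess on my part, not tested:

import os
import arcpy

path = 'F:/DataLocation'
arcpy.env.workspace = path

for table in arcpy.ListTables('*', 'dBASE'):
    full_path = os.path.join(path, table)
    # arcpy.Exists reports whether arcpy can open the dataset at all
    print('{0} exists: {1}'.format(full_path, arcpy.Exists(full_path)))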

XanderBakker
Esri Esteemed Contributor

Sounds like the data might be corrupt. Can you include the output of the print statements (the tables list is printed twice) to see what it finds?
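
If it helps, something along these lines (an untested sketch, with a placeholder path) would show which tables arcpy can actually read and which ones fail:

import os
import arcpy

path = r'D:\Temp\test'  # placeholder; use the folder that holds your .dbf files
arcpy.env.workspace = path

for name in arcpy.ListTables('*', 'dBASE'):
    table = os.path.join(path, name)
    try:
        count = arcpy.GetCount_management(table).getOutput(0)
        fields = [fld.name for fld in arcpy.ListFields(table)]
        print('{0}: {1} rows, fields: {2}'.format(name, count, fields))
    except Exception as e:
        print('{0}: could not be read ({1})'.format(name, e))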

BlakeBarr
Deactivated User

Yup. I also just received permission from my supervisor to share the directory. Here is the code:

import arcpy
import os
import csv

def DBFtoCSV(path):
    '''Convert every DBF table into CSV table. 
    '''
    arcpy.env.workspace = path
    #tablelist=arcpy.ListTables('*', 'dBASE')
    tables = arcpy.ListFeatureClasses('*.shp')
    print tables
    tables.extend(arcpy.ListTables('*', 'dBASE'))
    print tables
    for table in tables:
        print table
        outputFile = '{}.csv'.format(table.split('.dbf')[0])
        # Get the fields in the dbf to use for the cursor and csv header row.
        fields = []
        for field in arcpy.ListFields(table):
            fields.append(field.name)

        # Make the csv.
        with open(os.path.join(path, outputFile), 'wb') as output:
            dataWriter = csv.writer(output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)

            # Write header row.
            dataWriter.writerow(fields)

            # Write each row of data to the csv.
            with arcpy.da.SearchCursor(table, fields) as cursor:
                for row in cursor:
                    dataWriter.writerow(row)

        print('Finished creating {}'.format(outputFile))




if __name__ == '__main__':  
    path='F:/FERC/raw_data/eqr_transaction_2009/EQRDATA'
    DBFtoCSV(path)

Here is the error:

C:\>python dbf_convert_stack2.py

First print: []

Second print: [u'EQR_TRANSACTION_200903_AL.dbf', u'._EQR_TRANSACTION_200903_AL.dbf', u'EQR_TRANSACTION_200903_MZ.dbf', u'._EQR_TRANSACTION_200903_MZ.dbf', u'eqr_transaction_200903.dbf', u'._eqr_transaction_200903.dbf', u'EQR_TRANSACTION_200906_AL.dbf', u'._EQR_TRANSACTION_200906_AL.dbf', u'EQR_TRANSACTION_200906_MZ.dbf', u'._EQR_TRANSACTION_200906_MZ.dbf', u'eqr_transaction_200906.dbf', u'._eqr_transaction_200906.dbf', u'eqr_transaction_200909_388.dbf', u'._eqr_transaction_200909_388.dbf', u'EQR_TRANSACTION_200909_AL.dbf', u'._EQR_TRANSACTION_200909_AL.dbf', u'EQR_TRANSACTION_200909_MZ.dbf', u'._EQR_TRANSACTION_200909_MZ.dbf', u'eqr_transaction_200909.dbf', u'._eqr_transaction_200909.dbf', u'eqr_transaction_200912_388.dbf', u'._eqr_transaction_200912_388.dbf', u'EQR_TRANSACTION_200912_AL.dbf', u'._EQR_TRANSACTION_200912_AL.dbf', u'EQR_TRANSACTION_200912_MZ.dbf', u'._EQR_TRANSACTION_200912_MZ.dbf', u'eqr_transaction_200912.dbf', u'._eqr_transaction_200912.dbf']

Third print: EQR_TRANSACTION_200903_AL.dbf

Traceback (most recent call last):
  File "dbf_convert_stack2.py", line 39, in <module>
    DBFtoCSV(path)
  File "dbf_convert_stack2.py", line 30, in DBFtoCSV
    with arcpy.da.SearchCursor(table, fields) as cursor:
TypeError: 'field_names' must be string or non empty sequence of strings

My end goal is to import these into Stata, so CSV or Excel doesn't really matter. Just trying anything that may work. It's weird that I get a cannot-open error with TableToExcel but not here.
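
One thing I notice in that second print is all the ._*.dbf entries; those look like macOS sidecar files rather than real tables, so maybe skipping them is worth a try (a rough, untested idea):

import arcpy

path = 'F:/FERC/raw_data/eqr_transaction_2009/EQRDATA'
arcpy.env.workspace = path

# Skip the '._' sidecar entries, which are probably not real dBASE tables.
tables = [t for t in arcpy.ListTables('*', 'dBASE') if not t.startswith('._')]
print(tables)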

XanderBakker
Esri Esteemed Contributor

Could you post the entire code (the version that generates the error)? Right now it seems you posted part of the code, and the error refers to the wrong lines. I also notice that you are trying to convert to Excel now, but still use the dbf output name.

BlakeBarr
Deactivated User

Just updated above. Thanks for your help.
