
Import a CSV document to a File Geodatabase using Python

Blog Post created by GISJim121 on Mar 18, 2020

Documentation is a crucial part of any process, and I wanted to share my experience with importing a CSV file into a file geodatabase.  My CSV file has a mixed bag of field types that need to be correctly assigned during the import.  However, if you accept all the defaults when importing a CSV file, you will inherit the 8,000-character default length for Text fields, along with other incorrectly assigned data types.
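Before deciding on types and lengths, it helps to eyeball the raw columns and a few sample values.  Here is a minimal, hypothetical helper that uses only Python's standard csv module (the function name and path are my own illustration, not part of the workflow below):

```python
import csv

def preview_csv(path, sample_rows=3):
    """Return {column name: first few values} so you can decide what
    geodatabase type and length each field should get."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # Grab at most sample_rows rows for a quick look at the values
        samples = [row for _, row in zip(range(sample_rows), reader)]
    return {name: [row[i] for row in samples] for i, name in enumerate(header)}
```

Running this against the CSV makes it obvious, for example, which columns look numeric but should really be stored as Text (leading zeros, parcel identifiers, and so on).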

 

In my particular scenario, I needed to explicitly define the data types and sizes of the fields.  My code was originally built from scratch, but I ran into issues controlling the data types.  So I built a model and exported the process to Python; for some odd reason I assumed the result would be correct.  The code did work, but the output data types were still off (Text fields with a length of 8,000).

 

So, I decided to use the FieldMappings, FieldMap, and Field objects to control how the data was imported into my file geodatabase.  Unfortunately, there was a lack of documentation on how to do this with a CSV file.  That is why I’m here, writing to you, so you don’t have to struggle like I did.

 

My CSV document contained the following fields: objectid, maptaxlot, parcel_number, mapsymbol, soil_name, uniqueid, primecode, classid, farmvalue, soilperc.  To control the data and data types, I had to define a FieldMappings object and then create individual FieldMap objects for each field I wanted to control.  Below are a few examples of how I did this.  I’m sure there is a more elegant way of performing this task, but the lack of documentation proved challenging.

 

# Create field mapping object

fms = arcpy.FieldMappings()

 

# Setup how I want the maptaxlot field imported

fm_maptaxlot = arcpy.FieldMap()

fm_maptaxlot.addInputField(<csv_file>, "maptaxlot")

# The outputField property returns a Field object that lets you control the data type, length, and precision

f_maptaxlot = fm_maptaxlot.outputField

f_maptaxlot.name = "maptaxlot"

f_maptaxlot.length = 14

fm_maptaxlot.outputField = f_maptaxlot

  

# Setup how I want the parcel number field imported

fm_parcelnumber = arcpy.FieldMap()

fm_parcelnumber.addInputField(<csv_file>, "parcel_number")

f_parcelnumber = fm_parcelnumber.outputField

f_parcelnumber.name = "parcel_number"

f_parcelnumber.type = "TEXT"

f_parcelnumber.length = 8

fm_parcelnumber.outputField = f_parcelnumber

 

# Note: with the default import method, the parcel_number field came in as a Long Integer, which is why the TEXT type is set explicitly above

 

# Setup how I want the soil_name field imported

fm_soilname = arcpy.FieldMap()

fm_soilname.addInputField(tbl_tax_soilvalues_csv, "soil_name")

f_soilname = fm_soilname.outputField

f_soilname.name = "soil_name"

f_soilname.length = 75

fm_soilname.outputField = f_soilname

 

# Setup how I want the percentage field imported

fm_soilperc = arcpy.FieldMap()

fm_soilperc.addInputField(tbl_tax_soilvalues_csv, "soilperc")

f_soilperc = fm_soilperc.outputField

f_soilperc.name = "soilperc"

f_soilperc.type = "DOUBLE"

f_soilperc.precision = 5

f_soilperc.scale = 2

fm_soilperc.outputField = f_soilperc

 

# After all the FieldMap objects are defined, add them to the FieldMappings object (the fms object defined above)

fms.addFieldMap(fm_maptaxlot)

fms.addFieldMap(fm_parcelnumber)

fms.addFieldMap(fm_soilname)

fms.addFieldMap(fm_soilperc)

 

# Process Table to Table

arcpy.TableToTable_conversion(<csv_file>, <file_geodatabase>, <output_table_fgdb>, "", field_mapping=fms)
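As I said above, there is probably a more elegant way to do this.  One option is to drive the FieldMap creation from a small spec table so you don't repeat the same five lines per field.  This is a sketch of my own (the helper name and spec dict are not from the original code); it takes the arcpy module as a parameter so the logic can be exercised even without ArcGIS installed, and it mirrors the same outputField pattern used above:

```python
# Hypothetical spec table for the fields from this post.
FIELD_SPECS = {
    "maptaxlot":     {"type": "TEXT",   "length": 14},
    "parcel_number": {"type": "TEXT",   "length": 8},
    "soil_name":     {"type": "TEXT",   "length": 75},
    "soilperc":      {"type": "DOUBLE", "precision": 5, "scale": 2},
}

def build_field_mappings(arcpy, csv_file, specs):
    """Create a FieldMappings object with one FieldMap per entry in specs."""
    fms = arcpy.FieldMappings()
    for name, spec in specs.items():
        fm = arcpy.FieldMap()
        fm.addInputField(csv_file, name)
        # outputField returns a Field object; set its properties, then
        # assign it back (assigning back is required with real arcpy)
        out = fm.outputField
        out.name = name
        for attr, value in spec.items():
            setattr(out, attr, value)   # type, length, precision, scale
        fm.outputField = out
        fms.addFieldMap(fm)
    return fms
```

You would then call fms = build_field_mappings(arcpy, <csv_file>, FIELD_SPECS) and pass fms to TableToTable as before.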

 

I’m hoping this document will help others when it comes to understanding how to utilize the FieldMappings, FieldMap, and Field objects to control data types when importing from a CSV file.

 

Thanks
