
Conversion from CSV to DBF altering data...

02-04-2016 01:19 PM
DannyLackey
Deactivated User

So, I successfully converted my .csv to a .dbf with this script:

import arcpy
from arcpy import env

env.workspace = r"c:\Output"

inTable = r"c:\Output\test.csv"
outLocation = r"c:\Output"
outTable = "test.dbf"

arcpy.TableToTable_conversion(inTable, outLocation, outTable)

The problem is that in the resulting .dbf file, it's adding a decimal point with trailing zeroes to the value:

750050

becomes

750050.00000000000

How can I avoid this?

30 Replies
DannyLackey
Deactivated User

Here is a sample of my data:

    

X        Y       START_DTTM  END_DTTM    Value
1468050  750150  1/7/2016    1/31/2016    0.13237318
1468100  750150  1/7/2016    1/31/2016   -0.60834539
1468150  750150  1/7/2016    1/31/2016   -0.17450202
1468200  750150  1/7/2016    1/31/2016   -0.34773338
1467700  750100  1/7/2016    1/31/2016   -0.19723971
1467750  750100  1/7/2016    1/31/2016    0.16580771
DarrenWiens2
MVP Honored Contributor

Hmmm. That works for me. Can you get it to work properly through the GUI dialog? Do both X and Y come in as Long?
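
A quick way to check what types the conversion actually produced; a sketch, with the output path assumed from the original script (note that arcpy reports a Long field's type as "Integer" and a floating-point one as "Double"):

import arcpy

# Print each field's name and type in the converted table.
for field in arcpy.ListFields(r"c:\Output\test.dbf"):
    print(field.name, field.type)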

DannyLackey
Deactivated User

You can get it to run programmatically as well?

XanderBakker
Esri Esteemed Contributor

It works for me (using your code), BUT you will need to use Format=TabDelimited in your schema.ini if the file is delimited by tabs.

DannyLackey
Deactivated User

Well, my actual CSV file is comma delimited.  The sample I posted here I opened in Excel and copied and pasted as a table-like structure.  Should it read Format=CommaDelimited?

XanderBakker
Esri Esteemed Contributor

To avoid errors in the interpretation of the file, it is best to attach the actual file instead of pasting it in a different format. The attach option is available (lower right corner) when you activate the advanced editor (link in the upper right corner).

DarrenWiens2
MVP Honored Contributor

Yes, this results in exactly the same output table:

>>> inTable = r"c:\junk\Book3.csv"
... outLocation = r"c:\junk"
... outTable = "test.dbf"
... arcpy.TableToTable_conversion(inTable, outLocation, outTable)

If I change the csv to include a decimal (e.g. 750050.00001), then it gets read as Double, which will display decimal places.

Using the field map, I can force it back to Long (no more decimal places).

The code via Copy As Python Snippet changes to:

arcpy.TableToTable_conversion(
    in_rows="c:/junk/Book3.csv",
    out_path="c:/junk",
    out_name="test.dbf",
    where_clause="#",
    field_mapping='X "X" true true false 4 Long 0 0 ,First,#,c:/junk/Book3.csv,X,-1,-1;'
                  'Y "Y" true true false 8 Long 0 0 ,First,#,c:/junk/Book3.csv,Y,-1,-1;'
                  'START_DTTM "START_DTTM" true true false 8 Date 0 0 ,First,#,c:/junk/Book3.csv,START_DTTM,-1,-1;'
                  'END_DTTM "END_DTTM" true true false 8 Date 0 0 ,First,#,c:/junk/Book3.csv,END_DTTM,-1,-1;'
                  'Value "Value" true true false 8 Double 0 0 ,First,#,c:/junk/Book3.csv,Value,-1,-1',
    config_keyword="#")
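
If you'd rather not maintain that pasted string, the same Long override can be built with arcpy's FieldMappings objects; a sketch, reusing the paths from the snippet above (untested against your data):

import arcpy

in_table = r"c:\junk\Book3.csv"

# Build the default field mappings from the CSV, then override one field.
field_mappings = arcpy.FieldMappings()
field_mappings.addTable(in_table)

# Force Y back to a long integer ("Integer" is arcpy's Field.type name
# for a Long field).
idx = field_mappings.findFieldMapIndex("Y")
fmap = field_mappings.getFieldMap(idx)
out_field = fmap.outputField
out_field.type = "Integer"
fmap.outputField = out_field
field_mappings.replaceFieldMap(idx, fmap)

arcpy.TableToTable_conversion(in_table, r"c:\junk", "test.dbf",
                              field_mapping=field_mappings)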
DannyLackey
Deactivated User

Did you run it with column names like Col1=A Long or Col1=X Double?  Wondering what you may have done differently.

XanderBakker
Esri Esteemed Contributor

This is my schema.ini:

[test2.csv]
ColNameHeader=True
Format=TabDelimited
Col1=X Long
Col2=Y Long
Col3=START_DTTM DateTime
Col4=END_DTTM DateTime
Col5=Value Double

DannyLackey
Deactivated User

SUCCESS!  I used the schema.ini example you just posted, changing ONLY TabDelimited to CSVDelimited.  Thank you!
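
For reference, the whole fix can be scripted by writing the schema.ini next to the CSV before the conversion runs; a sketch, assuming the file name and column layout from this thread:

import arcpy
import os

csv_dir = r"c:\Output"

# Write schema.ini beside the CSV. Format=CSVDelimited marks the file as
# comma delimited, and the Col lines pin the types so X and Y stay Long
# instead of being inferred as Double.
schema = (
    "[test.csv]\n"
    "ColNameHeader=True\n"
    "Format=CSVDelimited\n"
    "Col1=X Long\n"
    "Col2=Y Long\n"
    "Col3=START_DTTM DateTime\n"
    "Col4=END_DTTM DateTime\n"
    "Col5=Value Double\n"
)
with open(os.path.join(csv_dir, "schema.ini"), "w") as f:
    f.write(schema)

arcpy.TableToTable_conversion(os.path.join(csv_dir, "test.csv"),
                              csv_dir, "test.dbf")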
