Why does ArcMap change the order of fields when importing a txt?

10-19-2017 03:28 AM
Esri Regular Contributor

At first I thought I should post this as a discussion, but I am actually interested in an answer...

More details on my question:

When I open (drag & drop) a txt file (table) in my mxd, the order of the fields is changed seemingly at random compared to what I see in Excel or a text editor. For example, the first two columns end up being columns 18 and 19, and column 8 becomes column 20. When I export the table to the gdb, I get my old order back.

It is not really a problem at the moment, but I am a bit confused about why ArcMap does this.

6 Replies
MVP Esteemed Contributor

[image missing]

MVP Esteemed Contributor

But on a guess, it could be an internal sorting issue... Python's default sort goes by character code, so all uppercase letters come before all lowercase ones:

a = ['A', 'a', 'aA', 'B', 'b']

a
Out[2]: ['A', 'a', 'aA', 'B', 'b']

a.sort()

a
Out[4]: ['A', 'B', 'a', 'aA', 'b']

If not, it's a good lesson that what is expected isn't always what 'is'.

Esri Regular Contributor

This is how the table looks after I opened the txt in ArcMap:

[image: txt after import into ArcMap]

This is how the table looks after I exported it to the gdb (which shows the correct order of columns):

[image: table after export into gdb]

So what I can say (just seeing it now :-)) is that ArcMap seems to display first the columns whose original names are valid field names, and after those the columns whose original names contain blanks or special characters and are therefore converted into alias names.

Good to know...
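The behavior described above can be sketched in Python. This is only a guess at the rule, not ArcMap's actual code: columns whose names are already valid identifiers are listed first, and columns whose names need an alias (blanks, special characters) come after. The `is_valid_field_name` helper and the sample column names are my own assumptions for illustration.

```python
def is_valid_field_name(name):
    """Rough stand-in (an assumption) for ArcMap's field-name rules:
    letters, digits, underscores, not starting with a digit."""
    return name.isidentifier()

def display_order(columns):
    # Valid names keep their relative order and come first;
    # names that would be converted to aliases follow.
    valid = [c for c in columns if is_valid_field_name(c)]
    aliased = [c for c in columns if not is_valid_field_name(c)]
    return valid + aliased

cols = ["ID", "Station Name", "X_Coord", "Temp (C)", "Year"]
print(display_order(cols))
# ['ID', 'X_Coord', 'Year', 'Station Name', 'Temp (C)']
```

Exporting to a gdb would then restore the original order, since every field gets a valid name at that point.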

MVP Esteemed Contributor

It seems to drop a lot of fields as well.

It would have been useful to see the original and alias name lists for testing purposes.

Esri Regular Contributor

It's a very long table, which is why not all field names are visible. From what I have found so far, none of the fields are dropped; they are just moved to a different column.

I am happy to share a file with the field names with you (will come by PM).

MVP Esteemed Contributor

thanks
