<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Writing data from feature class to delimited text file in Python Questions</title>
    <link>https://community.esri.com/t5/python-questions/writing-data-from-feature-class-to-delimited-text/m-p/743104#M57437</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;Question about exporting ArcGIS feature classes to tilde (~) delimited text files with a Python script, and an intermittent "IOError: [Errno 22] Invalid argument" raised during csv.writer output on Windows Server 2008 R2.&lt;/SPAN&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Sun, 12 Dec 2021 07:37:25 GMT</pubDate>
    <dc:creator>LucasMurray</dc:creator>
    <dc:date>2021-12-12T07:37:25Z</dc:date>
    <item>
      <title>Writing data from feature class to delimited text file</title>
      <link>https://community.esri.com/t5/python-questions/writing-data-from-feature-class-to-delimited-text/m-p/743104#M57437</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;SPAN&gt;I have several feature classes with values that I would like to export to a tilde (~) delimited text file using a Python script. My customer says tildes are the easiest delimiters for him to work with. I found the csv module's writer in Python, and it was working perfectly when I was only exporting 6 fields of data. However, when I added another 6 fields to the export, the script began to crash with the error "IOError: [Errno 22] Invalid argument." After a lot of searching on the web, I've figured out this is an issue with Windows, not Python, and it appears to be related to this article (&lt;/SPAN&gt;&lt;A href="http://support.microsoft.com/default.aspx?scid=kb;en-us;899149" rel="nofollow noopener noreferrer" target="_blank"&gt;http://support.microsoft.com/default.aspx?scid=kb;en-us;899149&lt;/A&gt;&lt;SPAN&gt;). The feature class is about 375 MB. I've already tried adjusting the csv.writer call to open the file as "w+b," as the article suggests, but that didn't make a difference. What's strange is that there's no consistency in when the error pops up: sometimes all of the files are written successfully, sometimes the script crashes after writing only a few lines to the first file, and everything in between.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;I'm at a loss on how to solve this. I'm still pretty new to Python, and I feel like maybe there's a way to break up the data so the script only writes a set number of megabytes at a time. However, I don't know how to do that, or even how to determine how small to make the chunks. The article mentions a 64 MB limit but, as I said, I've seen the script crash after only a few lines were written to the first file (i.e. far less than 64 MB of data). Any help would be greatly appreciated. FYI, I'm running this script on Windows Server 2008 R2. Below is my code.&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;PRE class="lia-code-sample line-numbers language-none"&gt;# ConvertFCtoTXT.py
# Created on: March 20, 2013
# Description: Converts DCYF/OLCR geocoded feature classes to tilde-delimited text files.
# Notes: The logging module requires that this script be called through the DCYF-OLCR Daily Geocode Batch.py script.
# ---------------------------------------------------------------------------------------------------------------

import arcpy, csv, logging
from arcpy import env

# Set up variables
env.workspace = "C:\\Daily Geocode Processes\\DCYF OLCR Daily Geocode\\Workspace\\Workspace GDB.gdb\\GeocodedFeatures"
ProjectGDB = "C:\\Daily Geocode Processes\\DCYF OLCR Daily Geocode\\Workspace\\Workspace GDB.gdb\\GeocodedFeatures"
CubeWorkspace = "\\\\cubed03\\Geocoding"

# Log start date/time.
logging.info("Started ConvertFCtoTXT.py.")

# Create iterator to run process for each feature class in WorkspaceGDB.
for fc in arcpy.ListFeatureClasses():
    InputFC = ProjectGDB + "\\" + fc
    OutputTXT = CubeWorkspace + "\\" + fc.rstrip("_GC") + ".txt"

    # Defines the get_fields generator function, which lets writerow iterate through each row in the table.
    # Yield returns one row at a time, without storing all of the rows in memory, and stops when no more records exist.
    def get_fields(InputFC, Fields):
        with arcpy.da.SearchCursor(InputFC, Fields) as cursor:
            for row in cursor:
                yield row

    # Describe exposes the feature class properties, including the field names
    DescribeFC = arcpy.Describe(InputFC)

    # Selects only the fields necessary for the export
    FieldNames = [field.name for field in DescribeFC.fields if field.name
                  in ["Key", "GCAcc", "Address_Std", "City_Std", "State_Std", "ZIPCode_Std", "County_Std", "X_Coordinate", "Y_Coordinate"]]

    # Defines rows using the get_fields function (see above)
    rows = get_fields(InputFC, FieldNames)

    # Opens the output file, prepares it to write as ~ delimited, and writes the headers and rows to the table
    with open(OutputTXT, 'w+') as out_file:
        out_writer = csv.writer(out_file, delimiter="~")
        out_writer.writerow(FieldNames)  # writes the headers to the file
        for row in rows:
            out_writer.writerow(row)  # writes the rows to the file

    del row
    del rows
    out_file.close()
    print OutputTXT + " done"

del fc

# Log completion date/time
logging.info("Completed ConvertFCtoTXT.py.")&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 12 Dec 2021 07:37:25 GMT</pubDate>
      <guid>https://community.esri.com/t5/python-questions/writing-data-from-feature-class-to-delimited-text/m-p/743104#M57437</guid>
      <dc:creator>LucasMurray</dc:creator>
      <dc:date>2021-12-12T07:37:25Z</dc:date>
    </item>
  </channel>
</rss>

