POST
I have a street file with 5 alternative name fields for each street. I'd like to use all 5 alternatives in the street-range address locator I'm building. However, it looks like the Create Address Locator tool only lets me use 1 alternative name field. One workaround would be to create multiple address locators from the alternative name fields and combine them with a composite address locator, but I thought there might be an easier way.
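In the meantime, the composite workaround can at least be scripted so it isn't five manual tool runs. A minimal sketch: the field names (ALT_NAME1..ALT_NAME5), locator names, and field-map string below are all hypothetical, and the actual arcpy.geocoding tool calls are left as comments because their field-map syntax depends on the locator style and the street file schema.

```python
# Hypothetical sketch: build one locator per alternate-name field, then
# combine them with a composite locator. Names below are made up.

ALT_FIELDS = ["ALT_NAME1", "ALT_NAME2", "ALT_NAME3", "ALT_NAME4", "ALT_NAME5"]

def build_locator_plan(alt_fields):
    """Return one (locator_name, field_map) pair per alternate-name field."""
    plan = []
    for i, field in enumerate(alt_fields, start=1):
        locator_name = "StreetsLocator_Alt%d" % i
        # Illustrative field map only; the real string depends on the style.
        field_map = "'Street Name' %s" % field
        plan.append((locator_name, field_map))
    return plan

plan = build_locator_plan(ALT_FIELDS)
# for locator_name, field_map in plan:
#     arcpy.CreateAddressLocator_geocoding("US Address - Dual Ranges",
#                                          streets_fc, field_map, locator_name)
# The five locators would then be combined with
# arcpy.CreateCompositeAddressLocator_geocoding(...).
```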
04-18-2013 08:58 AM
POST
I have several feature classes with values that I would like to export to a tilde (~) delimited text file using a Python script. My customer says tildes are the easiest delimiters for him to work with. I found the csv.writer class in Python's csv module, and it was working perfectly when I was only exporting 6 fields of data. However, when I added another 6 fields to the export, the script began to crash with the error "IOError: [Errno 22] Invalid argument." After a lot of searching on the web, I've figured out this is an issue with Windows, not Python, and it appears to be related to this article (http://support.microsoft.com/default.aspx?scid=kb;en-us;899149). The feature class is about 375 MB. I've already tried adjusting the csv.writer call to open the file as "w+b," as the article suggests, but that didn't make a difference. What's weird is that there's no consistency in when the error pops up. Sometimes all of the files are written successfully, sometimes the script crashes after writing a few lines to the first file, and it fails literally anywhere in between. I'm at a loss on how to solve this. I'm still pretty new to Python, and I feel like there may be a way to break up the data so that only X megabytes are written at a time. However, I don't know how to do that, or even how to determine how small the pieces need to be. The article mentions a 64 MB limit but, as I said, I've seen the script crash after only a few lines were written to the first file (i.e., far less than 64 MB of data). Any help would be greatly appreciated. FYI, I'm running this script on Windows Server 2008 R2. Below is my code.

# ConvertFCtoTXT.py
# Created on: March 20, 2013
# Description: Converts DCYF/OLCR geocoded feature classes to tilde-delimited text files.
# Notes: Logging module requires this script be called through the DCYF-OLCR Daily Geocode Batch.py script.
# ---------------------------------------------------------------------------------------------------------------
import arcpy, csv, logging
from arcpy import env

# Set up variables
env.workspace = "C:\\Daily Geocode Processes\\DCYF OLCR Daily Geocode\\Workspace\\Workspace GDB.gdb\\GeocodedFeatures"
ProjectGDB = "C:\\Daily Geocode Processes\\DCYF OLCR Daily Geocode\\Workspace\\Workspace GDB.gdb\\GeocodedFeatures"
CubeWorkspace = "\\\\cubed03\\Geocoding"

# Log start date/time.
logging.info("Started ConvertFCtoTXT.py.")

# get_fields yields one cursor row at a time, so the rows are never all held
# in memory. The generator stops when no more records exist.
def get_fields(InputFC, Fields):
    with arcpy.da.SearchCursor(InputFC, Fields) as cursor:
        for row in cursor:
            yield row

# Run the export for each feature class in the workspace GDB.
for fc in arcpy.ListFeatureClasses():
    InputFC = ProjectGDB + "\\" + fc
    # Strip the trailing "_GC" suffix. (rstrip("_GC") would remove any run of
    # trailing "_", "G", or "C" characters, not just the literal suffix.)
    OutputTXT = CubeWorkspace + "\\" + (fc[:-3] if fc.endswith("_GC") else fc) + ".txt"
    # Describe exposes the feature class properties, including the field names.
    DescribeFC = arcpy.Describe(InputFC)
    # Select only the fields necessary for the export.
    FieldNames = [field.name for field in DescribeFC.fields if field.name
                  in ["Key", "GCAcc", "Address_Std", "City_Std", "State_Std",
                      "ZIPCode_Std", "County_Std", "X_Coordinate", "Y_Coordinate"]]
    # Define rows using the get_fields generator (see above).
    rows = get_fields(InputFC, FieldNames)
    # Open the output file and write the header and rows as ~ delimited text.
    with open(OutputTXT, 'w+') as out_file:
        out_writer = csv.writer(out_file, delimiter="~")
        out_writer.writerow(FieldNames)  # writes the headers to the file
        for row in rows:
            out_writer.writerow(row)  # writes the rows to the file
    del rows
    print OutputTXT + " done"

# Log completion date/time.
logging.info("Completed ConvertFCtoTXT.py.")
04-04-2013 12:08 PM
POST
Lance, thanks for the link. I haven't had a chance to test the solutions yet, but I'll try them the next time I come across this problem, which will probably be sooner rather than later. I saw the article said there was a higher chance of the FGDB corrupting after compacting if both ArcCatalog and ArcMap are running. However, I've had this problem show up pretty regularly even with only ArcCatalog running (I've even gotten into the habit of making sure ArcMap is killed in Task Manager before compacting an FGDB). I've already lost one FGDB to this problem. Luckily it was small and easy to recreate, but I now use the tool sparingly and cautiously. I've gotten in the habit of backing up my FGDB before compacting it, and I only compact a database before archiving it. FYI, I'm running version 10 SP2 on Windows XP SP3.
07-12-2011 10:10 AM
POST
Chris, did you ever find a solution to this problem? The same thing is happening to me with a file geodatabase I created in 10. Thanks.

Hello, I am using 10 (SP1) with an ArcInfo license. I was editing some data this morning. After I completed the editing, I compacted the file geodatabase. After compacting, I was no longer able to access the database, receiving a "Failed to connect to database" message. The Description tab states "The item's XML contains errors." This is not an issue of a 9.x versus a 10 geodatabase: I created this in 10 and have been successfully storing and accessing data since I created it. Any ideas, thoughts, or suggestions before I recreate it from scratch? Chris
05-03-2011 12:31 PM
POST
Does anyone know if there's a limit to the number of records Explorer can show from a single feature class? I've got a feature class with over a million records in it. When I try adding it to Explorer 1500, I get an error: "Failed to add Geodatabase layer(s)." When I try adding it as a layer or layer package, the file will load but no symbology shows up. It's only when I break the feature class into smaller pieces that the file will load with no trouble. I'd like to know how small the feature class needs to be for it to load successfully into Explorer. FYI, the feature class is being stored in a file geodatabase created with version 10.
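For what it's worth, the splitting can be scripted as ObjectID ranges once a workable size is found by trial. A sketch (the chunk size and where-clause syntax are illustrative, not a known Explorer limit):

```python
def chunk_oid_ranges(max_oid, chunk_size):
    """Yield (start, end) ObjectID ranges that split a large feature class
    into smaller pieces, e.g. small enough for Explorer to load."""
    start = 1
    while start <= max_oid:
        end = min(start + chunk_size - 1, max_oid)
        yield (start, end)
        start = end + 1

# Each range could drive a selection, e.g.:
# where = '"OBJECTID" >= %d AND "OBJECTID" <= %d' % (start, end)
```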
12-30-2010 03:47 AM
Online Status: Offline
Date Last Visited: 11-11-2020 02:23 AM