POST
Is there any way to repair an FGDB? Or, failing that, to get some sort of diagnostics out of it so I know for sure that it is corrupt? Thanks,
Posted 11-24-2010, 11:59 AM
POST
Thanks for the input. Unfortunately, I can't list the contents of the geodatabase without incurring an out-of-memory error. The strangest facet of this is that arcpy.ListFeatureClasses() returns null, yet a feature class can still be manipulated if I already know its name; without being able to list them, though, I don't know the individual names of the feature classes to compress... arcpy.ListFiles() also crashed the last two times I attempted it. Ugh!
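To illustrate what does still work, something like this runs fine when the name is already known (a sketch only; the path and the "roads" name are placeholders):

import arcpy
arcpy.env.workspace = r"C:\data\big.gdb"  # placeholder path to the FGDB

# Compressing a single feature class by its (already-known) name works...
arcpy.CompressFileGeodatabaseData_management("roads")

# ...but listing the names in order to loop over them does not:
print arcpy.ListFeatureClasses()  # returns None here instead of a list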
Posted 11-24-2010, 11:20 AM
POST
Hi All, I've been importing shapefiles into a single file geodatabase by way of a Python script. The database is now 13 GB, and even after compacting it I cannot view or in any way work with it; I just get "out of memory" errors. My computer has 8 GB of RAM. Are geodatabase sizes limited by the amount of installed memory? I am surprised that they would not fall back on virtual memory (swap). Examples of some of the error messages:

Not enough storage is available to complete this operation. (code 8007000E)
Out of memory
Failed to execute CompressFileGeodatabaseData

Thanks for the insights,
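For reference, the compacting step I mentioned is nothing exotic (a sketch; the path is a placeholder):

import arcpy
# Defragment the file geodatabase; this completed without complaint,
# but the out-of-memory errors remained afterwards.
arcpy.Compact_management(r"C:\data\big.gdb")  # placeholder path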
Posted 11-23-2010, 10:59 AM
POST
Yes, and UNC pathnames (as opposed to mapped drive letters) are apparently ALWAYS the way to go... I wonder why that is? I got into the habit of mapping drive letters because a number of programs don't recognize relative paths, and a letter let me point at, for example, an external hard drive or a network share depending on which project I was collaborating on. Unfortunate that it carries such a performance penalty, but at least I know now: shapefiles are importing at a rate of one per 2-3 seconds now, instead of one per 1.5 minutes. (!) Re: SetProduct(), the documentation lists using "import arcinfo" before "import arcpy" as an alternate (legacy) way of doing this. Do you have a preference, or notice a difference? Thanks again,
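For anyone following along, the import-order form looks like this (a minimal sketch of what the help describes):

# Check out an ArcInfo-level license by importing the product
# module before arcpy:
import arcinfo
import arcpy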
Posted 11-22-2010, 02:26 PM
POST
Update: I previously had the code pointing to a locally mapped network drive (i.e. 127.0.0.1/some/directory mapped as the letter W:) so that I didn't have to type a long directory name each time. I changed this to the full path from C: and, for whatever reason, saw roughly a 1000% performance improvement; each shapefile now imports in a matter of seconds. Wow! So it looks like the issue wasn't with the code, but rather with some sort of Windows networking performance instead?
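Concretely, the only change was the path the script points at (paths below are placeholders):

from arcpy import env

# Before: workspace reached through the loopback-mapped drive letter - slow
# env.workspace = r"W:\project\output.gdb"

# After: the very same folder via its real local path - fast
env.workspace = r"C:\project\output.gdb"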
Posted 11-22-2010, 02:18 PM
POST
Hi All, I have the following script running on roughly 15 GB of shapefiles, with the purpose of importing them into a geodatabase. I had previously tried this by finding all of the shapefiles first and passing them to "Feature Class to Feature Class" as one block, but that overloaded the tool. This version uses a loop to import them into the geodatabase one by one. Written this way the code is stable, and it has already processed a few thousand shapefiles successfully, but I am convinced the process is taking much longer than it should. For example, the code takes about 35-40 seconds just to check whether a feature class already exists in the geodatabase. That is far, far too long, and I suspect it has something to do with checking out the ArcInfo license each time an arcpy function is used. Does anyone have ideas about increasing the speed and efficiency? I was considering trying something like this, but the documentation says it is legacy. Thanks for the brainstorming, Matt (I run this code through a normal Python console, not the one in Arc, for stability reasons)
# Import system modules
import sys, os, fnmatch, arcpy
from arcpy import env

### Set user variables here, if no user input desired: ###
searchPath = os.path.abspath("/path/to/data")            # path to search
searchParam = "*.shp"                                    # search parameter - wildcards OK
shapefileType = "Polyline"                               # shapefile type: Polyline, Polygon, Point, etc.
OutputDatabase = os.path.abspath("/path/to/output.gdb")
### End user variables ###

### Check and print the input parameters ###
if os.path.isdir(searchPath):
    print "Search: " + searchPath + " OK"
else:
    print searchPath + "\n is not a valid path!"
    sys.exit(0)
print "Searching for " + searchParam
print "shapefileType = " + shapefileType
if arcpy.Exists(OutputDatabase):  # verify that the output database exists
    print "OutputDatabase = " + OutputDatabase + " OK..."
else:
    print OutputDatabase + "\n is not valid"
    sys.exit(0)
### End variable check ###

env.workspace = OutputDatabase  # set the current workspace to the target database
env.overwriteOutput = True

# Search loop
resultList = []  # collects the paths of any files that fail to import
for root, dirs, files in os.walk(searchPath):  # crawl through the search directory
    for f in files:  # for each file in the current directory
        if fnmatch.fnmatch(f, searchParam):  # check whether the filename matches
            shpDescr = arcpy.Describe(os.path.join(root, f))  # describe the shapefile
            shpType = shpDescr.shapeType  # read the geometry type
            if str(shpType) == shapefileType:  # check whether the geometry type matches
                # Generate a database-friendly feature class name:
                outFeatureClass = arcpy.ValidateTableName("cont" + f.replace(".shp", ""), OutputDatabase)
                if arcpy.Exists(outFeatureClass):  # check whether it was already imported
                    print os.path.join(root, f) + " already imported as " + outFeatureClass
                else:
                    print "Exporting " + f + " to " + outFeatureClass + "..."
                    try:
                        # Export to the geodatabase:
                        arcpy.FeatureClassToFeatureClass_conversion(os.path.join(root, f), OutputDatabase, outFeatureClass)
                    except arcpy.ExecuteError:  # error handling for failed geoprocessing calls
                        print "Error with " + os.path.join(root, f)
                        for msg in range(0, arcpy.GetMessageCount()):
                            if arcpy.GetSeverity(msg) == 2:
                                arcpy.AddReturnMessage(msg)
                        resultList.append("'" + os.path.join(root, f) + "'")
                        resultList.append(";")
                        continue

if resultList:  # only runs if some of the files had problems
    resultList.pop()  # remove the last list value (trailing semicolon)
    resultString = "\"" + "".join(resultList) + "\""  # convert the list to a single string
    outFilesMV = resultString.replace("\\", "\\\\")
    # Write a list of the problem files into the searchPath directory:
    outputFilename = os.path.join(searchPath, "ImportErrors.txt")
    fu = open(outputFilename, 'w')
    fu.write(outFilesMV)
    fu.close()
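One idea I may try for the slow existence check (a sketch only, untested): list the existing feature class names once up front and test membership in Python, instead of calling arcpy.Exists against the geodatabase for every shapefile:

env.workspace = OutputDatabase  # same target geodatabase as above
existing = set(arcpy.ListFeatureClasses() or [])  # one listing call up front

# ...then, inside the os.walk() loop, the arcpy.Exists() test becomes:
if outFeatureClass in existing:
    print os.path.join(root, f) + " already imported as " + outFeatureClass
else:
    arcpy.FeatureClassToFeatureClass_conversion(os.path.join(root, f), OutputDatabase, outFeatureClass)
    existing.add(outFeatureClass)  # keep the cache current as imports succeed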
Posted 11-22-2010, 01:51 PM
POST
Thanks for the updates; I'll monitor the other thread for progress. In the meantime, am I right that support for complex queries is limited with an Access-based database?
Posted 11-19-2010, 12:55 PM
POST
Update: this problem originated from trying to Describe() a non-shapefile. The search parameter should include *.shp to avoid the issue. Oops!
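In other words, the Describe() call needs a guard like this (a minimal sketch; searchPath is a placeholder):

import os, fnmatch, arcpy

searchPath = r"C:\data"  # placeholder
for root, dirs, files in os.walk(searchPath):
    for f in files:
        if fnmatch.fnmatch(f, "*.shp"):  # only Describe actual shapefiles
            print f, arcpy.Describe(os.path.join(root, f)).shapeType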
Posted 11-19-2010, 12:51 PM
POST
Hi All, I'm pretty confused as to the limitations of the various geodatabase formats: personal vs. file. I'm much more familiar with MS Access databases, and have a few built that run through a series of queries to produce an x,y table with calculated fields. One of the fields is calculated using a Row Statistics module; others are the result of a crosstab query. Unfortunately, Arc doesn't like the calculated fields much and refuses to show the query in question in Catalog. I would really like to embrace the new geodatabase format, but my understanding is that this level of querying and data structure is not available? Is there a happy solution for getting my data into Arc without exporting it to Excel first? Thanks for the insights, Matt
Posted 11-19-2010, 11:57 AM