I'm really stumped. Simple code here, and it's not working:
import arcpy

# Point the workspace at the file geodatabase, then list its feature classes
arcpy.env.workspace = "C:/Users/Ryan/Routes.gdb"
shplist = arcpy.ListFeatureClasses()
print shplist  # ArcGIS 10.1 runs Python 2, so print is a statement
My Routes.gdb has ~9,000 feature classes in it (what I've been calling shapefiles). Each one is very small, maybe 3 to 50 line features (a route). The list returns empty; the output is [].
If I run this exact code but point it at a Routes.gdb with only 25 feature classes, it returns a correct list. This smaller gdb was generated from the exact same code that generated the large gdb, just run against a very small sample of input data.
I know the ~9,000 feature classes are in Routes.gdb because I can see them in ArcCatalog 10.1; it just takes 30 minutes for them to load in the Catalog tree or in the "select input datasets to Merge" window.
I even tried getting a subset of the feature classes in Routes.gdb by using a wildcard, shplist = arcpy.ListFeatureClasses("Route0*"). Still no go, yet it works against the small gdb.
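In case it helps anyone, here is a sketch of an alternative way to enumerate the contents, assuming your install has arcpy.da.Walk (it appeared around ArcGIS 10.1), which walks the geodatabase rather than building one giant list in a single call. The prefix filter below just mirrors what a "Route0*" wildcard would match; the function names are mine, not from the original code.

```python
def filter_by_prefix(names, prefix):
    """Pure-Python stand-in for a 'prefix*' wildcard match."""
    return [n for n in names if n.startswith(prefix)]

def walk_feature_classes(workspace, prefix=""):
    """Collect feature class names under `workspace` via arcpy.da.Walk."""
    import arcpy  # deferred import so the filter above works without ArcGIS
    found = []
    for dirpath, dirnames, filenames in arcpy.da.Walk(workspace,
                                                      datatype="FeatureClass"):
        found.extend(filter_by_prefix(filenames, prefix))
    return found
```

If Walk also stalls on ~9,000 items, that would at least point at the geodatabase itself rather than ListFeatureClasses.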
I thought file geodatabases were supposed to be able to handle large datasets? I could even understand if a later merge failed on the 2 GB limit I read about everywhere, but I can't even get there without a file list. The listing itself should not fail; at least I have yet to see anyone post that it has a limit.
The only thing I can think of is that the list is too long, in which case a subset of ~300 files should be no problem, but that fails too, as mentioned above.
I'm going to try to wade through the lag of using Merge from ArcCatalog, but that may be the only way to merge large numbers of small feature classes. I would rather not have to leave Python just to do this and then come back, forcing me to run multiple programs instead of one clean, fully functional script.
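For what it's worth, once a name list is in hand by any means, the merge itself doesn't have to be a single giant call. A minimal sketch, assuming arcpy's Merge tool (which accepts a Python list of inputs) and with illustrative names of my own (the batch size and the "merge_part" outputs are placeholders):

```python
def chunks(seq, size):
    """Yield successive slices of `seq` with at most `size` items each."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def merge_in_batches(fc_names, out_fc, batch_size=300):
    """Merge feature classes a batch at a time, then merge the partials."""
    import arcpy  # deferred import so chunks() is usable without ArcGIS
    partials = []
    for n, batch in enumerate(chunks(fc_names, batch_size)):
        part = "merge_part{0}".format(n)      # hypothetical output name
        arcpy.Merge_management(batch, part)   # Merge accepts a list of inputs
        partials.append(part)
    arcpy.Merge_management(partials, out_fc)  # final merge of the parts
```

Batching keeps each individual Merge call small, which is also roughly the "merge in chunks as you go" approach mentioned below.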
What am I doing wrong, or what could I improve on? Thanks!
I'm running this on a server-grade system, octo-core 2.66 GHz with 20 GB of ECC FB-DIMMs, so it's not a wimpy-system issue.
...and yes 🙂 in hindsight (and I've already changed the code to do so) I should have been merging them in small chunks as I generated the files, but it's too late now. This process took over 100 hours to run non-stop (on a RAMDISK) and I don't have time to run it again.