Python newbie problem

03-09-2012 06:45 AM
MichaelFutch
Hi,

I'm running a Python script in ArcGIS 9.3 and running into a problem.  The code cycles through the 800 or so shapefiles and performs the function I need it to, but after some time I start to get the following error message (nested in my print statements):

2012-03-08 12:05:30.544000
Village to District Join Complete
Buffer Complete
NOT NULL = Z://futch//python//outputs//buffer//136_shpbuff.shp
Raster Extraction Complete
Raster to Point Complete
Spatial Join Complete
Field Delete Complete
['id_pseudo', 'id_text', 'BUFF_DIST', 'GRID_CODE']
found 4650229 rows
ERROR:
Traceback Info:
  File "Z:\futch\python\Py4_pointstats.py", line 159, in <module>
    f.write('\n'.join(output))
Error Info:
    <class 'Queue.Empty'>:

Deleting Intermediate Files
136.shp
2012-03-08 13:57:34.357000

I continue to get that error for more and more shapefiles.  If I run the function on 136.shp by itself, it works fine.  Also, if I start with 136.shp, it will work on many more shapefiles before the same error starts again.  I'd like to be able to run the code all at once and not have to restart once the error pops up.  Any ideas would be greatly appreciated!  Code pasted below:


#This script takes individual district polygon, village points, and slope raster data
#to create comma separated value files for each district.  Each entry will be a cell
#georeferenced to a village and containing an indicator for slope greater than 15 degrees.
#Stata file ruggedmerge.do aggregates and collapses to village level data.

import arcgisscripting, sys, os, traceback

from datetime import datetime
print str(datetime.now())
#Create the Geoprocessor object

GP = arcgisscripting.create()

GP.CheckOutExtension("spatial")
GP.AddToolbox("C:/Program Files (x86)/ArcGIS/ArcToolbox/Toolboxes/Data Management Tools.tbx")
GP.AddToolbox("C:/Program Files (x86)/ArcGIS/ArcToolbox/Toolboxes/Conversion Tools.tbx")
GP.AddToolbox("C:/Program Files (x86)/ArcGIS/ArcToolbox/Toolboxes/Analysis Tools.tbx")
GP.AddToolbox("Z:/futch/python/NACT/NACT.tbx")

GP.OverWriteOutput = 1

#Set the input workspace: Feature classes of village points only in this folder
GP.workspace = "Z:/futch/python/outputs/districts"

#Choose the distance in meters for circles around villages - "1000" or "1000;2000"
BufferDistance = "750;1250;1750;2250"

villagefull = "Z://futch//python//outputs//villagefull.shp"
districtpoint = "Z://futch//python//outputs//temp//distpoint.shp"
#Set the clip featureclass
rasterFeatures1 = "Z://futch//python//outputs//reclass"
rasterFeatures2 = "Z://futch/python//outputs//slope_india"
rasterFeatures3 = "Z://futch//python//outputs//indiaproj"
Extract_buff = "Z://futch//python//outputs//extract_buff"
reclass_point = "Z:\\futch\\python\\outputs\\temp\\reclass_point.shp"

#Set the output workspace

outWorkspace = "Z://futch//python//outputs//tables"
outWorkspace2 = "Z://futch//python//outputs//proj"
outWorkspace3 = "Z://futch//python//outputs//buffer"
scratchspace = "Z://futch//python//outputs//scratchspace.gdb"


try:

    #Get a list of the featureclasses in the input folder "Z:/futch/python/outputs/districts"
    #If the function breaks down, move the completed districts to other folder
    fcs = GP.ListFeatureClasses()



    #Loop through the list of feature classes

    fcs.Reset()

    #Iterates over features in district folder
    fc = fcs.Next()


    #Add id_pseudo to village file, based off arcgis FID
    GP.AddField_management(villagefull, "id_pseudo", "text", "", "", "", "", "NON_NULLABLE", "NON_REQUIRED", "")
    GP.CalculateField_management(villagefull, "id_pseudo", "[FID]", "VB", "")



    while fc:

        #Validate the new feature class name for the output workspace.
        GP.workspace = "Z:/futch/python/outputs/districts"

        #Change extent to union in case the intersection is empty, avoids breakdown
        #in spatial join of villages to districts.
        GP.extent = "MAXOF"
        GP.SpatialJoin_analysis(villagefull, fc, districtpoint, "JOIN_ONE_TO_ONE", "KEEP_COMMON", "", "INTERSECTS", "0 Unknown", "")
        GP.extent = "Default"

        print "Village to District Join Complete"

        fc_project = outWorkspace2 + "//" + GP.ValidateTableName(fc,outWorkspace2)+ ".shp"
        bufferFeatureClass = outWorkspace3 + "//" + GP.ValidateTableName(fc, outWorkspace3) + "buff.shp"
        joinFeatureClass = scratchspace + "//" + "join" + GP.ValidateTableName(fc, scratchspace)                                                                
        
        GP.workspace = "Z:/futch/python/outputs"

        #Buffer around each village in meters
        GP.MultipleRingBuffer_analysis(districtpoint,bufferFeatureClass, BufferDistance, "Meters", "distance", "NONE", "FULL")
        print "Buffer Complete"

        #Delete unnecessary fields.
        GP.DeleteField_management(bufferFeatureClass, "Join_Count;NAME")

        ## GP.GetCount will blow up if the shapefile is NULL, so this
        ##  try/except will prevent that from happening.
        try:
            if int(GP.GetCount_management(bufferFeatureClass)) != 0:
                print "NOT NULL = "+ bufferFeatureClass

                #Extract slope rasters around each village
                GP.ExtractByMask_sa(rasterFeatures1, bufferFeatureClass, Extract_buff)
                print "Raster Extraction Complete"

                #Convert Slope raster to point data
                GP.RasterToPoint_conversion(Extract_buff, reclass_point, "VALUE")
                print "Raster to Point Complete"

                #Join point data to buffers around villages
                GP.SpatialJoin_analysis(bufferFeatureClass, reclass_point, joinFeatureClass, "JOIN_ONE_TO_MANY", "KEEP_COMMON", "", "INTERSECTS", "0 Unknown", "")
                print "Spatial Join Complete"

                #Delete unnecessary fields
                GP.DeleteField_management(joinFeatureClass, "Id;POINTID;Join_Count")
                print "Field Delete Complete"


                #Next chunk of code converts pointfile with village data to csv
                #file for easy input into stata.  One file for each district.
                #Ruggedmerge.do will aggregate and collapse to village level data.
                
                GP.workspace = scratchspace ## workspace of the table or feature class
                table = joinFeatureClass ## table or feature class from which the attributes should be exported

                outputpath = outWorkspace + "//" + GP.ValidateTableName(fc,outWorkspace) + ".csv" ## path to the file where the output should be written to
                csvseparator = ',' ## column separator field
                ignorefields = ['ANDHRA_ID','ARUNA_ID','ASSAM_ID','BIHAR_ID','CHANDI_ID','CHHATTIS_I','DADRA_ID','DAMAN_ID','DELHI_ID','DIU_ID','GOA_ID','GUJARAT_ID','HARYANA_ID','HIMACHAL_I','JAMMU_ID','JHARKHAN_I','KARNATAK_I','KERALA_ID','LAKSH_ID','MADHYA_ID','MAHARASH_I','MANIPUR_ID','MEGHALAY_I','MIZORAM_ID','NAGALAND_I','ORISSA_ID','PONDICH_ID','PUNJAB_ID','RAJSTHAN_I','SIKKIM_ID','TAMIL_ID','TRIPURA_ID','ITEM001','UTRANCH_ID','WBENGAL_ID','OBJECTID','Shape','Shape_Leng','Join_Count','distance','POINTID','Shape_Length','Shape_Area'] ##list with fields to ignore

                def print_exception():
                    tb = sys.exc_info()[2]
                    l = traceback.format_tb(tb)
                    l.reverse()
                    tbinfo = "".join(l)
                    pymsg = "ERROR:\nTraceback Info:\n" + tbinfo + "Error Info:\n    " +  str(sys.exc_type)+ ": " + str(sys.exc_value) + ""
                    print pymsg

                def get_fieldnames(fields, ignorefields=[]):
                    fields_output = []
                    for field in iter(fields.next, None):
                        if not field.name in ignorefields:
                            fields_output.append(field.name)
                    return fields_output

                try:
                    fields = GP.listfields(table)
                    fieldnames = get_fieldnames(fields, ignorefields)
                    print fieldnames
                    rows = GP.searchcursor(table)

                    output = []
                    output.append(csvseparator.join(fieldnames))
                    
                    for row in iter(rows.next, None):
                        outputrow = []
                        for fieldname in fieldnames:
                            outputrow.append(str(row.getvalue(fieldname)))
                        outputrow = csvseparator.join(outputrow)
                        output.append(outputrow)
                    print 'found', str(len(output)), 'rows'
                    f = open(outputpath, 'w')
                    f.write('\n'.join(output))
                    f.close()
                except:
                    print_exception()
                    print GP.getmessages(2)

                #Move to the next fc in the list.

                print "Deleting Intermediate Files"
                #GP.Delete_management(Extract_buff,"RasterDataset")
                GP.Delete_management(reclass_point,"")
                GP.Delete_management(joinFeatureClass,"")


        except:
            print "NULL = "+bufferFeatureClass

        print fc
        from datetime import datetime
        print str(datetime.now())

        #iterates to next district
        fc = fcs.Next()



except:

    print GP.GetMessages(2)
DuncanHornby
Michael,

I'm just throwing some ideas around as I'm not so good with Python. I've never used that .join method before, so I had to look at the online help. The format they give is:

string.join(list,separator)
 

you have

f.write('\n'.join(output))


maybe the following will work instead?
string.join(output,'\n')
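A quick check with stand-in data of the str-method form the script uses (in Python 2 the string-module form gives the same result; only the str method survives into Python 3):

```python
# Stand-in list of already-joined CSV rows (hypothetical data):
output = ['id,slope', '1,12', '2,17']
# str.join: the separator string joins the list items.
joined = '\n'.join(output)
```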


Your error log says it found 4650229 rows; that's quite a lot of data! Maybe there is some sort of limit on writing that much to a text file with the method you are using? You could try stepping through the list and writing out each item one at a time.

If none of these work, it may be the dreaded geoprocessor memory leak. I find I cannot run Python scripts for more than a few hundred iterations when calling Spatial Analyst tools; they just bomb out.

Duncan
ChrisMathers
That is a valid format for using the join method and is the one I always use. If this is working on some of the files but not others, you may have a bad file in there somewhere. Try running them in batches and, when you get an error, print the input shapefile that is blowing it up.
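A hypothetical sketch of that batching approach, with `process` standing in for the real per-district work:

```python
def run_in_batches(shapefiles, process, batch_size=50):
    # Run the per-district function over the list in chunks, catching
    # errors so the offending input file can be identified.
    failures = []
    for start in range(0, len(shapefiles), batch_size):
        for fc in shapefiles[start:start + batch_size]:
            try:
                process(fc)
            except Exception as err:
                print('blew up on %s: %s' % (fc, err))
                failures.append(fc)
    return failures
```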