|
POST
|
If you have v10.1+, I would advise you to use the "data access" cursors, as they are much faster than the old cursors. Also, your indentation (line 3 in your code) appears to be off - it shouldn't be indented there. Does your "wtlndUnits1" layer have a field indicating the wetland unit, or do you just want to tag them (basically) using the OBJECTID order as you are doing? You shouldn't have to copy the features to get the sum of area. A single pass with an update cursor should be faster than 2 field calcs. Here's an (UNTESTED) re-write assuming you have a "WETLAND_UNIT_ID" field that you are trying to tag your wetlands with. This would be the fastest way I can think of doing this if you want to take a cursor-based Python approach. However, I suspect it might be faster (like I said above) to use a spatial join, summarize the joined areas, and then a field calc.

searchRows = arcpy.da.SearchCursor("wtlndUnits1", ["SHAPE@", "WETLAND_UNIT_ID"])
for searchRow in searchRows:
    shapeObj, wtlndUnitID = searchRow
    arcpy.SelectLayerByLocation_management("wtlnd4Anlys", "WITHIN", shapeObj, "", "NEW_SELECTION")
    areaList = [r[0] for r in arcpy.da.SearchCursor("wtlnd4Anlys", ["SHAPE@AREA"])]
    totalArea = sum(areaList)
    selectedCount = len(areaList)
    if selectedCount > 1:
        updateRows = arcpy.da.UpdateCursor("wtlnd4Anlys", ["UnitAcres", "WtlndUnitID"])
        for updateRow in updateRows:
            updateRow[0] = totalArea * 0.00024711  #square meters to acres
            updateRow[1] = wtlndUnitID
            updateRows.updateRow(updateRow)
        del updateRows
del searchRow, searchRows
06-09-2014
10:16 AM
|
0
|
0
|
1481
|
|
POST
|
There are many things you could do to improve the code and make it faster/more efficient. However, your desired end result would probably be best accomplished by just using the SpatialJoin tool (to tag the wetlands with what analysis unit they happen to fall within) and then the Frequency tool (to sum the area of the wetlands by wetland analysis unit).
06-03-2014
12:42 PM
|
0
|
0
|
1481
|
|
POST
|
What I do:

import time
time1 = time.clock()
#do stuff
time2 = time.clock()
print "Stuff took " + str(time2 - time1) + " seconds"
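(Note for anyone reading this later: time.clock() was removed in Python 3.8. A Python 3 version of the same pattern, using time.perf_counter(), would look like this - the summation is just stand-in work for the "do stuff" part:)

```python
import time

time1 = time.perf_counter()
# do stuff (stand-in workload for illustration)
total = sum(range(1000000))
time2 = time.perf_counter()
print("Stuff took " + str(time2 - time1) + " seconds")
```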
05-30-2014
12:30 PM
|
1
|
0
|
1357
|
|
POST
|
Can you not just run the search cursor (and populate the dictionary) directly on 'fullViewName'? I'm unsure why you are creating a query table. Note there is a parameter for applying SQL expressions directly in the search cursor, although as I recall, maybe you can't use that parameter unless there is a valid (ESRI-recognized) OID field in the table. To interrogate values in a dictionary, access them by key. So for example:

>>> sampleDict = {"cat": ("Meow Man", 23, 12.34), "dog": ("Bark Bark", 36, 1.432), "mouse": ("Minnie Me", 3, 0.123)}
>>> sampleDict["dog"]
('Bark Bark', 36, 1.432)
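(The same load-once, look-up-by-key pattern works outside of arcpy too. Here's a minimal plain-Python sketch - the rows simulate what a search cursor would yield, and all names/values are made up for illustration:)

```python
# Simulated rows, as a search cursor would yield them: (key, attr1, attr2, attr3)
rows = [
    ("cat", "Meow Man", 23, 12.34),
    ("dog", "Bark Bark", 36, 1.432),
    ("mouse", "Minnie Me", 3, 0.123),
]

# One pass to build the lookup: key -> tuple of the remaining fields
sampleDict = {r[0]: r[1:] for r in rows}

# Interrogate values by key
print(sampleDict["dog"])     # ('Bark Bark', 36, 1.432)
print(sampleDict["dog"][0])  # 'Bark Bark'
```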
05-29-2014
08:23 AM
|
0
|
0
|
883
|
|
POST
|
Woops, I mean: "NAME10 LIKE '%" + letter + "%' OR NAME10 LIKE '%" + letter.upper() + "%'"
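(For reference, plugging a sample letter into that corrected expression in plain Python - the variable name "letter" comes from the thread, the value "a" is just for illustration - produces:)

```python
letter = "a"
sqlExp = "NAME10 LIKE '%" + letter + "%' OR NAME10 LIKE '%" + letter.upper() + "%'"
print(sqlExp)  # NAME10 LIKE '%a%' OR NAME10 LIKE '%A%'
```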
05-28-2014
05:00 PM
|
0
|
0
|
1686
|
|
POST
|
Not sure what's wrong, but suspect something in your SQL. Not sure what all the / are for, but you should be able to get away with this:

"NAME10 LIKE %'" + letter + "'% OR NAME10 LIKE %'" + letter.upper() + "'%"

Also, perhaps instead of:

arcpy.Describe(countylyr).FIDSet

use:

len([r[0] for r in arcpy.da.SearchCursor(countylyr, ["OID@"])])

I would think that the line:

if arcpy.Describe(countylyr).FIDSet:

would always evaluate as true (and execute the code indented underneath it), since you are not explicitly looking to see if the .fidset property actually returned a string of selected OIDs or just a blank string (it returns a semicolon-delimited list of selected OIDs as I recall, otherwise just a blank string... For example '1;2;3;4' vs. just '' if no selection... it does not evaluate to None however (at least in v10.1)).
05-28-2014
04:49 PM
|
0
|
0
|
1686
|
|
POST
|
One idea: Before you run your last "Zonal Statistics" part of your code, try clearing the workspace:

arcpy.env.workspace = ""

Since you are now using full paths to reference your input features, output rasters, and output zone tables, I think this may be what is messing things up.
05-23-2014
02:58 PM
|
0
|
0
|
1963
|
|
POST
|
So 1st thing is to get your MSSQL table into a dictionary... I don't use MySQL, but I imagine you have some sort of database connection file, right? Also, in my example below, I am assuming the MySQL table has a field called OBJECTID that you are using as the join item, but that's probably not the case. OBJECTID is generally a poor choice for a join field since it is not stable. Anyway...

mySqlTable = r"\\mynetwork\myconnectionfile.sde"
mySqlFieldsToReturnList = ["OBJECTID", "SEG_ID", "STL_NAME", "STR_NAME", ...]
rdlkDict = {r[0]: (r[1:]) for r in arcpy.da.SearchCursor(mySqlTable, mySqlFieldsToReturnList)}

So now you should have this awesome dictionary where the key is the join item, and all the field values that you will need in your update cursor are readily available at light speed. The cool thing now is that the dictionary can look up the values from the SQL table a kazillion times faster than the "sql = "SELECT * FROM [DWOP].[dbo].[ufunc_ReturnGGFDataObjectID" thingy was.
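(The dictionary comprehension itself is plain Python - with the cursor swapped out for a short list of mock rows, it can be checked on its own. All the row values below are invented for illustration:)

```python
# Mock rows, as the search cursor would return them:
# (join_key, SEG_ID, STL_NAME, STR_NAME)
mockRows = [
    (101, "S-1", "Main St", "MAIN"),
    (102, "S-2", "Oak Ave", "OAK"),
]

# key = the join item, value = tuple of all the other fields
rdlkDict = {r[0]: r[1:] for r in mockRows}

print(rdlkDict[102])  # ('S-2', 'Oak Ave', 'OAK')
```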
05-23-2014
12:58 PM
|
0
|
0
|
1284
|
|
POST
|
After reading your original post, all you are doing is basically a join/calc... the performance of which still can be improved using a dictionary/update cursor. This is some older code (doesn't use .da cursors, dictionary comprehensions, etc.) but it illustrates the traditional join and calc sort of thing you are trying to accomplish. http://forums.arcgis.com/threads/9555-Modifying-Permanent-Sort-script-by-Chris-Snyder?p=30010&viewfull=1#post30010
05-22-2014
10:56 AM
|
0
|
0
|
1284
|
|
POST
|
Some comments and examples:

1. Use a dictionary, as Richard said it's way faster than using embedded cursors, and your bosses will of course give you a raise, since you improved the performance so much.

2. If the dictionary gets "too big"... that is, it occupies more than ~2.1 GB of RAM, which is the (sort of) limit of 32-bit Python, you can use 64-bit Python along with the 64-bit "ArcGIS_BackgroundGP_for_Desktop_101sp1.exe", which basically turns arcpy (v10.1+) into a 64-bit bad a$$ capable of using gobs of RAM.

Here's an (untested) example of that dictionary calculaty sort of thing... It sucks an existing table into a dictionary, and then updates the table by calcing the ID field value = to the sum of all the (original) ID values that are >= the ID value being processed. Why would you want to do that? I don't know, but it illustrates the general concept.

myTable = r"C:\temp\test.gdb\test"
valueDict = {r[0]: (r[1]) for r in arcpy.da.SearchCursor(myTable, ["OID@", "ID"])}
updateRows = arcpy.da.UpdateCursor(myTable, ["OID@", "ID"])
for updateRow in updateRows:
    oidVal, idVal = updateRow
    updateRow[1] = sum([valueDict[k] for k in valueDict if valueDict[k] >= idVal])
    updateRows.updateRow(updateRow)
del updateRow, updateRows
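(Stripped of the arcpy cursors, the dictionary math in that example reduces to this - the table contents here are invented so the result can be checked by hand:)

```python
# key = OID, value = ID, as the search cursor would have loaded them
valueDict = {1: 10, 2: 20, 3: 30}

# For each row, the new ID = sum of all original IDs >= that row's ID
newVals = {}
for oidVal, idVal in valueDict.items():
    newVals[oidVal] = sum(valueDict[k] for k in valueDict if valueDict[k] >= idVal)

print(newVals)  # {1: 60, 2: 50, 3: 30}
```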
05-22-2014
10:43 AM
|
0
|
0
|
1284
|
|
POST
|
How are the unique values derived for each raster? Are they a function of the raster itself (like the mean pixel value) or is it just some arbitrary number? If it's the latter, here's one (untested) possible solution:

rasterValDict = {"raster1": 11.2, "raster2": 14.9, ..., "raster_37": 9.4}
rasterDirPath = r"C:\temp\where_my_rasters_are"
outputDirPath = r"C:\temp\output_dir"
arcpy.env.workspace = rasterDirPath
rasterList = arcpy.ListRasters()
for raster in rasterList:
    if raster in rasterValDict:
        rasterObj = arcpy.Raster(raster)
        newRaster = rasterObj / rasterValDict[raster] * 100
        newRaster.save(outputDirPath + "\\" + raster)
    else:
        print "ERROR: No matching entry in raster value look up dictionary!"
05-22-2014
08:10 AM
|
0
|
0
|
563
|
|
POST
|
"is there anything to be done about it"

Yes, instead of relying on the "auto-rasterization" of the Zonal Stats, do it yourself explicitly (run the FeatureToRaster/PolygonToRaster/WhateverToRaster tool yourself). Things can go wrong if, for example, your zone field in the vector layer has wonky values such as Null. I always do this step myself so as to catch/prevent issues like this... Another reason to do it yourself is to explicitly control the cell alignment of the raster version of the zone vector layer.

"My data structure is solid with ESRI ArcGIS. All alpha starting characters, no spaces"

In your example above, you use a path of N:\Gabions\NDVI\35_38_May_June\t_t324. In this path you have a folder that starts with the number 3.
05-21-2014
03:54 PM
|
0
|
0
|
1963
|
|
POST
|
Not sure if the trouble lies in:

1. Reading the min/max OID values from the input table (use a search cursor)
2. Calculating a 10% sample of the OID values from the range (see code below)
3. Writing the sampled OIDs to an output table (use an insert cursor)

#2 might look like:

import random
minOid = 1234
maxOid = 5678
samplePct = 0.1
rangeList = [i for i in range(minOid, maxOid + 1)]
sampleCount = int(len(rangeList) * samplePct + .5)
randomSampleList = random.sample(rangeList, sampleCount)
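(Step #2 is testable on its own with made-up min/max OIDs - here 1 to 100, so a 10% sample should come back with exactly 10 OIDs:)

```python
import random

minOid = 1
maxOid = 100
samplePct = 0.1

rangeList = [i for i in range(minOid, maxOid + 1)]
sampleCount = int(len(rangeList) * samplePct + .5)  # +0.5 rounds to nearest
randomSampleList = random.sample(rangeList, sampleCount)

print(sampleCount)             # 10
print(len(randomSampleList))   # 10
```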
05-21-2014
03:19 PM
|
0
|
0
|
992
|
|
POST
|
Some ideas:

1. Make sure your input zone rasters are integer format and have a raster attribute table:

rasterObj = arcpy.Raster(r"C:\temp\myraster.img")
rasterObj.hasRAT

2. Don't use file names/directory names that start with a number.
05-20-2014
10:38 AM
|
0
|
0
|
1963
|
|
POST
|
OIDs are not always sequential... Something like this (untested code) should work:

import random
samplePct = 0.1
oidFieldName = arcpy.Describe(myFC).oidFieldName
oidList = [r[0] for r in arcpy.da.SearchCursor(myFC, ["OID@"])] #you could also use a feature layer here instead of a FC
sampleOidList = sorted(random.sample(oidList, int(len(oidList) * samplePct)))
sqlExp = oidFieldName + " in (" + ",".join([str(i) for i in sampleOidList]) + ")"
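(With the cursor replaced by a mock OID list - deliberately non-sequential - the sampling and SQL-expression steps can be sketched and sanity-checked in plain Python. The field name and OID values below are made up:)

```python
import random

oidFieldName = "OBJECTID"                          # assumed field name
oidList = [3, 7, 11, 15, 19, 23, 27, 31, 35, 39]   # mock OIDs - note: not sequential
samplePct = 0.5

sampleOidList = sorted(random.sample(oidList, int(len(oidList) * samplePct)))
sqlExp = oidFieldName + " in (" + ",".join([str(i) for i in sampleOidList]) + ")"
print(sqlExp)
```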
05-20-2014
09:39 AM
|
0
|
0
|
992
|
| Title | Kudos | Posted |
|---|---|---|
| | 1 | 08-29-2024 08:23 AM |
| | 1 | 08-29-2024 08:21 AM |
| | 1 | 02-13-2012 09:06 AM |
| | 2 | 10-05-2010 07:50 PM |
| | 1 | 02-08-2012 03:09 PM |
Online Status: Offline
Date Last Visited: 08-30-2024 12:25 AM