POST
How about this: I added an optional parameter you can supply if you want the function to read the new value from a field. Pass the field name in iList[3] and call with updateType="Field"; otherwise it expects iList[3] to hold a literal value.

def queryUpdateField(iList, updateType="Value"):
    aTable = iList[0]
    aWhereClause = iList[1]
    aUpdateField = iList[2]
    aUpdateExpression = iList[3]
    rows = arcpy.UpdateCursor(aTable, aWhereClause)
    try:
        for row in rows:
            if updateType == "Field":
                newValue = row.getValue(aUpdateExpression)  # read the value from the named field
            else:
                newValue = aUpdateExpression  # use the supplied value as-is
            row.setValue(aUpdateField, newValue)
            rows.updateRow(row)
        del rows, row
        logger.info("Updated %s field in the FC: %s with the expression %s (where clause: %s)." % (iList[2], iList[0], iList[3], iList[1]))
        operationsCounter()
    except:
        del rows
        logger.error("Error in function queryUpdateField.", exc_info=True)
        print arcpy.GetMessages()

So to use this function with a supplied value in iList[3] you can call it as:

queryUpdateField(iList)

Or, if you want it to read from a field (whose name is in iList[3]):

queryUpdateField(iList, "Field")

The other thing you could look at is the Python function eval() or exec(), but that is generally not recommended and I think the above is fine.
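The value-vs-field dispatch above can be sketched without arcpy; here is a minimal, hypothetical stand-in that uses a plain dict in place of a cursor row (all names are made up for illustration):

```python
def resolve_update_value(row, expression, update_type="Value"):
    # Mirrors the branch in queryUpdateField: either treat 'expression'
    # as a literal value, or as the name of a field to read from the row.
    if update_type == "Field":
        return row[expression]  # read the value from the named field
    return expression           # use the supplied value as-is

# Hypothetical row, standing in for an arcpy cursor row
row = {"MAOP": 60, "OPERATINGPRESSURE": 0}
print(resolve_update_value(row, 99))               # literal value
print(resolve_update_value(row, "MAOP", "Field"))  # value read from a field
```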
Posted 01-08-2016, 08:37 AM

POST
I like to triple-quote my where clauses. Here is a working version of your own script:

arcpy.env.workspace = "C:/DuluthGIS/SchemaUpdate/SDESchemaTest.gdb"
workspace = "C:/DuluthGIS/SchemaUpdate/SDESchemaTest.gdb"  # workspace
aTable = workspace + "/Gas/gasDistributionMain"  # this works
aWhereClause = """ "MAOP" > 0 """  # non-parameterised example
aWhereField = 'MAOP'
aWhereValue = 0
aWhereClause = """ "%s" > %s """ % (aWhereField, aWhereValue)  # parameterised example
aUpdateField = "OPERATINGPRESSURE"  # this works
print aWhereClause  # debug
rows = arcpy.UpdateCursor(aTable, aWhereClause)
for row in rows:
    #row.setValue(aUpdateField, row.getValue(aWhereField))  # probably a crash
    row.setValue(aUpdateField, "'" + row.getValue(aWhereField) + "'")  # if this one doesn't work, try the line above!
    rows.updateRow(row)
del rows, row
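As a side note, the parameterised clause is just ordinary string formatting, so it can be wrapped in a small helper and tested away from arcpy (a sketch; the helper name is invented):

```python
def build_where(field, value):
    # Triple quotes keep the embedded double quotes around the
    # field name readable, e.g.  "MAOP" > 0
    return """ "%s" > %s """ % (field, value)

print(build_where("MAOP", 0))
```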
Posted 01-08-2016, 07:57 AM

POST
This is probably highly unrecommended: I use this SQL query to view my domain values outside an Arc environment (MSSQL):

SELECT codedValue.value('Code[1]', 'nvarchar(max)') AS "Code",
       codedValue.value('Name[1]', 'nvarchar(max)') AS "Value"
FROM sde.GDB_ITEMS AS items
INNER JOIN sde.GDB_ITEMTYPES AS itemtypes ON items.Type = itemtypes.UUID
CROSS APPLY items.Definition.nodes('/GPCodedValueDomain2/CodedValues/CodedValue') AS CodedValues(codedValue)
WHERE itemtypes.Name = 'Coded Value Domain'

You can also limit it to just one of your domains, as I do here to view the code/value pairs of my domain "DOM_STATUS":

SELECT codedValue.value('Code[1]', 'nvarchar(max)') AS "Code",
       codedValue.value('Name[1]', 'nvarchar(max)') AS "Value"
FROM sde.GDB_ITEMS AS items
INNER JOIN sde.GDB_ITEMTYPES AS itemtypes ON items.Type = itemtypes.UUID
CROSS APPLY items.Definition.nodes('/GPCodedValueDomain2/CodedValues/CodedValue') AS CodedValues(codedValue)
WHERE itemtypes.Name = 'Coded Value Domain' AND items.Name = 'DOM_Status'

I have never tried, but maybe you could try writing to these tables too -- I stick to using the Arc tools so far!
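If you only need the code/name pairs, you can also pull the Definition XML out of GDB_ITEMS with any client and parse it yourself. A sketch with Python's xml.etree, using a simplified stand-in for the domain XML (the real Definition blob carries extra namespaces and attributes, so treat the sample below as an assumption):

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical example of a coded value domain Definition
SAMPLE = """<GPCodedValueDomain2>
  <CodedValues>
    <CodedValue><Code>1</Code><Name>Active</Name></CodedValue>
    <CodedValue><Code>2</Code><Name>Abandoned</Name></CodedValue>
  </CodedValues>
</GPCodedValueDomain2>"""

def coded_values(definition_xml):
    # Same path as the SQL XPath: /GPCodedValueDomain2/CodedValues/CodedValue
    root = ET.fromstring(definition_xml)
    return [(cv.findtext("Code"), cv.findtext("Name"))
            for cv in root.findall("./CodedValues/CodedValue")]

print(coded_values(SAMPLE))
```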
Posted 01-06-2016, 08:00 AM

POST
The answer is to just use .py files. I know of no case where a .pyc is required, and there is no practical difference between using a .py and a .pyc apart from a tiny (insignificant) speed boost when loading a .pyc. If you are doing GIS, any GIS task will cost 1000x more than compiling a .py file, so you are wasting your time if you are doing this for performance reasons.
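For the record, a .pyc is only a cached compile of the .py, and you can produce one explicitly with the standard library if you ever need to; a quick sketch:

```python
import os
import py_compile
import tempfile

# Write a throwaway module and compile it to bytecode explicitly.
src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as f:
    f.write("GREETING = 'hi'\n")

pyc = src + "c"
py_compile.compile(src, pyc)  # all this saves later is the parse/compile step
print(os.path.exists(pyc))
```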
Posted 01-06-2016, 07:48 AM

POST
Hi there, we do use that within our organisation and need to look at using it for more! For now we have switched to ArcReader for a simple offline product, and it seems to be a lot more reliable than ArcExplorer, so we are very happy!
Posted 11-18-2015, 03:13 AM

POST
Our organisation requires a lightweight LOCAL (non-internet-based) viewer for use where internet connectivity is not available. We have already invested heavily in ArcGIS Online / Server within our workflows, so what benefits does this new Explorer have over using ArcGIS Online itself? What should we use when we are not able to fall back on our ArcGIS Online maps? We currently use ArcGIS Explorer Desktop with an exported file GDB for these cases... please can you recommend what we should use in the future?
Posted 11-09-2015, 05:03 AM

POST
Hi all, has anyone found any method to create ArcGIS Explorer map files (.nmf) using ArcPy / Python? Please can you share a link to any documentation or example code! Thanks,
Posted 11-04-2015, 03:57 AM

POST
Are you sure that when you ran the tool, your data was not joined to another dataset? A join causes the output field names to come out as dataset_Field, which can then get truncated to dataset_. So was one of the datasets in your join called geocoding?
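The truncation itself is easy to reproduce: shapefile field names are capped at 10 characters, so a qualified name like geocoding_Status loses its tail. A toy illustration (the helper is hypothetical):

```python
def joined_field_name(dataset, field, limit=10):
    # A join qualifies the field as dataset_Field; shapefile outputs
    # then truncate the result to 'limit' characters.
    return (dataset + "_" + field)[:limit]

print(joined_field_name("geocoding", "Status"))  # the tail is cut off
```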
Posted 09-24-2015, 06:44 AM

POST
Your crash is because one of the "layers" returned from ListLayers() does not support the dataSource property, as seen in the error message:

NameError: The attribute 'dataSource' is not supported on this instance of Layer.

A common reason for this is having "Group" layers in your table of contents: these are classed as layers and returned by the ListLayers() function, but they obviously have no dataSource property, which would cause the crash you see. I'm not sure of the 'proper way' to catch this, but I think this should work (not tested, sorry!):

try:
    if lyr.dataSource == r"D:\PROJECTS\zfonGivatShmuel\gis\layers\6_9_15\gvul.shp":
        arcpy.mapping.RemoveLayer(df, lyr)
        print 'remove'
except:
    arcpy.AddMessage("Layer skipped as it does not support the dataSource property, layer name: " + lyr.name)

Or the full revised script:

import arcpy, os, sys
import arcpy.mapping
from arcpy import env

env.workspace = r"C:\Project"
counter = 0
for mxdname in arcpy.ListFiles("*.mxd"):
    print mxdname  # print list of mxd's in the folder
    mxd = arcpy.mapping.MapDocument(r"C:\Project\\" + mxdname)
    df = arcpy.mapping.ListDataFrames(mxd, "Layers")[0]
    for lyr in arcpy.mapping.ListLayers(mxd, "", df):
        try:
            if lyr.dataSource == r"D:\PROJECTS\zfonGivatShmuel\gis\layers\6_9_15\gvul.shp":
                arcpy.mapping.RemoveLayer(df, lyr)
                print 'remove'
        except:
            arcpy.AddMessage("Layer skipped as it does not support the dataSource property, layer name: " + lyr.name)
    mxd.save()
    del mxd
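The skip-on-missing-attribute pattern can be tested without arcpy using stand-in layer classes. Note that arcpy raises its own NameError-style exception for unsupported properties, while plain Python objects raise AttributeError; everything below is a made-up mock of the idea:

```python
class GroupLayer(object):
    # Stand-in for a group layer: it has no dataSource attribute at all.
    def __init__(self, name):
        self.name = name

class FeatureLayer(object):
    def __init__(self, name, dataSource):
        self.name = name
        self.dataSource = dataSource

def matching_layers(layers, target):
    # Same try/except guard as above: layers without dataSource are skipped.
    hits = []
    for lyr in layers:
        try:
            if lyr.dataSource == target:
                hits.append(lyr.name)
        except AttributeError:
            pass  # group layers land here instead of crashing the loop
    return hits

layers = [GroupLayer("roads group"), FeatureLayer("gvul", "X.shp")]
print(matching_layers(layers, "X.shp"))
```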
Posted 09-24-2015, 05:33 AM

POST
Please test your script using just the top 5 lines of your CSV file; if it still does not work, please post those few lines here for us to look at! (We think it may be your CSV data -- for example, you may be trying to save a "String" into a "Number" field.)
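A quick way to find the offending values is to try converting each supposedly numeric cell yourself before loading. A rough sketch (the helper and field names are invented):

```python
def find_bad_numbers(rows, numeric_fields):
    # Return (row_index, field, value) for every value that will not
    # parse as a number -- a typical cause of string-into-number errors.
    bad = []
    for i, row in enumerate(rows):
        for field in numeric_fields:
            try:
                float(row[field])
            except (ValueError, TypeError):
                bad.append((i, field, row[field]))
    return bad

sample = [{"MAOP": "60"}, {"MAOP": "n/a"}]
print(find_bad_numbers(sample, ["MAOP"]))
```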
Posted 09-18-2015, 07:24 AM

POST
It's not what you want, but sometimes cutting back our expectations is the easier option... How about two root "Groups" in the table of contents, mxdA\Layers and mxdB\Layers? Then your code just swaps which group is currently enabled.
Posted 09-18-2015, 07:12 AM

POST
Basic Python question here! You get just the mxd names because the if statement is never satisfied. To resolve, try changing your code as follows, then you should be able to work out what has happened:

import arcpy
from arcpy import env

env.workspace = r"D:\PROJECTS\ab\gis"
counter = 0
for mxdname in arcpy.ListFiles("*.mxd"):
    print mxdname
    oldText = 'land use'
    oldText = oldText + '\n'
    oldText = oldText + 'fuel'
    mxd = arcpy.mapping.MapDocument(r"D:\PROJECTS\ab\gis\\" + mxdname)
    myString = 'free fuel'
    myString = myString + '\n'
    myString = myString + 'gas'
    print "We are looking for a value of: "
    print oldText
    for elm in arcpy.mapping.ListLayoutElements(mxd, "TEXT_ELEMENT"):
        if elm.text == oldText:
            print elm.name
            elm.text = myString
        else:
            print "Non Matching Element value of:"
            print elm.text
    mxd.save()
    counter = counter + 1
    del mxd
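The reason the if statement never fires is usually the exact content of the text element: the comparison is strict, so every character, including the embedded newline, must match. A tiny demonstration:

```python
# Build the search string the same way as in the script above.
oldText = 'land use'
oldText = oldText + '\n'
oldText = oldText + 'fuel'

# elm.text == oldText is an exact comparison:
print('land use\nfuel' == oldText)  # matches
print('land use fuel' == oldText)   # a space instead of a newline fails
```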
Posted 08-11-2015, 07:34 AM

POST
When you run a script as a tool, depending on configuration, it does not add the 'layers' created in the script to your MXD; they exist in different processes and are not connected. You are trying to refer to layers that exist only within the Python process and were never added to the MXD, so when you run listDatasets[0] it returns whatever is already at the top of the MXD at the time your tool was run, not any of the layers you created earlier in the script. To fix this, add the layers to the MXD within your code if you want to refer to them later on!
Posted 07-21-2015, 08:32 AM

POST
With regards to Edit6: the variable totalArea appears to come from this part of the code:

i = 0
totalArea = 0
print("Comparing site values to constraints...")
# Compare the lumped parameters to the constraint dictionary
for row in arcpy.da.SearchCursor(combo_table, ["DARAS", "HSG", "MEDSLOPE", "MEDWT"]):
    i += 1
    print i
    # Temporarily store the area of each DA for later comparison
    print("Compare total area")
    for r in arcpy.da.SearchCursor(DA_area, [DAID, "SUM_AREA"]):
        # NOTE: DAID *must* be an integer value to function properly.
        if r[0] == row[0]:
            totalArea = r[1]

#### _______________IMPORT MODULES_________________###
print("Preparing necessary files...")
import os
import arcpy
import copy

### ______________INPUT FILES___________________###
outline = r"D:\Python\Inputs\Lakeridge.gdb\LRoutline"
DA = r"D:\Python\Inputs\Lakeridge.gdb\SubAreas"
DAID = "DA"  # field in DA shapefile where unique DA values exist
soil = r"D:\Python\Inputs\VB_SoilsPRJ.shp"
WTin = r"D:\Python\Inputs\wtdepth1"
DEMin = r"D:\Python\Inputs\2013 5ft DEM.img"
MapLoc = r"D:\Python\Inputs\LakeRidge.mxd"
WT = arcpy.Raster(WTin)
DEM = arcpy.Raster(DEMin)

### ________________SET ENVIRONMENTS__________________###
# Check out extension and overwrite outputs
arcpy.CheckOutExtension("spatial")
arcpy.env.overwriteOutput = True
# Set Map Document
mxd = arcpy.mapping.MapDocument(MapLoc)
# Create project folder and set workspace
print("Checking for and creating output folders for spatial data...")
WorkPath = MapLoc[:-4]
if not os.path.exists(WorkPath):
    os.makedirs(WorkPath)
arcpy.env.workspace = WorkPath
# Create scratch workspace
ScratchPath = str(WorkPath) + r"\scratch"
if not os.path.exists(ScratchPath):
    os.makedirs(ScratchPath)
arcpy.env.scratchWorkspace = ScratchPath
# Create GDB
path, filename = os.path.split(MapLoc)
GDB = filename[:-4] + ".gdb"
GDBpath = MapLoc[:-4] + ".gdb"
if not os.path.exists(GDBpath):
    arcpy.CreateFileGDB_management(path, GDB)
# Create main output table folder if it does not exist and create project folder
print("Checking for and creating output space for Excel files...")
TabPath = r"D:\Python\Results" "\\"
ProjFolder = TabPath + filename[:-4]
if not os.path.exists(TabPath):
    os.makedirs(TabPath)
if not os.path.exists(ProjFolder):
    os.makedirs(ProjFolder)
# Define location of constraint database and establish GIS table output location
print("Checking for and creating output space for GIS tables...")
CRIT = TabPath + "constraints.xlsx"
BMPFold = ProjFolder + r"\GIS-Tables"
if not os.path.exists(BMPFold):
    os.makedirs(BMPFold)

### _________________VERIFY INPUTS________________###
# Check that all inputs have the same projection and update list with projected file path names
print("Verifying that coordinate systems are the same...")
InSHP = [outline, DA, soil]
InRAS = [WT]
# The base projected coordinate system (PCS) is the DEM's PCS
DEMSR = arcpy.Describe(DEM).spatialReference.PCSCode
for i, l in enumerate(InSHP):
    sr = arcpy.Describe(l).spatialReference.PCScode
    if sr != DEMSR and ".gdb" not in l:
        l = arcpy.Project_management(l, l[:-4] + "PRJ.shp", DEMSR)
        InSHP[i] = l
    elif sr != DEMSR and ".gdb" in l:
        l = arcpy.Project_management(l, l + "PRJ", DEMSR)
        InSHP[i] = l
sr = arcpy.Describe(WT).spatialReference.PCScode
if sr != DEMSR:
    WTPRJ = arcpy.Raster(arcpy.ProjectRaster_management(WT, "WTPRJ", DEMSR, "CUBIC"))
    WTPRJ.save(WorkPath + r"\WT_PRJ")
    WT = WTPRJ
# Assign projected file paths to variable names
outline = InSHP[0]
DA = InSHP[1]
soil = InSHP[2]

### _____________SET PROCESSING EXTENTS____________ ###
# Set cell size
description = arcpy.Describe(DEM)
cellsize = description.children[0].meanCellHeight
print("Setting cell size to DEM cell size: " + str(cellsize) + " ft...")
# Replace ft with code to get units!!!
arcpy.env.cellSize = cellsize
# Create buffer around outline to use as mask
# Buffer distance is in feet
print("Creating an environment mask from the site outline shapefile...")
maskshp = arcpy.Buffer_analysis(outline, ScratchPath + r"\outline_buff", "50 Feet", "", "", "ALL",)
# Convert buffer to raster
mask = arcpy.Raster(arcpy.PolygonToRaster_conversion(maskshp, "Id", ScratchPath + r"\rastermask"))
mask.save(ScratchPath + r"\rastermask")
# Set raster mask and snap raster
print("Setting raster mask and snap raster for project...")
arcpy.env.mask = mask
arcpy.env.snapRaster = mask
arcpy.env.extent = mask.extent

### _______________ASSIGN HSG________________###
# Many soils in the coastal plain are dual group soils, A/D, B/D, or C/D.
# First letter is the drained condition and second letter is the undrained
# condition. Soil is considered drained when the depth to water table is
# greater than two feet from the surface.
# This looks at the HSG assigned to the soil polygon and compares it
# to the depth to WT layer. If HSG is unknown or invalid,
# HSG is assigned D soil type.
# Convert soils shapefile to raster and assign integer values to HSG.
# A=1, B=2, C=3, 4=D and dual groups A/D=14, B/D=24, C/D=34
# "---" is treated as a D soil
print("Converting dual group soils to single groups...")
SoilUnclass = arcpy.PolygonToRaster_conversion(soil, "HSG", ScratchPath + r"\SoilUnclass", "MAXIMUM_COMBINED_AREA")
SoilClass = arcpy.sa.Reclassify(SoilUnclass, "HSG", arcpy.sa.RemapValue([["A", 1], ["B", 2], ["C", 3], ["D", 4], ["A/D", 14], ["B/D", 24], ["C/D", 34], ["---", 4]]), "NODATA")
SoilClass.save(ScratchPath + r"\HSGraster")
# Determine whether locations with dual groups should be considered drained
# or undrained and assign a single HSG value to those locations
EffHSG = arcpy.sa.Con(SoilClass > 4, arcpy.sa.Con(WT >= 2.0, (SoilClass - 4) / 10, 4), SoilClass)
EffHSG.save(WorkPath + r"\EffectiveHSG")

### ______________SUMMARIZE DA PROPERTIES________________ ###
# Initialize expression to calculate area of polygons
exparea = "float(!shape.area@ACRES!)"
# Summarize total area for each DA
print("Summarizing DA characteristics...")
DAFld = [f.name for f in arcpy.ListFields(DA)]
if "Area" not in DAFld:
    arcpy.AddField_management(DA, "Area", "FLOAT", 6, 3)
arcpy.CalculateField_management(DA, "Area", exparea, "PYTHON")
stat_field = [["Area", "SUM"]]
field_combo = [DAID]
DA_area = arcpy.Statistics_analysis(DA, BMPFold + r"\DA_area", stat_field, field_combo)
# Convert DA shapefile to raster
DAras = arcpy.Raster(arcpy.PolygonToRaster_conversion(DA, DAID, ScratchPath + r"\DAras", "MAXIMUM_AREA"))
# Calculate Slope from DEM for the area of interest, convert to integer
# and find median slope in each DA
slope = arcpy.sa.Slope(DEM, "PERCENT_RISE")
slope.save(WorkPath + r"\slope")
roundslope = (slope + 0.005) * 100.00  # preserve the last 2 decimal places and round for truncation
slopeINT = arcpy.sa.Int(roundslope)  # convert to integer by truncation
med_slope100 = arcpy.sa.ZonalStatistics(DAras, "VALUE", slopeINT, "MEDIAN", "DATA")  # find median (integer operation)
med_slope100.save(ScratchPath + r"\intslope")
med_slope = med_slope100 / 100.00  # convert back to true median value
med_slope.save(WorkPath + r"\medslope")
# Find the median depth to water table in each DA rounded to 2 decimal places
roundWT = (WT + 0.005) * 100.00  # preserve the last 2 decimal places and round for truncation
WTINT = arcpy.sa.Int(roundWT)  # convert to integer by truncation
med_WT100 = arcpy.sa.ZonalStatistics(DAras, "VALUE", WTINT, "MEDIAN", "DATA")  # find median (integer operation)
med_WT100.save(ScratchPath + r"\intWT")
med_WT = med_WT100 / 100.00  # convert back to true median value
med_WT.save(WorkPath + r"\medWT")
# Combine rasters to give unique combinations
combo = arcpy.sa.Combine([DAras, EffHSG, med_WT100, med_slope100])
combo.save(WorkPath + r"\combo")
combo_table = arcpy.BuildRasterAttributeTable_management(combo)
# Convert integers to usable format
arcpy.AddField_management(combo_table, "HSG", "TEXT", "", "", 6)
arcpy.AddField_management(combo_table, "MEDSLOPE", "FLOAT", 5, 2)
arcpy.AddField_management(combo_table, "MEDWT", "FLOAT", 5, 2)
with arcpy.da.UpdateCursor(combo_table, ["EFFECTIVEHSG", "HSG", "INTSLOPE", "MEDSLOPE", "INTWT", "MEDWT"]) as cursor:
    for row in cursor:
        if row[0] == 1:
            row[1] = "A"
        if row[0] == 2:
            row[1] = "B"
        if row[0] == 3:
            row[1] = "C"
        if row[0] == 4:
            row[1] = "D"
        row[3] = float(row[2]) / 100.00
        row[5] = float(row[4]) / 100.00
        cursor.updateRow(row)

### _____________COMPARE CRITERIA_____________________ ###
print("Loading constraint database...")
# Convert Excel constraint file to GIS table
compare = arcpy.ExcelToTable_conversion(CRIT, BMPFold + r"\BMP-constraints")
Fields = [f.name for f in arcpy.ListFields(compare)]
# Create dictionary from criteria table
# Code is the key, other values are stored as a list
D = {r[1]: (r[2:]) for r in arcpy.da.SearchCursor(compare, Fields)}
# Codes:
# SDAB  Simple Disconnection A&B
# SDCD  Simple Disconnection C&D
# SDSA  Simple Disconnection C&D with Soil Amendments
# CAAB  Sheet Flow Conservation Area A&B
# CACD  Sheet Flow Conservation Area C&D
# VFA   Sheet Flow Veg Filter A
# VFSA  Sheet Flow Veg Filter B,C&D with Soil Amendments
# GCAB  Grass Channel A&B
# GCCD  Grass Channel C&D
# GCSA  Grass Channel C&D with Soil Amendments
# MI1   Micro Infiltration- Level 1
# SI1   Small Infiltration- Level 1
# CI1   Conventional Infiltration- Level 1
# MI2   Micro Infiltration- Level 2
# SI2   Small Infiltration- Level 2
# CI2   Conventional Infiltration- Level 2
# BRE1  Bioretention Basin- Level 1
# BRE2  Bioretention Basin- Level 2
# DS1   Dry Swale- Level 1
# DS2   Dry Swale- Level 2
# WS1   Wet Swale- Level 1
# WS2   Wet Swale- Level 2
# F1    Filter- Level 1
# F2    Filter- Level 2
# CW1   Constructed Wetland- Level 1
# CW2   Constructed Wetland- Level 2
# WP1   Wet Pond- Level 1
# WP2   Wet Pond- Level 2
# WPGW1 Wet Pond with GW- Level 1
# WPGW2 Wet Pond with GW- Level 2
# EDP1  ED Pond- Level 1
# EDP2  ED Pond- Level 2
# Reference:
# 0 - BMP
# 1 - RR
# 2 - PR
# 3 - TPR
# 4 - NR
# 5 - TNR
# 6 - SOIL
# 7 - MAX_SLOPE
# 8 - MIN_CDA
# 9 - MAX_CDA
# 10 - WT_SEP
# 11 - WT_RELAX (boolean)
# 12 - COAST_SEP
# 13 - MIN_DEPTH
# 14 - DEPTH_RELAX (boolean)
# 15 - COAST_MIN_DEPTH
# 16 - PWOP_PREF
# 17 - YEAR_COST
# Create output table for BMPs lumped by DA and HSG with criteria table as template
Lump = arcpy.CreateTable_management(BMPFold + "\\", "BMP-Allowable")
drop = ["OBJECTID", "FIELD1"]
arcpy.AddField_management(Lump, "CODE", "TEXT", "", "", 8)
arcpy.AddField_management(Lump, "DA", "TEXT", "", "", 15)
arcpy.AddField_management(Lump, "HSG", "TEXT", "", "", 6)
arcpy.AddField_management(Lump, "BMP", "TEXT", "", "", 50)
arcpy.AddField_management(Lump, "MOD", "TEXT", "", "", 25)
arcpy.AddField_management(Lump, "RR", "SHORT")
arcpy.AddField_management(Lump, "PR", "SHORT")
arcpy.AddField_management(Lump, "TPR", "SHORT")
arcpy.AddField_management(Lump, "NR", "SHORT")
arcpy.AddField_management(Lump, "TNR", "SHORT")
arcpy.AddField_management(Lump, "PWOP_PREF", "TEXT", "", "", 25)
arcpy.AddField_management(Lump, "YEAR_COST", "TEXT", "", "", 30)
arcpy.DeleteField_management(Lump, drop)
Fields = [f.name for f in arcpy.ListFields(Lump)]
# Create table to build "Rejected BMP" table
Fail = arcpy.Copy_management(Lump, BMPFold + "\\" + r"\BMP-Rejected")
arcpy.AddField_management(Fail, "RSN_FAILED", "TEXT", "", "", 50)
drop = ["BMP", "MOD", "RR", "PR", "TPR", "NR", "TNR", "PWOP_PREF", "YEAR_COST"]
arcpy.DeleteField_management(Fail, drop)
FFields = [f.name for f in arcpy.ListFields(Fail)]
i = 0
print("Comparing site values to constraints...")
# Compare the lumped parameters to the constraint dictionary
for row in arcpy.da.SearchCursor(combo_table, ["DARAS", "HSG", "MEDSLOPE", "MEDWT"]):
    i += 1
    print i
    # Temporarily store the area of each DA for later comparison
    print("Compare total area")
    for r in arcpy.da.SearchCursor(DA_area, [DAID, "SUM_AREA"]):
        # NOTE: DAID *must* be an integer value to function properly.
        if r[0] == row[0]:
            totalArea = r[1]
    # Duplicate criteria dictionary that can be amended throughout the loop
    print("Copy BMP")
    BMP = copy.deepcopy(D)
    # Initialize empty dictionaries to store BMPs that fail each test
    print("Initialize empty dictionaries")
    NoBMP = {}
    Mod = {}
    # Compare lumped values in each DA/HSG pair to those in the constraint table
    print("Begin dictionary loop")
    for k, v in D.items():
        # Test if soil type is incorrect for each BMP and store reason for failure
        print("Soil test")
        if row[1] not in v[6]:
            NoBMP[k] = "Soil type mismatch"
        # Compare median slope to maximum slope
        if row[2] > v[7]:
            if k not in NoBMP.keys():
                NoBMP[k] = "Slope too steep"
            else:
                NoBMP[k] += ", Slope too steep"
        # Compare WT depths
        print("WT depth test")
        if v[10] == 0:
            Mod[k] = "---"
        elif v[13] + v[10] <= row[3]:
            Mod[k] = "---"
        elif v[13] + v[10] > row[3]:
            # Check if coastal modification allows use of practice
            if v[11] == 1:
                coast_WT = v[12]
            else:
                coast_WT = v[10]
            if v[14] == 1:
                coast_depth = v[15]
            else:
                coast_depth = v[13]
            # Notate if coastal modification allows for practice use
            if coast_WT + coast_depth <= row[3]:
                if v[11] == 1 and v[14] == 1:
                    Mod[k] = "Separation and Depth"
                elif v[11] == 1:
                    Mod[k] = "WT Separation"
                elif v[14] == 1:
                    Mod[k] = "Practice Depth"
                else:
                    Mod[k] = "---"
            # Remove the practice if coastal modifications do not help
            if coast_WT + coast_depth > row[3]:
                if k not in NoBMP.keys():
                    NoBMP[k] = "WT proximity"
                else:
                    NoBMP[k] += ", WT proximity"
        # Compare allowable contributing drainage areas (in acres)
        # Maximum CDA neglected because this is lumped analysis
        print("Compare areas")
        if v[8] >= totalArea:
            if k not in NoBMP.keys():
                NoBMP[k] = "CDA too small"
            else:
                NoBMP[k] += ", CDA too small"
    # Compare keys in BMP and NoBMP dictionaries. Remove matching pairs from the BMP dictionary.
    print("Removing bad BMPs from the BMP dictionary")
    for key in BMP.keys():
        if key in NoBMP.keys():
            del BMP[key]
    # Write remaining BMPs to table
    print("Writing BMPs to output table")
    with arcpy.da.InsertCursor(Lump, Fields) as cursor:
        for k, v in BMP.items():
            cursor.insertRow((0, k, row[0], row[1], v[0], Mod[k], v[1], v[2], v[3], v[4], v[5], v[16], v[17]))

# Sort values in table, effectively ranking them
print("Ranking BMPs...")
LumpSort = arcpy.Sort_management(Lump, BMPFold + "\\LumpSort", [["DA", "ASCENDING"], ["HSG", "ASCENDING"], ["TPR", "DESCENDING"]])
arcpy.DeleteField_management(LumpSort, ["ROWID"])
# Convert tables to readable format outside of GIS (.xls)
print("Converting good BMPs to Excel format...")
arcpy.TableToExcel_conversion(LumpSort, ProjFolder + r"\Lumped-Result.xls")

The "local variable referenced before assignment" warning is flagged because the code may never reach line 11 of my quote, due to the if statements. That may never actually happen with your data, but PyCharm can't see your data, so it flags it as a possible error: when you do the check in "if v[8] >= totalArea:", depending on your data, totalArea may not exist yet! You can stop this being flagged by assigning a default value to the variable, as I have (sort of) shown above.
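Here is the warning boiled down to a few lines, away from arcpy (the function and data are invented): the variable is only assigned inside a conditional, so if no row matches, the later reference blows up, and a default assignment removes both the crash risk and the PyCharm flag.

```python
def total_area(da_id, areas, default=None):
    # Mirrors the inner SearchCursor loop: totalArea is only bound
    # when a matching row is found.
    if default is not None:
        totalArea = default   # assigning a default removes the warning
    for r in areas:
        if r[0] == da_id:
            totalArea = r[1]
    return totalArea          # UnboundLocalError if never assigned

areas = [(1, 5.2), (2, 7.9)]
print(total_area(2, areas, default=0.0))   # a match is found
print(total_area(99, areas, default=0.0))  # no match, the default survives
# total_area(99, areas)  # no default, no match: UnboundLocalError
```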
Posted 07-17-2015, 07:44 AM

POST
You are amending a FieldMap object without committing the result back to the FieldMappings object! You need to add code like this somewhere:

fieldmappings.replaceFieldMap(index, fld)

You'll have to work out the index bit though!
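The gotcha is that the container hands you a copy, so edits vanish unless you put the map back. A pure-Python analogy (this toy class only imitates the behaviour, it is not the arcpy API):

```python
class ToyFieldMappings(object):
    def __init__(self, maps):
        self._maps = list(maps)
    def getFieldMap(self, index):
        return dict(self._maps[index])  # hands back a COPY
    def replaceFieldMap(self, index, fieldmap):
        self._maps[index] = fieldmap    # commits the edit
    def fieldName(self, index):
        return self._maps[index]["name"]

fms = ToyFieldMappings([{"name": "OLDNAME"}])
fld = fms.getFieldMap(0)
fld["name"] = "NEWNAME"       # editing the copy...
print(fms.fieldName(0))       # ...changes nothing yet
fms.replaceFieldMap(0, fld)   # until it is put back
print(fms.fieldName(0))
```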
Posted 07-15-2015, 03:44 AM