POST
Hello all, This question has gotten wildly off topic, so I edited the heading to reflect more clearly what I'm trying to ask. I'm hoping someone can point me in the right direction.

I've written a script (still in development) that creates terrains from LiDAR. They are HUGE terrains, so the process of going from LAS to multipoint can take up to 30 hours. I've hit a hiccup that I'm trying to pinpoint; it occurs sometime after the multipoint creation but before the building of the terrain. The code I've written works fine process by process, so I'm not sure what the issue is; it may have been a network thing. I'm trying to troubleshoot it. The problem is the time it takes to run from the beginning until it breaks. I've added "if not arcpy.Exists" statements for the elements that I can, but I'm stuck on how to find the properties of the terrain so that I can skip those steps if they've already been done. For instance, if the terrain already has pyramids defined, how do I check for that? Or which layers have already been added to the terrain?

I've spent the last 1.5 hours doing a Google search and I've got nothing except this note from the arcpy.mapping documentation: "There are a few specialized layers and datasets that don't fall into one of these three categories: annotation subclasses, dimension features, network datasets, terrain datasets, topology datasets, and so on." But it offers no way to handle these specialized layers. Suggestions?

Thanks for any help you may provide, John

Message was edited by: John Lay
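As an aside, the skip-if-already-done pattern described above can be factored into a small helper so every step goes through one checkpoint. This is only a sketch with made-up names; in the real script the `exists` callable would be `arcpy.Exists` and each step would be an arcpy geoprocessing call:

```python
def run_step(output, exists, step):
    """Run `step` only when `output` does not already exist.

    `exists` is any callable returning True/False for a dataset path;
    with ArcGIS installed you would pass arcpy.Exists here.
    """
    if exists(output):
        print("skipping, already built: %s" % output)
        return False
    step()
    return True

# Tiny self-contained demo: a set stands in for a geodatabase's contents
built = {"lidar.gdb/multipoint"}
ran = run_step("lidar.gdb/multipoint", built.__contains__,
               lambda: built.add("lidar.gdb/multipoint"))   # skipped
ran2 = run_step("lidar.gdb/terrain", built.__contains__,
                lambda: built.add("lidar.gdb/terrain"))     # runs
```

A restart after a 30-hour crash then re-runs only the steps whose outputs are missing, which sidesteps the "which terrain properties are already set" question for any step that produces a checkable dataset.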
05-26-2015 04:54 AM | 1 | 15 | 8842

POST
I'm actually trying to simplify this:

arcpy.CreateFeatureDataset_management(Test_gdb, "DoesItWork", "PROJCS['NAD_1983_2011_StatePlane_North_Carolina_FIPS_3200_Ft_US',GEOGCS['GCS_NAD_1983_2011',DATUM['NAD_1983_2011',SPHEROID['GRS_1980',6378137.0,298.257222101]],PRIMEM['Greenwich',0.0],UNIT['Degree',0.0174532925199433]],PROJECTION['Lambert_Conformal_Conic'],PARAMETER['False_Easting',2000000.0],PARAMETER['False_Northing',0.0],PARAMETER['Central_Meridian',-79.0],PARAMETER['Standard_Parallel_1',34.3333333333333],PARAMETER['Standard_Parallel_2',36.1666666666667],PARAMETER['Scale_Factor',1.0],PARAMETER['Latitude_Of_Origin',33.75],UNIT['Foot_US',0.3048006096012192]],VERTCS['NAVD_1988 Geoid12a',VDATUM['North_American_Vertical_Datum_1988'],PARAMETER['Vertical_Shift',0.0],PARAMETER['Direction',1.0],UNIT['Foot_US',0.3048006096012192]];-121841900 -93659000 3048.00609601219;-100000 10000;-100000 10000;3.28083333333333E-03;0.001;0.001;IsHighPrecision")

into something that is a little more manageable and less prone to error. If I replace

arcpy.CreateFeatureDataset_management(Test_gdb, "DoesItWork", "PROJCS[ ... ],VERTCS[ ... ]; ...

with

StatePlane = arcpy.SpatialReference(103122)
arcpy.CreateFeatureDataset_management(Test_gdb, "DoesItWork", "PROJCS[StatePlane], ,VERTCS[ ... ]; ...

I'm halfway there. I just need to find a way to simplify the vertical spatial reference.

I am trying to write a script that will create a feature dataset in order to create terrains. I have to do this 100 times, once for each of our 100 counties. But I want the code to be easily manipulated so that it can be absorbed into other projects.
05-19-2015 04:54 AM | 0 | 0 | 1205

POST
Is it possible to assign the vertical coordinate system using a coordinate system's factory code (or authority code)? Something along the lines of arcpy.SpatialReference(105703) for NAVD 1988 (US survey feet). As far as I can tell, SpatialReference only works for geographic and projected coordinate systems.
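For what it's worth, arcpy's SpatialReference constructor is documented to accept a second, optional factory code for the vertical coordinate system: arcpy.SpatialReference(item, vcs). A hedged sketch follows; the WKIDs are taken from this thread (103122 for NAD 1983 (2011) StatePlane NC, 105703 for NAVD 1988 US survey feet) and should be verified against your install's coordinate-system lists:

```python
def nc_state_plane_navd88():
    """Return a SpatialReference combining horizontal and vertical systems.

    Assumed factory codes (verify for your ArcGIS version):
      103122 - NAD 1983 (2011) StatePlane North Carolina FIPS 3200 (US ft)
      105703 - NAVD 1988 (US survey feet)
    """
    import arcpy  # deferred so this module imports without ArcGIS present
    return arcpy.SpatialReference(103122, 105703)
```

The returned object can then be passed directly as the spatial_reference argument of CreateFeatureDataset_management, avoiding the giant WKT string entirely.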
05-19-2015 04:14 AM | 0 | 2 | 6131

POST
OK, this almost does what I was looking for. (Joshua Bixby and James Crandall, I just haven't gotten to your examples yet. James, I had some trouble installing pandas, but am square now.) I'm a little lost with it, though. Please walk me through the bits I'm missing so that I can apply the info instead of just copying it.

magicNumberSet = set([16,17,18])
buildingDict = {}
searchRows = arcpy.da.SearchCursor('L_DAMAGE_RESULTS_WIND', ["BLDG_ID", "HAZARD_ID"])
for searchRow in searchRows:
    buildingId, hazardId = searchRow
    if buildingId in buildingDict:
        buildingDict[buildingId].add(hazardId)
    else:
        buildingDict[buildingId] = set([hazardId])
yesList = [buildingId for buildingId in buildingDict if magicNumberSet.issubset(buildingDict[buildingId])]
noList = [set(buildingDict.keys()).difference(yesList)]

Chris, I get a little lost around line 9. If buildingDict[buildingId].add(hazardId) is appending to the value set, I assume buildingDict[buildingId] = set([hazardId]) is creating the first instance. I'm not really sure what "=" is supposed to mean here; my brain automatically reads it as one being equal to the other, which can't be the case. In line 10, for each key in the dictionary, if ([16,17,18]) is a subset of the value set, the key is added to the list. This would mean that sets ([17,18]) and ([17,18,19]) would be excluded from the list, but ([16,17,18,19]) would be added. Using magicNumberSet.issuperset(buildingDict[buildingId]) would mean the reverse is true. By replacing it with magicNumberSet == set(buildingDict[buildingId]) I would essentially be saying that the sets must be equal before being added to the list. Correct?

I need to check that there are always 3 buildings, and only 3 buildings, with a hazard ID of 16, 17, and 18. If there are only 2 buildings or 4 buildings, whatever the hazard value, the table fails. Does the order the values appear in the set matter?
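To make the subset / superset / equality distinctions above concrete, here is a small stand-alone demo with the hazard sets discussed in the post (plain Python, no arcpy needed; the labels A-D are mine):

```python
magicNumberSet = set([16, 17, 18])

candidates = {
    "A": set([17, 18]),          # missing 16
    "B": set([17, 18, 19]),      # missing 16, has an extra value
    "C": set([16, 17, 18, 19]),  # contains all three plus an extra
    "D": set([16, 17, 18]),      # exactly the three required values
}

# issubset: the magic set is fully contained in the building's set
subset_ok = sorted(k for k, v in candidates.items()
                   if magicNumberSet.issubset(v))   # C and D qualify

# equality: the building's set is exactly the magic set, no more, no less
equal_ok = sorted(k for k, v in candidates.items()
                  if v == magicNumberSet)           # only D qualifies

# Sets are unordered, so element order never matters
assert set([18, 17, 16]) == magicNumberSet
```

So the equality test is the one that also rejects sets with extra hazard IDs, and no, the order the values appear in never matters for sets.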
01-09-2015 10:56 AM | 0 | 0 | 822

POST
It's not quite that the building must have all three; it is more like there must be 3 rows with the same building ID, each with one of the three hazards. The table would look like this:

BLDG_ID  HAZARD_ID  OTHER FIELDS
37013    16         other info unique to HAZ_ID 16
37013    17         other info unique to HAZ_ID 17
37013    18         other info unique to HAZ_ID 18
37014    16         other info unique to HAZ_ID 16
...

Like I mentioned to Joshua Bixby above, I will play with this some later this afternoon. Thank you both for your suggestions.
01-09-2015 03:34 AM | 0 | 0 | 822

POST
I tried building the dictionary with just the Building ID, but the result was a single Building ID key with a single Hazard ID; I probably built it wrong. But like I said originally, I'm still wrapping my head around dictionaries. E.g.:

dict = {1: (u'37013', 16), 2: (u'37013', 17), 3: (u'37013', 18), 4: (u'37014', 16), 5: (u'37014', 17), 6: (u'37014', 18), ...}

became

dict = {u'37013': 18, u'37014': 18, ...}

so even if the original code worked, it would fail because there was no Hazard ID 16 or 17. I will give this a look a little later this afternoon. I will also give Chris Snyder's suggestion a look as well.
01-09-2015 03:26 AM | 0 | 0 | 2851

POST
Actually, the dictionary would look more like this:

dict = {1: (u'37013', 16), 2: (u'37013', 17), 3: (u'37013', 18), 4: (u'37014', 16), 5: (u'37014', 17), 6: (u'37014', 18), ...}  # corresponding to Building ID: Hazard ID

I need to be able to identify that Building 37013 has a row for Hazard ID 16, a row for Hazard ID 17, and one for Hazard ID 18. If the table does not meet these criteria, it fails the check and I need to log the Building ID.
01-08-2015 11:40 AM | 0 | 5 | 2851

POST
Thanks James, I haven't had the opportunity to play with numpy yet. I will definitely give this a look. But for the sake of expediency, I'm going to go with Joshua Bixby's solution.
01-08-2015 09:15 AM | 0 | 1 | 2851

POST
Yes, but that just makes it too logical and simple.
01-08-2015 09:10 AM | 0 | 0 | 2851

POST
I am trying to build a process that will check a dictionary for an existing key-value combination, where the key is a variable and the value is always one of three different values (16, 17, or 18). I quickly ran into the problem of searching a dictionary with a dictionary, and found someone's solution of turning the search term into a frozenset. That removed the "TypeError: unhashable type: 'dict'" message, but the code is still not doing what I want it to do. I am trying to verify that a table with tens of thousands of Building IDs contains exactly 3 occurrences of each Building ID, and that each occurrence is accompanied by only a 16, 17, or 18 value in the next column. So far I've only been playing around with snippets of code to see if I could make it work.

WIND = 'L_DAMAGE_RESULTS_WIND'
readList = ["BLDG_ID", "HAZARD_ID"]
PassFail = "PASS"
WINDDict = {r[0]:(r[1:]) for r in arcpy.da.SearchCursor(WIND, readList)}
with arcpy.da.SearchCursor(WIND, "BLDG_ID") as cursor:
    for row in cursor:
        lookup = {row[0]:(18)}
        key = frozenset(lookup.items())
        if key not in WINDDict:
            PassFail = "fail"

>>> print PassFail
fail
>>> print lookup
{u'370139999': (18,)}

I thought the above would result in a "PASS" because u'370139999': (18,) does exist in WINDDict, but it didn't. I'm still wrapping my head around dictionaries, so I would appreciate any help that is offered.

UPDATE: The problem appears to be with the unicode within the dictionary.

WINDDict = {29184: (u'3701310027', 17), 1: (u'370131', 16), 2: (u'3701310', 16), 3: (u'37013100', 16), 4: (u'370131000', 16), 5: (u'3701310000', 16), 6: (u'3701310001', 16), ...}

>>> if (u'3701310027', 17) in WINDDict:
...     print "yes"
... else:
...     print "no"
...
no
>>> if 17 in WINDDict:
...     print "yes"
... else:
...     print "no"
...
yes

I have no idea how to handle this.

Message was edited by: John Lay to add more explanation.
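For comparison, the "in" operator on a dictionary tests keys only, which is why (u'3701310027', 17) is never found: the tuples are values. One way to write the check without the frozenset detour is to group hazard IDs per building and then demand the set is exactly {16, 17, 18}. The rows below simulate what a SearchCursor would yield (a sketch with invented data, not the poster's table):

```python
required = set([16, 17, 18])

# Simulated (BLDG_ID, HAZARD_ID) rows, as a cursor would return them
rows = [
    (u'37013', 16), (u'37013', 17), (u'37013', 18),              # complete
    (u'37014', 16), (u'37014', 17),                              # missing 18
    (u'37015', 16), (u'37015', 17), (u'37015', 18), (u'37015', 19),  # extra
]

# Group: building ID -> set of its hazard IDs
hazards_by_bldg = {}
for bldg_id, hazard_id in rows:
    hazards_by_bldg.setdefault(bldg_id, set()).add(hazard_id)

passed = sorted(b for b, h in hazards_by_bldg.items() if h == required)
failed = sorted(b for b, h in hazards_by_bldg.items() if h != required)
```

One caveat: a set collapses duplicates, so if two identical (building, hazard) rows should also fail the table, you would additionally count rows per building and require exactly 3.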
01-08-2015 07:31 AM | 0 | 13 | 8411

POST
da cursors with dictionaries ROCK! From 4 hours to 9 minutes! Thanks Caleb and Richard! The updated bit of code looks like this now:

FieldNameList = ["CATTLE", "DAIRY", "POULTRY", "SWINE"]
Iteration = 0
Iteration2 = 0
if TABLETYPE == "Animal Operations":
    for JoinFile in UPDATELIST_join:
        if Iteration > 2:
            Iteration = 0
            Iteration2 = Iteration2 + 1
        layer = CENSUSLayerLIST[Iteration]
        ANOPSFIELD = FieldNameList[Iteration2]
        readList = ["GEOID10_1", "NumCount"]
        valueDict = {r[0]:(r[1:]) for r in arcpy.da.SearchCursor(JoinFile, readList)}
        updateList = ["GEOID10_1", ANOPSFIELD]
        with arcpy.da.UpdateCursor(layer, updateList) as cursor:
            for row in cursor:
                GEOIDval = row[0]
                if GEOIDval in valueDict:
                    row[1] = valueDict[GEOIDval][0]
                    cursor.updateRow(row)
        Iteration = Iteration + 1
else:
    for layer in CENSUSLayerLIST:
        JoinFile = UPDATELIST_join[Iteration]
        readList = ["GEOID10_1", "NumCount"]
        valueDict = {r[0]:(r[1:]) for r in arcpy.da.SearchCursor(JoinFile, readList)}
        updateList = ["GEOID10_1", CENSUSFIELDTRANS]
        with arcpy.da.UpdateCursor(layer, updateList) as cursor:
            for row in cursor:
                GEOIDval = row[0]
                if GEOIDval in valueDict:
                    row[1] = valueDict[GEOIDval][0]
                    cursor.updateRow(row)
        Iteration = Iteration + 1

Now, just so that I understand what is happening here:

valueDict = {r[0]:(r[1:]) for r in arcpy.da.SearchCursor(JoinFile, readList)}

builds a lookup dictionary, and in

with arcpy.da.UpdateCursor(layer, updateList) as cursor:
    for row in cursor:
        GEOIDval = row[0]
        if GEOIDval in valueDict:
            row[1] = valueDict[GEOIDval][0]
            cursor.updateRow(row)

the current cursor row is checked for a match. And row[1] = valueDict[GEOIDval][0] means the value of the update cursor row's second field (row[1]) is set to the first stored value for the matched key (valueDict[GEOIDval][0]). Correct?
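That reading is essentially right, with one nuance: valueDict is a dictionary (not an array), and r[1:] stores the remaining fields as a tuple, which is why the trailing [0] is needed. A runnable miniature of the same pattern, with plain lists standing in for the two cursors (field names mirror the post; the data is invented):

```python
# Rows a SearchCursor on the join table might return: (GEOID10_1, NumCount)
join_rows = [(u'001', 5), (u'002', 12)]

# Same comprehension as in the post: key -> tuple of the remaining fields
valueDict = {r[0]: r[1:] for r in join_rows}   # {u'001': (5,), u'002': (12,)}

# Rows an UpdateCursor on the target layer might yield: [GEOID10_1, field]
target_rows = [[u'001', None], [u'002', None], [u'003', None]]

for row in target_rows:
    GEOIDval = row[0]
    if GEOIDval in valueDict:              # O(1) dictionary lookup, no join
        row[1] = valueDict[GEOIDval][0]    # first element of the stored tuple
        # in arcpy this is where cursor.updateRow(row) would go
```

Rows with no match (u'003' here) are simply left untouched, which is the in-memory equivalent of the KEEP_COMMON join behavior.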
06-18-2014 03:56 AM | 0 | 0 | 1357

POST
Here is the full code. I'm going to investigate the da cursors and dictionaries method. I've been aware for a while that CalculateField_management is very inefficient; I just couldn't figure out another way to do the same thing. Field mappings are new to me, so if there is a more elegant way to write that bit, I'd be interested to learn.

SCRIPTPATH = sys.path[0]
ROOTFOLDER = os.path.dirname(SCRIPTPATH)
CENSUS_FGDB = os.path.join(ROOTFOLDER, "CENSUS.gdb")
Census_Blocks = os.path.join(CENSUS_FGDB, "CENSUS_BLOCKS")
Census_Group = os.path.join(CENSUS_FGDB, "CENSUS_GROUP")
Grid = os.path.join(CENSUS_FGDB, "CENSUS_GRID")
CENSUSLIST = [Census_Blocks, Census_Group, Grid]
TABLETYPE = arcpy.GetParameterAsText(0)
UPDATELAYER = arcpy.GetParameterAsText(1)
UPDATEFIELD = arcpy.GetParameterAsText(2)# optional
CENSUSFIELD = arcpy.GetParameterAsText(3)# optional
usrName = os.getenv('USERNAME')
HOME = r'C:\Users\%s' % usrName
if CENSUSFIELD == "Buildings":
    CENSUSFIELDTRANS = "BLDCNT"
elif CENSUSFIELD == "Chemical Sites":
    CENSUSFIELDTRANS = "CHEMICAL"
elif CENSUSFIELD == "Correctional Facilities":
    CENSUSFIELDTRANS = "CORRECT"
elif CENSUSFIELD == "Dams":
    CENSUSFIELDTRANS = "DAMS"
else:
    pass
if UPDATEFIELD == '#' or not UPDATEFIELD:
    UPDATEFIELD = "NOTUSED"
DEFAULTGDB = os.path.join(HOME, "Documents", "ArcGIS", "Default.gdb")
UPDATELAYERTemp = (os.path.join(DEFAULTGDB, "UPDATELAYERTemp"))
desc = arcpy.Describe(UPDATELAYER)
type = desc.shapeType
arcpy.CopyFeatures_management(UPDATELAYER, UPDATELAYERTemp)
# List Fields and delete unnecessary
fields = arcpy.ListFields(UPDATELAYERTemp)
fieldNameList = []
for field in fields:
    if not field.required and not field.name == UPDATEFIELD:
        fieldNameList.append(field.name)
arcpy.DeleteField_management(UPDATELAYERTemp, fieldNameList)
arcpy.AddField_management(UPDATELAYERTemp,"Number","SHORT")
arcpy.CalculateField_management(UPDATELAYERTemp,"Number",1,"PYTHON_9.3")
# Define Animal Operations
ANOPCATTLE = (os.path.join(DEFAULTGDB, "ANOPCATTLE"))
ANOPDAIRY = (os.path.join(DEFAULTGDB, "ANOPDAIRY"))
ANOPPOULTRY = (os.path.join(DEFAULTGDB, "ANOPPOULTRY"))
ANOPSWINE = (os.path.join(DEFAULTGDB, "ANOPSWINE"))
exp1 = "\"" + UPDATEFIELD + "\" LIKE '%Dry%' OR \"" + UPDATEFIELD + "\" LIKE '%Beef%'"
exp2 = "\"" + UPDATEFIELD + "\" LIKE '%Dairy%' OR \"" + UPDATEFIELD + "\" LIKE '%Milk%'"
exp3 = "\"" + UPDATEFIELD + "\" LIKE '%Poultry%'"
exp4 = "\"" + UPDATEFIELD + "\" LIKE '%Swine%'"
# Define Census Layers
CENSUS_Blocks_Layer = (os.path.join(DEFAULTGDB, "CENSUS_Blocks_Layer"))
CENSUS_Group_Layer = (os.path.join(DEFAULTGDB, "CENSUS_Group_Layer"))
CENSUS_Grid_Layer = (os.path.join(DEFAULTGDB, "CENSUS_Grid_Layer"))
CENSUSLayerLIST = [CENSUS_Blocks_Layer, CENSUS_Group_Layer, CENSUS_Grid_Layer]
# Make Feature Layers from Census
Iteration = 0
for layer in CENSUSLIST:
    Newlayer = CENSUSLayerLIST[Iteration]
    arcpy.MakeFeatureLayer_management(layer, Newlayer)
    Iteration = Iteration + 1
#Define Spatial Join outputs for
UPDATELIST_join = []
CENSUS = ["Blocks", "Group", "Grid"]
if TABLETYPE == "Animal Operations":
    arcpy.Select_analysis(UPDATELAYERTemp, ANOPCATTLE, exp1)
    arcpy.Select_analysis(UPDATELAYERTemp, ANOPDAIRY, exp2)
    arcpy.Select_analysis(UPDATELAYERTemp, ANOPPOULTRY, exp3)
    arcpy.Select_analysis(UPDATELAYERTemp, ANOPSWINE, exp4)
    JOINLAYERList = [ANOPCATTLE, ANOPDAIRY, ANOPPOULTRY, ANOPSWINE]
else:
    JOINLAYERList = [UPDATELAYERTemp]
if TABLETYPE == "Lagoons":
    field_name = UPDATEFIELD
    CENSUSFIELDTRANS = "LAGOONS"
else:
    field_name = "Number"
for file in JOINLAYERList:
    Iteration = 0
    for layer in CENSUSLayerLIST:
        Export_Output = file + "_" + CENSUS[Iteration]
        UPDATELIST_join.append(Export_Output)
        fieldmappings = arcpy.FieldMappings()
        fieldmappings.addTable(layer)
        fieldmappings.addTable(file)
        FieldIndex = fieldmappings.findFieldMapIndex(field_name)
        fieldmap = fieldmappings.getFieldMap(FieldIndex)
        field = fieldmap.outputField
        field.name = "NumCount"
        field.aliasName = "NumCount"
        fieldmap.outputField = field
        fieldmap.mergeRule = "sum"
        fieldmappings.replaceFieldMap(FieldIndex, fieldmap)
        for field in fieldmappings.fields:
            if field.name not in ["NumCount", "GEOID10_1", "GEOID10_3"]:
                fieldmappings.removeFieldMap(fieldmappings.findFieldMapIndex(field.name))
        arcpy.SpatialJoin_analysis(layer, file, Export_Output, "#", "#", fieldmappings)
        with arcpy.da.UpdateCursor(Export_Output, "NumCount") as cursor:
            for row in cursor:
                if row[0] is None:
                    cursor.deleteRow()
        Iteration = Iteration + 1
FieldNameList = ["CATTLE", "DAIRY", "POULTRY", "SWINE"]
Iteration = 0
Iteration2 = 0
if TABLETYPE == "Animal Operations":
    for JoinFile in UPDATELIST_join:
        if Iteration > 2:
            Iteration = 0
            Iteration2 = Iteration2 + 1
        layer = CENSUSLayerLIST[Iteration]
        ANOPSFIELD = FieldNameList[Iteration2]
        arcpy.AddJoin_management(layer, "GEOID10_1", JoinFile, "GEOID10_1", "KEEP_COMMON")
        arcpy.AddMessage("Updating " + layer + "'s " + ANOPSFIELD)
        arcpy.CalculateField_management(layer, ANOPSFIELD, "!NumCount!", "PYTHON_9.3")
        arcpy.RemoveJoin_management(layer)
        Iteration = Iteration + 1
else:
    for layer in CENSUSLayerLIST:
        JoinFile = UPDATELIST_join[Iteration]
        arcpy.AddJoin_management(layer, "GEOID10_1", JoinFile, "GEOID10_1", "KEEP_COMMON")
        arcpy.AddMessage("Updating " + layer + "'s " + CENSUSFIELDTRANS)
        arcpy.CalculateField_management(layer, CENSUSFIELDTRANS, "!NumCount!", "PYTHON_9.3")
        arcpy.RemoveJoin_management(layer)
        Iteration = Iteration + 1
06-18-2014 02:26 AM | 0 | 0 | 1357

POST
I have a script that joins one table to another, then calculates one column from another. Everything works as it should, but it takes 4 hours when scripted, while it only takes a few seconds if I do it manually. The script actually does a lot more: it loops through 4 tables and joins each of them to 3 different tables. This process needs to happen fairly regularly, so it is not something I really want to be doing manually every time I need to update these columns.

FieldNameList = ["CATTLE", "DAIRY", "POULTRY", "SWINE"]
Iteration = 0
Iteration2 = 0
if TABLETYPE == "Animal Operations":
    for JoinFile in UPDATELIST_join:
        if Iteration > 2:
            Iteration = 0
            Iteration2 = Iteration2 + 1
        layer = CENSUSLayerLIST[Iteration]
        ANOPSFIELD = FieldNameList[Iteration2]
        arcpy.AddJoin_management(layer, "GEOID10_1", JoinFile, "GEOID10_1", "KEEP_COMMON")
        arcpy.CalculateField_management(layer, ANOPSFIELD, "!NumCount!", "PYTHON_9.3")
        arcpy.RemoveJoin_management(layer)
        Iteration = Iteration + 1
else:
    for layer in CENSUSLayerLIST:
        JoinFile = UPDATELIST_join[Iteration]
        arcpy.AddJoin_management(layer, "GEOID10_1", JoinFile, "GEOID10_1", "KEEP_COMMON")
        arcpy.CalculateField_management(layer, CENSUSFIELDTRANS, "!NumCount!", "PYTHON_9.3")
        arcpy.RemoveJoin_management(layer)
        Iteration = Iteration + 1

After my first attempt ran for 4 hours, I thought I might give da.SearchCursor / da.UpdateCursor a go, but after it ran for 6 hours I gave up. Is there a better way to skin this cat?
06-17-2014 11:00 AM | 0 | 7 | 2001