|
POST
|
I'm currently busy with my Master's thesis, "Analysing the Propagation of Uncertainty from DEMs Used for Hydrological Modelling". I need to say upfront that I have very little knowledge of geostatistics. My Master's closely follows the approach of Christian Venzin's Master's thesis, "Analyzing the Impact of High Resolution DEM Uncertainty on Hydrological Models Using a Parallel Computing Approach", in which he used a Process Convolution method. Venzin concludes that, due to the limitations of Process Convolution, the next logical step is to use a Conditional Sequential Gaussian Simulation, based on the paper by Tomislav Hengl, "On the uncertainty of stream networks derived from elevation data: the error propagation approach". I recently found out that Conditional Sequential Gaussian Simulation is not available within Geostatistical Analyst. I'm looking for advice from community members who understand geostatistics, and in particular Gaussian simulation. I need to develop an approach similar to Venzin's and Hengl's, using the tools available within Geostatistical Analyst, to derive multiple realizations of a DEM (i.e. SRTM 30 m and SRTM 90 m DSM) with control points as true values. I then want to use the simulation results to derive watersheds and stream networks, and from those derive probability distributions to quantify and visualise the uncertainty in the derived results, as done by Venzin and Hengl. Any advice on developing a geostatistical simulation approach with the Geostatistical Analyst extension that achieves similar results would truly be appreciated. Please explain the reasoning behind the methods you propose, so that I can better understand geostatistical uncertainty modelling. Regards, Peter Wilson
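For anyone facing the same gap: the core of sequential Gaussian simulation is small enough to prototype outside Geostatistical Analyst. Below is a minimal, hypothetical 1-D sketch (not the Venzin/Hengl workflow; it uses simple kriging with an assumed exponential covariance rather than a fitted variogram, and no search neighbourhood) showing how conditional SGS visits nodes in random order, kriges each node from the control points plus previously simulated nodes, and draws from the resulting conditional normal:

```python
import numpy as np

def sgs_1d(x_cond, z_cond, x_sim, sill=1.0, vrange=50.0, seed=0):
    """Conditional sequential Gaussian simulation on a 1-D transect.

    x_cond, z_cond : control-point locations and (zero-mean-ish) values
    x_sim          : locations to simulate
    Assumes an exponential covariance with the given sill and range.
    """
    rng = np.random.default_rng(seed)

    def cov(h):
        # exponential covariance, practical range = vrange
        return sill * np.exp(-3.0 * np.abs(h) / vrange)

    known_x = [float(x) for x in x_cond]
    known_z = [float(z) for z in z_cond]
    z_out = np.empty(len(x_sim))
    # random path over the simulation nodes
    for i in rng.permutation(len(x_sim)):
        xs = np.asarray(known_x)
        zs = np.asarray(known_z)
        # simple-kriging system from all data simulated or conditioned so far
        C = cov(xs[:, None] - xs[None, :]) + 1e-9 * np.eye(len(xs))
        c0 = cov(xs - x_sim[i])
        w = np.linalg.solve(C, c0)
        mean = w @ zs
        var = max(sill - w @ c0, 0.0)
        # draw from the conditional distribution, then treat it as data
        z_out[i] = rng.normal(mean, np.sqrt(var))
        known_x.append(float(x_sim[i]))
        known_z.append(float(z_out[i]))
    return z_out
```

Running it repeatedly with different seeds gives multiple equally probable realizations that honour the control points exactly; for a DEM the same logic applies per grid cell, normally with a limited search neighbourhood for speed, and each realization is then fed through the watershed/stream-network derivation to build the probability maps.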
09-02-2015
11:13 AM
|
0
|
1
|
3787
|
|
POST
|
I've searched online for an open-source tool, script or source code: I need a standalone Python script that can convert KML files to AutoCAD (DWG/DXF) without relying on ArcGIS. The reason is that our colleagues need to be able to do these conversions themselves, without having to contact us each time they need a KML file converted. Any help in achieving this will be appreciated.
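If GDAL/OGR is an option, `ogr2ogr -f DXF output.dxf input.kml` already does a KML-to-DXF conversion with no ArcGIS involved. As a pure-stdlib illustration of the idea (a hypothetical, minimal sketch: LineStrings only, no styling or layers, and a bare-bones entities-only DXF that strict AutoCAD versions may reject), KML can be parsed with `xml.etree` and written out as LWPOLYLINE entities:

```python
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def kml_linestrings(kml_text):
    """Extract (x, y) vertex lists from every LineString in a KML document."""
    root = ET.fromstring(kml_text)
    lines = []
    for coords in root.iter(KML_NS + "coordinates"):
        pts = []
        for token in coords.text.split():
            x, y = token.split(",")[:2]  # KML order is lon,lat[,alt]
            pts.append((float(x), float(y)))
        lines.append(pts)
    return lines

def write_dxf(lines, path):
    """Write LWPOLYLINE entities to a minimal ASCII DXF (entities section only)."""
    rows = ["0", "SECTION", "2", "ENTITIES"]
    for pts in lines:
        rows += ["0", "LWPOLYLINE", "8", "0", "90", str(len(pts)), "70", "0"]
        for x, y in pts:
            rows += ["10", "{0:.6f}".format(x), "20", "{0:.6f}".format(y)]
    rows += ["0", "ENDSEC", "0", "EOF"]
    with open(path, "w") as f:
        f.write("\n".join(rows) + "\n")
```

DWG is a closed binary format, so a pure-Python script realistically targets DXF and leaves DXF-to-DWG to AutoCAD itself or a converter.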
08-31-2015
03:41 PM
|
0
|
3
|
12540
|
|
POST
|
Hi Xander, I figured out a workaround using arcpy.da.SearchCursor: '''
Created on Aug 3, 2015
Calculate the time to traverse
a terrain based on Tobler's Hiking
Function based on slope
@author: PeterW
'''
# import system modules and site packages
import os
import time
import arcpy
from arcpy.sa import *

# process time function
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return "{}h:{:>02}m:{:>05.2f}s".format(h, m, s)

# overall processing start time
script_start = time.time()
# set environment settings
arcpy.env.overwriteOutput = True
# check out extensions
arcpy.CheckOutExtension("Spatial")
# set input and output workspace
inws = r"F:\Projects\2015\G111741\Model01\Rasters"
fgdb = r"F:\Projects\2015\G111741\Model01\Model01.gdb"
# calculate the slope in degrees
inras = os.path.join(inws, "raw2")
out_measurement = "DEGREE"
slope_deg = Slope(inras, out_measurement)
# spatial reference for output results
coordsys = arcpy.Describe(inras).spatialReference
# set site locations and input Tobler's Hiking Function vertical factor
sites = os.path.join(fgdb, "Schools_All")
vertical_factor = arcpy.sa.VfTable(os.path.join(inws, "ToblerAway.txt"))
# create output feature class to store walk time polylines for each site buffer
walk_poly = os.path.join(fgdb, "walk_poly")
# delete walk_poly if it exists, then create it from scratch
if arcpy.Exists(walk_poly):
    arcpy.Delete_management(walk_poly)
arcpy.CreateFeatureclass_management(fgdb, "walk_poly", "POLYLINE", "", "", "", coordsys)
arcpy.AddField_management(walk_poly, "Name", "TEXT", "", "", 25)
arcpy.AddField_management(walk_poly, "Contour", "SHORT")
# walk distance\time (1 km = 12 min; 2 km = 24 min; 2.5 km = 30 min; 5 km = 60 min)
walk_int = [12, 24, 30, 60]
# create 5 km buffers for each site to be used as processing extent
sites_buffer = os.path.join("in_memory", "sites_buffer")
arcpy.Buffer_analysis(sites, sites_buffer, "5 Kilometers")
with arcpy.da.SearchCursor(sites_buffer, ["OBJECTID", "NAME", "SHAPE@"]) as scur1:
    for row1 in scur1:
        with arcpy.da.SearchCursor(sites, ["OBJECTID", "NAME", "SHAPE@"]) as scur2:
            count = 0
            for row2 in scur2:
                count += 1
                if row1[1] == row2[1]:
                    # per-site processing start time
                    start_time = time.time()
                    # set the processing extent to the site's buffer
                    process_extent = row1[2].extent
                    arcpy.env.extent = process_extent
                    site_name = row2[1].replace(" ", "_").replace("-", "_").replace("(", "").replace(")", "")
                    sql_exp1 = "Name = '{}'".format(row2[1])
                    site_lyr = site_name
                    arcpy.MakeFeatureLayer_management(sites, site_lyr, sql_exp1)
                    cost = PathDistance(site_lyr, slope_deg, "", "", "", inras, vertical_factor)
                    print("Processing {} Cost Raster".format(site_name))
                    costmin = Times(cost, 60)
                    print("Processing {} Cost per Minute Raster".format(site_name))
                    walk_name = "walk_cont_{}".format(count)
                    walk_cont = os.path.join("in_memory", walk_name)
                    print("Processing & Sorting {} Walk Contours".format(site_name))
                    ContourList(costmin, walk_cont, walk_int)
                    walk_sorted = os.path.join("in_memory", walk_name + "_sorted")
                    arcpy.Sort_management(walk_cont, walk_sorted, [["Contour", "ASCENDING"]])
                    arcpy.AddField_management(walk_sorted, "Name", "TEXT", "", "", 50)
                    sql_exp2 = "'{}'".format(row2[1])
                    arcpy.CalculateField_management(walk_sorted, "Name", sql_exp2, "PYTHON_9.3")
                    with arcpy.da.SearchCursor(walk_sorted, ["SHAPE@", "Name", "Contour"]) as scur:
                        with arcpy.da.InsertCursor(walk_poly, ["SHAPE@", "Name", "Contour"]) as icur:
                            for srow in scur:
                                icur.insertRow(srow)
                    # per-site processing end time
                    end_time = time.time()
                    print("It took {} to execute {}".format(hms_string(end_time - start_time), site_name))
print("Completed Processing All Sites")
# check in extensions
arcpy.CheckInExtension("Spatial")
# overall processing end time
end_time = time.time()
print("It took {} to execute this".format(hms_string(end_time - script_start)))
08-03-2015
05:14 PM
|
2
|
2
|
3065
|
|
POST
|
Hi Dan, thanks for the reply; the coordinate systems are indeed the same. Regards
08-03-2015
12:53 PM
|
0
|
0
|
3065
|
|
POST
|
Hi Xander, the full Python code is attached below: '''
Created on Jul 16, 2015
Calculate the time to traverse a terrain based on
Tobler's Hiking Function based on slope
@author: PeterW
'''
# import system modules and site packages
import os
import time
import arcpy
from arcpy.sa import *

# process time function
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return "{}h:{:>02}m:{:>05.2f}s".format(h, m, s)

# processing start time
start_time = time.time()
# set environment settings
arcpy.env.overwriteOutput = True
# check out extensions
arcpy.CheckOutExtension("Spatial")
# set input and output workspace
inws = r"F:\Projects\2015\G111741\Model01\Rasters"
fgdb = r"F:\Projects\2015\G111741\Model01\Model01.gdb"
arcpy.env.workspace = inws
# calculate the slope in degrees
inras = "raw2"
out_measurement = "DEGREE"
slope_deg = Slope(inras, out_measurement)
# spatial reference for output results
coordsys = arcpy.Describe(inras).spatialReference
# set site locations and input Tobler's Hiking Function vertical factor
sites = os.path.join(fgdb, "Schools_All2")
vertical_factor = arcpy.sa.VfTable(os.path.join(inws, "ToblerAway.txt"))
# create output feature class to store walk time polylines for each site
walk_poly = os.path.join(fgdb, "walk_poly")
# delete walk_poly if it exists, then create it from scratch
if arcpy.Exists(walk_poly):
    arcpy.Delete_management(walk_poly)
arcpy.CreateFeatureclass_management(fgdb, "walk_poly", "POLYLINE", "", "", "", coordsys)
arcpy.AddField_management(walk_poly, "Name", "TEXT", "", "", 25)
arcpy.AddField_management(walk_poly, "Contour", "SHORT")
# walking distance\time (1 km = 12 min; 2 km = 24 min; 2.5 km = 30 min; 5 km = 60 min)
walk_int = [12, 24, 30, 60]
with arcpy.da.SearchCursor(sites, ["NAME"]) as cursor:
    count = 0
    for row in cursor:
        count += 1
        # remove invalid characters from the site name
        site_name = row[0].replace(" ", "_").replace("-", "_").replace("(", "").replace(")", "")
        sql_exp1 = "Name = '{}'".format(row[0])
        site_lyr = site_name
        # create a feature layer for each site
        arcpy.MakeFeatureLayer_management(sites, site_lyr, sql_exp1)
        site_buffer = os.path.join("in_memory", site_name)
        # create a 5 kilometre buffer for each site
        arcpy.Buffer_analysis(site_lyr, site_buffer, "5 Kilometers")
        # set the processing extent to the site's 5 kilometre buffer
        process_extent = arcpy.Describe(site_buffer).extent
        arcpy.env.extent = process_extent
        cost = PathDistance(site_lyr, slope_deg, "", "", "", inras, vertical_factor)
        print("Processing {} Cost Raster".format(site_name))
        costmin = Times(cost, 60)
        print("Processing {} Cost per Minute Raster".format(site_name))
        walk_name = "walk_cont_{}".format(count)
        walk_cont = os.path.join("in_memory", walk_name)  # save to in_memory
        print("Processing & Sorting {} Walk Contours".format(site_name))
        ContourList(costmin, walk_cont, walk_int)
        walk_sorted = os.path.join("in_memory", walk_name + "_sorted")
        arcpy.Sort_management(walk_cont, walk_sorted, [["Contour", "ASCENDING"]])
        arcpy.AddField_management(walk_sorted, "Name", "TEXT", "", "", 50)
        sql_exp2 = "'{}'".format(row[0])
        arcpy.CalculateField_management(walk_sorted, "Name", sql_exp2, "PYTHON_9.3")
        with arcpy.da.SearchCursor(walk_sorted, ["SHAPE@", "Name", "Contour"]) as scur:
            with arcpy.da.InsertCursor(walk_poly, ["SHAPE@", "Name", "Contour"]) as icur:
                for srow in scur:
                    icur.insertRow(srow)
print("Completed Processing All Sites")
# check in extensions
arcpy.CheckInExtension("Spatial")
end_time = time.time()
print("It took {} to execute this".format(hms_string(end_time - start_time)))
08-03-2015
12:47 PM
|
0
|
0
|
3065
|
|
POST
|
I should mention that I'm using ArcGIS 10.2.2 with Python 2.7.6. Regards, Peter Wilson
08-03-2015
10:22 AM
|
0
|
7
|
3065
|
|
POST
|
I need some assistance to figure out whether the following is a bug and, if so, what workaround I can use. I've written a Python script in which I need to change the processing extent for each feature layer being processed. I obtain the extent for each feature layer using: process_extent = arcpy.Describe(site_buffer).extent. If I print process_extent, the results are what I would expect. I then try to set the processing extent using process_extent as input: arcpy.env.extent = process_extent. If I print arcpy.env.extent, the results are not what I would expect.
08-03-2015
10:17 AM
|
1
|
8
|
6874
|
|
POST
|
I've created a Python script that determines the time to traverse a terrain based on Tobler's Hiking Function. I initially created a sample set and clipped the DEM accordingly; the script processed the six school sites in just over a minute. When I scaled up the process to the full DEM, the same six school sites took just over an hour. I understand why this is happening: the script has to create a new cost raster covering the entire study area for each school site. What I'm looking for advice on is how I can limit the processing extent for each school site based on the maximum walking time provided by the user. According to Tobler's Hiking Function, the average walking speed over flat terrain is 5.037 km/hr; the idea behind the function is that, as one traverses a terrain, the distance one can cover decreases as the slope increases. Hence, if the user's maximum walking time is 60 min, the maximum distance one could traverse over flat terrain would be 5.037 km. I'd like to set the extent to 5.037 km with the site at the centroid of the processing extent, updated for each site so that the processing extent is correctly set every time. I've attached the Python script below: '''
Created on Jul 16, 2015
Calculate the time to traverse a terrain based on
Tobler's Hiking Function based on slope
@author: PeterW
'''
# import system modules and site packages
import os
import time
import arcpy
from arcpy.sa import *

# process time function
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return "{}h:{:>02}m:{:>05.2f}s".format(h, m, s)

# processing start time
start_time = time.time()
# set environment settings
arcpy.env.overwriteOutput = True
# check out extensions
arcpy.CheckOutExtension("Spatial")
# set input and output workspace
inws = r"F:\Projects\2015\G111741\Model01\Rasters"
fgdb = r"F:\Projects\2015\G111741\Model01\Model01.gdb"
arcpy.env.workspace = inws
# calculate the slope in degrees
inras = "raw2"
out_measurement = "DEGREE"
slope_deg = Slope(inras, out_measurement)
# spatial reference for output results
coordsys = arcpy.Describe(inras).spatialReference
# set site locations and input Tobler's Hiking Function vertical factor
sites = os.path.join(fgdb, "Schools")
vertical_factor = arcpy.sa.VfTable(os.path.join(inws, "ToblerAway.txt"))
# create output feature class to store walk time polylines for each site
walk_poly = os.path.join(fgdb, "walk_poly")
# delete walk_poly if it exists, then create it from scratch
if arcpy.Exists(walk_poly):
    arcpy.Delete_management(walk_poly)
arcpy.CreateFeatureclass_management(fgdb, "walk_poly", "POLYLINE", "", "", "", coordsys)
arcpy.AddField_management(walk_poly, "Name", "TEXT", "", "", 25)
arcpy.AddField_management(walk_poly, "Contour", "SHORT")
# walking distance\time (1 km = 12 min; 2 km = 24 min; 2.5 km = 30 min; 5 km = 60 min)
walk_int = [12, 24, 30, 60]
with arcpy.da.SearchCursor(sites, ["Name"]) as cursor:
    count = 0
    for row in cursor:
        count += 1
        site_name = row[0].replace(" ", "_")
        sql_exp1 = "Name = '{}'".format(row[0])
        site_lyr = site_name
        arcpy.MakeFeatureLayer_management(sites, site_lyr, sql_exp1)
        # NOTE: this sets the extent to the full slope raster for every site
        arcpy.env.extent = slope_deg
        cost = PathDistance(site_lyr, slope_deg, "", "", "", inras, vertical_factor)
        print("Processing {} Cost Raster".format(site_name))
        costmin = Times(cost, 60)
        print("Processing {} Cost per Minute Raster".format(site_name))
        walk_name = "walk_cont_{}".format(count)
        walk_cont = os.path.join("in_memory", walk_name)  # save to in_memory
        print("Processing & Sorting {} Walk Contours".format(site_name))
        ContourList(costmin, walk_cont, walk_int)
        walk_sorted = os.path.join("in_memory", walk_name + "_sorted")
        arcpy.Sort_management(walk_cont, walk_sorted, [["Contour", "ASCENDING"]])
        arcpy.AddField_management(walk_sorted, "Name", "TEXT", "", "", 50)
        sql_exp2 = "'{}'".format(row[0])
        arcpy.CalculateField_management(walk_sorted, "Name", sql_exp2, "PYTHON_9.3")
        with arcpy.da.SearchCursor(walk_sorted, ["SHAPE@", "Name", "Contour"]) as scur:
            with arcpy.da.InsertCursor(walk_poly, ["SHAPE@", "Name", "Contour"]) as icur:
                for srow in scur:
                    icur.insertRow(srow)
print("Completed Processing All Sites")
# check in extensions
arcpy.CheckInExtension("Spatial")
end_time = time.time()
print("It took {} to execute this".format(hms_string(end_time - start_time)))
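For reference, Tobler's hiking function can be sketched in plain Python; on flat terrain it gives the 5.037 km/hr figure quoted above, which is what makes a fixed buffer radius per maximum walking time a safe processing extent (function names here are illustrative, not ArcPy):

```python
import math

def tobler_speed(slope_deg):
    """Walking speed in km/h from Tobler's hiking function.

    W = 6 * exp(-3.5 * |tan(slope) + 0.05|), slope given in degrees.
    """
    s = math.tan(math.radians(slope_deg))
    return 6.0 * math.exp(-3.5 * abs(s + 0.05))

def max_flat_distance_km(minutes):
    """Upper bound on reachable distance: flat-terrain speed times walking time.

    Any non-zero slope only slows the walker down, so no reachable cell
    can lie outside this radius around the site.
    """
    return tobler_speed(0.0) * minutes / 60.0
```

So for a 60-minute limit, a buffer of roughly 5.04 km around each site (the `Buffer_analysis` approach used above) cannot clip away any reachable cell, and the cost raster shrinks accordingly.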
07-29-2015
01:49 PM
|
0
|
2
|
4880
|
|
POST
|
I'd like to find out if anyone in the community has used Python hashlib to compare shapefiles or feature classes. Would you need to compare both the geometry and the records to figure out whether two shapefiles or feature classes are the same? Any advice on how I can use Python hashes to compare shapefiles or feature classes will be appreciated. Regards
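Yes — to call two datasets "the same" you would typically hash both the geometry and the attribute values. A minimal sketch follows (the row structure here is hypothetical; with arcpy you would feed it `SHAPE@WKB` plus the field values from a `da.SearchCursor`). Sorting the per-row hashes makes the digest insensitive to row order, which usually differs between copies:

```python
import hashlib

def dataset_digest(rows):
    """Order-independent SHA-256 digest of (geometry_bytes, attrs) rows.

    rows: iterable of (wkb_bytes, attribute_tuple) pairs.
    Two datasets with the same rows in any order get the same digest.
    """
    row_hashes = []
    for geom_wkb, attrs in rows:
        h = hashlib.sha256()
        h.update(geom_wkb)                       # geometry bytes
        h.update(repr(attrs).encode("utf-8"))    # attribute values
        row_hashes.append(h.hexdigest())
    outer = hashlib.sha256()
    for rh in sorted(row_hashes):  # sort so row order does not matter
        outer.update(rh.encode("ascii"))
    return outer.hexdigest()
```

Note that floating-point coordinates must be byte-identical for the hashes to match; if the datasets went through different tools, a tolerance-based geometry comparison may be more appropriate than hashing.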
07-21-2015
10:49 PM
|
0
|
10
|
9273
|
|
POST
|
Hi Xander, thanks for getting back to me. Would it be possible to generate a new feature based on the projection of the point onto the longest flow path? I'm struggling to figure out how I would either split the existing line and keep the segment below the point's position, or generate a new polyline up to the point's position. Any suggestions would be appreciated.
07-12-2015
05:45 AM
|
0
|
3
|
3182
|
|
POST
|
I've re-written my ArcHydro models (Model Builder) into Python, and the only part of my model that I've not been able to convert is the Centroidal Longest Flowpath within the Basin Characteristics toolbox. The explanation in the HEC-GeoHMS user manual (HEC-GeoHMS Manual) reads: "This operation computes the centroidal longest flowpath by projecting the centroid onto the longest flowpath. The centroidal longest flowpath is measured from the projected point onto the longest flowpath to the subbasin outlet as shown in Figure 9-13." I've searched the internet and have not been able to find an alternative to this tool from the HEC-GeoHMS Basin Characteristics toolbox. I'm hoping that someone within the ESRI community has found, or written, a script that achieves the same functionality as the HEC-GeoHMS Centroidal Longest Flowpath. Any advice on how to write a Python function to replicate it would really be appreciated. Regards, Peter Wilson
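The geometric core of the tool — projecting the centroid onto the longest flowpath and measuring from the projected point to the outlet — can be sketched in plain Python (assuming the polyline's vertex list runs from the upstream end down to the subbasin outlet; in newer arcpy releases, `Polyline.queryPointAndDistance` and `Polyline.segmentAlongLine` cover the same steps on real geometries):

```python
import math

def project_onto_polyline(pt, vertices):
    """Project pt onto a polyline and measure to the line's end.

    pt       : (x, y) point, e.g. the subbasin centroid
    vertices : [(x, y), ...] from upstream end to the outlet
    Returns (projected_point, distance_from_projection_to_outlet).
    """
    best = None
    cum = 0.0  # length walked so far along the line
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        if seg_len == 0.0:
            continue
        # parameter of the perpendicular foot, clamped to the segment
        t = ((pt[0] - x1) * dx + (pt[1] - y1) * dy) / (seg_len ** 2)
        t = min(max(t, 0.0), 1.0)
        px, py = x1 + t * dx, y1 + t * dy
        d = math.hypot(pt[0] - px, pt[1] - py)
        if best is None or d < best[0]:
            best = (d, (px, py), cum + t * seg_len)
        cum += seg_len
    d, proj, along = best
    return proj, cum - along  # remaining length to the outlet
```

Splitting the line at the projected point (the other half of the question above) is then just trimming the vertex list at `proj` and keeping the downstream portion.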
07-11-2015
02:34 PM
|
1
|
5
|
7066
|
|
POST
|
Hi Luke, thanks for your advice, it's truly appreciated. Is there any way of better handling my Arc Hydro variables? The inputs/outputs are either rasters or feature classes/tables, and they are read from and written to two workspaces: the rasters to a single folder, and the feature classes/tables to a file geodatabase. Regards, Peter Wilson
07-07-2015
01:24 AM
|
0
|
1
|
1580
|
|
POST
|
I've been writing Python scripts for a while now, and when a colleague of mine viewed my code, he cringed that it wasn't structured into Python functions. I'd like some advice from the ESRI community on how I could structure my Python scripts into functions/modules to make them easier to reuse and call from new scripts. I've attached one of my Python scripts that uses ArcPy and Arc Hydro functions. '''
Created on May 20, 2015
@author: PeterW
'''
# import system modules and site packages
import os
import arcpy
import ArcHydroTools

# check out Spatial Analyst extension
arcpy.CheckOutExtension("Spatial")
# set environment settings
arcpy.env.overwriteOutput = True
# set input and output arguments
raw = r"F:\Projects\2015\G111443\ArcHydro\Methodology_Models\Section03\Sect3A\DEM04\raw"
rasWs = r"F:\Projects\2015\G111443\ArcHydro\Methodology_Models\Section03\Sect3A\Layers04"
outWs = r"F:\Projects\2015\G111443\ArcHydro\Methodology_Models\Section03\Sect3A\Model04.gdb"
# ArcHydro variables
fill_sinks = os.path.join(rasWs, "fil")
flow_dir = os.path.join(rasWs, "fdr")
flow_acc = os.path.join(rasWs, "fac")
streams = os.path.join(rasWs, "str")
stream_seg = os.path.join(rasWs, "strlnk")
catchment_grid = os.path.join(rasWs, "cat")
catchment_poly = os.path.join(outWs, "Layers", "Catchment")
drainage_line = os.path.join(outWs, "Layers", "DrainageLine")
adj_catch = os.path.join(outWs, "Layers", "AdjointCatchment")
try:
    # calculate the fill sinks
    arcpy.AddMessage("Processing Fill Sinks")
    ArcHydroTools.FillSinks(raw, fill_sinks)
    # calculate the flow direction
    arcpy.AddMessage("Processing Flow Direction")
    ArcHydroTools.FlowDirection(fill_sinks, flow_dir)
    # calculate the flow accumulation
    arcpy.AddMessage("Processing Flow Accumulation")
    ArcHydroTools.FlowAccumulation(flow_dir, flow_acc)
    # calculate the maximum flow accumulation
    arcpy.AddMessage("Processing Flow Accumulation Maximum")
    maxcellsResult = arcpy.GetRasterProperties_management(flow_acc, "MAXIMUM")
    maxcells = maxcellsResult.getOutput(0)
    print(maxcells)
    # calculate the stream threshold number of cells
    arcpy.AddMessage("Processing Stream Threshold")
    stream_threshold_numcells = (int(maxcells) * 0.25 / 100)
    print(stream_threshold_numcells)
    # calculate the stream definition
    arcpy.AddMessage("Processing Stream Definition")
    ArcHydroTools.StreamDefinition(flow_acc, stream_threshold_numcells, streams)
    # calculate the stream segmentation
    arcpy.AddMessage("Processing Stream Segmentation")
    ArcHydroTools.StreamSegmentation(streams, flow_dir, stream_seg)
    # calculate the catchment grid delineation
    arcpy.AddMessage("Processing Catchment Grid Delineation")
    ArcHydroTools.CatchmentGridDelineation(flow_dir, stream_seg, catchment_grid)
    # calculate the catchment polygons from the catchment grid
    arcpy.AddMessage("Processing Catchment Polygons")
    ArcHydroTools.CatchmentPolyProcessing(catchment_grid, catchment_poly)
    # calculate the drainage lines from the stream segmentation grid
    arcpy.AddMessage("Processing Drainage Lines")
    ArcHydroTools.DrainageLineProcessing(stream_seg, flow_dir, drainage_line)
    # calculate the adjoint catchment polygons
    arcpy.AddMessage("Processing Adjoint Catchments")
    ArcHydroTools.AdjointCatchment(drainage_line, catchment_poly, adj_catch)
    arcpy.AddMessage("Completed Processing ArcHydro Main Model")
except:
    print(arcpy.GetMessages(2))
arcpy.CheckInExtension("Spatial") Any advice and assistance will be appreciated. Regards, Peter Wilson
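On structuring: a common pattern is to pull each logical step into a small function with explicit parameters, keep the path wiring in one place, and guard execution with `if __name__ == "__main__"`. A hypothetical skeleton of that reorganisation follows (function names and the 0.25% threshold are illustrative, not part of Arc Hydro; the Arc Hydro calls themselves are left as comments since they need the extension):

```python
import os

def build_paths(raster_ws, gdb):
    """Map ArcHydro layer names to full output paths: rasters in a folder,
    vector outputs under the file geodatabase's Layers dataset."""
    rasters = {n: os.path.join(raster_ws, n)
               for n in ("fil", "fdr", "fac", "str", "strlnk", "cat")}
    vectors = {n: os.path.join(gdb, "Layers", n)
               for n in ("Catchment", "DrainageLine", "AdjointCatchment")}
    return rasters, vectors

def stream_threshold(max_cells, percent=0.25):
    """Stream-definition threshold as a percentage of the maximum
    flow-accumulation cell count."""
    return int(max_cells) * percent / 100.0

def main(raw, raster_ws, gdb):
    """Run the ArcHydro chain; each step reads from the path dicts so the
    two workspaces are wired up exactly once."""
    rasters, vectors = build_paths(raster_ws, gdb)
    # 1. ArcHydroTools.FillSinks(raw, rasters["fil"])
    # 2. ArcHydroTools.FlowDirection(rasters["fil"], rasters["fdr"])
    # ... remaining steps, each a one-line call using rasters/vectors ...
    return rasters, vectors
```

With this shape, a new project only has to call `main()` with its three paths, and individual steps can be re-run or unit-tested in isolation.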
07-06-2015
03:21 PM
|
0
|
6
|
4534
|
|
POST
|
I've created a Python script that takes a feature class (polyline) of contours and a feature class (polyline) representing the position of a proposed dam wall. I use the Feature To Polygon tool to create a polygon from each contour, each representing a proposed full supply level (FSL) of the dam site, and then the Polygon Volume tool to determine the volume and surface area for each FSL. The problem I'm having: if I store the Feature To Polygon output in_memory, the shape_length and shape_area fields are missing; if I save the output for each polygon to disk, the fields are added, but when I then try to append the feature classes into a new feature class using an insert cursor, the geometry is not transferred into the empty feature class. From my testing, the problem seems to lie in storing the Feature To Polygon output in_memory, as the shape_length and shape_area fields are dropped; I confirmed this within ArcMap by running the tool with in_memory output. When I saved the results to disk instead, the shape_length and shape_area fields were preserved. Any advice on how to resolve this would be appreciated. '''
Created on May 28, 2015
@author: PeterW
The following script calculates the volume for each Full Supply Level
to calculate the capacity curve for a proposed dam site
'''
# import system modules and site packages
import os
import arcpy

# check out 3D Analyst
arcpy.CheckOutExtension("3D")
# set environment settings
arcpy.env.overwriteOutput = True
# set input and output arguments
cont_line = r"F:\Projects\2015\PRCPTWAT02\13176\A34631\wspace\ChrisFox.gdb\Contours"
dw_line = r"F:\Projects\2015\PRCPTWAT02\13176\A34631\wspace\ChrisFox.gdb\Dam_Wall"
fsl_gon = r"F:\Projects\2015\PRCPTWAT02\13176\A34631\wspace\ChrisFox.gdb\Full_Supply_Level"
surf = r"F:\Projects\2015\PRCPTWAT02\13176\A34631\wspace\TIN\tin"
# make feature layer from the dam wall (polyline)
dwLyr = arcpy.MakeFeatureLayer_management(dw_line, "dwLayer")
# convert each FSL contour polyline into a FSL polygon and determine the volume
# below the contour elevation using the polygon volume tool
with arcpy.da.SearchCursor(cont_line, ["Elevation"]) as cursor:
    for row in cursor:
        elev = str(row[0])
        outName = "FSL" + elev + "m"
        sqlExp = "Elevation = {}".format(elev)
        contLyr = arcpy.MakeFeatureLayer_management(cont_line, outName, sqlExp)
        fslLyr = os.path.join("in_memory", outName)
        arcpy.FeatureToPolygon_management([contLyr, dwLyr], fslLyr, attributes=True)
        arcpy.AddField_management(fslLyr, "Elevation", "SHORT")
        arcpy.CalculateField_management(fslLyr, "Elevation", elev, "PYTHON_9.3")
        fields = [field.name for field in arcpy.ListFields(fslLyr)]
        print(fields)
        with arcpy.da.SearchCursor(fslLyr, fields) as sCur:
            with arcpy.da.InsertCursor(fsl_gon, fields) as iCur:
                for sRow in sCur:
                    iCur.insertRow(sRow)
# calculate the volume and surface area for each Full Supply Level polygon and the TIN
arcpy.PolygonVolume_3d(surf, fsl_gon, "Elevation", "BELOW", "SVolume", "SArea", "0")
# report that processing is completed
arcpy.AddMessage("Completed Processing Capacity Curves") Regards, Peter Wilson
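As a rough cross-check on the `PolygonVolume_3d` results (not a replacement for the TIN-based calculation), the capacity curve can also be approximated from the level-area pairs alone with the trapezoidal rule; this hypothetical sketch assumes areas have already been measured per FSL:

```python
def capacity_curve(levels):
    """Cumulative storage volume at each full supply level.

    levels: list of (elevation_m, surface_area_m2) pairs in ascending
    elevation order. Each slab's volume is approximated as the mean of
    the bounding areas times the elevation difference (trapezoidal rule).
    """
    vols = [0.0]
    for (e0, a0), (e1, a1) in zip(levels, levels[1:]):
        vols.append(vols[-1] + 0.5 * (a0 + a1) * (e1 - e0))
    return vols
```

If a volume from this curve and the TIN-based result diverge badly at some level, that level's polygon (or the in_memory geometry transfer above) is the place to look.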
05-28-2015
05:15 PM
|
0
|
1
|
6129
|
|
POST
|
Hi Paul, I've not been able to improve the Arc Hydro Flow Accumulation processing myself, but I must give credit to the Arc Hydro team: I'm using one of the latest versions of Arc Hydro, and the processing time has drastically improved. I've processed a 500-million-cell DEM study area within 2 hrs. I'm currently busy with my Master's thesis, developing a multiple flow direction algorithm that will divide the study area into smaller tiles and process them simultaneously, improving the overall processing time and the accuracy of the derived flow accumulation grid and river network. Regards
04-07-2015
11:52 PM
|
0
|
1
|
2157
|
| Title | Kudos | Posted |
|---|---|---|
|  | 3 | 01-16-2012 02:34 AM |
|  | 1 | 05-07-2016 03:04 AM |
|  | 1 | 04-10-2016 01:09 AM |
|  | 1 | 03-13-2017 12:27 PM |
|  | 1 | 02-17-2016 02:34 PM |
| Online Status | Offline |
| Date Last Visited | 03-04-2021 12:50 PM |