Hi,
I am trying to run a tabulate intersect on an impervious surface layer which contains 88,000 features. My zone feature class contains 112 features. Tabulate intersect has been running for about 17 hours and the geoprocessing window shows that it is reading feature 126,559,100. Why is it reading almost a hundred million more features than the inputs have?
I have noticed that Tabulate Intersection takes a long time to run on the impervious surface layers I derive from orthoimagery. Maybe it's the number of vertices, or the number of features, or both? Would it be better if I merged all of the features into 1?
I am using ArcGIS Pro 2.8.1. Any help would be much appreciated.
Thanks
According to the docs:
Determining the intersection of zone and class features follows the same rules as the Intersect tool.
An intersection is always going to create more features than the inputs, if the layers are not identical.
Also, the number of rows in the output will depend greatly on the input data and what field you have selected for the Class Fields. You may only have 112 zones, but how many classes are present in the impervious surfaces layer? You'll get an output row for each value in the class fields that intersects with a zone. So I might have two zone features, but if there are 1,000 unique values in my class field for the intersecting layer, I may end up with 2,000 rows in my output.
What's curious here is that even intersecting every feature with every zone would only yield 9,856,000 rows (112 × 88,000), which is still a far cry from the count you're getting. I would try running a simple intersection to investigate what is being used in the tabulation. Perhaps on a subset, though, so that you don't have to wait 17 hours for it!
Could you provide some screenshots of the input layers, too? I wonder if perhaps the intersection is counting a row for every piece that intersects, as opposed to every input feature. You could intersect two features and end up with nearly infinite singlepart geometries if the input geometries were complex enough.
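If it helps, the subset test I have in mind is roughly this (untested sketch; the workspace, layer names, and the OBJECTID filter are just placeholders):

```python
import arcpy

arcpy.env.workspace = r"C:\data\stormwater.gdb"  # placeholder path

# Pull a handful of catchments into a layer so the test stays small.
catch_subset = arcpy.management.MakeFeatureLayer(
    "catchments", "catch_subset", "OBJECTID <= 5"
)

# Plain intersection of the subset with the impervious layer.
arcpy.analysis.Intersect([catch_subset, "impervious"], "subset_intersect")

# How many pieces did the intersection actually produce?
print(arcpy.management.GetCount("subset_intersect"))
```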
Hi Josh,
Thanks for the response. I have posted a screenshot of my data below. I am trying to calculate the percentage of impervious land cover in each stormwater pond catchment. I have 1 zone field which is the ID of the catchment, and no class fields. I will try to run it on a small subset and let you know the outcome.
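For reference, the setup is essentially the arcpy equivalent of this call (the dataset and field names below are placeholders):

```python
import arcpy

arcpy.env.workspace = r"C:\data\stormwater.gdb"  # placeholder path

arcpy.analysis.TabulateIntersection(
    in_zone_features="pond_catchments",   # 112 catchment polygons
    zone_fields="Catchment_ID",           # the only zone field
    in_class_features="impervious",       # ~88,000 impervious polygons
    out_table="impervious_by_catchment",  # no class fields set
)
```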
- Alex
Well, this is certainly curious. Are those impervious features all just one big merged feature?
Well, there are 88,000 features, but the vast majority of the area comes from 1 feature that includes the roads and everything connected to them. Do you think it would run faster if I merged it into 1 big feature? I don't think it would be possible to break this feature up, as I merged and dissolved it with a few other layers to fill in gaps, and the roads were probably all connected in the raw classification results anyway.
I am currently running a tabulate intersection on just 1 catchment and it has read 202,700 features so far over 30 minutes. Still running.
- Alex
I let Tabulate Intersection run for one catchment for 2 hours and it was still reading features, so instead I created a model that iterates through each catchment, clips the impervious layer to the catchment, tabulates the intersection, and then appends the results to a table. The whole thing ran in 18 minutes, so my problems must have come from having that one huge feature. It's still kind of weird, though, that I have created a similar impervious layer for a bigger city and it runs in about 3 hours.
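For anyone finding this later, the model works out to roughly this script (untested sketch; the workspace, layer, and field names are placeholders):

```python
import arcpy

arcpy.env.workspace = r"C:\data\stormwater.gdb"  # placeholder path
arcpy.env.overwriteOutput = True

zones = "pond_catchments"
impervious = "impervious"
out_table = "impervious_by_catchment"

first = True
with arcpy.da.SearchCursor(zones, ["OID@"]) as cursor:
    for (oid,) in cursor:
        # Isolate one catchment and clip the impervious layer to it.
        one_zone = arcpy.management.MakeFeatureLayer(
            zones, f"zone_{oid}", f"OBJECTID = {oid}"
        )
        clipped = arcpy.analysis.Clip(impervious, one_zone, f"memory\\clip_{oid}")

        # Tabulate intersection for just this catchment.
        piece = arcpy.analysis.TabulateIntersection(
            one_zone, "Catchment_ID", clipped, f"memory\\tab_{oid}"
        )

        # Build the output table from the first result, append the rest.
        if first:
            arcpy.management.CopyRows(piece, out_table)
            first = False
        else:
            arcpy.management.Append(piece, out_table, "NO_TEST")

        # Clean up the intermediate data before the next iteration.
        for item in (clipped, piece, one_zone):
            arcpy.management.Delete(item)
```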
In the future I might try just splitting the impervious layer with a watershed or neighbourhoods layer to break things up and avoid having to create a bunch of new files.
Thank you for your help.
Sorry, I'd missed your response. Yes, splitting up features can improve things a lot, depending on the situation. I had a "right-of-ways" layer at one point that was just all roadways merged into a single, massively complex polygon feature. When the layer was opened for editing (in which the entire feature needs to load, not just the visible extent), it nearly crashed our Portal. Running GP tools against it was also quite slow.
We split the feature into pieces using a grid, and those problems disappeared. I think your approach of breaking them up by other features like neighbourhoods is the way to go.
Good thinking on making an iterative model, too!
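In script form, the grid split is roughly this (untested; the names and cell size are just placeholders):

```python
import arcpy

arcpy.env.workspace = r"C:\data\stormwater.gdb"  # placeholder path

impervious = "impervious"

# Build a square grid covering the layer's extent (cell size is arbitrary here).
extent = arcpy.Describe(impervious).extent
arcpy.management.GenerateTessellation(
    "split_grid", extent, "SQUARE", "1 SquareKilometers"
)

# Cutting the huge merged polygon along the grid yields many smaller,
# simpler features that geoprocessing tools handle much faster.
arcpy.analysis.Intersect([impervious, "split_grid"], "impervious_split")
```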
If you are working with shapefiles, I would suggest saving the output as a File Geodatabase Feature Class.
I am working out of a geodatabase already and still having this issue. Thanks for the response though. I think the solution is just to break things up so that I don't end up with huge features that span the entire city.
I just had this issue as well. Tabulate Intersection ran for 31 hours before I cancelled the process and split up the features I had previously merged (I had assumed that a reduced number of features would speed up the processing). That is certainly not the case; rather, processing time seems to increase with the number of vertices per feature. After splitting up the features, Tabulate Intersection easily finished within ten minutes.
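If anyone wants to find the offending features first, vertex counts are easy to check with a search cursor (the layer name here is assumed):

```python
import arcpy

# Sort features by vertex count -- those with huge counts are the ones
# worth splitting up before running Tabulate Intersection.
with arcpy.da.SearchCursor("impervious", ["OID@", "SHAPE@"]) as cursor:
    counts = sorted(((shp.pointCount, oid) for oid, shp in cursor), reverse=True)

for point_count, oid in counts[:10]:
    print(oid, point_count)
```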