# Summing areas within unique values via loops

04-14-2015 09:00 AM
Occasional Contributor

This is less of a python question and more of a general logic question, I suppose.

I have a shapefile that represents the intersection of soil types and drainage boundaries.  I want to sum the total area of each soil type within each drainage boundary.  There are 4 soil types and a potentially unlimited number of drainage areas.

Soil type (HSG) field: "GRIDCODE"

Drainage area field: "DA"

Area: "Area"

I have code that extracts unique values from the drainage area field.  My problem is that I am not sure how to implement a loop that iterates through each unique drainage value, finds the instances of each soil type, and sums the Area without manually coding each one. Earlier, I summed the areas of all the soil types with the following code:

```
# Initialize running totals for each HSG
SiteArea_A = 0.0
SiteArea_B = 0.0
SiteArea_C = 0.0
SiteArea_D = 0.0

# Accumulate area for each HSG
with arcpy.da.SearchCursor(SiteHSG, ["gridcode", "Area"]) as cursor:
    for row in cursor:
        if row[0] == 1:
            SiteArea_A = SiteArea_A + row[1]
        elif row[0] == 2:
            SiteArea_B = SiteArea_B + row[1]
        elif row[0] == 3:
            SiteArea_C = SiteArea_C + row[1]
        elif row[0] == 4:
            SiteArea_D = SiteArea_D + row[1]
```

I have no idea whether there was a better way to calculate those areas, and I have no problem reusing similar code over and over, but I do not know how to tell the computer: for each unique DA value, find each soil type and sum its areas, then move on to the next unique DA value and repeat over the grid codes. I am thinking a "while" loop might accomplish what I need, but I am not really sure how to implement a while loop in this context.

I am kind of thinking of something like this so far:

```
DAvalues = [row[0] for row in arcpy.da.SearchCursor(DAHSG, DAID)]
DAunique = sorted(set(DAvalues))
print(DAunique)

with arcpy.da.SearchCursor(DAHSG, ["DA", "gridcode", "Area"]) as cursor:
    for row in cursor:
        # First unique drainage area
        if row[0] == DAunique[0]:
            if row[1] == 1:
                SiteArea_A = SiteArea_A + row[2]
            if row[1] == 2:
                SiteArea_B = SiteArea_B + row[2]
            if row[1] == 3:
                SiteArea_C = SiteArea_C + row[2]
            if row[1] == 4:
                SiteArea_D = SiteArea_D + row[2]
        # Second unique drainage area
        elif row[0] == DAunique[1]:
            if row[1] == 1:
                SiteArea_A = SiteArea_A + row[2]
            if row[1] == 2:
                SiteArea_B = SiteArea_B + row[2]
            if row[1] == 3:
                SiteArea_C = SiteArea_C + row[2]
            if row[1] == 4:
                SiteArea_D = SiteArea_D + row[2]
```

Etc.  I know there's a better way to do this than writing a separate block for each unique value, especially because the number of unique values is unlimited.  It's impractical for me to code it this way, but I'm not sure where to go.
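The single-pass grouping the question is reaching for can be sketched with a plain dictionary keyed by (DA, gridcode); this is a minimal illustration in which a hypothetical `rows` list stands in for a SearchCursor over `["DA", "gridcode", "Area"]`, with invented sample values:

```python
# Mock rows standing in for an arcpy SearchCursor over ["DA", "gridcode", "Area"]
rows = [
    ("DA1", 1, 10.0),
    ("DA1", 2, 5.0),
    ("DA1", 1, 2.5),
    ("DA2", 3, 7.0),
    ("DA2", 4, 1.0),
]

# One pass: accumulate area totals keyed by (drainage area, soil gridcode)
totals = {}
for da, gridcode, area in rows:
    key = (da, gridcode)
    totals[key] = totals.get(key, 0.0) + area

for (da, gridcode), area in sorted(totals.items()):
    print(da, gridcode, area)
```

Because the dictionary creates a key the first time a (DA, gridcode) pair appears, no per-value code blocks are needed no matter how many drainage areas exist.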

Tags (6)
Accepted Solution
MVP Honored Contributor

Can't you just run Summary Statistics, where case fields = soil type and DA value, and stat type = sum area?
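In arcpy, that tool is a single `Statistics_analysis` call; a hedged sketch using the field names from the question (the input and output paths are hypothetical placeholders):

```python
in_table = "DAHSG.shp"           # intersect output (hypothetical path)
out_table = "soil_area_by_da"    # output table (hypothetical name)
statistics_fields = [["Area", "SUM"]]   # sum the Area field
case_fields = ["GRIDCODE", "DA"]        # one output row per soil type per DA

try:
    import arcpy
    # Summary Statistics: sums Area for each GRIDCODE/DA combination
    arcpy.Statistics_analysis(in_table, out_table, statistics_fields, case_fields)
except ImportError:
    # arcpy is only available inside an ArcGIS installation
    pass
```

The output table then has one row per soil type within each drainage area, with a SUM_Area field holding the total.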

4 Replies

Occasional Contributor

Oh my god. I have been racking my brain trying to figure out how to loop this thing, and there's a built-in function that does it for me?  I was toying with converting to raster and using combine/zonal stats if all else failed.  It should have occurred to me to check for a similar shapefile function.

Thank you! I'll see if this gives the output I desire.

Occasional Contributor

Whoo it worked! And I can use SearchCursor to pull out the values so I can use them in my program.  Awesome.  Thanks a ton!

MVP Frequent Contributor

Alternatively, you can harness da.FeatureClassToNumPyArray() (and vice versa) to perform the grouping outside of the Esri stack.  This works well if, say, your Intersect output has millions of rows and efficiency is important.  There are some other benefits that might not apply to this scenario, but I thought it worth pointing out that the numpy and pandas libraries offer some excellent alternatives.
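A self-contained sketch of that pandas approach, with an invented sample DataFrame standing in for the array returned by da.FeatureClassToNumPyArray() (field names from the question; the numbers are made up for illustration):

```python
import pandas as pd

# Hypothetical sample standing in for FeatureClassToNumPyArray output
df = pd.DataFrame({
    "DA":       ["DA1", "DA1", "DA1", "DA2", "DA2"],
    "GRIDCODE": [1, 2, 1, 3, 4],
    "Area":     [10.0, 5.0, 2.5, 7.0, 1.0],
})

# Sum of Area per (DA, GRIDCODE) pair -- the same result Summary Statistics gives
sums = df.groupby(["DA", "GRIDCODE"])["Area"].sum()
print(sums)
```

The groupby runs in vectorized C code, which is where the efficiency gain over a row-by-row cursor shows up on large tables.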

This would produce your sum of the area by gridcode value:

```
import arcpy
import numpy
import pandas as pd
from pandas import *