I am new to Python and would like to count the number of features in each feature class that meet certain criteria.
My geodatabase contains many feature classes. One of the attribute fields has values of the form "number/number" --> 50/2, 99/6.
I would like to loop through each feature class, select features by attribute, count the selected features that contain the "/" sign, and print the result. Below is my Python code:
import arcpy
#Set the input workspace
arcpy.env.workspace = "d:/TMP/miha_k/zirovnica/novo/test.gdb"
fcs = arcpy.ListFeatureClasses()
for fc in fcs:
    # SelectLayerByAttribute needs a layer, not a feature class path
    arcpy.MakeFeatureLayer_management(fc, "tmp_lyr")
    arcpy.SelectLayerByAttribute_management("tmp_lyr", "NEW_SELECTION", "PARCELA LIKE '%/%'")
    count = arcpy.GetCount_management("tmp_lyr").getOutput(0)
    print fc, count
    arcpy.Delete_management("tmp_lyr")
I would appreciate your help.
1. Follow Dan Patterson's Py... blog.
2. Check out Some Python Snippets
3. Free online courses like Tutorials Point
What can you say about the book Programming ArcGIS with Python Cookbook (second edition) by Eric Pimpler?
The Programming ArcGIS with Python Cookbook book looks like a good resource and worth the investment. I have found online resources to be just as valuable, if not more so, than a few good reference books. Knowing what and how to ask Google (etc.) is an invaluable skill.
I was just looking for an answer to something, and came up with this online version (and found my answer).
Python on GitHub and python.org: once the introductions are done, these are the source links to which all others refer. A slog, perhaps? Not really, if you like clarity and brevity.
Try the following function (first substituting your search field's index number for '10').
def countfeaturesfun(inFC):
    with arcpy.da.SearchCursor(inFC, "*") as cursor:
        Count = 0
        for row in cursor:
            if "/" in str(row[10]):
                Count += 1
    return Count
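The counting logic itself has nothing arcpy-specific in it; here is a pure-Python sketch of the same idea, where the made-up sample tuples stand in for what a search cursor would yield:

```python
# Pure-Python sketch of the counting logic; the sample rows below
# are hypothetical stand-ins for arcpy.da.SearchCursor output.
def count_matches(rows, field_index, substring="/"):
    count = 0
    for row in rows:
        if substring in str(row[field_index]):
            count += 1
    return count

# Hypothetical rows: (OID, PARCELA)
sample = [(1, "50/2"), (2, "99/6"), (3, "100"), (4, "7/1")]
print(count_matches(sample, 1))  # 3
```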
Technically, this works. I'm not always the one to do things the "right" way either, but there are a few reasons not to do it this way:
- you need to manually count the field index
- this method reads every feature in the feature class. A where clause, by contrast, only reads the matching features, through the magic of SQL, which I don't understand; but from the help: "When a query is specified for an update or search cursor, only the records satisfying that query are returned." In my tests it takes about 3x longer to read through everything, though that likely depends on the data and how many matches there are. Also, to be fair, the results below are representative of several runs, except the first run, which adds about 3 s in actually constructing the table view. So, the main point is that this method is just fine for small data sets, but likely worse for large ones.
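The "only matching records are returned" behavior can be illustrated outside arcpy with sqlite3; the table and values here are made up, but the point carries over: the database applies the LIKE filter, so Python never iterates the non-matching rows.

```python
import sqlite3

# Toy table standing in for a feature class attribute table (made-up data).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parcels (id INTEGER, parcela TEXT)")
con.executemany("INSERT INTO parcels VALUES (?, ?)",
                [(1, "50/2"), (2, "99/6"), (3, "100"), (4, "12")])

# The database evaluates the LIKE clause; only the count comes back.
(count,) = con.execute(
    "SELECT COUNT(*) FROM parcels WHERE parcela LIKE '%/%'").fetchone()
print(count)  # 2
```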
import timeit
fc = 'bc_geoname_albers' # ~45,000 features
# SQL method
start_time = timeit.default_timer()
arcpy.MakeTableView_management(fc, "myTableView", "CGNDBKEY LIKE '%C%'")
Count = int(arcpy.GetCount_management("myTableView").getOutput(0))
print (Count, timeit.default_timer() - start_time)
# Count method
start_time = timeit.default_timer()
with arcpy.da.SearchCursor(fc, "*") as cursor:
    Count = 0
    for row in cursor:
        if "C" in str(row[4]):
            Count += 1
print (Count,timeit.default_timer() - start_time)
(13531, 0.64809157931) # SQL method: 0.65s
(13531, 1.87391371297) # Count method: 1.87s
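The same timeit.default_timer() harness works on any iterable, for readers without a geodatabase at hand; the synthetic list below is a made-up stand-in for the ~45,000 attribute values:

```python
import timeit

# Synthetic stand-in for the attribute column (real data lived in a geodatabase).
values = ["C" if i % 3 == 0 else str(i) for i in range(45000)]

def count_loop():
    # Row-by-row scan, like the cursor method above.
    count = 0
    for v in values:
        if "C" in v:
            count += 1
    return count

def count_filtered():
    # Declarative one-pass filter, closer in spirit to letting SQL do the work.
    return sum(1 for v in values if "C" in v)

for fn in (count_loop, count_filtered):
    start = timeit.default_timer()
    result = fn()
    print(fn.__name__, result, timeit.default_timer() - start)
```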
Agreed; considering processing time, your version would be quicker and more reliable. However, as you noted, for a smaller dataset the difference would be negligible. Cheers to SQL!