I am trying to write a script that scans a file geodatabase (FGDB) whose feature classes all live inside feature datasets, and outputs a CSV with two columns, state and count, representing how many FCs each US state appears in.
The tool parses the FGDB and its feature datasets (FDS), and scans the FCs in the first FDS, but then fails to search each subsequent FDS after the first one. It also writes nothing into the CSV except the headers 'State' and 'Count.'
Pardon the abundance of messages and try/except blocks; I'm not very proficient (just kidding, I started from ChatGPT).
import arcpy
import csv
# Set the workspace to the folder containing the feature classes
workspace = arcpy.GetParameterAsText(0)
arcpy.env.workspace = workspace
arcpy.AddMessage("workspace: " + str(workspace))
#feature_classes = arcpy.ListFeatureClasses()
feature_datasets = arcpy.ListDatasets("", "Feature")
arcpy.AddMessage(str(feature_datasets)) #outputs feature datasets properly
state_counts = {}
for fds in feature_datasets:
    arcpy.env.workspace = fds
    arcpy.AddMessage(str(fds)) #properly scans through each fds
    # Print the current feature dataset to verify if it's correctly set
    arcpy.AddMessage(f"Processing feature dataset: {fds}")
    try:
        feature_classes = arcpy.ListFeatureClasses()
        # Print the number of feature classes found in the current feature dataset
        arcpy.AddMessage(f"Number of feature classes: {len(feature_classes)}")
        arcpy.AddMessage(str(feature_classes))
        if feature_classes:
            try:
                for fc in feature_classes:
                    arcpy.AddMessage(str(fc))
                    desc = arcpy.Describe(fc)
                    state_field_name = "stateName" # Update with the actual field name for state attribute
                    if state_field_name in desc.fields:
                        #with arcpy.da.SearchCursor(fc, "stateName") as cursor:
                        #with arcpy.da.SearchCursor(fc, [state_field_name]) as cursor:
                        with arcpy.da.SearchCursor(fc, '*') as cursor:
                            field_names = cursor.fields
                            state_field_index = field_names.index(state_field_name)
                            for row in cursor:
                                state = row[state_field_index]
                                arcpy.AddMessage(str(state))
                                arcpy.AddMessage(state)
                                if state and state != "":
                                    if state in state_counts:
                                        state_counts[state] += 1
                                    else:
                                        state_counts[state] = 1
                                else:
                                    arcpy.AddMessage(f"Empty or None value encountered in {state_field_name} field.")
                            # Delete the cursor and row objects
                            del cursor
                            del row
            except Exception as e:
                arcpy.AddMessage(f"An error occurred: {str(e)}")
        else:
            arcpy.AddMessage("no if feature_classes")
    except Exception as e:
        arcpy.AddMessage(f"An error occurred while retrieving feature classes: {str(e)}")
        # fails here after the first feature dataset is scanned

# Define the output CSV file path
output_csv = r"C:\path\output.csv"
# Write the state counts to the CSV file
with open(output_csv, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['State', 'Count'])
    for state, count in state_counts.items():
        writer.writerow([state, count])
#arcpy.AddMessage(f"State counts saved to {output_csv}")
if len(state_counts) > 0:
    arcpy.AddMessage(f"State counts saved to {output_csv}")
else:
    arcpy.AddMessage("No valid state values found in the feature classes.")
Cleaned-up text from the View Details window after running this as a geoprocessing tool:
workspace: C:\path\data.gdb
[list of feature datasets]
FDS1
Processing feature dataset: FDS1
Number of feature classes: 8
[list of feature classes in FDS]
feature_classes1
2
3...
8
FDS2
Processing feature dataset: FDS2
An error occurred while retrieving feature classes: object of type 'NoneType' has no len()
FDS3
Processing feature dataset: FDS3
An error occurred while retrieving feature classes: object of type 'NoneType' has no len()
CONTINUES HERE OVER EACH FDS...
DOES NOT PRINT ANY MESSAGES ABOUT THE CSV
I had ChatGPT redo the code since it was not counting the right way. The new code below does seem to count properly, but it still only scans through the first feature dataset and finds no feature classes in the remaining feature datasets.
import arcpy
import csv
from collections import defaultdict
from datetime import datetime
# Set the workspace to the folder containing the feature classes
workspace = arcpy.GetParameterAsText(0)
arcpy.env.workspace = workspace
arcpy.AddMessage("workspace: " + str(workspace))
feature_datasets = arcpy.ListDatasets("", "Feature")
arcpy.AddMessage(str(feature_datasets)) # outputs feature datasets properly
state_counts = defaultdict(int) # Use defaultdict to automatically initialize count to zero
for fds in feature_datasets:
    arcpy.env.workspace = fds
    arcpy.AddMessage(str(fds)) # properly scans through each fds
    # Print the current feature dataset to verify if it's correctly set
    arcpy.AddMessage(f"Processing feature dataset: {fds}")
    try:
        feature_classes = arcpy.ListFeatureClasses()
        # Print the number of feature classes found in the current feature dataset
        arcpy.AddMessage(f"Number of feature classes: {len(feature_classes)}")
        arcpy.AddMessage(str(feature_classes))
        if feature_classes:
            try:
                for fc in feature_classes:
                    #arcpy.AddMessage(str(fc))
                    fields = arcpy.ListFields(fc)
                    state_field_name = "stateName" # Update with the actual field name for the state attribute
                    state_field = next((field for field in fields if field.name == state_field_name), None)
                    if state_field:
                        unique_states = set() # Use a set to store unique states per feature class
                        with arcpy.da.SearchCursor(fc, [state_field_name]) as cursor:
                            for row in cursor:
                                state = row[0]
                                #arcpy.AddMessage(str(state))
                                if state and state != "":
                                    unique_states.add(state)
                        # Increment the count for each state in the unique_states set
                        for state in unique_states:
                            state_counts[state] += 1
                        del cursor
                        del row
                    else:
                        pass
            except Exception as e:
                arcpy.AddMessage(f"An error occurred: {str(e)}")
        #else:
        #    arcpy.AddMessage("no if feature_classes")
    except Exception as e:
        arcpy.AddMessage(f"An error occurred while retrieving feature classes: {str(e)}")
        # fails here after the first feature dataset is scanned

# Generate a unique timestamp for the CSV file name
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
# Define the output CSV file path with the unique timestamp
output_csv = arcpy.GetParameterAsText(1) + "\\output_" + timestamp + ".csv"
# Write the state counts to the CSV file
with open(output_csv, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['State', 'Count'])
    for state, count in state_counts.items():
        writer.writerow([state, count])
if len(state_counts) > 0:
    arcpy.AddMessage(f"State counts saved to {output_csv}")
else:
    arcpy.AddMessage("No valid state values found in the feature classes.")
Same as before: it completes the first feature dataset in the loop and then repeatedly hits the outer except's error message:
An error occurred while retrieving feature classes: object of type 'NoneType' has no len()
The error you are getting says that whatever you are trying to take the length of is None. So if it's line 24 or 69, either feature_classes or state_counts is None. My guess would be that one of the datasets doesn't have a feature class, so ListFeatureClasses() returns None at line 24. If some of your feature classes are not in datasets, you also need to get a list of those feature classes and iterate over them separately from the ones in datasets, if that is what you are expecting.
Also, when you change arcpy.env.workspace, you also change where ListFeatureClasses() and ListDatasets() are looking, so be careful about setting it when you can instead pass the dataset name as an argument to ListFeatureClasses().
Try this; it's untested, but refactored from your script.
import arcpy
import csv
# Set the workspace to the folder containing the feature classes
workspace = arcpy.GetParameterAsText(0)
arcpy.env.workspace = workspace
arcpy.AddMessage("workspace: " + str(workspace))
#feature_classes = arcpy.ListFeatureClasses()
feature_datasets = arcpy.ListDatasets("", "Feature")
arcpy.AddMessage(str(feature_datasets)) #outputs feature datasets properly
state_counts = {}
def count_states(feature_classes):
    for fc in feature_classes:
        arcpy.AddMessage(str(fc))
        state_field_name = "stateName" # Update with the actual field name for state attribute
        if state_field_name in [f.name for f in arcpy.ListFields()]:
            with arcpy.da.SearchCursor(fc, [state_field_name]) as cursor:
                for row in cursor:
                    if row[0]:
                        arcpy.AddMessage(row[0])
                        if state_counts.get(row[0]):
                            state_counts[row[0]] += 1
                        else:
                            state_counts[row[0]] = 1
                    else:
                        arcpy.AddMessage(f"Empty or None value encountered in {state_field_name} field.")

# Iterate over featureclasses not in datasets
try:
    feature_classes = arcpy.ListFeatureClasses()
    # Print the number of feature classes found in the current feature dataset
    arcpy.AddMessage(f"Number of feature classes: {len(feature_classes)}")
    arcpy.AddMessage(str(feature_classes))
    if feature_classes:
        try:
            count_states(feature_classes)
        except Exception as e:
            arcpy.AddMessage(f"An error occurred: {str(e)}")
    else:
        arcpy.AddMessage("no if feature_classes")

    # Iterate over featureclasses in datasets
    for fds in feature_datasets:
        arcpy.AddMessage(str(fds)) #properly scans through each fds
        # Print the current feature dataset to verify if it's correctly set
        arcpy.AddMessage(f"Processing feature dataset: {fds}")
        try:
            feature_classes = arcpy.ListFeatureClasses(feature_dataset=fds)
            if feature_classes:
                # Print the number of feature classes found in the current feature dataset
                arcpy.AddMessage(f"Number of feature classes: {len(feature_classes)}")
                arcpy.AddMessage(str(feature_classes))
                count_states(feature_classes)
            else:
                arcpy.AddMessage(f"Dataset {fds} has no featureclasses.")
        except Exception as e:
            arcpy.AddMessage(f"An error occurred while retrieving feature classes: {str(e)}")
            # fails here after the first feature dataset is scanned
except Exception as e:
    arcpy.AddMessage(f"An error occurred while retrieving feature classes: {str(e)}")

# Define the output CSV file path
output_csv = r"C:\path\output.csv"
# Write the state counts to the CSV file
with open(output_csv, 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(['State', 'Count'])
    for state, count in state_counts.items():
        writer.writerow([state, count])
#arcpy.AddMessage(f"State counts saved to {output_csv}")
if len(state_counts) > 0:
    arcpy.AddMessage(f"State counts saved to {output_csv}")
else:
    arcpy.AddMessage("No valid state values found in the feature classes.")
I grabbed your script and changed the output CSV portion, but ran it otherwise unchanged. It now scans each FDS as desired and finds all the FCs in each FDS, but it raises the following exception for every FDS:
An error occurred while retrieving feature classes: ListFields() missing 1 required positional argument: 'dataset'
The tool ran successfully, but nothing was written to the CSV because every FDS hit that exception.
I know that each FDS has FCs, and that no FCs sit outside a FDS, so those conditions aren't a concern; the normal organizational FGDB used with this tool would not deviate from that either, FWIW.
In my interactions with ChatGPT yesterday, it recommended scanning the fields for the desired field name 'stateName' because the first version used row[0]. I know the desired column is not at the same index in every FC within the FGDB. Would that matter in what you cleaned up?
Also, thanks so much for the assistance!
Forgot the positional argument for the ListFields() method. It should be the feature class, so:
if state_field_name in [f.name for f in arcpy.ListFields(fc)]:
Cursors only return the fields that you tell them to, and since you do not need / are not using any fields other than 'stateName', I only passed that one to the cursor, so it will always be at row[0].
state_field_name = "stateName"
if state_field_name in [f.name for f in arcpy.ListFields(fc)]: # checks if stateName is in the fields and then executes the cursor if it's true
    with arcpy.da.SearchCursor(fc, [state_field_name]) as cursor:
Check out the documentation for the arcpy methods, too. It will help you spot what is wrong in ChatGPT-generated code.
The script seemed to run through its course, but it got hung up at the end and remained "updating" without finishing properly. Looking at the output, it is also not counting as desired.
I don't technically need this anymore for my immediate task, but it would be good to have it finished in case it comes up again. With that said, what I am trying to get the tool to count is how many FCs each state is represented in (this is RP asset data), so the output would list each state, e.g. New Mexico was found in 80 FCs (out of 172 total FCs in the FGDB). The counts in the output were well into the thousands and beyond, so I'm guessing the tool counted individual features.
In the second version I had posted, a set was created so the count would increment once for each FC where a unique state name was found, not once for every feature where the state appeared. I'm sure a set is one of many ways to do this, and I have no idea/opinion on what is the best way.
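For what it's worth, that set-based de-duplication can be checked without arcpy at all. Here is a minimal pure-Python sketch of the counting logic; the fake_fcs dictionary is made-up sample data standing in for the per-FC cursor results, not real FGDB content. Each feature class contributes at most 1 to a state's count, no matter how many features carry that state.

```python
import csv
import io
from collections import defaultdict

# Hypothetical stand-in for the FGDB: each key is a feature class, each
# value is the list of stateName values a cursor would return (one per feature).
fake_fcs = {
    "Roads": ["New Mexico", "New Mexico", "Texas"],
    "Bridges": ["New Mexico", None, ""],
    "Culverts": ["Texas", "Texas"],
}

state_counts = defaultdict(int)
for fc_name, rows in fake_fcs.items():
    # De-duplicate within the feature class so each FC adds at most 1 per state.
    unique_states = {s for s in rows if s}  # drops None and ""
    for state in unique_states:
        state_counts[state] += 1

# Write the same two-column CSV the tool produces (to a string buffer here).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["State", "Count"])
for state, count in sorted(state_counts.items()):
    writer.writerow([state, count])

print(buf.getvalue())  # New Mexico,2 and Texas,2 (not 3 each)
```

Swapping the set for a plain list (counting every row) would reproduce feature-level totals in the thousands, which matches the symptom described above.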
Thanks again, great practice for me so far even if I don't manage to finish the tool.