I have a script that calculates several fields in a table and then publishes that table to a SQL Server database.
I publish with pandas, building the pandas DataFrame from a NumPy array.
I create that array with arcpy.da.TableToNumPyArray().
Whenever line 6 is run I receive "TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'".
See code below:
Childcare = mp.listTables("Childcare_Coordinates")[0]  # Create table variable
Fields = [f.name for f in arcpy.ListFields(Childcare)]  # Get field names
Fields.pop(0)  # Remove objectid
# Create NumPy array
NumpyArray = arcpy.da.TableToNumPyArray(Childcare, Fields, skip_nulls=False)
# Convert NumPy array to pandas DataFrame
df = pd.DataFrame(NumpyArray)
# Rename objectid_1 back to objectid
df = df.rename(columns={"objectid_1": "objectid"})
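For context, the publish step itself is not shown above; it looks roughly like the sketch below. The SQLAlchemy engine, driver string, and target table name here are placeholders, not the exact code.

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection details - adjust server, database and ODBC driver as needed
engine = create_engine(
    "mssql+pyodbc://user:password@SERVER/Database?driver=ODBC+Driver+17+for+SQL+Server"
)

# Write the DataFrame built above to SQL Server
df.to_sql("Childcare_Coordinates", engine, if_exists="replace", index=False)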
Every other piece functions as it should with these two variables except line 6. Why?
This has worked in the past; then one day it woke up and decided it didn't want to be friends with me anymore. It is not isolated to this table; it happens with all tables.
I have tried referencing the table from a workspace instead of a map as well, with the same error.
TableToNumPyArray—ArcGIS Pro | Documentation
Is there a chance that you have nulls/missing values in your dataset (particularly in your integer fields) that need to be evaluated and replaced, as per the example in the help topic?
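If it helps to check, here is a minimal sketch (assuming the same mp map object and table name from the original post) that counts the nulls in each integer field with a search cursor:

import arcpy

Childcare = mp.listTables("Childcare_Coordinates")[0]

# Integer-type fields are the ones TableToNumPyArray cannot build from None
int_fields = [f.name for f in arcpy.ListFields(Childcare)
              if f.type in ("Integer", "SmallInteger")]

# Count the nulls in each integer field
null_counts = {name: 0 for name in int_fields}
if int_fields:
    with arcpy.da.SearchCursor(Childcare, int_fields) as cursor:
        for row in cursor:
            for name, value in zip(int_fields, row):
                if value is None:
                    null_counts[name] += 1

print(null_counts)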
I think Dan is right about nulls. null_value=0 replaces None values with 0; adjust this value based on your data requirements.
Childcare = arcpy.ListTables("Childcare_Coordinates")[0]  # Create table variable
Fields = [f.name for f in arcpy.ListFields(Childcare)] # Get field names
Fields.pop(0) # Remove objectid
# Create NumpyArray with null_value parameter
NumpyArray = arcpy.da.TableToNumPyArray(Childcare, Fields, skip_nulls=False, null_value=0)
# Convert NumpyArray to Pandas DataFrame
df = pd.DataFrame(NumpyArray)
# Rename objectid_1 back to objectid
df = df.rename(columns={"objectid_1": "objectid"})
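If 0 is a legitimate value in those fields, one variation (a sketch only, not tested against your data) is to use an obvious sentinel and then convert the affected columns to pandas' nullable Int64 so the nulls survive:

SENTINEL = -9999  # assumed to never occur in the real data

NumpyArray = arcpy.da.TableToNumPyArray(Childcare, Fields, skip_nulls=False,
                                        null_value=SENTINEL)
df = pd.DataFrame(NumpyArray)

# Turn the sentinel back into proper nulls in the integer columns
int_fields = [f.name for f in arcpy.ListFields(Childcare)
              if f.type in ("Integer", "SmallInteger")]
for col in int_fields:
    if col in df.columns:
        df[col] = df[col].mask(df[col] == SENTINEL).astype("Int64")

With the columns stored as Int64, df.to_sql() should then write those values as NULL rather than 0.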
Your solution works Tony. It would appear that two fields that were once text fields are now integer fields, which explains why it worked previously and then suddenly stopped: when we recreated the tables, we changed the field type from text to int.
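That lines up with the error: a null in a text field is converted to a string without complaint, but a null in an integer field forces NumPy to call int(None). A standalone repro with plain NumPy (no arcpy needed), just to illustrate the mechanism:

import numpy as np

# A null in a text column converts cleanly (None simply becomes the string "None")
np.array([("a",), (None,)], dtype=[("name", "U10")])

# A null in an integer column has to go through int(None), which fails
try:
    np.array([(1,), (None,)], dtype=[("id", "<i4")])
except TypeError as err:
    print(err)  # int() argument must be ... not 'NoneType'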