How do I modify my syntax to properly read a CSV file and write its contents to a shapefile using an update cursor?
# Read csv file & write to shapefile via update cursor
import csv, os
import arcpy

updateCursor = arcpy.da.UpdateCursor(r'sample.shp', ["Name", "OrderCount", "Time", "Miles"])
with open(r'CsvToShape.csv', 'rb') as f:
    reader = csv.reader(f)
    for row in reader:
        route = row[0]
        order = row[1]
        time = row[2]
        mile = row[3]
        for row in updateCursor:
            row.setValue(route = Name)
##            row[1] = order
##            row[2] = time
##            row[3] = mile
            updateCursor.updateRow(row)
The polygon feature is not empty; I want to add the table info from the CSV file.
Can you elaborate on "still have the issue of passing the values to the cursor"? Are you getting an error message? If so, what? Are you getting unexpected results? If so, what?
The row object returned by iteration of reader is NOT an array. You must unpack the dictionary into an array in the order required by the field list.
Your current logic assumes that the order of the file and the feature class as processed in the cursor are identical. In most cases, this will end badly.
- V
I believe you are referring to my example below of converting a dictionary to a list.
In my case, I would read the csv file as a dictionary, not hardcoded in the script.
my_dict = {'A1': 405, 'A2': 145}
list_items = my_dict.items()
list_keys = my_dict.keys()
list_values = my_dict.values()
print str(list_items) + " These are the keys and values"
print str(list_keys) + " These are the keys"
print str(list_values) + " These are values"
[('A1', 405), ('A2', 145)] These are the keys and values
['A1', 'A2'] These are the keys
[405, 145] These are values
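Reading the CSV straight into dictionaries can be done with `csv.DictReader` rather than hardcoding. A minimal sketch (Python 3 shown here; the headers match the question's fields, and an in-memory file stands in for `CsvToShape.csv` — the sample values are made up for illustration):

```python
import csv
import io

# Stand-in for open(r'CsvToShape.csv'); headers match the question's field list.
f = io.StringIO("Name,OrderCount,Time,Miles\n"
                "RouteA,3,10:00,2.5\n"
                "RouteB,1,11:30,4.0\n")

rows = list(csv.DictReader(f))  # one dict per CSV row, keyed by header
# rows[0]['Name'] is 'RouteA'; note DictReader yields everything as strings
```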
arcpy DA cursors require that the column list be explicit (or a wildcard, which matches all columns in table order, though it's best practice to always define your columns explicitly), and that the values in the array passed to updateRow be in the order specified in the cursor declaration. Your code does not do this: it passes in a dictionary, which is not allowed, and that generates the error.
In order to get your code working, you'll need to populate an array (and advance the cursor in step with the reader):
for row in reader:
    next(updateCursor)  # advance to the matching shapefile row
    rowArray = []
    for name in ["Name", "OrderCount", "Time", "Miles"]:
        rowArray.append(row[name])
    updateCursor.updateRow(rowArray)
Be careful: the CSV field names might not match the shapefile's by case.
The logic still requires that the rows in the shapefile exactly match the CSV. The safest bet is to store the small CSV in a dictionary keyed by ID, add the ID column to the field list (if not keyed by Name), and use the shapefile's ID to extract the appropriate row.
- V
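A sketch of that keyed-dictionary approach (field names come from the question; the sample CSV values are invented, and the arcpy portion is left as comments since it only runs inside ArcGIS — this is an illustration, not code from the thread):

```python
import csv
import io

FIELDS = ("Name", "OrderCount", "Time", "Miles")

def csv_to_lookup(fileobj, key_field="Name"):
    """Index the CSV by key_field so shapefile row order no longer matters."""
    return {row[key_field]: row for row in csv.DictReader(fileobj)}

# Stand-in for open(r'CsvToShape.csv'); note the rows are deliberately
# out of order -- the lookup makes that irrelevant.
lookup = csv_to_lookup(io.StringIO(
    "Name,OrderCount,Time,Miles\n"
    "RouteB,1,11:30,4.0\n"
    "RouteA,3,10:00,2.5\n"))

# Then, inside ArcGIS (not runnable here):
# with arcpy.da.UpdateCursor(r'sample.shp', list(FIELDS)) as cursor:
#     for shp_row in cursor:
#         data = lookup[shp_row[0]]              # match on the shapefile's Name
#         cursor.updateRow([data[f] for f in FIELDS])
```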
If this is a point file, it is far easier to read the csv file into a numpy array, then use arcpy.da.NumPyArrayToFeatureClass.
There are many ways to do this, but a dictionary describing the field names and data types will do it:
import numpy as np

dt = {'names': ('Longitude', 'Latitude', 'A', 'B', 'C'),
      'formats': ('f8', 'f8', 'i4', 'f8', 'U4')}
# ---- the csv name, the datatype as described above, skip the header row, delimiter is ','
np.loadtxt('c:/temp/test.csv', dtype=dt, skiprows=1, delimiter=',')
array([(-75.5, 45.5, 1, 1., ' aaa'), (-75. , 45. , 2, 2., ' bbb')],
      dtype=[('Longitude', '<f8'), ('Latitude', '<f8'), ('A', '<i4'), ('B', '<f8'), ('C', '<U4')])
"""
original file
Longitude, Latitude, A, B, C
-75.5, 45.5, 1, 1.0, aaaa
-75.0, 45.0, 2, 2.0, bbbb
"""
It is a polygon.
Thank you for the advice, I will work this out with your suggestions.
Can you please advise? I can't sort this out.
Hi, if I use this, can I export the CSV file with numeric values rather than text? For example, 1 or 2 instead of Yes or No?