I am trying to automatically add multiple photo attachments to a point using Add Attachments.
First I need to use Generate Match Table to create the match table, and I am struggling to build a photo path that picks up multiple attachments. My working folder has photos 18_1a, 18_1b, and 18_1c, and my matching field value is 18. The path works when there is a single photo named "18", but the rest of the photos do not get picked up. How do I name these so the tool recognizes multiple attachments for my single "18" point?
Johannes,
Thanks for taking the time to answer everyone's questions. I have a layer that has multiple photos taken of a location with individual attachment columns. Please see below
The photos are all in a file location outside the gdb. I've enabled attachments. I've tried to create a match table but each time it only shows the headers and no rows underneath.
When I attempted the Python script, I got an error:
File "C:\SubsurfaceMaps\ATTACHMENTS.pyt", line 13, in <module> match_table = arcpy.management.CreateTable(str(p.parent), str(p.name)) NameError: name 'arcpy' is not defined
Any help or insight would be very much appreciated.
Thank you
NameError: name 'arcpy' is not defined
The script is meant to be executed in the ArcGIS Pro Python Window, where arcpy is imported by default.
If you want to execute arcpy code outside of the Python Window (e.g. in custom tools or standalone scripts), you have to import arcpy first.
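For example, the error above goes away once the import happens before the first arcpy call (a minimal sketch; the output path is a placeholder):

import arcpy
from pathlib import Path

# placeholder output location, adjust to your data
p = Path(r"memory\match")
match_table = arcpy.management.CreateTable(str(p.parent), str(p.name))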
Your problem is a little different from the other ones in this thread. Your feature class basically already is a match table. You could probably run AddAttachments for each AttachmentField.
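As a rough sketch of that per-field approach (the feature class, field names, and photo folder below are assumptions, adjust them to your data):

# run AddAttachments once per attachment column, using the feature class
# itself as its own match table
import arcpy

fc = "TestPoints"                                   # assumed feature class
attachment_fields = ["Attachment1", "Attachment2"]  # assumed attachment columns
photo_folder = r"C:\path\to\photos"                 # assumed folder holding the files

for field in attachment_fields:
    arcpy.management.AddAttachments(
        fc,            # in_dataset
        "OBJECTID",    # in_join_field
        fc,            # in_match_table: the fc doubles as the match table
        "OBJECTID",    # in_match_join_field
        field,         # in_match_path_field
        photo_folder)  # in_working_folder, prepended to relative file names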
To make it easier, you can transpose your table. So instead of
ID | Attachment1 | Attachment2 |
1 | a1 | |
2 | a2 | a3 |
You get
ID | Attachment |
1 | a1 |
2 | a2 |
2 | a3 |
This script will take care of the transformation; the MatchID field will contain the ObjectIDs of your fc.
in_table = "TestPoints"
out_match_table = r"memory\match"
attachment_fields = ["TextField1", "TextField2"]
import arcpy
from pathlib import Path
# create match table
p = Path(out_match_table)
match_table = arcpy.management.CreateTable(str(p.parent), str(p.name))
arcpy.management.AddField(match_table, "Filename", "TEXT")
arcpy.management.AddField(match_table, "MatchID", "LONG")
# for each row in in_table, insert all attachment names into match table
with arcpy.da.InsertCursor(out_match_table, ["FileName", "MatchID"]) as i_cursor:
with arcpy.da.SearchCursor(in_table, ["OID@"] + attachment_fields) as s_cursor:
for row in s_cursor:
oid = row[0]
for attachment_name in row[1:]:
if attachment_name is not None:
i_cursor.insertRow([attachment_name, oid])
Input fc:
Output match table:
Then use that match table in AddAttachments:
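In Python, the equivalent call might look like this (a sketch; the join field and working folder below are assumptions):

# MatchID holds ObjectIDs, so the join field on the fc is OBJECTID; the working
# folder is only needed if the match table stores relative file names
arcpy.management.AddAttachments(
    "TestPoints",          # in_dataset
    "OBJECTID",            # in_join_field
    r"memory\match",       # in_match_table (out_match_table from the script above)
    "MatchID",             # in_match_join_field
    "Filename",            # in_match_path_field
    r"C:\path\to\photos")  # in_working_folder, placeholder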
Which should attach all specified files:
Hi @JohannesLindner, I've tried using your script to generate an attachment match table for multiple JPGs per row, which I want to attach to a feature layer. I'm only getting an empty match table, but not receiving any errors. I'm using a field called 'name' in the 'in_table', which matches the photo filenames in the 'photos' folder. There are approximately 6000 images in the 'photos' folder. I'm not sure why it isn't working, do you have any advice? Thank you.
# same parameters as in the tool
in_table = "NFM_for_match"
in_folder = r"C:\GIS\photos"
out_match_table = r"C:\GIS\working.gdb\matchtable"
key_field = "name"
input_data_filter = "*"  # e.g. "*.jpg"
relative_path = True

import arcpy
from pathlib import Path

# create match table
p = Path(out_match_table)
match_table = arcpy.management.CreateTable(str(p.parent), str(p.name))
arcpy.management.AddField(match_table, "Filename", "TEXT")
arcpy.management.AddField(match_table, "MatchID", "LONG")  # use "TEXT" if the key field holds text values

# get all files that match the filter
in_files = list(Path(in_folder).glob(input_data_filter))

# for each row in in_table, get all files whose name (without extension) equals
# the key field value or starts with that value followed by "_"
with arcpy.da.InsertCursor(match_table, ["MatchID", "Filename"]) as i_cur:
    with arcpy.da.SearchCursor(in_table, [key_field]) as s_cur:
        for row in s_cur:
            key = str(row[0])
            for f in in_files:
                if f.stem == key or f.stem.startswith(key + "_"):
                    fn = f.name if relative_path else str(f)
                    i_cur.insertRow([row[0], fn])
Thank you,
Heather
Can you please post screenshots of
Hi @JohannesLindner,
I've attached:
Thank you so much.
Best wishes,
Heather.
I'm not quite sure what you're doing here... This seems to be either a modified attachment table or the attachment table joined to the feature class. And now you want to add new attachments based on the names of the attachments that are already present?
Anyway, the original script matches files whose names (without extension) match the key field value or start with that value followed by an underscore. Your name field includes the extension, so the script doesn't find any matching files.
Change the key = str(row[0]) line of the original script to:
key = str(row[0]).split(".")[0]
This splits the value in the key field at the dot and takes the first part, which is the file name without the extension.
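For example (illustrative file names only):

key = "18_1a.jpg".split(".")[0]           # -> "18_1a"
# if your file names can contain additional dots, rsplit keeps everything
# before the last dot instead:
key = "site.north.jpg".rsplit(".", 1)[0]  # -> "site.north"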
Hi @JohannesLindner,
Thank you. I tried a different way and stripped the file extension from the 'name' field before running your script again. It still did not output a table with records. I've attached a screenshot of the error.
However, I've realised that if I change the field type of 'MatchID' to text, I do get a table with the filenames, as in the screenshot. The 'MatchID' field then brings across the 'name' values instead of a count of the matches. So I'm not sure what is going wrong here.
Yes, you're right about my table looking like a join of the feature layer and an attachment table. This is because I have buffered a point layer to get a polygon feature class, but that operation does not carry over attachments. I used a script to export the attachments from the point layer into the 'photos' folder, then joined the attachment table to the polygon feature layer table to keep the details of each row matched to each photo name in the folder, because each row might have more than one attachment. Perhaps there is a better way? I could start a new thread on this. It seems very close now; if I can adapt your script to work, I will be able to match the attachments back to my feature layer.
Thank you so much,
Heather.
OK, so you have a point fc with attachments, and you want buffers with attachments, is that correct?
Maybe you could use what you have so far, but I'm not sure without knowing more. There could be all sorts of problems with the 1:m joins you did.
If possible, I'd start fresh.
points = "path:/to/point_fc"
polygons = "path:/to_polygon_fc"
in_folder = "path:/to/in_folder" # folder where the attachments are (or will be) exported
from pathlib import Path
# get a dict {GlobalID: ObjectID} for the point fc
guid_to_oid = {guid: oid for guid, oid in arcpy.da.SearchCursor(points, ["GlobalID", "OBJECTID"])}
# get a dict {OldObjectID: NewGlobalID}
oid_to_new_guid = {oid: guid for guid, oid in arcpy.da.SearchCursor(polygons, ["GlobalID", "ORIG_FID"])}
# create the match table
match_table = arcpy.management.CreateTable("memory", "MatchTable")
arcpy.management.AddField(match_table, "Filename", "TEXT")
arcpy.management.AddField(match_table, "MatchID", "GUID")
# export the old attachments and fill the match table
points_attach = points + "__ATTACH"
in_folder = Path(in_folder)
with arcpy.da.InsertCursor(match_table, ["Filename", "MatchID"]) as i_cursor:
    with arcpy.da.SearchCursor(points_attach, ["REL_GLOBALID", "ATT_NAME", "DATA"]) as s_cursor:
        for rel_gid, att_name, data in s_cursor:
            # get attachment path
            att_path = in_folder / att_name
            # export (delete this line if you already have exported)
            att_path.write_bytes(data)
            # map from old rel_globalid (points.GlobalID) to polygons.GlobalID
            old_oid = guid_to_oid[rel_gid]
            new_rel_gid = oid_to_new_guid[old_oid]
            # insert into match table
            i_cursor.insertRow([str(att_path), new_rel_gid])
# Add attachments to polygon fc
arcpy.management.AddAttachments(polygons, "GlobalID", match_table, "MatchID", "Filename")
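If you want a quick sanity check afterwards (a sketch, assuming the default __ATTACH table naming), you can compare the attachment counts:

# attachment tables are stored next to the feature classes as <name>__ATTACH
print(arcpy.management.GetCount(points + "__ATTACH"))    # attachments on the points
print(arcpy.management.GetCount(polygons + "__ATTACH"))  # should match if every point got exactly one buffer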
Thank you @JohannesLindner ,
This works! I will definitely have a great day and share with my team at The Rivers Trust.
Thank you so much we are very grateful for your help,
Heather