I’m trying to create a Python script that runs after hours to update a standalone GIS table (SDE, MS SQL). Currently all of the rows are deleted and an updated CSV file is appended to the GIS table, basically a wipe and replace. That will no longer be an option because we're only exporting the current year, so prior years must remain in the GIS table. Is it possible to append new rows and either update or delete existing rows within the GIS table using the CSV? If so, could someone provide some guidance?
I have a ticket number field that can be used as a reference between the GIS and the export.
Thank you,
Daniel
If nothing changes in prior-year records, then you can throw a where clause in to delete only the current-year records, then follow your process to import the CSV.
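A minimal arcpy sketch of that idea (the table path, CSV path, and year cutoff below are placeholders, and I'm assuming you have a date field you can filter on):

import arcpy

# Placeholder paths -- substitute your own SDE connection and export file
tickets_table = r"Database Connections\gis.sde\dbo.FireHydrant_Tickets"
tickets_csv = r"C:\exports\FMFieldTickets.csv"

# Delete only the current-year rows, then re-append the current-year CSV
where_clause = "FHMF_Prepared_Date >= '2016-01-01'"   # assumed date field and cutoff
arcpy.MakeTableView_management(tickets_table, "tickets_view", where_clause)
arcpy.DeleteRows_management("tickets_view")
arcpy.Append_management(tickets_csv, tickets_table, "NO_TEST")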
Currently all of the rows are deleted and an updated CSV file is appended to the GIS table, basically a wipe and replace
Make this easy for forum members to help you by posting up the current code you have. This will give us something to go on and an initial view of your process needs. If this is just done manually, then spell out the steps you use.
There are some considerations when attempting to update RDBMS tables that participate in an ArcSDE instance, and you need to be aware of these potential issues before deciding on a path to take.
Please see the attached file. I played off Mahtab's suggestion, but used a join of just the current year (CSV), keeping only the matching records. I delete the joined table rows, remove the join, and append the CSV back to the GIS table (updated).
Using this method I run into the same issue I had before: Append refuses to work through Python. The append portion of the script will work all day long through ModelBuilder, but that's not an option; the script must run after hours.
Hmm, that needed to be updated. It's not triggering the Delete either. Still looking into it.
I don't see what the issue is and why you need to join anything. According to your OP you just need to insert new rows into the GDB table.
Aside from that, perhaps it has something to do with naming everything the same?
FH_Tickets_Table_Out = "\\in_memory\\FireHydrant_Tickets"
FH_Join_Output = "\\in_memory\\FireHydrant_Tickets"
FH_Delete_Matches = "\\in_memory\\FireHydrant_Tickets"
FH_Tickets_CSV_Join_Out = "\\in_memory\\FireHydrant_Tickets"
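If it were me, I'd collapse those to a single view name, something like this (untested; the name is just illustrative), so it's obvious what each tool is acting on:

FH_Tickets_View = "FireHydrant_Tickets_View"   # one view name for every step
arcpy.MakeTableView_management(Fire_Hydrant_Tickets_Table, FH_Tickets_View)
arcpy.AddJoin_management(FH_Tickets_View, "pk_FHMF_ID", FMFieldTickets_CSV, "pk_FHMF_ID", "KEEP_COMMON")
arcpy.DeleteRows_management(FH_Tickets_View)   # intended to remove the joined (matching) rows
arcpy.RemoveJoin_management(FH_Tickets_View)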
I really don't know, and I'm not able to follow your code well enough to understand your process logic.
Thanks James.
Sorry for the confusion. ModelBuilder formats all of the process outputs the same when you export the model as a Python script. Yes, you're correct I would like to insert rows, but if possible I would like to just update / delete the existing rows where the IDs match to improve efficiency / performance. Since I was not able to figure that out, I proceeded with joining the ticket export CSV of daily updates from our work order system to the GIS table to delete only the found set (which would be the current year). Using an append I updated just the current year of tickets.
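For reference, the update-where-IDs-match idea I was hoping for would look roughly like this (just an outline with arcpy.da cursors; the field names come from the script further down, and I haven't actually gotten this working):

import csv
import arcpy

# Read the daily CSV export into a dictionary keyed by ticket number
csv_rows = {}
with open(FMFieldTickets_CSV) as f:
    for row in csv.DictReader(f):
        csv_rows[row["pk_FHMF_ID"]] = row

# Update the existing rows whose ticket number matches the CSV
# (a versioned SDE table may also need an arcpy.da.Editor edit session here)
fields = ["pk_FHMF_ID", "FHMF_Completion_Date", "z_Status"]   # subset, just for the sketch
with arcpy.da.UpdateCursor(Fire_Hydrant_Tickets_Table, fields) as cursor:
    for ticket_id, completion_date, status in cursor:
        match = csv_rows.pop(str(ticket_id), None)
        if match is not None:
            # date/text values from the CSV may need casting to the field types
            cursor.updateRow([ticket_id, match["FHMF_Completion_Date"], match["z_Status"]])

# whatever is left in csv_rows would be the new tickets to insert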
The FireHydrant_Tickets table resides in the SDE database (MS SQL). It's also set up as a related table to our Fire Hydrants feature class. Because the other system we use for processing our work orders is not connected to our SQL Server (design limitations), the only way for us to get the information out is to use a .MER file type (a delimited text file that includes column headers). We rename it to .CSV to make it compatible with ArcGIS. We have three export files; EMER and Bagged must be sent via email to our local county departments, and the Field Tickets file is only used to update our information.
The only part of the code that is not working properly is below.
# Start Update Field Ticket Script Part
if os.path.isfile(FMFieldTickets_MER):
    # Rename file
    os.rename(FMFieldTickets_MER, FMFieldTickets_CSV)

    # Process: Make Table View
    arcpy.MakeTableView_management(Fire_Hydrant_Tickets_Table, FH_Tickets_Table_Out, "", "", "OBJECTID OBJECTID VISIBLE NONE;pk_FHMF_ID pk_FHMF_ID VISIBLE NONE;fk_FH_ID fk_FH_ID VISIBLE NONE;FHMF_Prepared_Date FHMF_Prepared_Date VISIBLE NONE;FHMF_Type FHMF_Type VISIBLE NONE;FHMF_Flowing_Hydrant_Number FHMF_Flowing_Hydrant_Number VISIBLE NONE;FHMF_Completed_ByName FHMF_Completed_ByName VISIBLE NONE;FHMF_Completion_Date FHMF_Completion_Date VISIBLE NONE;FHMF_Closed_By FHMF_Closed_By VISIBLE NONE;FHMF_Closed_Date FHMF_Closed_Date VISIBLE NONE;z_Status z_Status VISIBLE NONE")

    # Process: Add Join
    arcpy.AddJoin_management(FH_Tickets_Table_Out, "pk_FHMF_ID", FMFieldTickets_CSV, "pk_FHMF_ID", "KEEP_COMMON")

    # Process: Select ALL (Joined)
    arcpy.SelectLayerByAttribute_management(FH_Tickets_CSV_Join_Out, "NEW_SELECTION", "")

    # Process: Delete Rows
    arcpy.DeleteRows_management(FH_Join_Output)

    # Process: Remove Join
    arcpy.RemoveJoin_management(FH_Delete_Matches, "")

    # Process: Append
    arcpy.Append_management(FMFieldTickets_CSV, FH_Tickets_Table_Out, "NO_TEST", "pk_FHMF_ID \"Ticket #\" true true false 4 Long 0 10 ,First,#," + FMFieldTickets_CSV + ",pk_FHMF_ID,-1,-1;fk_FH_ID \"Fire Hydrant #\" true true false 4 Long 0 10 ,First,#," + FMFieldTickets_CSV + ",fk_FH_ID,-1,-1;FHMF_Prepared_Date \"Prepared Date\" true true false 36 Date 0 0 ,First,#," + FMFieldTickets_CSV + ",FHMF_Prepared_Date,-1,-1;FHMF_Type \"Type\" true true false 255 Text 0 0 ,First,#," + FMFieldTickets_CSV + ",FHMF_Type,-1,-1;FHMF_Flowing_Hydrant_Number \"Flowing Hydrant #\" true true false 4 Long 0 10 ,First,#," + FMFieldTickets_CSV + ",FHMF_Flowing_Hydrant_Number,-1,-1;FHMF_Completed_ByName \"Completed By\" true true false 255 Text 0 0 ,First,#," + FMFieldTickets_CSV + ",FHMF_Completed_ByName,-1,-1;FHMF_Completion_Date \"Completion Date\" true true false 36 Date 0 0 ,First,#," + FMFieldTickets_CSV + ",FHMF_Completion_Date,-1,-1;FHMF_Closed_By \"Closed By\" true true false 255 Text 0 0 ,First,#," + FMFieldTickets_CSV + ",FHMF_Closed_By,-1,-1;FHMF_Closed_Date \"Closed Date\" true true false 36 Date 0 0 ,First,#," + FMFieldTickets_CSV + ",FHMF_Closed_Date,-1,-1;z_Status \"Status\" true true false 255 Text 0 0 ,First,#," + FMFieldTickets_CSV + ",z_Status,-1,-1", "")
    #arcpy.Append_management(FMFieldTickets_CSV, FH_Tickets_Table_Out, "NO_TEST", "", "")

    # Process: Reconcile Versions
    arcpy.ReconcileVersions_management(GISSQL_Main__sdeadmin__sde, "ALL_VERSIONS", "dbo.DEFAULT", "DBO.QA_BASE", "LOCK_ACQUIRED", "NO_ABORT", "BY_OBJECT", "FAVOR_TARGET_VERSION", "POST", "KEEP_VERSION", Reconcile_Versions_Log)

    # Remove CSV
    os.remove(FMFieldTickets_CSV)
Yes, you're correct I would like to insert rows, but if possible I would like to just update / delete the existing rows where the IDs match to improve efficiency / performance.
You are wanting to handle a transactional process with a scripting language. The frequency of the updates/inserts would likely drive my approach. If it's an interactive/dynamic requirement, then I'd develop an application tier to help manage it (ADO.NET would be my first choice, along with stored procedures on the SQL database). This Storm Water Revenue Management System | ArcNews article is a good example.
An alternative might be a simple TRUNCATE followed by a BULK INSERT, contained in a stored procedure; that would likely suffice, but I have no idea what data size you are working with. This table could participate in a join or spatial view and be presented to the user.
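Just to illustrate, calling such a stored procedure from Python could be as simple as the following; the procedure name, server, database, and file path are entirely hypothetical, and truncating a table registered with SDE has its own implications:

import pyodbc

# Hypothetical connection details and stored procedure name
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=GISSQL;DATABASE=Main;Trusted_Connection=yes")
cursor = conn.cursor()

# The procedure would TRUNCATE the staging table and BULK INSERT the export file
cursor.execute("EXEC dbo.usp_ReloadFieldTickets ?", r"\\server\share\FMFieldTickets.csv")
conn.commit()
conn.close()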
Sorry, I don't have a Python recommendation; I hope someone has a better solution for you.
It might help us to understand the problem in the script if you could share the exact error that you are getting.
Secondly, I am not sure you can call DeleteRows on a join. This is what I would try (a rough sketch follows the list):
1. After Line 13, update some field with a unique value so those records can be filtered out in the table later.
2. Remove the join and filter the records based on the value updated in step 1.
3. Call DeleteRows to delete the filtered records and proceed with the remaining part at Line 22.
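Roughly like this (untested; I'm reusing the table and CSV variables from your script, and the z_Status field and 'DELETE_ME' value are only stand-ins for whatever field and flag you choose in step 1):

arcpy.MakeTableView_management(Fire_Hydrant_Tickets_Table, "fh_view")
arcpy.AddJoin_management("fh_view", "pk_FHMF_ID", FMFieldTickets_CSV, "pk_FHMF_ID", "KEEP_COMMON")

# Step 1: flag the joined (matching) rows with a unique value
# (while the join is active the field may need its fully qualified name)
arcpy.CalculateField_management("fh_view", "z_Status", "'DELETE_ME'", "PYTHON")

# Step 2: remove the join and filter on that flag
arcpy.RemoveJoin_management("fh_view")
arcpy.SelectLayerByAttribute_management("fh_view", "NEW_SELECTION", "z_Status = 'DELETE_ME'")

# Step 3: delete the flagged rows, then continue with the Append as before
arcpy.DeleteRows_management("fh_view")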
Couldn't you just put a where clause in the MakeTableView on line 7 to grab the current year, then run DeleteRows? That will delete only the rows matching the where query and skip the join/select part. Then you could just Append, or use an InsertCursor and insertRow(), to load the "new" data (rough sketch below).
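Something like this for the InsertCursor part (a sketch only, reusing the CSV and table variables from the script above; values may need casting to match the field types):

import csv
import arcpy

fields = ["pk_FHMF_ID", "fk_FH_ID", "FHMF_Type", "z_Status"]   # subset, just for the sketch
with open(FMFieldTickets_CSV) as f:
    reader = csv.DictReader(f)
    with arcpy.da.InsertCursor(Fire_Hydrant_Tickets_Table, fields) as cursor:
        for row in reader:
            cursor.insertRow((int(row["pk_FHMF_ID"]), int(row["fk_FH_ID"]),
                              row["FHMF_Type"], row["z_Status"]))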
R_