POST | 08-06-2020 12:07 PM
Hello,

To summarize my current workflow: I use tkinter in Python 3.x (Esri's release that accompanies each Pro update) to execute some Esri geoprocessing functions for exporting SDE data (create a local File GDB, export feature classes from the SDE into the File GDB, run some field calculations, zip up the File GDB, etc.). This workflow has worked just fine for me over the past few minor releases of Pro.

However, after the recent upgrade to Pro 2.6, I'm running into errors related to arcpy's CreateFileGDB_management(). When I create a file geodatabase through arcpy, it leaves a "_gdb" sr.lock in place even after the file geodatabase has been created. All other functions that occur after this step (exporting feature classes, field calculations, etc.) properly release their locks upon completion, but this one lock gets stuck during the File GDB creation step. Because the lock persists, some of my applications now have problems zipping and migrating the data.

After some experimentation, I can get the sr.lock to release by executing arcpy.ClearWorkspaceCache_management(). However, I've never needed that call when creating local File GDBs before. Is this sr.lock working as intended, or could a bug have been introduced in Pro 2.6? Again, this was not an issue for my tools in Pro 2.x prior to 2.6, so I'm curious whether anyone else is encountering it now as well. Thanks for your assistance!

Here's my code for this particular function; this is where I've isolated the sr.lock hanging:

def func_Create_GDB(self):
    # This function creates an empty File GDB that will be used to house
    # the exported Regional SDE feature classes.
    try:
        # This string creates a naming convention for the new QC Prepped GDB.
        self.gdb_name_prepped = stat_val.gdb_name_prepped + "_" + self.input_zip_file_name_date_only
        # Display message within scrolled text box.
        self.func_Scroll_SetOutputText("Creating File GDB:\n" +
                                       self.gdb_name_prepped + stat_val.string_gdb + "\n" +
                                       "Saving to:\n" +
                                       self.output_folder + self.input_zip_file_name_date_only, None)
        # Creates the File GDB with the new GDB naming convention.
        arcpy.CreateFileGDB_management(self.output_folder + self.input_zip_file_name_date_only,
                                       self.gdb_name_prepped)
        # Display all geoprocessing messages within scrolled text box with blue text.
        self.func_Scroll_SetOutputText(arcpy.GetMessages(), stat_val.color_Blue)
        # Increment progress bar by 2 percent.
        self.percent_value = self.percent_value + 2
        self.func_ProgressBar_SetProgress(self.percent_value)
        # Once task has completed, display message within scrolled text box.
        self.func_Scroll_SetOutputText("Output GDB created!", None)
        # Change function check message from Failure to Success.
        self.check_func_Create_GDB = stat_val.message_Success
    except arcpy.ExecuteError:
        # Display message within scrolled text box.
        self.func_Scroll_SetOutputText(arcpy.GetMessages(2), stat_val.color_Red)
        # Set progress bar to 100 percent.
        self.func_ProgressBar_SetProgress(100)
        # Set completion boolean to False if failure occurs.
        self.check_complete = False
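For anyone hitting the same behavior, here's a stripped-down sketch of the workaround (the output path is made up for illustration). Calling ClearWorkspaceCache_management() right after the GDB is created releases the lingering sr.lock in my testing:

import arcpy

# Hypothetical output location for illustration only.
out_folder = r"C:\data\exports"

# Create the File GDB; under Pro 2.6 this leaves a "_gdb" sr.lock behind for me.
arcpy.CreateFileGDB_management(out_folder, "export_data")

# Workaround: clearing the workspace cache releases the lingering lock,
# so the .gdb folder can be zipped and moved afterwards.
arcpy.ClearWorkspaceCache_management()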
POST | 11-12-2019 07:21 PM
It'll consist of several hundred thousand records being coalesced/exported every two weeks. I haven't quite determined the best workflow for dealing with the intermediate SDE database once the export has finished. I was thinking I'd just leave it as-is and have the script simply overwrite the output each time. The intermediate data holds no value once the export/submission is complete, so retaining it isn't necessary. That said, if performance degrades, I'll try emptying out the SDE after each run and see whether it makes a noticeable difference. I'm no database expert either, but we have a fairly robust enterprise setup with dozens of SDE connections going here, there, and yonder. We've had no issues with large data transfers like this so far (*knocks on wood*), and my tests so far appear more stable than the clunky script we inherited for the process. I can follow up here with more feedback once the application is complete.
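If I do end up emptying the intermediate SDE between runs, this is roughly what I have in mind (just a sketch, with a made-up connection file, and assuming the intermediate feature classes aren't versioned, since Truncate Table doesn't support versioned data):

import arcpy

# Hypothetical connection file for the intermediate SDE database.
arcpy.env.workspace = r"C:\connections\intermediate.sde"

# Remove all rows from each feature class but keep the schemas intact.
for fc in arcpy.ListFeatureClasses():
    arcpy.TruncateTable_management(fc)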
POST | 11-09-2019 04:00 PM
After further research and testing, preserving Global IDs from SDE databases directly to a local File GDB does not appear to be possible with the Append tool. To solve this, I needed to Append all individual SDE databases to a coalesced SDE database, then perform a Feature Class to Feature Class conversion from the coalesced SDE database to a local File GDB (with arcpy.env.preserveGlobalIds = True). That extra step allowed me to preserve the Global IDs as desired. Weird, and definitely not ideal. However, it is functional for now.
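For anyone who finds this later, here's a minimal sketch of the two-step workaround (connection files and feature class names are hypothetical):

import arcpy

arcpy.env.preserveGlobalIds = True

# Step 1: Append each source SDE feature class into the coalesced SDE database.
# Global IDs survive here because both sides are enterprise geodatabases.
arcpy.Append_management(r"C:\connections\source_a.sde\DB.OWNER.PARCELS",
                        r"C:\connections\coalesced.sde\DB.OWNER.PARCELS",
                        "TEST")

# Step 2: Convert from the coalesced SDE database to the local File GDB;
# the Global IDs carry over in this conversion.
arcpy.FeatureClassToFeatureClass_conversion(r"C:\connections\coalesced.sde\DB.OWNER.PARCELS",
                                            r"C:\data\output.gdb",
                                            "PARCELS")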
POST | 11-09-2019 02:36 PM
So I've written a Python script called driver.py (using the Python environment from Pro 2.4.2) that appends several SDE databases' feature classes into a coalesced database. All databases have matching schemas. My goal is to preserve Global IDs on all output. The script performs several tasks:

1) It creates an empty file geodatabase.
2) The necessary feature classes from the first SDE database are exported to the output File GDB (with arcpy.FeatureClassToFeatureClass_conversion).
3) The remaining SDE databases' feature classes are then appended to the first SDE's exports.

Here's the script:

import arcpy
import os

sde_folder = "My\\SDE\\Folder\\Path"
output_folder = "My\\Output\\Folder\\Path"
gdb_name = "testing_gdb.gdb"
schemaType = "TEST"  # "NO_TEST"
fieldMappings = ""
subType = ""

# Create the output File GDB if it does not already exist.
if arcpy.Exists(os.path.join(output_folder, gdb_name)):
    print("GDB already exists")
else:
    arcpy.CreateFileGDB_management(output_folder, gdb_name)

# Gather every SDE connection file in the connections folder.
arcpy.env.workspace = sde_folder
folder_Connections_SDE = arcpy.ListWorkspaces("*", "SDE")

# For each SDE database in the workspace connections folder...
for sdeDatabase in folder_Connections_SDE:
    arcpy.env.workspace = sdeDatabase
    arcpy.env.preserveGlobalIds = True
    if sdeDatabase == folder_Connections_SDE[0]:
        # The first SDE database seeds the output File GDB.
        print("First SDE database")
        fc_list = arcpy.ListFeatureClasses()
        for fc in fc_list:
            if "QC_REPORTS" in fc:
                print("Skipped: " + fc)
            elif "REPLICATION_BND" in fc:
                print("Skipped: " + fc)
            else:
                # Strip the "database.owner." prefix from the fully qualified name.
                output_name = fc.rsplit(".")[2]
                print(output_name)
                arcpy.env.overwriteOutput = True
                arcpy.FeatureClassToFeatureClass_conversion(fc, os.path.join(output_folder, gdb_name), output_name)
    else:
        # Every subsequent SDE database is appended to the seeded exports.
        print(sdeDatabase)
        fc_list = arcpy.ListFeatureClasses()
        for fc in fc_list:
            if "QC_REPORTS" in fc:
                print("Skipped: " + fc)
            elif "REPLICATION_BND" in fc:
                print("Skipped: " + fc)
            else:
                output_name = fc.rsplit(".")[2]
                input_feature_class_path = os.path.join(sdeDatabase, fc)
                output_feature_class_path = os.path.join(output_folder, gdb_name, output_name)
                arcpy.Append_management(input_feature_class_path, output_feature_class_path, schemaType, fieldMappings, subType)

If I run the above script with arcpy.env.preserveGlobalIds = False, it executes without error and all output is created, but the Global IDs are not preserved. However, if I change that setting to True, the script still creates the output GDB and completes the Feature Class to Feature Class conversion (while also preserving the Global IDs), but it fails during the Append operation with the following error:

Traceback (most recent call last):
  File "My/Script/Folder/Path/driver.py", line 81, in <module>
    arcpy.Append_management(input_feature_class_path, output_feature_class_path, schemaType, fieldMappings, subType)
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\management.py", line 4929, in Append
    raise e
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\management.py", line 4926, in Append
    retval = convertArcObjectToPythonObject(gp.Append_management(*gp_fixargs((inputs, target, schema_type, field_mapping, subtype, expression), True)))
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\geoprocessing\_base.py", line 506, in <lambda>
    return lambda *args: val(*gp_fixargs(args, True))
arcgisscripting.ExecuteError: ERROR 999999: Something unexpected caused the tool to fail. Contact Esri Technical Support (http://esriurl.com/support) to Report a Bug, and refer to the error help for potential solutions or workarounds.
Failed to execute (Append).

I read this on Esri's preserveGlobalIds help page: "For the Append tool, this environment only applies to enterprise geodatabase data and will only work on data that has a Global ID field with a unique index. If the Global ID field does not have a unique index, the tool may fail." Since I'm utilizing enterprise geodatabases for the append, I'm unsure about the unique index issue. Does anybody know if I'm missing something here? Thanks for any clarification!
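For what it's worth, here's the quick check I've been using to see whether a Global ID field actually has a unique index (the feature class path is hypothetical):

import arcpy

# Hypothetical fully qualified feature class in one of the source SDE databases.
fc = r"C:\connections\source_a.sde\DB.OWNER.PARCELS"

# Print every index on the feature class, its fields, and whether it is unique.
for index in arcpy.ListIndexes(fc):
    field_names = [field.name for field in index.fields]
    print(index.name, field_names, "unique:", index.isUnique)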