I have an SDE database that is registered in the Data Store of my server:
I reference this database connection in the Python script (the geoprocessing service) like this:
db_con = r'Database Connections\HbMonitoringTest_nbcidb_HabitatTestWriter.sde'
I check to see if the connection evaluates to true, but it doesn't and I'm not sure why.
if arcpy.Exists(db_con):
    arcpy.AddMessage("your db connection worked")
else:
    arcpy.AddMessage("your db connection sucks")
Is this not the appropriate way to reference database connections in a geoprocessing service? Do I need to store the .sde connection file in a local folder on the server and connect using that instead? I have also noticed that, in the v101 folder on the server, the Python script it contains did not replace "Database Connections" with the Esri-generated variable arcpy.env.packageWorkspace. Might that have something to do with it?
Hey Molly, that 'Database Connections' folder is a virtual directory that only exists inside an ArcCatalog/ArcMap session, so a script running on the server can't resolve it. As you suggest, I'd put in the full path to the .sde file, which, if this is on the same machine, could still point at the catalog directory in
C:\Users\JPierre\AppData\Roaming\ESRI\Desktop10.5\ArcCatalog
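As a sketch of that approach, the path can be built with os.path instead of the 'Database Connections' virtual folder. The file name comes from the question; the APPDATA fallback path is an assumption based on the directory quoted above, and should be adjusted to the actual environment:

```python
import os

# Connection-file name from the question; adjust to your environment.
SDE_NAME = 'HbMonitoringTest_nbcidb_HabitatTestWriter.sde'

# Build an absolute path to the ArcCatalog connection-file folder instead of
# relying on the 'Database Connections' virtual folder, which only exists
# inside a Desktop session. APPDATA is only set on Windows, hence the fallback.
catalog_dir = os.path.join(
    os.environ.get('APPDATA', r'C:\Users\JPierre\AppData\Roaming'),
    'ESRI', 'Desktop10.5', 'ArcCatalog')
db_con = os.path.join(catalog_dir, SDE_NAME)

print(db_con)  # pass this full path to arcpy.Exists / arcpy tools
```

An absolute path like this resolves the same way no matter which process or machine runs the script, which is the whole point of avoiding the Desktop-only virtual folder.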
I think there was actually an error during publishing, because when I went into the /extracted/v101 folder, no .sde file was present. I have republished the service and it's working now, but there are two copies of the .sde file instead of one, and I'm not sure why.
When publishing GP services, the database connection file gets copied to the server even though the "Allow data to be copied to the site when publishing services" checkbox is unchecked in my data store settings. Why would it duplicate it, though?
For reference, my python script is very simple:
import arcpy, zipfile

db_con = r'Database Connections\HbMonitoringTest_nbcidb_HabitatTestWriter.sde'
arcpy.AddMessage(db_con)
if arcpy.Exists(db_con):
    arcpy.AddMessage("your db connection worked")
else:
    arcpy.AddMessage("your db connection sucks")
try:
    myZipFile = arcpy.GetParameterAsText(0)
    arcpy.AddMessage(myZipFile)
    scratch = arcpy.env.scratchFolder
    arcpy.AddMessage(scratch)
    zip_ref = zipfile.ZipFile(myZipFile, 'r')
    zip_ref.extractall(scratch)
    zip_ref.close()
    arcpy.AddMessage("success")
except Exception as e:
    arcpy.AddMessage("fail: {}".format(e))
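Independent of the connection question, the unzip step can be tightened up with a context manager and a specific exception, so a bad upload produces a useful message instead of a bare "fail". A sketch that runs outside the server (the arcpy calls are replaced with print, and the demo zip is a stand-in for the real GetParameterAsText(0) input):

```python
import os
import tempfile
import zipfile

def extract_to_scratch(zip_path, scratch):
    """Extract zip_path into scratch, reporting a clear error on failure."""
    try:
        # The context manager closes the archive even if extractall raises.
        with zipfile.ZipFile(zip_path, 'r') as zip_ref:
            zip_ref.extractall(scratch)
        return True
    except zipfile.BadZipfile as e:  # alias of BadZipFile on Python 3
        print('fail: {}'.format(e))
        return False

# Demo with a throwaway zip; in the service, zip_path would come from
# arcpy.GetParameterAsText(0) and scratch from arcpy.env.scratchFolder.
scratch = tempfile.mkdtemp()
zip_path = os.path.join(scratch, 'demo.zip')
with zipfile.ZipFile(zip_path, 'w') as zf:
    zf.writestr('readme.txt', 'hello')

ok = extract_to_scratch(zip_path, scratch)
print(ok, os.path.exists(os.path.join(scratch, 'readme.txt')))
```

Catching zipfile.BadZipfile (rather than a bare except) means a genuine programming error still surfaces in the service logs instead of being swallowed.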
For some reason the republished script also uses the copy (HbMonitoringTest_nbcidb_HabitatTestWriter1.sde):
g_ESRI_variable_1 = os.path.join(arcpy.env.packageWorkspace, u'C:\\arcgisserver\\directories\\arcgissystem\\arcgisinput\\xxx\\ZipTest.GPServer\\extracted\\v101\\HbMonitoringTest_nbcidb_HabitatTestWriter1.sde')
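One thing worth knowing about that generated line: when the second argument to os.path.join is an absolute path, the first argument is discarded entirely, so the packageWorkspace prefix here is a no-op and the hard-coded server path is what actually gets used. A quick demonstration using ntpath (the Windows flavor of os.path, so it behaves the same on any OS; the paths below are made up):

```python
import ntpath

# When the right-hand argument is absolute (drive letter plus root),
# ntpath.join throws away everything to its left.
joined = ntpath.join('pkgWorkspace', 'C:\\arcgisserver\\v101\\conn1.sde')
print(joined)  # C:\arcgisserver\v101\conn1.sde
```

That suggests the publisher baked in the absolute path rather than truly relativizing it against arcpy.env.packageWorkspace, which is consistent with the behavior described in the question.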
Our GIS server is remote, so the publishing machine and the server are different machines.
The "Allow data to be copied to the site" option will actually copy all of the data from your database to the server; I've had that happen to me before when publishing services that have a database connection.
The two .sde files are a mystery to me. One is dated the 19th, so I'm guessing that when you republished, the old one couldn't be overwritten and a new one was created instead.
I've republished a couple of times, and it still ends up with two SDE connection files (their dates have been updated to today). Very strange.