I have created a Python script that contains all of my database connection strings. When I'm working locally, I can import those strings like any regular module and use them. When I publish a geoprocessing service, though, I can't figure out how to use an imported module like this, because those connection strings won't be converted into g_ESRI variables. My goal is to have one central place to update all my connection string paths if they change in the future, rather than going into 50 different scripts and changing each one.
This is what my database connection Python file (DatabaseConnections.py) looks like:
hb_db_con = r'C:\Users\xxx\AppData\Roaming\ESRI\Desktop10.7\ArcCatalog\FocalAreas_HabitatWriter.sde'
fa_db_con = r'C:\Users\xxx\AppData\Roaming\ESRI\Desktop10.7\ArcCatalog\HbMon_HabitatWriter.sde'
#So on and so forth
The top of the geoprocessing service I publish imports those connections like so:
from DatabaseConnections import hb_db_con, fa_db_con
fc1 = hb_db_con + r"\blah.DBO.blahblah"
fc2 = fa_db_con + r"\blah2.DBO.blahblahblah"
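(As an aside, the suffixes need to be raw strings: in a plain Python string literal, `"\b"` is the backspace escape, so concatenating `"\blah..."` silently corrupts the path. A minimal demonstration of that pitfall:)

```python
# In a normal string literal, "\b" becomes the backspace character (\x08),
# so the path suffix is silently mangled; a raw string keeps the backslash.
plain = "\blah.DBO.blahblah"   # starts with \x08, not a backslash
raw = r"\blah.DBO.blahblah"    # starts with a literal backslash

assert plain != raw
assert "\x08" in plain         # the escape swallowed the backslash + 'b'
assert raw.startswith("\\")    # the raw string is the intended path piece
```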
I have copied DatabaseConnections.py to the server's Python library so the service can find the connection strings, but the publishing process obviously doesn't convert those imported strings to paths the server should use; the service just takes the hard-coded connection string as-is, goes searching, and comes up with nothing. Is there a way to centralize database connection strings in your code and still make it publishable?
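For reference, the kind of centralization I'm after could be sketched like this: keep the .sde paths in a small JSON config that each script loads at runtime, so there is still only one file to update. (The file name, keys, and paths below are hypothetical, and I haven't verified whether this survives the publishing step any better than the imported module does.)

```python
import json
import os
import tempfile

# For illustration only: create a sample connections.json in a temp
# directory. In practice this file would live in one known location
# on the server, and its keys/paths are hypothetical.
cfg_dir = tempfile.mkdtemp()
cfg_path = os.path.join(cfg_dir, "connections.json")
with open(cfg_path, "w") as f:
    json.dump({
        "hb_db_con": r"C:\server\connections\FocalAreas_HabitatWriter.sde",
        "fa_db_con": r"C:\server\connections\HbMon_HabitatWriter.sde",
    }, f)

# Each geoprocessing script would then resolve the paths at runtime
# instead of having them baked in at publish time:
with open(cfg_path) as f:
    connections = json.load(f)

hb_db_con = connections["hb_db_con"]
fa_db_con = connections["fa_db_con"]
```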