I wonder if you could create a view on one database that references all those databases that you would like to query against?
Or perhaps the view contains the query?
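One way to sketch that view idea: if the county databases sit on the same SQL Server instance, you can generate a UNION ALL statement across them and wrap it in a view on one database. The database, schema, table, and view names below are hypothetical placeholders:

```python
# Sketch only: build the SQL for a cross-database view, assuming all
# county databases live on one SQL Server instance. All names here
# (AdamsCounty, dbo.STATE_FOREST_AREA, etc.) are made-up examples.
county_dbs = ["AdamsCounty", "BrownCounty", "ClarkCounty"]

selects = [
    "SELECT '{0}' AS county, total_acres "
    "FROM {0}.dbo.STATE_FOREST_AREA".format(db)
    for db in county_dbs
]

# One SELECT per county database, glued together with UNION ALL
view_sql = (
    "CREATE VIEW dbo.STATEWIDE_FOREST_AREA AS\n"
    + "\nUNION ALL\n".join(selects)
)
print(view_sql)
```

A statewide map layer could then point at the single view instead of touching each database in turn.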
You could try something like this:
import arcpy
import os

# specify your folder with the .sde connection files
sde_folder = r"C:\Users\xbakker\AppData\Roaming\ESRI\Desktop10.2\ArcCatalog"
arcpy.env.workspace = sde_folder

# list the sde files
sdes = arcpy.ListWorkspaces("*", "SDE")

# define featureclass name and field name
fcname = "state_forest_area"
fldname = "total_acres"
flds = (fldname,)  # note the trailing comma: (fldname) alone is just a string
total_acres = 0

# loop through the sde files
for sde in sdes:
    try:
        sde_path, sde_name = os.path.split(sde)
        # assuming data is in the sde root and not in a feature dataset
        fc = os.path.join(sde, fcname)
        lst_acres = [r[0] for r in arcpy.da.SearchCursor(fc, flds)]
        county_acres = sum(lst_acres)
        total_acres += county_acres
        print "County '{0}' has {1} acres '{2}'".format(sde_name[:-4], county_acres, fcname)
    except Exception as e:
        print "Error: {0}".format(e)
        print "check existence of the featureclass and field in the current sde connection"

print "Total acres: {0}".format(total_acres)
Kind regards, Xander
Since it's still early in deployment, you should reconsider the multi-database model and evaluate putting all the data in a single database, either by using multiple county users, or by making one comprehensive database and using COUNTY_CODE to restrict mapping by county. Cross-database operations will make access to the data for statewide mapping as difficult and inefficient as possible.
- V
Vince Angelo is absolutely right! (+1 for that) It is better to get things right from the start: create a durable structure that enables analysis and statistics, rather than pile on customization to get results you could have obtained without any customization if your schema were set up correctly. Invest now in setting up your data correctly; the ROI will be high.