Short version: Is there an efficient way to query 300+ feature classes with the same select-by-location query for an area of interest? We can do this using spatial SQL (SQL Server), ArcGIS Server map/feature services, or Portal layers. I can load most of the data into ArcGIS Online, but licensing agreements will require me to keep some layers locally.
Long-ish version: I inherited some ArcPy code from a former coworker that starts with a polygon of interest, queries the SDE database (SQL Server) for any features that overlap, and creates a PDF report of them. There are over 300 different layers in the report, which would mean a lot of database queries. He solves this by combining all the data into a single feature class each for points, lines, and polygons. It's really quick with only 3 queries to the database. Since the input layers all have different attribute fields, his process appends all the fields into a single text field in the format "field1: value1; field2: value2; field3: value3; [field4 etc.]".
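To make the flattening step concrete, here is a minimal sketch of how one feature's attributes might get collapsed into that delimited string (the function name is hypothetical; the real code presumably does this inside a cursor loop while appending to the merged layers):

```python
# Hypothetical sketch of the flattening step: all attribute fields of one
# feature are collapsed into a single delimited text field before the
# feature is appended to the merged point/line/polygon feature class.
def flatten_attributes(row: dict) -> str:
    """Collapse {field: value} pairs into 'field1: value1; field2: value2; ...'."""
    return "; ".join(f"{field}: {value}" for field, value in row.items())

print(flatten_attributes({"ROAD_NAME": "Main St", "LANES": 2}))
# ROAD_NAME: Main St; LANES: 2
```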
It's actually a pretty clever solution, except that it requires a bunch of processing to consolidate all the data into those 3 feature classes for data loading/updating. More code is needed for all the string parsing to turn the one delimited text field back into N fields. Plus, we lose track of field types, since everything is just a string.
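The reverse direction shows both problems at once. A minimal parsing sketch (again, a stand-in for the actual report code) recovers the fields, but every value comes back as a string, and any value that happens to contain the delimiters breaks the split:

```python
def parse_attributes(blob: str) -> dict:
    """Split 'field1: value1; field2: value2' back into {field: value}.

    Caveats this approach cannot avoid: every value is returned as a
    string (the original field types are gone), and any value containing
    '; ' or ': ' will corrupt the parse.
    """
    pairs = (chunk.split(": ", 1) for chunk in blob.split("; ") if chunk)
    return {field: value for field, value in pairs}

print(parse_attributes("ROAD_NAME: Main St; LANES: 2"))
# {'ROAD_NAME': 'Main St', 'LANES': '2'}  -- note LANES is now a string
```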
Going forward, I'd like to just maintain all my feature classes separately and do away with the merged layers, but I also can't wait 10 minutes to do 300+ queries. Multi-threading could make it a bit faster, but might also melt my server. So, is there an efficient way to send the same select-by-location query to a whole bunch of feature classes in SDE? How about to ArcGIS Server, Portal, or ArcGIS Online?
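For what it's worth, a bounded thread pool is one way to fan the queries out without melting the server, since the pool size caps how many queries hit the database at once. A minimal sketch, where `query_layer` is a hypothetical stand-in for whatever per-layer call is actually used (an arcpy select-by-location, a spatial SQL query, or a feature-service REST request):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def query_layer(layer_name, aoi):
    """Placeholder for the real per-layer select-by-location call.
    Returns (layer_name, list_of_matching_features)."""
    return layer_name, []  # pretend nothing intersected the AOI

def query_all(layers, aoi, max_workers=8):
    """Run the same AOI query against many layers concurrently.
    max_workers caps in-flight queries so the server isn't overwhelmed."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(query_layer, name, aoi) for name in layers]
        for future in as_completed(futures):
            name, features = future.result()
            results[name] = features
    return results
```

Whether this actually helps depends on where the time goes: if the bottleneck is the database rather than round-trip latency, more concurrency just queues up on the server.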
Thanks!
You could try generating a near table, which supports multiple near feature classes in a single call.
Just get the closest and read the table.
Clever idea!
I wonder if this single command is really running N queries under the hood. I will try performance testing this, but if anyone out there has an idea, I'd appreciate the feedback.
I have found that the generation of near tables can be very quick.
Adding a few more communities that may be able to help: Geodatabase Enterprise GIS
Thanks, George!
It seems like this querying is part of a process to generate PDF reports. Why is 10 minutes for the process to execute a problem? Is the process running more frequently than every 10 minutes?
Users run this by submitting the area of interest polygon from a web app. They are sitting there watching an hourglass until it returns the result. I agree that if it were just a cron job, then it could take longer, but there is a user waiting on the other end.