I need to run a SearchCursor over a feature class in one geodatabase and conditionally write a subset of the returned records to a feature class in a second geodatabase. Is this possible within a single loop, or must all cursors point at the same geodatabase, meaning the subset records would have to be collected into a temporary object (such as a Python list) before switching workspaces to the second geodatabase and instantiating an InsertCursor there?
BTW - the only reason I'm doing this is to perform a one:one join where the left feature class is large (several million features) and the right table is small (a few thousand rows, held in memory as a Python dictionary). Pro's various join options have proven extremely inefficient, but I can perform the join using a SearchCursor in under 5 minutes. It's just a matter of how to get the results into a new feature class.
Using ArcGIS Pro 2.9
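For concreteness, here's a minimal sketch of the single-loop pattern I'm asking about (the paths, feature class names, fields, and condition below are all placeholders):

```python
import arcpy

src_fc = r"C:\data\source.geodatabase\main.parcels"  # mobile geodatabase A (placeholder)
out_fc = r"C:\data\output.gdb\parcels_subset"        # file geodatabase B (placeholder)

fields = ["SHAPE@", "PARCEL_ID", "ZONE"]

# Open the InsertCursor on geodatabase B once, then stream rows from
# the SearchCursor on geodatabase A inside it, writing only the rows
# that pass the condition.
with arcpy.da.InsertCursor(out_fc, fields) as icur:
    for row in arcpy.da.SearchCursor(src_fc, fields):
        if row[2] == "R1":  # placeholder subset condition
            icur.insertRow(row)
```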
Cursors are independent. It's generally best to avoid nesting them, except in the case of an insert with a search inside (so you're smack-dab in the center of the use-case lane). If both cursors use the same enterprise geodatabase connection, you'd probably be better off with a `da.Editor` session to avoid conflicts.
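For the same-connection case, a rough sketch (hypothetical .sde path and names):

```python
import arcpy

sde = r"C:\connections\prod.sde"  # hypothetical connection file

# Wrap both cursors in an edit session so the insert doesn't
# conflict with the read on the same connection.
with arcpy.da.Editor(sde):
    with arcpy.da.InsertCursor(sde + r"\owner.target_fc", ["SHAPE@", "NAME"]) as icur:
        for row in arcpy.da.SearchCursor(sde + r"\owner.source_fc", ["SHAPE@", "NAME"]):
            if row[1] is not None:  # arbitrary subset condition
                icur.insertRow(row)
```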
- V
Thanks Vince. Good to know about da.Editor...I'll keep that in mind next time.
For the current problem, though, the geodatabase is a mobile geodatabase. I could write the output into the same geodatabase but am trying to avoid doing so - the db is not really intended to be used as a workspace.
It is probably faster to get the source values into a dictionary and then work over them. That way you only use a SearchCursor once to get the values, and you can do the joining within the InsertCursor loop in a single pass using key gets.
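A minimal sketch of that approach, assuming placeholder paths, field names, and a shared key field:

```python
import arcpy

left_fc   = r"C:\data\source.geodatabase\main.parcels"  # several million features (placeholder)
right_tbl = r"C:\data\lookup.gdb\owners"                # a few thousand rows (placeholder)
out_fc    = r"C:\data\output.gdb\parcels_joined"        # pre-created with the joined schema

# One SearchCursor pass over the small table builds the lookup dictionary.
lookup = {
    key: (owner, status)
    for key, owner, status in arcpy.da.SearchCursor(
        right_tbl, ["PARCEL_ID", "OWNER", "STATUS"]
    )
}

# Then a single pass over the big feature class: the join is just a
# dict get, and matches go straight into the InsertCursor, so memory
# stays bounded by the size of the small table.
with arcpy.da.InsertCursor(out_fc, ["SHAPE@", "PARCEL_ID", "OWNER", "STATUS"]) as icur:
    for shape, pid in arcpy.da.SearchCursor(left_fc, ["SHAPE@", "PARCEL_ID"]):
        match = lookup.get(pid)
        if match is not None:  # keep only the records that join
            icur.insertRow((shape, pid) + match)
```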
Thanks Jeff. That is basically what I'm doing, except the join happens inside the SearchCursor loop and the matches are held in a dictionary. After iterating through the entire cursor, I switch the workspace to point at the output geodatabase and use an InsertCursor to write the matches to the db. That works fine when the number of matches is small, but I'm concerned about running out of memory if the match count grows into the millions.
The complete "left" table is too big to fit predictably in memory, unfortunately.