SearchCursor vs. export to pandas - Which is more efficient?

10-03-2022 04:34 AM
AndreasLorentzen
New Contributor III

Hi,

Background:
I have an analysis that takes cross sections of a multipatch road model (around 10,000+ sections) to get some geometrical statistics. The statistics are computed per layer/feature type (road surface, road shoulder, new terrain, etc.) and consist of delta height, width, and so on. As of now, everything works just fine.

However, it is a bit slow, as it needs 10,000 sections x #layers ≈ 200,000 SearchCursor requests, each with its own where_clause (roughly the pattern sketched below).

Note: the layer in question is a single-point PointZ layer with around 1M points.
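For context, the current loop looks roughly like this. This is a minimal sketch; the feature class path, the field names, and the two placeholder iterables are all assumptions, not my actual code:

```python
import arcpy

fc = r"C:\data\roads.gdb\cross_section_points"        # hypothetical path
section_ids = range(10_000)                           # placeholder section IDs
layer_types = ["road_surface", "road_shoulder", "new_terrain"]  # placeholder layers

# Current pattern (simplified): one filtered cursor per section per layer,
# i.e. ~10 000 sections x ~20 layer types ≈ 200 000 cursor creations
for section_id in section_ids:
    for layer_type in layer_types:
        where = f"SECTION_ID = {section_id} AND LAYER_TYPE = '{layer_type}'"
        with arcpy.da.SearchCursor(fc, ["SHAPE@Z"], where_clause=where) as cur:
            z_values = [row[0] for row in cur]
        # ... compute delta height, width, etc. from z_values
```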

 

Question:

Would there be any benefit to loading the entire table with point geometries into pandas and doing all the where clauses there? Is there any information about the efficiency of the different cursors?

Does anyone have any thoughts on which approach is most efficient?
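The pandas route I have in mind would be to read the point feature class once with a single SearchCursor and then do all filtering and grouping in memory. A minimal sketch, assuming hypothetical field names SECTION_ID and LAYER_TYPE and a hypothetical path:

```python
import arcpy
import pandas as pd

fc = r"C:\data\roads.gdb\cross_section_points"        # hypothetical path
fields = ["SECTION_ID", "LAYER_TYPE", "SHAPE@X", "SHAPE@Y", "SHAPE@Z"]

# One cursor pass over the ~1M points instead of ~200 000 filtered cursors
with arcpy.da.SearchCursor(fc, fields) as cursor:
    df = pd.DataFrame(list(cursor), columns=fields)

# Every former where_clause becomes an in-memory filter or groupby
z_range = df.groupby(["SECTION_ID", "LAYER_TYPE"])["SHAPE@Z"].agg(["min", "max"])
z_range["delta_height"] = z_range["max"] - z_range["min"]
```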

6 Replies
JoshuaBixby
MVP Esteemed Contributor

With the information provided, it is difficult to comment on whether changing your workflow to pandas would be more efficient. Are the feature classes in an enterprise geodatabase, file geodatabase, or something else? If a file or mobile geodatabase, are they stored locally or on a network share? Could you copy the feature classes into memory before processing? Are the fields you are querying indexed appropriately? There are likely more questions I could come up with if I spent a bit more time thinking about it.
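To illustrate the in-memory copy idea, a minimal sketch (the source path and output name are assumptions):

```python
import arcpy

src = r"C:\data\roads.gdb\cross_section_points"   # hypothetical source feature class

# Copy the points once into the in-memory workspace so that the repeated
# queries read from RAM instead of disk
mem_fc = arcpy.management.CopyFeatures(src, r"memory\cross_section_points")[0]
```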

Exporting a dataset into memory via pandas could definitely be faster, but not necessarily, because there are lots of ways of processing data in pandas and some are faster than others. If the dataset is very large and won't fit in memory, then having the Python process start paging will cause a performance hit regardless of the specific pandas workflow.

Given there are so many factors involved, the best approach is to do some testing yourself. Is it worth taking the time to test? Probably, if you are here asking the question(s).

AndreasLorentzen
New Contributor III

Thanks for the reply, and yeah, I know it's kind of a hard question with a lot of variables:

  1. It's on a local hard drive in a normal geodatabase.
  2. I think the entire thing could fit in memory; it's only about 0.5-2 GB. However, I must keep a hard-drive copy (so it would be a "read into memory" operation, similar to the "load into pandas" step in the pandas workflow).
  3. Not sure if they are indexed. I will check and try forcing an index first (see the sketch below); that might help a lot.
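For point 3, a minimal sketch of adding an attribute index on the field used in the where_clause (the path, field name, and index name are placeholders):

```python
import arcpy

fc = r"C:\data\roads.gdb\cross_section_points"    # hypothetical path

# Attribute index on the field filtered by the where_clause;
# "LAYER_TYPE" and "idx_layer_type" are placeholder names
arcpy.management.AddIndex(fc, ["LAYER_TYPE"], "idx_layer_type")
```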

Based on your response, it seems like this should be tested when I am writing the "a bit more serious implementation" code (also known as the next version), as it takes just a bit of time to rewrite the functions for this first test of the analysis. I will try to implement the other suggestions right away 🙂


So thanks.

RogerDunnGIS
Occasional Contributor II

I'm also wondering if the statistics you need to grab could be done efficiently from Python with geoprocessing tools like Frequency and Summary Statistics, both of which are in the Statistics toolset of the Analysis toolbox.
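For example, Summary Statistics can produce min/max values per section and layer type in a single tool call. A sketch only: it assumes a numeric Z attribute field exists (e.g., one created with Add Z Information), and the path and field names are placeholders:

```python
import arcpy

fc = r"C:\data\roads.gdb\cross_section_points"    # hypothetical path

# Min/max Z per section and layer type in one call instead of one
# filtered cursor per combination (field names are assumptions)
arcpy.analysis.Statistics(
    fc,
    r"memory\z_stats",
    [["POINT_Z", "MIN"], ["POINT_Z", "MAX"]],
    case_field=["SECTION_ID", "LAYER_TYPE"],
)
```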

AndreasLorentzen
New Contributor III

Good question. I thought about it; however, the tools themselves still need to evaluate the where clauses, so it only helps if they are implemented in C or in some other optimized way (parallel, etc.). I could test that as well. I would think there should be some information available about the efficiency of the different analyses, or is there a best practice?

RogerDunnGIS
Occasional Contributor II

You have a challenging road ahead, focused on optimization. I'm guessing you mean optimization of time, not space; they say space is cheap and time is money! Obviously, a lot of variables have to be considered. For example, if you run with Python pandas, you may get time X, but if you run with arcpy geoprocessing and have indexed the fields you're grouping on, you may get a better time Y. Database A may perform better than database B. A Python script may run faster at a command line and slower in a notebook.
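Since the answer will come down to measurement, a crude timing harness is enough to compare candidates. A sketch; the three wrapper functions named in the comments are hypothetical stand-ins for your own implementations:

```python
import time

def time_it(label, func, *args, **kwargs):
    """Wall-clock timer for comparing candidate workflows."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    print(f"{label}: {time.perf_counter() - start:.1f} s")
    return result

# Hypothetical wrappers around the approaches being compared:
# time_it("per-section SearchCursor", run_with_search_cursors, fc)
# time_it("single pandas load + groupby", run_with_pandas, fc)
# time_it("Summary Statistics tool", run_with_statistics_tool, fc)
```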

All I'm saying is that I am not well-read in the area of GIS algorithm optimization. It might be a good idea to use Google Scholar to look for white papers, published articles, theses, and dissertations that address these concepts. For instance, a quick search just now for "binary search in GIS" pointed me to an article named "CudaGIS: report on the design and realization of a massive data parallel GIS on GPUs." But that's not the kind of material I normally read.

AndreasLorentzen
New Contributor III

Yeah... I'm aware. It's kind of a can of worms, and I think the easiest thing for me is to just test a couple of methods and see which runs fastest. I was mainly curious whether there is any data on the subject and whether there is a definitive solution or best practice 🙂
