Python, arcpy and memory use

08-12-2014 06:11 PM
GrantHerbert
Occasional Contributor II

I have a script that searches subdirectories for MXDs and reports on them. Nothing new, except that I am having trouble with memory management.

There seems to be a memory leak, even though I have tried hard to del variables as I go and have moved the MXD parsing into a separate function so that its variables are all local. The only globals I hold are references to the logging and reporting text files.
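Roughly, the structure looks like this (a simplified sketch with placeholder paths and names, not the actual script):

    import os
    import arcpy

    log_file = open(r"C:\temp\mxd_report.log", "a")    # global handle to the report/log

    def report_on_mxd(full_path):
        # Parse a single MXD so everything except the log handle stays local.
        mxd = arcpy.mapping.MapDocument(full_path)
        for df in arcpy.mapping.ListDataFrames(mxd):
            log_file.write("{0}\t{1}\n".format(full_path, df.name))
        del mxd                                         # drop the reference before returning

    for root, dirs, files in os.walk(r"\\server\share\projects"):
        for name in files:
            if name.lower().endswith(".mxd"):
                report_on_mxd(os.path.join(root, name))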

This has been working well until I hit a certain MXD in my test folders (I just grabbed a handful of folders off our network drives, so they are real MXDs). This MXD is 2 MB in size and has nothing wrong with it that I can see. Although it has a number of broken links, it opens fine in ArcMap, and I can access it fine from the Python window using "CURRENT", but when I access it with arcpy from my script, memory use jumps by 300-400 MB for a moment (peaking at 600 MB+). This usually crashes the process even though I have 16 GB of RAM. The Python process is 32-bit.

The MXD in question has a broken events layer at the top of the TOC, but I can access it OK from PyScripter (with the same memory jump) if I do so directly, before accessing any other MXDs. If I open the MXD and use arcpy in the Python window, I have no trouble accessing it either. If I have already processed an MXD and am using around 300 MB of memory, the spike often kills the python.exe process. Making a copy of the MXD with the events layer deleted seems to work OK in my script, even after processing other MXDs, which suggests that the events layer itself may be the problem, but I am at a loss as to what to do about it. I cannot tell there is a problem until I access the MXD, and I can't catch the error when it happens (it skips my try block and Windows reports that Python has stopped working).

Any ideas on what I can try?

14 Replies
HenryColgate
Occasional Contributor

I have had issues using Python to accomplish a very similar task. 

I eventually rewrote my MXD script in C# and found that the errors that were killing Python were coming from deeper errors in the ESRI DLLs. Likewise, they would kill my C# code and the resident application. On the upside, this generated an 'Application Error' incident in Event Viewer -> Windows Logs -> Application, which pointed directly to the DLL in question. From there I was able to at least target the specific dataset or function causing the error. This may be generated under Python as well, but I never checked.

I have a few bugs logged with ESRI for these issues at the moment. They relate to errors in the GDAL library when reading some files, CadEngine.dll, a TsbleUI.dll issue, and some others I can't recall off the top of my head. Any of these may be issues you are getting as well.

One of the fun things about scanning MXDs is that you run through all your data in its many different forms, which tends to trigger the undocumented features of the software.

JoshuaBixby
MVP Esteemed Contributor

Henry Colgate wrote:

....

I have a few bugs logged with ESRI for these issues at the moment. They relate to errors in the GDAL library when reading some files, CadEngine.dll, a TsbleUI.dll issue, and some others I can't recall off the top of my head. Any of these may be issues you are getting as well.

....

What are some of the NIMs?  How long have they been open?

HenryColgate
Occasional Contributor

NIM101698

NIM099971

NIM096788

Most were generated between November 2013 and May 2014.  I haven't had the opportunity/motivation to go back and check the status recently.

JoshuaBixby
MVP Esteemed Contributor

Thanks. At least they are all publicly published, so they can be looked up more quickly. That said, a status of "Open" or "New" doesn't really tell us much.

DanPatterson_Retired
MVP Emeritus

NIM101698: Console Application crashes at IMap:dataFrame where ..  died

NIM099971: A design (DGN) file with a REV extension (.rev) caus..  not in current production plan for 10.2  

NIM096788: ArcMap 10.1 and ArcMap 10.2 crashes while opening an..  for 10.1 and 10.2... thankfully it says "in Production Plan", which is good since both of those versions are retired

JoshuaBixby
MVP Esteemed Contributor

I too ran into a very similar situation with a script I wrote to data mine, of sorts, all of the MXDs and LYRs in a file system, to see what data sources users were actually using and how, instead of asking them and relying on incomplete and usually inaccurate responses. I had something like 10,000 files to look at, and I could never get the script to make it through more than several hundred to a thousand before it would crash; it would just vanish. I would find the MXD that was being analyzed when it crashed, and there was never any issue with it in ArcMap, or if I copied subsets of related MXDs to a different folder and processed only a few hundred.

Since the crashes would bring down Python, I could never find a way to adequately catch the errors. No matter how much error trapping and compartmentalization/isolation I did in the code, it would hit some magical number and go poof. In the end, because I had to get something working, I used multiprocessing to pool workers so that a given subprocess could crash without killing the rest of the script. Since the subprocesses were tracking their chunks of the list and reporting back, I could recycle the chunk from a crashed subprocess and get other processes working on it.

Even the multiprocessing approach got clunky because of timeout problems. There were certain MXDs, and I have no idea why, that would hang indefinitely: some subprocesses would crash and some would simply never return. It was messy in the end, but I got what I needed.
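For anyone wanting to try the same thing, the shape of it was roughly this (a simplified sketch; inspect_mxd, process_paths and the 120-second timeout are made-up names and values, not my production code):

    import multiprocessing
    import arcpy

    def inspect_mxd(path, result_queue):
        # Worker: open one MXD and report what we can; posting the result is the last thing we do.
        try:
            mxd = arcpy.mapping.MapDocument(path)
            layer_names = [lyr.name for lyr in arcpy.mapping.ListLayers(mxd)]
            del mxd
            result_queue.put((path, "ok", layer_names))
        except Exception as err:
            result_queue.put((path, "error", str(err)))

    def process_paths(mxd_paths, timeout_seconds=120):
        # Run each MXD in its own process so a hard crash or hang only loses that file.
        results = []
        queue = multiprocessing.Queue()
        for path in mxd_paths:
            worker = multiprocessing.Process(target=inspect_mxd, args=(path, queue))
            worker.start()
            worker.join(timeout_seconds)
            if worker.is_alive():
                worker.terminate()                       # an MXD that hangs indefinitely
                results.append((path, "timeout", None))
            elif worker.exitcode != 0:
                results.append((path, "crashed", None))  # python.exe died on this one
            else:
                results.append(queue.get())
        return results

On Windows the driving loop has to sit under an if __name__ == '__main__': guard so the child processes can spawn cleanly. I actually pooled chunks of paths per worker rather than using one process per file, but the recover-and-recycle idea is the same.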

Fundamentally, ArcMap seems much more tolerant of MXD structure than arcpy.mapping. It would be nice if there were a mapping isValid method/property that could catch structural errors and return a Boolean, rather than having the user start listing layers and getting errors raised or the code crashing.

GrantHerbert
Occasional Contributor II

Thanks for the quick replies.

I also had to put all sorts of checks in place in the code, as I had MXDs reporting that they didn't have an activeDataFrame on one pass, then that they did when I queried them directly, then not showing the error when processed a second time and seemingly passing it on to another MXD further down the list. The issue I posted about occurs the first time the mxd object is actually accessed:

   mxd = arcpy.mapping.MapDocument(fullPath)

does not cause a spike in memory, but

   df = mxd.activeDataFrame

(for example) will, and it is at that point that Python crashes, so I have no idea how to catch it.

I will look into the multiprocessing approach, thanks.

JoshuaBixby
MVP Esteemed Contributor

I know exactly what you mean, and I have run into problems at the exact same spot.  If the error was trappable, that would be one thing, but the ungracious Python crash makes it difficult to deal with.  That is what got me thinking it would be nice to have an arcpy.mapping validity check, something like:

mxd = arcpy.mapping.MapDocument(fullPath)
if mxd.isValid:

Then again, maybe the untrappable error would simply cause the isValid check to crash too.  Maybe I will head over to ArcGIS Ideas and throw the idea out there.
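In the meantime, the closest thing I can think of is to run the probe in a throwaway child process, so that if the untrappable crash happens it only takes down the probe and not the main script (a rough sketch, not an Esri API; mxd_seems_valid and the 60-second timeout are just illustrative):

    import multiprocessing

    def _probe(path):
        import arcpy                             # loaded in the probe process
        mxd = arcpy.mapping.MapDocument(path)
        df = mxd.activeDataFrame                 # the access that triggers the spike/crash
        del df, mxd

    def mxd_seems_valid(path, timeout_seconds=60):
        probe = multiprocessing.Process(target=_probe, args=(path,))
        probe.start()
        probe.join(timeout_seconds)
        if probe.is_alive():
            probe.terminate()                    # a hung probe counts as not valid
            return False
        return probe.exitcode == 0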

GrantHerbert
Occasional Contributor II

I agree that a validity check would be nice, but I also suspect it would cause the same crashing issue. (EDIT: I used MXD Doctor to check one of the MXDs and it had an invalid page layout, but another problem MXD came up clean.)

About the best I have come up with is wrapping the access in a try block, which at least catches the error, but I can't seem to prevent it from killing the process shortly afterwards (deleting the local mxd variable doesn't help).

I can report what went wrong, and it is often "'NoneType' is not iterable" from attempting to access the mxd object, even though type(mxd) reports 'arcpy._mapping.MapDocument'.
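In case it helps, the guarded access currently looks roughly like this (simplified; the paths and log_file name are placeholders, and as above the hard crash can still kill python.exe before the except clause ever runs):

    import arcpy

    fullPath = r"C:\temp\problem.mxd"            # placeholder path
    log_file = open(r"C:\temp\mxd_errors.log", "a")

    mxd = None
    try:
        mxd = arcpy.mapping.MapDocument(fullPath)
        df = mxd.activeDataFrame                 # the first property access is where it blows up
    except Exception as err:
        log_file.write("{0}: {1}\n".format(fullPath, err))
    finally:
        if mxd is not None:
            del mxd                              # often doesn't stop the process dying anyway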
