The memory usage easily exceeds 2GB and can reach the magic number that crashes a 32-bit application on Windows.
I see that I am not the only one experiencing this. I am using the recommended optimizations and I have tried the different rendering modes (static, dynamic). The graphics are added to a GraphicsOverlay and I am using a renderer to define the symbolization (v10.2.7 / v100.0 / v100.1).
Will this issue be improved upon in a forthcoming release?
Thanks for the tip. I tried it now - still 600MB for 160K items, and the load time went from a few seconds to a minute, probably due to the async API for adding a feature to the feature table triggering lots and lots of garbage collections.
I know the actual data is about 15MB in memory, so I am curious about what the runtime is doing.
Did you use the Add method that takes an entire array of them? If you add them one by one, yes, it'll be slower. Also, you can't really rely on the observed memory consumption - it could just be that the GC decides it's not low on memory and is busy doing other things, so it holds off on collecting.
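For anyone landing here, a rough sketch of what I mean by the batched call (untested, against the ArcGIS Runtime .NET API - `AddFeaturesAsync` on `FeatureTable` and `AddRange` on the overlay's `GraphicCollection`; the variable names are illustrative):

```csharp
// Slow: one await (and its bookkeeping) per feature.
foreach (var feature in features)
    await table.AddFeatureAsync(feature);

// Faster: hand the whole collection over in a single call.
await table.AddFeaturesAsync(features);

// Same idea for graphics in a GraphicsOverlay:
overlay.Graphics.AddRange(graphics);
```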
I see, the load time would surely increase. But the main blocker is the memory usage. I am forcing a GC collect, and I can see that most of the memory usage is unmanaged.
That would be an alternative yes, but as the data is coming over a generic REST call (non-ArcGIS) the orchestration would be a bit heavy.
From a programmer's perspective I am really curious about how megabytes of data turn into gigabytes in memory. Is there some kind of bitmap caching behind the scenes?
Is it possible for you to build a small stand-alone app that reproduces this? (i.e. something that generates random graphics/features with a similar number of geometries, of similar complexity, and with similar attributes)
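Something along these lines would do - an untested sketch against the ArcGIS Runtime .NET API, with illustrative counts and attribute names:

```csharp
// Generate N random point graphics with a couple of small attributes,
// symbolized by a single renderer on a GraphicsOverlay.
var random = new Random(42);
var overlay = new GraphicsOverlay
{
    Renderer = new SimpleRenderer(
        new SimpleMarkerSymbol(SimpleMarkerSymbolStyle.Circle, Color.Red, 6))
};

var graphics = new List<Graphic>();
for (int i = 0; i < 160_000; i++)
{
    var point = new MapPoint(
        random.NextDouble() * 360 - 180,   // longitude
        random.NextDouble() * 180 - 90,    // latitude
        SpatialReferences.Wgs84);
    var g = new Graphic(point);
    g.Attributes["id"] = i;
    g.Attributes["name"] = $"pt-{i}";
    graphics.Add(g);
}
overlay.Graphics.AddRange(graphics);
// Add the overlay to a MapView and watch the process memory.
```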
I am changing jobs and will probably not work with ESRI software for a long time, so in the process I do not have the time to make you an app.
That being said, there is no complexity to this: it happens for point geometries in large volumes.