Server-side code or JavaScript loop?

07-19-2010 07:51 AM
deleted-user-Jie3eyjOl9XM
Occasional Contributor
I've got an XML file that I am reading, and I'm rendering the points as graphics. This is all done client side: reading the XML and creating the graphics. As I add more and more layers, the performance is REALLY suffering. Accessing this data as a tile layer is not an option, though.

As it stands, I have a JavaScript loop, inside of which a graphic is created and added to the map, which happens 500 times. This seems to be the standard thing I see in all the examples.
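Simplified, the pattern is something like this (a sketch only, assuming the legacy esri.* namespace; the XML element and attribute names are placeholders):

    // Hypothetical "create and add inside the loop" version. xmlDoc is an
    // already-parsed XML document containing <point x="..." y="..."/> elements.
    dojo.require("esri.map");

    function addPointsFromXml(map, xmlDoc) {
      var layer = new esri.layers.GraphicsLayer();
      map.addLayer(layer);

      var nodes = xmlDoc.getElementsByTagName("point");   // roughly 500 of them
      for (var i = 0; i < nodes.length; i++) {
        var x = parseFloat(nodes[i].getAttribute("x"));
        var y = parseFloat(nodes[i].getAttribute("y"));

        // A symbol, a geometry, and a graphic are created, and the graphic is
        // added to the layer, once per iteration.
        var symbol = new esri.symbol.SimpleMarkerSymbol();
        var point = new esri.geometry.Point(x, y, map.spatialReference);
        layer.add(new esri.Graphic(point, symbol));
      }
    }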

I am wondering if it would be more performant to load the XML, and generate the JavaScript code for each graphic on the server. That is, have ASP read the data, perform the loop, and output JavaScript code that explicitly creates graphics and calls map.addGraphic 500 times.

I could code this up and compare, but I'm figuring someone out there has already thought about this. Thoughts?
8 Replies
derekswingley1
Frequent Contributor
I say give it a shot and post your results here.
XavierIrias
New Contributor III
The JavaScript version ought to be best. Reading and parsing XML ought to be lightning fast in a modern browser, even for thousands of items. The bottleneck is likely in making the graphics, not in parsing the XML. I suggest profiling your JavaScript code, either with a tool or with simple timer calls.

If the data change very seldom, make sure that your browser is caching the results.  You can verify this with Firebug.

Other things to consider (a short code sketch after this list illustrates the first three):
1. Consider using JSON instead of XML. It's smaller and quicker to evaluate; most browsers have JSON parsing done natively now.

2. Examine all your looping code and optimize it by moving initialization items outside the loop. I know, standard procedure, but if you profile you may find you're creating objects, parsing strings, etc. hundreds of times instead of just once.

3. Make sure that you are not concatenating large strings with JavaScript. This type of operation absolutely does not scale linearly (it's far worse). If you need to build a large string, use an array, then do a single array.join operation at the end.

4. Generally manipulate objects before attaching them to the DOM, not after.
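Here is a short sketch of the first three points (the points.json URL and the name/value fields are invented for illustration; dojo.xhrGet is part of Dojo base, which the API page already loads):

    function loadAndRender(callback) {
      dojo.xhrGet({
        url: "points.json",                // hypothetical JSON endpoint
        handleAs: "json",                  // point 1: let the browser parse JSON
        load: function (points) {          // points: [{name, value}, ...]
          var trim = /^\s+|\s+$/g;         // point 2: created once, not per item
          var rows = [];                   // point 3: collect pieces in an array

          for (var i = 0, n = points.length; i < n; i++) {
            rows.push("<tr><td>" + points[i].name.replace(trim, "") +
                      "</td><td>" + points[i].value + "</td></tr>");
          }

          // One join at the end instead of hundreds of concatenations.
          callback("<table>" + rows.join("") + "</table>");
        }
      });
    }

Point 4 amounts to the same idea: build the node or string first, then attach it to the document once, rather than appending to a live DOM node inside the loop.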
AxelSchaefer
New Contributor II
As it stands, I have a JavaScript loop, inside of which a graphic is created and added to the map, which happens 500 times. This seems to be the standard thing I see in all the examples.


Perhaps it will increase the performance if you build your graphics array in the loop and do the rendering outside. Or strip the loop down to just creating the JSON; outside the loop, you can create the graphic objects and add them to the graphics layer.
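Something like this, as a rough sketch (the function and parameter names are made up):

    function buildLayer(map, points) {     // points: parsed [{x, y}, ...]
      var symbol = new esri.symbol.SimpleMarkerSymbol();
      var graphics = [];

      for (var i = 0; i < points.length; i++) {
        var geom = new esri.geometry.Point(points[i].x, points[i].y,
                                           map.spatialReference);
        graphics.push(new esri.Graphic(geom, symbol));
      }

      // Rendering happens outside the loop: create the layer, add the
      // graphics, then attach the layer to the map.
      var layer = new esri.layers.GraphicsLayer();
      dojo.forEach(graphics, function (g) { layer.add(g); });
      map.addLayer(layer);
    }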

3. Make sure that you are not concatenating large strings with JavaScript.


That's right. You can use the Buffer objects and methods of dojo.

Regards,
Axel
deleted-user-Jie3eyjOl9XM
Occasional Contributor
Indeed, the bottleneck seems to be rendering the graphics, not so much the looping. But I will continue to work on the speed and get more familiar with JS profiling. I hate to think it, but I wonder if the Flex API would be faster.


1. Consider using JSON instead of XML. It's smaller and quicker to evaluate; most browsers have JSON parsing done natively now.

Good idea. Never even considered this. It would be a big change to the architecture, and I wonder if it would pay off. Never know until you try.


2. Examine all your looping code and optimize it by moving initialization items outside the loop. I know, standard procedure, but if you profile you may find you're creating objects, parsing strings, etc. hundreds of times instead of just once.

Yes! This was actually the first thing I looked at. I was SHOCKED at how much faster the code ran, with just a little thought to performance. I have another layer with polygons, and parsing thousands of points went from hanging the browser to almost nothing.


3. Make sure that you are not concatenating large strings with JavaScript. This type of operation absolutely does not scale linearly (it's far worse). If you need to build a large string, use an array, then do a single array.join operation at the end.

As above, this was a great, simple trick. I used it for generating HTML for the InfoWindow.

There was another thing that surprised me. It was much faster to create a SimpleMarkerSymbol all at once, rather than creating separate variables for the color, line, fill, etc.
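For example (the style, size, and colors are arbitrary, just to illustrate the two approaches):

    // All at once, in a single constructor call:
    var symbol = new esri.symbol.SimpleMarkerSymbol(
      esri.symbol.SimpleMarkerSymbol.STYLE_CIRCLE,
      8,
      new esri.symbol.SimpleLineSymbol(
        esri.symbol.SimpleLineSymbol.STYLE_SOLID, new dojo.Color([0, 0, 0]), 1),
      new dojo.Color([255, 0, 0, 0.75]));

    // Piece by piece, with separate variables and setter calls:
    var lineColor = new dojo.Color([0, 0, 0]);
    var fillColor = new dojo.Color([255, 0, 0, 0.75]);
    var outline = new esri.symbol.SimpleLineSymbol(
      esri.symbol.SimpleLineSymbol.STYLE_SOLID, lineColor, 1);
    var symbol2 = new esri.symbol.SimpleMarkerSymbol();
    symbol2.setStyle(esri.symbol.SimpleMarkerSymbol.STYLE_CIRCLE);
    symbol2.setSize(8);
    symbol2.setOutline(outline);
    symbol2.setColor(fillColor);

Either way, the symbol only needs to be created once and can be shared by all of the graphics.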
deleted-user-Jie3eyjOl9XM
Occasional Contributor
Perhaps it will increase the performance if you build your graphics array in the loop and do the rendering outside.

Indeed, I did this. The loop simply pushes each graphic into an array, and later I create a layer. BUT, I couldn't find a way to add THE ENTIRE ARRAY at once. I still needed to loop through the array of graphics and add each one individually. Is there a method that I'm not seeing?
deleted-user-Jie3eyjOl9XM
Occasional Contributor
Thanks for the info, all. I haven't given much more thought to using ASP to generate the JavaScript; it looks like that won't help. But I thought I'd post some additional results regarding speed. I've really tightened up my code, to the point where I can actually find the bottlenecks. Actually rendering the graphics isn't so bad, but two API calls stand out as the most expensive.

layer.add(graphic) is a heavy method, and it gets called for every graphic. It looks like it recomputes the extent, among other things.

map.addLayer(graphicsLayer) is also a big hit. It only gets called once, but when there are a large number of graphics, it's crazy slow.

It would be nice if the API developers could look into that a little. Maybe there is a way to improve speed by combining those methods:
   map.addLayer(graphics[])
I'll bet it could avoid some of the underlying duplication involved in adding the graphics, and then adding the layer.
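You can see it for yourself with the simple timer calls suggested above; roughly like this (a hypothetical sketch, and it only captures the synchronous part of each call):

    function timeAdds(map, graphics) {     // graphics: a pre-built array of esri.Graphic
      var layer = new esri.layers.GraphicsLayer();

      var t0 = new Date().getTime();
      for (var i = 0; i < graphics.length; i++) {
        layer.add(graphics[i]);            // called once per graphic
      }
      var t1 = new Date().getTime();

      map.addLayer(layer);                 // called once
      var t2 = new Date().getTime();

      console.log("layer.add x " + graphics.length + ": " + (t1 - t0) + " ms");
      console.log("map.addLayer: " + (t2 - t1) + " ms");
    }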
BrettLord-Castillo
Occasional Contributor
One of the most costly aspects of addGraphic is that it loops through the array of graphics to see if an identical graphic has already been added. When you do 500 deep comparisons of objects for every new add, things really slow down.
FeatureLayer is actually the way to go now, if you can, to get around this.
At 1.5, I rewrote GraphicsLayer.add() and added a layer index to esri.Graphic to make this widget:
http://maps.stlouisco.com/police/mapdijits/GraphicsManagerOpt.js
(or go to http://maps.stlouisco.com/police/mapdijits/ and click on Graphics ManagerOpt)
The functions to pay attention to in that widget are the constructor (where I overwrite GraphicsLayer.add() for the graphics layer used by the widget), and add() and remove(), where I use a hash lookup instead of a search to determine whether a graphic has already been created.

I would have to take a closer look at _updateExtent(Graphic) and _draw(Graphic) to see if it is feasible to move those outside the add function too (and see how those function in version 2.0 first, of course).
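As a generic illustration of the hash idea only (not the widget's actual code; see the link above for that), the membership check becomes a single property lookup keyed by a caller-supplied graphic id, instead of a deep comparison against every graphic already in the layer:

    function GraphicsIndex(layer) {
      this.layer = layer;
      this.lookup = {};                    // id -> true
    }

    GraphicsIndex.prototype.add = function (id, graphic) {
      if (this.lookup[id]) {               // O(1) check instead of an O(n) scan
        return false;                      // already added, skip it
      }
      this.lookup[id] = true;
      this.layer.add(graphic);
      return true;
    };

    GraphicsIndex.prototype.remove = function (id, graphic) {
      if (this.lookup[id]) {
        delete this.lookup[id];
        this.layer.remove(graphic);
      }
    };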
JohannesElsinghorst
New Contributor II
Hey guys,
I'm currently evaluating the JS API for some projects. We are dealing with point and polygon features, with up to 40,000 points and 1,000 polygons. Based on your experience, would you say it is possible to handle these numbers on the client? Besides drawing, we would have to do some analysis on them (intersection and such).
Thanks for any hints,
Johannes