Feature Layer Graphics Limitation via Source

01-25-2018 10:01 AM
New Contributor II


I'm trying to load a simple Map & MapView with a single Feature Layer and am seeing some serious performance degradation when I attempt to load the layer with thousands of features. 

I'm using the ArcGIS JS API v4.6.

I'm not loading the Feature Layer graphics via an ArcGIS Feature Service URL, but rather loading it client side via the "source" property.

If I limit the number of features to 1,000, the map becomes responsive. 

If I load the Feature Layer with an ArcGIS Feature Service URL the map is fully responsive even when rendering 30,000+ features.

The data set I'm using to load the Feature Layer locally via the "source" property is a small subset of the data available on the Feature Service, and the graphic I'm using is a very small SVG (approximately 600 bytes).

The reason for my question is to see if there is any known limitation to loading a Feature Layer's graphics client side via the source property, or if there is a way to emulate the optimization/rendering logic utilized when a Feature Layer is loaded from an ArcGIS Feature Service URL.
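To make the comparison concrete, here's a minimal sketch of how I'm shaping the client-side data for the "source" path; the field names, coordinates, and record structure below are placeholders, not my actual dataset:

```javascript
// Sketch: shape client-side records into the graphics array that the
// FeatureLayer "source" property expects (plain objects; the real API
// also accepts Graphic instances).
function recordsToGraphics(records) {
  return records.map(function (rec, i) {
    return {
      geometry: { type: "point", x: rec.lon, y: rec.lat },
      attributes: { ObjectID: i, name: rec.name }
    };
  });
}

var graphics = recordsToGraphics([
  { lon: -117.19, lat: 34.05, name: "A" },
  { lon: -117.20, lat: 34.06, name: "B" }
]);

// These properties would then be handed to the FeatureLayer constructor,
// i.e. new FeatureLayer(layerProps) with esri/layers/FeatureLayer loaded;
// a service-backed layer would instead pass { url: "..." }.
var layerProps = {
  source: graphics,            // client-side path (no url)
  objectIdField: "ObjectID",
  geometryType: "point",
  fields: [
    { name: "ObjectID", alias: "ObjectID", type: "oid" },
    { name: "name", alias: "Name", type: "string" }
  ]
};
```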

I've confirmed that my browser supports WebGL and have even added the "esri-featurelayer-webgl" setting to the dojoConfig in the <head> of my index.html file in order to enable WebGL for Feature Layers as seen here:

ArcGIS API for JavaScript Sandbox 

var dojoConfig = {
    has: {
        // Enable WebGL rendering for FeatureLayers in a MapView
        "esri-featurelayer-webgl": 1
    }
};
Unfortunately, I'm not seeing any effect from enabling this setting.  Hopefully someone can shed some light on what I'm doing wrong.



11 Replies
Esri Regular Contributor

Have you tried just setting the definitionExpression on the FeatureLayer instead?  It should allow you to retrieve the necessary subset of data without having to resort to loading the data yourself.  It always helps to have a jsbin, codepen, or similar so we can all better see the issues.
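A minimal sketch of what I mean, with a hypothetical SITE_ID field standing in for whatever attribute identifies your subset:

```javascript
// Sketch: build a SQL-style where clause to use as a FeatureLayer
// definitionExpression. "SITE_ID" and the id list are hypothetical.
function buildDefinitionExpression(field, ids) {
  return field + " IN (" + ids.join(", ") + ")";
}

var where = buildDefinitionExpression("SITE_ID", [101, 102, 103]);
// With esri/layers/FeatureLayer loaded, this would be applied as:
//   layer.definitionExpression = where;
// so the service, not the browser, does the filtering.
```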

New Contributor II

My requirements are to load the layer with a predefined subset of data points (30,000+ features), not use a hosted Feature Service.

The definitionExpression would be used to filter the data, but for initial view I need to display all features specified.

Esri Regular Contributor

Is the data not coming from a Feature Service as is implied above?  Also, loading the data via 'source' has many limitations when it comes to using queries, and definitionExpression won't help as you don't have a service.  It always helps to have a jsbin, codepen, or similar so we can all better see the issues.

Esri Regular Contributor

Also, the webgl rendering you mention above does not apply to 'source'-based Feature Layers.  Please see the "Known Limitations" section here: FeatureLayer webgl-rendering 

New Contributor II

Hi John,


Thanks for the response.  Maybe I didn't use the right wording, but I'm trying to load a Feature Layer with local client data via the FeatureLayer.source property, NOT the FeatureLayer.url property.


We do have a Feature Service that houses this data, but unfortunately existing authentication/authorization requirements prevent us from being able to load the Feature Layers directly from this Feature Service without first authorizing the data locally.


That being said, we're forced to load the features/graphics data locally via the FeatureLayer.source property, but are experiencing significant performance degradation vs. loading the Feature Layer via a Feature Service URL.


Regarding your comment about WebGL optimization: I read that article but must have missed the section stating that it only applies to Feature Layers loaded from a remote Esri Feature Service.  That would explain why enabling/disabling the setting results in no change in map responsiveness and performance with locally sourced data.


From what you're saying, it sounds like Esri never intended for a Feature Layer to be loaded with thousands of local features in the JS API, but rather to leverage the server-side Feature Service?  Apples to apples instead of apples to oranges?


I'll try to get a simplified code sample put together to demonstrate what I'm trying to achieve.  It might shed some light on the issue, but it sounds like we're taking an approach to mapping data that Esri never really intended for mass consumption.




Esri Regular Contributor


  Thank you for the clarification. 
  Trying to load 30k elements into a browser is never an ideal use case, regardless of the type of information; it won't matter whether they're plain images or GIS features (especially if they're more than just points).  Ideally this amount of information would be served as a tiled service, or a dynamic map service, and as a last resort a feature service.  Each of these options presents increasing client-side challenges, and performance normally degrades at each level.  When working with your data via a service we can optimize the process on both the server and the client, so performance with large datasets becomes less of an issue.  Over the years we've seen many performance improvements for all of these service types, and the webgl rendering that works in conjunction with hosted services is just one of the latest.  

  Authentication of the information provided by the service can be handled in many different ways: from when someone accesses the web page, to the use of a proxy, to online/enterprise/server authentication, among other options.  If necessary, I believe even an SOI on your server could handle custom authentication scenarios.

  If client-side performance and better use of available functionality in the API are necessary, then please first consider tackling the authentication issues before trying to load 30k features directly client-side.  Yes, it can be done, but it's not ideal: performance will suffer and you'll face many other challenges.

New Contributor II

Hi John,

Thanks again for your response. Normally I would leverage one of the patterns in your previous reply, but due to authorization requirements we've resorted to loading the data client side via the feature layer source.

All of this would be a moot point if we could inject an authorization header into all of the web requests executed from a feature layer loaded

We're leveraging an Azure API gatekeeper as a proxy service between the client and server, and it requires an authentication key in the header.

My colleagues and I have scoured the API documentation for a solution to this problem and, aside from the esriRequest class, have come up empty.

Based on your knowledge of the JS API, do you know if there is a way to inject headers into the low-level requests a feature layer executes, or if there's a way to inject them at the base level of the Dojo framework for all requests made from an application?

I'm sure these questions are so edge case they don't have an answer, but it's worth asking.
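For what it's worth, the closest thing we've found is the options object that esriRequest accepts, which in 4.x appears to support a 'headers' property. A sketch, with a placeholder token and a Bearer scheme standing in for our actual Azure key:

```javascript
// Sketch: build the options object for esri/request. The header name,
// scheme, and token here are placeholders, not our real credentials.
function buildAuthOptions(token) {
  return {
    responseType: "json",
    headers: { Authorization: "Bearer " + token }
  };
}

var options = buildAuthOptions("my-placeholder-token");
// With esri/request loaded, a manual call would look like:
//   esriRequest(featureServiceUrl, options).then(function (response) { ... });
// The open question is whether the FeatureLayer's own internal requests
// can be made to carry these headers.
```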



Occasional Contributor III

This is a longshot but....

There was an issue with the MaptoJSON in the PrintTask.

We ripped the code from the core libs, beautified it, and found where we could add our hook. We then overrode the execute() method, adding our fixes.


Perhaps you could do the same with the featureLayer class?

Inherit the class, override the http call(s) and add your headers.

My example of overriding the execute method of the PrintTask:

(function (wapp) {
    wapp.objects = wapp.objects || {};

    require([
        "esri/tasks/PrintTask" // module list was lost in the paste; reconstructed from usage below
    ], function (
        EsriPrintTask
    ) {
        // Handle redline Polygons/Circles
        wapp.objects.CustomPrintTask = EsriPrintTask.createSubclass({
            execute: function (a, b) {
                var internalPrintPrams = this._setPrintParams(a);

                // parse current esri version
                var mapJson = JSON.parse(internalPrintPrams.Web_Map_as_JSON);

                // add rotation
                mapJson.mapOptions.rotation = -1 * a.view.rotation;
                // force scale
                mapJson.mapOptions.scale = a.view.scale;

                // create visibleLayers node
                $(mapJson.operationalLayers).each(function (opIdx, opLayer) {
                    var visibleLayers = [];
                    $(opLayer.layers).each(function (subIdx, subLayer) {
                        // (visibility check reconstructed; the push logic was lost in the paste)
                        if (subLayer.visible) {
                            visibleLayers.push(subLayer.id);
                        }
                    }); // for each sub layer
                    opLayer.visibleLayers = jQuery.makeArray(visibleLayers);
                }); // for each Operational Layer

                // remove empty layers
                mapJson.operationalLayers = jQuery.grep(mapJson.operationalLayers, function (layer, idx) {
                    return ((layer.visibleLayers.length > 0)
                        || ((layer.featureCollection) && (layer.featureCollection.layers) && (layer.featureCollection.layers.length > 0)));
                }); // grep Operational Layers

                // add "style": "esriSMSCircle" to drawn features that don't have it
                jQuery.each(mapJson.operationalLayers, function (opIdx, opLayer) {
                    if ((opLayer.featureCollection) && (opLayer.featureCollection.layers) && (opLayer.featureCollection.layers.length > 0)) {
                        jQuery.each(opLayer.featureCollection.layers, function (featureSetIdx, featureSet) {
                            jQuery.each(featureSet.featureSet.features, function (featureIdx, feature) {
                                if ((feature.symbol) && (!feature.symbol.style)) {
                                    feature.symbol.style = "esriSMSCircle";
                                }
                            }); // for each feature
                        }); // for each featureSet
                    }
                }); // for each opLayer, add style to symbols that don't have it

                internalPrintPrams.Web_Map_as_JSON = JSON.stringify(mapJson);
                return this._geoprocessor["async" === this.mode ? "submitJob" : "execute"](internalPrintPrams, b);
            }
        });
    }); // require
}(window.wapp = window.wapp || {}));

arrg (how do you get "code" formatting here?)

Esri Regular Contributor


   I don't know for sure if this will work, but have you tried adding the required Azure API authentication info via a local proxy on the web server hosting your app?  You can add a proxy rule to your code so all communication for your service goes through the local proxy, which would augment the call by adding the necessary information: see Esri/resource-proxy.  Also, it seems the esri/request method has a 'headers' property, but I haven't used it, so I don't know if it'll help.  In the past I would have used 'setRequestPreCallback', but I'm not sure it made it into 4.x, and you might not want to use it anyway since you'd be exposing your key in the JS code.  I think a local proxy is a good candidate for what you need.
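A sketch of the proxy-rule part, with placeholder URLs standing in for the real service and proxy endpoints:

```javascript
// Sketch: the rule object that esri/core/urlUtils.addProxyRule takes in 4.x.
// Both URLs below are placeholders for the Azure gateway scenario above.
var proxyRule = {
  urlPrefix: "https://services.example.com/arcgis/rest/services",
  proxyUrl: "https://myapp.example.com/proxy/proxy.ashx"
};
// With esri/core/urlUtils loaded:
//   urlUtils.addProxyRule(proxyRule);
// Every request whose URL starts with urlPrefix is then routed through
// proxyUrl, where the Azure key header can be attached server-side,
// keeping the key out of the browser code.
```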