How to manage and map 450,000 features across 18 feature layers in a memory-efficient / optimized way

11-25-2020 11:21 AM
AndrewMurdoch1
Occasional Contributor II

Good Day

I have a massive data collection of ~450,000 features that I'm mapping onto 18 feature layers, using "bins" of 25,000 features per feature layer. The following code generates those feature layers; after it runs I have 18 layers comprising my ~450,000 features.

 

createFeatureLayers() {
    return new Promise((resolve) => {
      for (const prop in this._geoData) {
        if (!this._geoData.hasOwnProperty(prop)) { continue; }
        console.time('forLoop');
        for (const geometryProp in this._geoData[prop]) {
          if (this._geoData[prop].hasOwnProperty(geometryProp)) {
            for (const binProp in this._geoData[prop][geometryProp]) {
              if (!this._geoData[prop][geometryProp].hasOwnProperty(binProp)) { continue; }
              const data = this._geoData[prop][geometryProp][binProp];
              if (data.length === 0) { continue; }
              switch (geometryProp) {
                case 'lines': {
                  // buildFeatureSettings returns a constructor config, not a layer
                  const settings = this.buildFeatureSettings(data, '#000000');
                  if (settings) {
                    const layer = new this.FeatureLayer(settings);
                    layer.maxScale = 0;        // no limit when zooming in
                    layer.minScale = 3000000;  // hidden beyond 1:3,000,000
                    this._layers[prop].lines[binProp] = layer;
                  }
                  break;
                }
                default:
                  break;
              }
            }
          }
          this._geoDataKeyIndex++;
        }
        console.timeEnd('forLoop');
      }
      resolve(); // the original version never resolved its promise
    });
  }

  buildFeatureSettings(data, colour) {
    return {
      source: data, // client-side feature collection
      renderer: this.buildRenderSettings(data, colour),
      fields: this._fields,
      outFields: ['*'],
      objectIdField: 'ObjectID',
      geometryType: 'polyline',
      spatialReference: {
        wkid: 4326
      },
    };
  }

  buildRenderSettings(data, colour) {
    return {
      type: 'simple',
      symbol: {
        type: 'simple-line',
        width: 3,
        color: colour,
        // note: 'size' and 'outline' are not simple-line symbol properties
        // (they apply to marker/fill symbols), so they were dropped here
      },
    };
  }

 
Once it's mapped onto my map view, the memory usage approximately doubles. What is the recommended way to handle massive data sets?

Are there methods available that will remove features from the map view if they're not in the current (stationary) view?

Are there methods that will automatically add them?

 

Thanks

3 Replies
mgeorge
Esri Contributor

@AndrewMurdoch1 - is this the same as the other app where you are seeing the graphical bug in 4.17? Performance-wise, if you are able to use hosted feature layers (rather than querying GeoJSON and creating a feature layer from a feature collection), that should perform dramatically better. With lines, server-side generalization can improve performance immensely.
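
For example, a hosted layer only needs a service URL; the API then fetches features per extent on demand. A minimal sketch (the URL is a placeholder, and map is assumed to be your Map instance):

// Sketch: a FeatureLayer backed by a hosted feature service instead of a
// client-side feature collection. The URL below is a placeholder.
require(['esri/layers/FeatureLayer'], (FeatureLayer) => {
  const layer = new FeatureLayer({
    url: 'https://services.arcgis.com/<org>/arcgis/rest/services/Roads/FeatureServer/0',
    outFields: ['*'],
    minScale: 3000000,
    maxScale: 0,
  });
  map.add(layer); // features are fetched per extent and generalized server-side
});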

A client-side feature collection, by contrast, requires loading all the data upfront and building spatial indices, querying, quantizing, and re-projecting entirely on the client.

One larger feature layer will also usually perform better than many small ones, because each feature layer carries some upfront memory cost.
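
If you do stay client-side, here is a rough (untested) sketch of flattening your bins into a single layer, reusing the buildFeatureSettings helper from your snippet and assuming the same _geoData shape:

// Sketch: concatenate every 'lines' bin into one source array so only a
// single FeatureLayer is constructed, amortizing the per-layer overhead.
const allFeatures = [];
for (const prop in this._geoData) {
  const bins = this._geoData[prop].lines;
  for (const binProp in bins) {
    allFeatures.push(...bins[binProp]);
  }
}
const layer = new this.FeatureLayer(this.buildFeatureSettings(allFeatures, '#000000'));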

AndrewMurdoch1
Occasional Contributor II

Thank you for responding.

We ended up using a hosted feature layer, as it's borderline insane how much memory we use otherwise. If we build the layer ourselves we don't see the same graphical glitch as in my other post.

Right now we have one large layer, and that's all we need; it's working nicely.

Thanks

mgeorge
Esri Contributor

Ah ok great! Yes, one of the advantages of using hosted services here is generalization: the service reduces the number of vertices on the line to what you can actually see on the screen at a given scale. This can massively reduce the amount of data the client needs to hold onto at any given time, loading more precise data only when the user zooms in. The client also doesn't have to store all the data from the service, only the data on screen (and since 4.17 we keep the data the service returns to us, a pbf-encoded binary payload, in its compressed format and work on the compressed data directly).
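
For reference, the same generalization can also be requested on manual queries through maxAllowableOffset; a small sketch, assuming layer and view are your FeatureLayer and MapView:

// Sketch: ask the service to thin geometry to the current display
// resolution (roughly one vertex per screen pixel is enough).
const query = layer.createQuery();
query.returnGeometry = true;
query.maxAllowableOffset = view.resolution; // map units per pixel
const { features } = await layer.queryFeatures(query);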

Unfortunately, with a feature collection we also have to duplicate how we store the data (for threading). Shared memory is a bit of an awkward thing in JS: we use SharedArrayBuffers when possible (specifically, for processed features we use them to store attribute data), but it's quite hard to share arbitrary JSON data between threads.

And as you mentioned, there may also have been a real memory issue that was fixed in 4.17, despite there now being some other graphical problem...
