Query in-memory cache when Polling an ArcGIS Feature Service

12-04-2017 07:52 AM
JohnDye
Regular Contributor

According to this post by RJ Sunderman, when you use the Poll an ArcGIS Server for Features input with the Get Incremental Updates option, the input maintains an in-memory cache.

Is there any way to query this cache to determine what the underlying deltas are?

In my case, I want to create a GeoEvent service that will poll a Feature Service for updates every 5 minutes. When it determines that updates have been made to the service (one or more features have been added, one or more features have been edited, or some combination of the two), I want it to actually compare the returned features to its in-memory cache and compile the actual changes so that they can be written as a new feature to a "Change-Log" feature service.
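For illustration, here is a rough sketch (plain Python, not anything GeoEvent actually exposes) of the kind of delta comparison I have in mind: comparing polled features against a locally kept snapshot, keyed by a hypothetical OBJECTID, and compiling the adds and edits that would be written to the "Change-Log" service.

```python
# Hypothetical change-log logic: diff newly polled features against a
# locally kept snapshot (dicts mapping OBJECTID -> attribute dict).
# This is a sketch of the desired behavior, not a GeoEvent API.

def compute_deltas(snapshot, polled):
    deltas = []
    for oid, attrs in polled.items():
        if oid not in snapshot:
            # Feature not seen before: record it as an add.
            deltas.append({"objectid": oid, "change": "add", "attributes": attrs})
        elif attrs != snapshot[oid]:
            # Feature changed: record only the attributes that differ.
            changed = {k: v for k, v in attrs.items() if snapshot[oid].get(k) != v}
            deltas.append({"objectid": oid, "change": "edit", "attributes": changed})
    return deltas

snapshot = {1: {"status": "open"}, 2: {"status": "closed"}}
polled = {1: {"status": "open"}, 2: {"status": "reopened"}, 3: {"status": "new"}}
print(compute_deltas(snapshot, polled))
```

The catch, of course, is that this requires keeping a full snapshot of the feature records somewhere, which is exactly what I was hoping GeoEvent's cache already was.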

If there is not a way to query that in-memory cache (which is my expectation), is there another way for us to determine what the nature of the changes are? We want very granular tracking of every single change made to certain services.

We explored using SOIs for this and that was successful to an extent, but the lack of support for Hosted Services has become a show stopper since we're trying very hard to move away from Federated Services.

1 Solution

Accepted Solutions
RJSunderman
Esri Regular Contributor

Hello John -

You are correct - there is no way of using GeoEvent Manager or the GeoEvent Admin API to query the cache used by the inbound 'Poll ArcGIS Server for Features' input to effectively poll for incremental updates.

There is a clarification I would offer, based on my read of your question. The cache is not a cache of the actual feature records. GeoEvent is not comparing feature records queried from a feature service to a cached copy of those same records. The cache is only a query key used to construct a WHERE clause. What GeoEvent Server is doing is iterating through all of the records returned by a particular query and caching the single attribute value which is the "greatest" of the feature record set. If the incremental update key is a date, then the date from the feature record with the most recent date is cached (not the entire feature record, just its date).

That way, on the next query, GeoEvent Server can construct a WHERE clause to query only for feature records whose date is greater than the most recent date cached from the last set of feature records.
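As I read the behavior described above, a minimal sketch of that key-caching logic might look like this. The field name "last_updated" and the TIMESTAMP syntax are illustrative assumptions, not GeoEvent Server's actual implementation:

```python
# Sketch of the incremental-update logic described above: only the
# greatest value of the update key is cached, then used to build the
# next query's WHERE clause. The field name "last_updated" is illustrative.

def build_incremental_query(records, cached_key, key_field="last_updated"):
    # Advance the cached key to the greatest value seen in this batch.
    # ISO-formatted date strings sort lexicographically in chronological order.
    for rec in records:
        if cached_key is None or rec[key_field] > cached_key:
            cached_key = rec[key_field]
    where = f"{key_field} > TIMESTAMP '{cached_key}'"
    return cached_key, where

batch = [
    {"oid": 1, "last_updated": "2017-12-01 08:00:00"},
    {"oid": 2, "last_updated": "2017-12-04 07:52:00"},
]
cached, where = build_incremental_query(batch, None)
print(where)  # last_updated > TIMESTAMP '2017-12-04 07:52:00'
```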

A side effect of this is that sentinel or placeholder values in your feature records can inadvertently block your input from ever receiving incremental updates. Say your data policy is that no date field is ever null, so you default all "unknown" date values to a far-future value like January 1st, 2199. When GeoEvent polls the feature service and encounters one or more records with this future date, it will cache that date and construct its next query to request only feature records whose "last updated" date is greater than a date in 2199 - which almost certainly excludes every feature record.
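That pitfall can be seen in a few lines: one sentinel record advances the cached key past every legitimate date (the field name and data are illustrative; ISO-formatted date strings sort chronologically, so `max` works here):

```python
# Demonstrates the future-date pitfall: one record with a sentinel
# date ("2199-01-01") poisons the cached key, so the next WHERE
# clause excludes every legitimate record.

records = [
    {"oid": 1, "last_updated": "2017-12-04 07:52:00"},
    {"oid": 2, "last_updated": "2199-01-01 00:00:00"},  # "unknown" sentinel
]
cached = max(r["last_updated"] for r in records)
where = f"last_updated > TIMESTAMP '{cached}'"

# No real record will ever satisfy this clause:
matches = [r for r in records if r["last_updated"] > cached]
print(where, len(matches))
```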

Honestly, an SOI or other ArcObjects solution which is able to listen for something like IObjectClassEvents:onChange is probably the approach you want to take - rather than relying on GeoEvent Server to poll for incremental updates.

- RJ


2 Replies

JohnDye
Regular Contributor

Thanks RJ. Good to know that it's only caching the update key data. 

As for the SOIs, that's what the existing solution does now - but we really want to support Hosted Services and it's not possible to configure an SOI on a hosted service so we're looking for a way to deliver similar functionality for both hosted and federated services. 

It would be nice, however, if there were a way to build a GeoEvent service that could "subscribe to a service's endpoints". Now that we know GeoEvent can't solve the problem, we're thinking about trying to spoof a Hosted Feature Service using an AWS Lambda function, registering it with our Portal, and then using that spoofed feature service as a proxy for the actual feature service so we can intercept calls and do some pre/post-processing.
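A very rough sketch of that proxy idea, assuming a Lambda handler receiving API Gateway-style events. The service URL, event shape, and helper names are all hypothetical; the only real logic here is routing edit operations through a logging hook before forwarding:

```python
# Hypothetical Lambda proxy: forward feature-service requests to the
# real service, intercepting edit operations for change-logging.
# REAL_SERVICE and the event shape are assumptions for illustration.
import urllib.request

REAL_SERVICE = "https://example.com/arcgis/rest/services/Assets/FeatureServer"

def is_edit_request(path):
    # ArcGIS REST edit operations we'd want to log before forwarding.
    return path.endswith(("applyEdits", "addFeatures",
                          "updateFeatures", "deleteFeatures"))

def log_change(body):
    # Placeholder pre-processing hook: write to the change-log service here.
    print("change intercepted:", body)

def handler(event, context):
    path = event["path"]              # e.g. "/0/applyEdits"
    body = event.get("body") or ""
    if is_edit_request(path):
        log_change(body)
    req = urllib.request.Request(REAL_SERVICE + path, data=body.encode())
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode()}
```

Query traffic would pass straight through, while edits get captured on the way in; whether Portal will happily treat such a proxy as a registered feature service is the open question.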
