I have a workflow in mind, but it's not perfect. Perhaps if I share it here, you or someone else can suggest an improvement.
I"?m assuming that what we are trying to address is a proposed maintenance cycle which requires a service admin to stop a feature service (for a period of time) which a GeoEvent Service is actively updating. I"?m further assuming that the maintenance will be to add attribute fields to the feature service"?s schema "? not to change data types of existing fields or remove existing fields.
What we want to do is actively cache events received by GeoEvent Processor so that, when the admin brings the feature service back online, it will automatically receive any events which came in during the (planned) outage.
If you were to incorporate a second Output connector into the GeoEvent Service which is updating features in the feature service, you could start/stop that Output independently, effectively turning caching "on" / "off". You could use any Output, but I'll stick with a CSV or JSON file Output for purposes of this discussion.
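Just to make the idea concrete, here's a rough sketch of the "tee" behavior. In practice both Outputs are configured in GeoEvent Manager, not coded, so everything here (the function names, the cache path, the JSON-lines format) is made up purely for illustration:

```python
# Conceptual sketch only: both Outputs are really configured in GeoEvent
# Manager. Names and paths here are hypothetical.
import json

CACHE_FILE = "/data/geoevent/cache/events.jsonl"  # hypothetical cache path
cache_enabled = False  # starting the "cache" Output -> True; stopping it -> False

def send_to_feature_service(event: dict) -> None:
    """Stands in for the fs-out connector updating the feature service."""
    ...

def handle_event(event: dict) -> None:
    """Every event goes to the feature service Output; while caching is
    on, a copy is also appended to a JSON-lines system file."""
    send_to_feature_service(event)
    if cache_enabled:
        with open(CACHE_FILE, "a") as f:
            f.write(json.dumps(event) + "\n")
```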
So the workflow might be that you start the "cache" Output so that whatever event data is being sent to the Output updating the feature service also gets copied to a system file. Then stop the feature service, which means event data sent to the Output updating the feature service (e.g., fs-out) will fail to reach the target feature layer. But the event data is being cached in the system file, so we're OK so far.
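If the admin wants to script the outage itself, something along these lines should work against the ArcGIS Server Administrator API. The host, folder/service name, and credentials are placeholders, and I'm assuming the feature service is backed by a MapServer with feature access enabled, so verify the exact paths against your own site's admin API:

```python
# Rough sketch of scripting the feature service stop/start via the
# ArcGIS Server Administrator API. Host, service path, and credentials
# are placeholders.
import requests

ADMIN = "https://myserver.example.com:6443/arcgis/admin"  # placeholder

def get_token(username: str, password: str) -> str:
    r = requests.post(f"{ADMIN}/generateToken", data={
        "username": username, "password": password,
        "client": "requestip", "f": "json"})
    return r.json()["token"]

def set_service_state(service: str, action: str, token: str) -> dict:
    """action is 'stop' or 'start', e.g.
    set_service_state('MyFolder/Vehicles.MapServer', 'stop', token)."""
    r = requests.post(f"{ADMIN}/services/{service}/{action}",
                      data={"f": "json", "token": token})
    return r.json()
```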
Once the planned maintenance to the feature service is complete, the admin can restart the feature service, which will resume feature service updates, and then stop the secondary "cache" Output so that data is no longer being written to the system file. The admin would then copy the system file to a folder being watched by a different GeoEvent Service so that the "cached" event data would get read into GEP and used to update the target feature service.
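One small precaution on the copy step: if the watched-folder input polls while the copy is still in progress, it could ingest a half-written file. Copying under a temporary name and renaming at the end avoids that, since a rename is atomic on the same filesystem. A minimal sketch with placeholder paths:

```python
# Copy the cache file into the watched folder under a temporary name,
# then rename, so the file-watcher never sees a half-written file.
# Paths are placeholders.
import os
import shutil

CACHE_FILE = "/data/geoevent/cache/events.jsonl"
WATCH_DIR = "/data/geoevent/watched"

def publish_cache() -> None:
    tmp = os.path.join(WATCH_DIR, "events.jsonl.partial")
    dst = os.path.join(WATCH_DIR, "events.jsonl")
    shutil.copy(CACHE_FILE, tmp)  # the copy itself may take a while
    os.rename(tmp, dst)           # atomic on the same filesystem
```

The watched-folder input would need to be configured to match only the final file name or extension so it ignores the .partial file.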
But we have a race condition. Depending on the rate at which feature data is being received, I can imagine a situation in which "live" data output to the feature layer gets overwritten by older "cached" event data from the system file once the system file is copied and subsequently read.
I"?m not sure that we"?ll be able to identify a solution which guarantees both (a) no data loss, and (b) also guarantees the most recent "live"� event data sent by a data provider is given priority over data from a "cache"�. Perhaps you or someone else can suggest a modification which would address this concern.
- RJ