

29 Posts authored by: rsunderman-esristaff Employee

I sent the following to one of our contractors today. The information on configuring SSL certificates, administrative tips for multi-machine deployments following a 'site' model, and things to check when GeoEvent Server fails to load its ArcGIS Server's configured certificates and instead uses its own SelfSignedCertificate might be of more general use, so I'll leave this here in case it helps someone working with GeoEvent Server deployments.

With a multi-machine ‘site’ configuration it is critical that all machines trust one another. That means that not only do I have to configure an SSL certificate on Box#1 and configure that machine’s ArcGIS Server to use that certificate as its Web Server Certificate … I have to import certificates for Box#2, Box#3, … Box#N into the ArcGIS Server so that it trusts all the other machines participating in the site. I have to do this “fan-out” on every server, setting *that* server’s Web Server Certificate and importing certificates from all the *other* machines onto that server.
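To put a number on that fan-out: with N machines, each of the N servers imports the N−1 certificates of its peers, so you perform N×(N−1) imports in total (plus setting N web server certificates). A throwaway loop makes the bookkeeping concrete; the machine names are placeholders:

```shell
# Enumerate the certificate "fan-out" for a three-machine site: every machine
# imports the certificate of every OTHER machine, i.e. 3 * (3 - 1) = 6 imports.
for target in box1 box2 box3; do
  for source in box1 box2 box3; do
    [ "$target" = "$source" ] && continue   # a machine does not import its own cert
    echo "on $target: import certificate from $source"
  done
done
```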

I’ve captured what I do that works for me when setting up a couple of machines. But to be honest, SSL certificate configuration is not something I understand at a deep, technical level. Likely there is a “better” way of doing what I propose in the attached, maybe using a wild-card certificate, but I don’t know how to set that up.

I’d also like to break the problem you’re seeing into two pieces. The first being SSL certificate configuration, for which I’ll capture some screenshots (see attached PDF). The second piece involves things I look at when GeoEvent Server seems unable to locate and load the certificates its ArcGIS Server is configured to use.

The second part probably has more to do with why GeoEvent Server fails over to use its SelfSignedCertificate rather than the certificate its ArcGIS Server is configured to use. I’ll apologize if anything I share is overly pedestrian … like I said, SSL certificates are not my cup of tea, so all I can do is show you what works for me and hope that your experience will allow you to iterate and adapt what I have to share.

The first part, SSL certificate configuration, is attached.

For the second part … I would caution against opening the Java keystore using a command like keytool. I’ve watched developers do this, but I’ve never seen administratively editing the JKS do anything to resolve a problem. GeoEvent Server, when it launches for the first time, interrogates its ArcGIS Server for information on its site and SSL certificates. If you would like to see some evidence for this, you can request DEBUG logging on the GeoEvent Server logger component. GeoEvent Server will attempt to copy the certificate configuration of the ArcGIS Server it is running beneath. If GeoEvent Server cannot obtain the certificates from the ArcGIS Server configuration, it will fail over to use its own SelfSignedCertificate. The failover is intended to at least allow GeoEvent Server to complete its startup – but if GeoEvent Server does not trust machines the same way its ArcGIS Server does, lots of stuff is probably not going to work.

By the way, it is precisely because GeoEvent Server interrogates its ArcGIS Server for information that it is best to have your ArcGIS Enterprise (Portal for ArcGIS, hosting ArcGIS Server, ArcGIS Data Store) fully configured with a site created, federated, and all SSL certificates configured before you introduce GeoEvent Server to the Enterprise. Installing – or at least starting – the GeoEvent Gateway and GeoEvent Server before ArcGIS Server and Portal for ArcGIS are fully configured means that the initial interrogation fails. Security topology may change … you may later decide to federate, for example, or SSL certificates may have to change … in which case resetting your GeoEvent Server configuration from within GeoEvent Manager (that is, not an “administrative reset”) should force GeoEvent Server to pick up changes made to the Enterprise configuration. Worst case, you have to stop and restart GeoEvent Server after resetting its configuration, then import your inputs, outputs, etc. You don’t always have to re-install, but installation order can make your life easier administratively when deploying all this software for the first time.

There are a few things I check when I find that GeoEvent Server is using its own SelfSignedCertificate rather than the certificate its ArcGIS Server specifies as its Web server SSL certificate.

  • Did I accurately follow the certificate configuration laid out in the attached PDF?

Sometimes a machine gets re-imaged, or something else invalidates a certificate I had previously generated, applied, and imported using the attached procedure. That is when I have to walk through the whole process again. Sometimes it is just that a certificate has expired. They do that, and rarely when it’s convenient.
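One low-risk check for the expiry case: openssl can print a certificate's notAfter date without opening or editing any keystore. The sketch below generates a throwaway self-signed certificate just so the commands are runnable end-to-end; in practice you would point the second command at your own .cer/.pem file.

```shell
# Generate a short-lived self-signed certificate purely as a stand-in.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' -days 1 \
        -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# Read-only expiry check: prints a line like "notAfter=May  1 00:00:00 2026 GMT".
openssl x509 -in "$tmp/cert.pem" -noout -enddate
rm -rf "$tmp"
```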

  • ArcGIS Server maintains two different certificate stores – do their contents match?
Seriously, this has bitten us more than once. There’s a certificate store beneath …\ArcGIS\Server\framework used, I think, by web clients. ArcGIS Server maintains a copy of these certificates in its configuration store for each machine in the site. This second key store is used, I think, by thick client applications.
  • C:\Program Files\ArcGIS\Server\framework\etc\certificates
  • C:\arcgisserver\config‑store\machines\MYMACHINE.ESRI.COM

The two certificate stores should be identical. I’ve found once or twice that files had not been copied from the Server framework into its configuration store. When this happened I had to stop ArcGIS Server, manually create the folder named for the machine (e.g. CARMON.ESRI.COM beneath …\config-store\machines) and copy the files from the framework into the configuration store folder. When I restarted ArcGIS Server and administratively reset GeoEvent Server, it adopted its Server’s certificates and began working as expected.
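A runnable sketch of that comparison, using two throwaway folders in place of the framework certificates folder and the config-store machines folder (so the commands can be tried anywhere):

```shell
# Stand-ins for ...\framework\etc\certificates and ...\config-store\machines\<MACHINE>.
store_a=$(mktemp -d); store_b=$(mktemp -d)
printf 'cert-A' > "$store_a/mymachine.cer"
printf 'cert-A' > "$store_b/mymachine.cer"
printf 'key-1'  > "$store_a/mymachine.key"    # present in only one store
# -r recurse, -q report names only; any output means the stores have diverged.
diff -rq "$store_a" "$store_b" || echo "stores differ - sync the files and restart"
rm -rf "$store_a" "$store_b"
```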

  • ArcGIS Server maintains both JSON and XML copies of its SSL configuration – do they match?

When debugging we’ve found a couple of times that the SSL configuration reported by ArcGIS Server through its Admin API did not match the content of an XML file GeoEvent Server was using to retrieve certificate information. Specifically, a file beneath D:\arcgisserver\config-store\machines\ specified a webServerCertificateAlias which did not match what should have been the same information in a C:\Program Files\ArcGIS\Server\framework\etc\machine-config.xml file.

When this happens you might try stopping GeoEvent Server (and GeoEvent Gateway) and reconfiguring the ArcGIS Server’s certificates. If the files match after ArcGIS Server completes a restart, you can administratively reset GeoEvent Server and it should pick up the correct certificate configuration.
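A sketch of that consistency check. The JSON and XML fragments below are inline stand-ins (the real files are larger and their exact layout varies by release), but the extraction is the same idea: pull the webServerCertificateAlias value from each copy and compare.

```shell
# Stand-ins for the Admin API's JSON and the machine-config.xml content.
json='{"machineName":"MYMACHINE.ESRI.COM","webServerCertificateAlias":"myCert"}'
xml='<MachineConfig webServerCertificateAlias="SelfSignedCertificate"/>'
# Extract the alias from each representation.
a=$(printf '%s' "$json" | grep -o '"webServerCertificateAlias":"[^"]*"' | cut -d'"' -f4)
b=$(printf '%s' "$xml"  | grep -o 'webServerCertificateAlias="[^"]*"'   | cut -d'"' -f2)
[ "$a" = "$b" ] && echo "aliases match: $a" || echo "MISMATCH: json=$a xml=$b"
```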

  • Does the GeoEvent Gateway have its correct hostname / IP Address in its com.esri.ges.gateway.cfg file?

Part of the GeoEvent Server administrative reset is to delete this file and make sure it gets regenerated automatically when GeoEvent Gateway (or maybe it’s when GeoEvent Server) comes up for the “first” time.

If you look at the file’s content in a text editor you’ll see that it instructs the Gateway as to which server and port it should use for connecting to the ZooKeeper distributed configuration store which manages your GeoEvent Server’s configuration. It also specifies the Apache Kafka topic partitions, replication, and how to reach the broker. If the machine information in this file designates a machine which does not exist – as can happen when you use cloud imaging utilities to push a machine image out to multiple virtual machine instances – GeoEvent Gateway never reaches a stable state when it launches and cannot support its GeoEvent Server.
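A quick way to sanity-check the file, sketched below on a stand-in. The property name used here is illustrative – inspect your own com.esri.ges.gateway.cfg for the actual keys – but the test is the same: does the host baked into the file match this machine's actual hostname?

```shell
# Stand-in for com.esri.ges.gateway.cfg; the key name here is illustrative.
cfg=$(mktemp)
printf 'zookeeper-connect=OLD-IMAGE-HOST:4181\n' > "$cfg"
# Pull the host portion out of the host:port value.
cfg_host=$(grep -o '=[A-Za-z0-9.-]*:' "$cfg" | tr -d '=:')
[ "$cfg_host" = "$(hostname)" ] \
  && echo "gateway cfg matches this host" \
  || echo "stale host in cfg: $cfg_host (this machine is $(hostname))"
rm -f "$cfg"
```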

The procedures to administratively reset GeoEvent Server are in a blog:  Administratively Reset GeoEvent Server

You can follow the procedures for 10.6.x as they will be the same for 10.7.x and 10.8 deployments. These are the steps, by the way, that you have to run on each server when a multi-machine deployment with a ‘site’ configuration has one of its machines drop out of the configuration and fail to automatically re-integrate.

Resetting a multi-machine ‘site’ configuration is both tedious and error prone. You basically have to work as if you’re installing all of the software for the first time:

  • Install ArcGIS Server, create site, configure certificates, install GeoEvent Server
  • Install ArcGIS Server, join site, configure certificates, install GeoEvent Server
  • Install ArcGIS Server, join site, configure certificates, install GeoEvent Server (lather, rinse, repeat)

When you already have an ArcGIS Server site with, say, three machines, things get messy. I think what you do is use ArcGIS Server Manager to ‘STOP’ two of the machines – you’ll want to stop GeoEvent Gateway and GeoEvent Server on those machines first. The idea is that, as far as the ArcGIS Server site is concerned, it only has one machine. Complete the admin reset for GeoEvent Server on that machine, then start its Gateway, wait a couple minutes, then start its GeoEvent Server.

Then, back in ArcGIS Server Manager, ‘START’ a second machine. The site now thinks it has two machines, only one of which is running GeoEvent Server. Complete the admin reset for GeoEvent Server on the second machine, then start its Gateway, wait a couple minutes, then start its GeoEvent Server. As the GeoEvent Gateway and GeoEvent Server come up they’ll discover and coordinate with the running GeoEvent Server, through the AGS site, and work out among themselves how to balance the Kafka topics and brokers.

Finally, in ArcGIS Server Manager, ‘START’ the third machine. The site now thinks it has three machines, only two of which are running GeoEvent Server. Complete the admin reset for GeoEvent Server on the third machine, then start its Gateway, wait a couple minutes, then start its GeoEvent Server. As the GeoEvent Gateway and GeoEvent Server come up on this final machine they’ll integrate with the other two.

If you try to bring all three machines on-line at the same time and they were not properly integrated / balanced when they were taken down … they’ll likely not integrate correctly with one another. You have to stage their startup so that the ArcGIS Server site never has more than one machine ‘STARTED’ which does not have a fully initialized and integrated GeoEvent Server. When two or more GeoEvent Servers try to integrate at the same time, things tend to fail. It is precisely this sort of fragility, and the fact that it is so administratively difficult to determine whether the machines were properly integrated / balanced in the first place, that makes me feel a ‘site’ configuration really doesn’t provide the resiliency it was designed to provide. Sure, when everything is working it works beautifully. But when a machine falls out of configuration … getting the ‘site’ back to nominal is difficult (to say the least).


Hope this information is helpful –


Polygons which model areas of interest – counties, national parks, or property boundaries for example – are generally static. A new area of interest might be established requiring a geofence to be added, or an existing area’s geographic extent might occasionally change requiring a geofence to be updated, but in general the geofences don't change very often. This scenario fits well with GeoEvent Server’s ability to synchronize its geofences with a feature record set maintained as part of a feature service. The areas of interest can be maintained as feature records and occasionally imported to establish or update the relatively static geofences. A synchronization rule can periodically poll the feature service to obtain updates.

This blog explores a different scenario. Suppose you need geofences to be created dynamically, managed for only a short period of time, and then frequently and automatically destroyed when no longer needed. Constantly polling a feature service to check and see if there have been any changes is impractical.

In a dynamic scenario, we need to push changes to GeoEvent Server immediately as the changes are received. A video attached to this blog will show how a GeoEvent Service can be used to receive attributes describing an area of interest, compute an effective date/time range during which the area of interest is considered relevant, and generate a polygon to model the area of interest. A stream service will be used to broadcast dynamically generated polygons and computed date/time values as feature records, allowing them to be registered with GeoEvent Server as new or updated geofences via a synchronization rule.


  • Import and review a pair of GeoEvent Services configured to process a tracked asset's current location and dynamic geofences constructed for a given center point of geographic interest.
  • Review how stream services are published and how outbound connectors are updated to use the published stream services to broadcast processed event records as feature records.
  • Use the GeoEvent Simulator to send simulated vehicle location observations to GeoEvent Server and display those locations, live, on a web map.
  • Configure a synchronization rule to subscribe to a stream service and receive polygon feature records as they are broadcast (rather than relying on a feature service which must be frequently polled for updates).
  • Demonstrate how information can be sent to GeoEvent Server, on demand, via HTTP/POST to drive the generation of dynamic areas of interest (e.g. geofences).
  • Demonstrate the display and update of dynamic geofences both on a web map and in GeoEvent Server.
  • Extend a GeoEvent Service with an analytic which detects when a tracked asset's location intersects a dynamic geofence and produce an alert message which can be displayed using the GeoEvent Logger.
  • Discuss the temporal relevance of geofences, how analytics you configure will ignore geofences which are not temporally relevant, and how the GeoEvent Server AOI Manager automatically purges geofences which are no longer being used, keeping its registry clean.
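The HTTP/POST step above can be sketched as follows. Both the payload field names and the receiver URL are placeholders – match them to the GeoEvent Definition and endpoint of your own "Receive JSON on a REST Endpoint" input:

```shell
# Hypothetical payload for the service's HTTP/JSON input; the field names
# (name, x, y) must match the input's configured GeoEvent Definition.
payload='{"name":"incident-042","x":-117.1956,"y":34.0567}'
printf '%s\n' "$payload"     # inspect what would be sent
# The POST itself would look something like this (URL is a placeholder):
# curl -k -X POST 'https://your-server:6143/geoevent/rest/receiver/aoi-in' \
#      -H 'Content-Type: application/json' -d "$payload"
```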

Demo Resources

I have included demonstration resources with this blog post so you can recreate this demonstration in your own environment. An attached ZIP archive includes an XML snapshot of a GeoEvent Server configuration which includes a couple of GeoEvent Services as well as inbound and outbound connectors. The configuration file was taken from a 10.7.1 deployment, but should work in the upcoming 10.8 and 10.8.1 releases.

  • A pre-configured GeoEvent Service Trackpoints connects a TCP/TEXT input with a stream service output to broadcast point feature records and report the location of a simulated tracked asset.
  • A second pre-configured GeoEvent Service, AOI_Centerpoint, connects an HTTP/JSON input with a stream service output to buffer received point locations and broadcast each buffer as a polygon feature record suitable for use as a geofence.
  • A video will walk you through the import of the provided configuration, necessary stream service publication, and the configuration of a geofence synchronization rule to subscribe to receive feature records broadcast from the second stream service.

I hope you find the video tutorial and information useful –

This blog is one in a series of blogs discussing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration. Click any link in the quick list below to jump to another blog in the series.

In this blog I will discuss a technique I have used to perform a targeted extraction of debug messages being logged as GeoEvent Server queries a feature record set from an available polygon feature service to synchronize its geofences. The technique expands the use of command-line utilities first introduced in a previous blog. These utilities enable us to perform pattern matching on specific sections of a logged message then extract and apply string substitution and formatting to the logged messages, live, as the messages are being written, to make them easier to read.

A lot of the analysis I am about to demonstrate could be done using a static log file and a text editor, but I have come to really appreciate the power inherent in the command-line utilities I am covering in this blog. Our goal will be to find and review HTTP requests GeoEvent Server makes on a feature service resource being used as the authoritative source of area of interest polygons as well as the feature service's responses to the requests.


A customer has published a feature service with several dozen polygons representing different areas of interest and configured a Geofence Synchronization Rule to enable the polygons to be periodically imported and synchronized to keep a set of geofences up-to-date. We know that GeoEvent Server polls the feature service to obtain a feature record set and registers the geometries with its AOI Manager – in this context AOI is short for "Area of Interest".

For this exercise we are interested in the interface between GeoEvent Server and an ArcGIS Server feature service, not the internal operations of the AOI Manager. We want to capture information on feature record requests and the responses to these requests. GeoEvent Manager does not provide an indication of when geofence synchronization occurs, only that it occurs once every 10 minutes in the customer's configuration, so the customer would like to know if enabling debug logging for a specific component logger will grant them additional visibility into the periodic geofence refresh as it takes place. Knowing when a synchronization is about to occur will allow more deterministic testing of the configured real-time analytics without resorting to aggressive synchronization every few seconds.

Geofence Synchronization

To begin testing the scenario described above I published a feature service and added a few dozen polygon feature records to the service's feature data set. I can query the feature records via the feature service's REST endpoint:
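The same query shows up later in this post's extracted log output; assembled from its parts, it looks like the URL below. Host and port assume a default localhost:6443 deployment, and a secured server would also require a token parameter.

```shell
# The feature service query issued against the REST endpoint, built in pieces.
base='https://localhost:6443/arcgis/rest/services/MyGeofences/FeatureServer/0/query'
params='f=json&where=1%3D1&outFields=gf_name%2Cgf_category&outSR=4326'
echo "${base}?${params}"
# curl -k "${base}?${params}"    # fetch the feature record set as JSON
```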

Notice that each feature record has two attribute fields, gf_category and gf_name, which can be used to uniquely name and organize geofences when they are imported into GeoEvent Server.

Next, in GeoEvent Manager, I configure a synchronization rule that will query the feature service every 10 minutes. The feature records illustrated above will be loaded into the GeoEvent Server’s AOI Manager which handles the addition and update of geofences.

At this point I know that GeoEvent Server is periodically querying the feature service, but the GeoEvent Manager web application does not provide any indication of when the synchronizations will occur. I know the synchronization cycle starts when I click the Synchronize button and occurs every 10 minutes after that, but I do not know which component loggers would be most appropriate to watch for DEBUG messages. My only real choice, then, is to request debug logging for all component loggers by setting the level DEBUG on the ROOT component (knowing that this will cause the karaf.log file to grow very large very quickly).

In a previous blog, Application logging tips and tricks, I introduced tail and grep, a couple of command-line tools that can be used to help identify and isolate logged messages based on keywords or phrases. Using this technique to identify logged messages which include specific keywords allows me to focus on messages of particular interest.

In this case, however, using grep to search will not work as well because a pattern match may occur anywhere in a logged message's text. Using grep to look for something like MyGeofences[/]FeatureServer[/]0 is likely to match more than we are interested in, specifically because the feature service’s URL appears in both the thread identifier as well as the actual message portion of numerous logged messages. So we need a more discriminating technique. We need a way to apply a regular expression pattern match to a specific portion of a logged message and associate a successful pattern match with an action we can run on the text of messages as they are written to the system’s log file.

Power tools for text processing and data extraction

Consider the following command which leverages awk rather than grep and a new stream editing utility sed:

rsunderman@localhost  //localhost/C$/Program Files/ArcGIS/Server/GeoEvent/data/log
tail -0f karaf.log | awk -F\| '$6 ~ /rest.*services.*MyGeofences/ { print $1 $4 $6; fflush() }' |sed 's/[&]token[=][0-9a-zA-Z_-]*/.../'

The awk command is typically used for data extraction and reporting. It is a text processing language developed by Aho, Weinberger, and Kernighan (yes, AWK is an acronym). The sed command is a stream editor used to filter and transform text. When interpreting the command line illustrated above remember that logged messages have six parts and each part is separated by a pipe ( | ) character.

Logged messages have six parts

As new messages are added to the karaf.log, each message’s text is processed by the awk script, which specifies that a pipe character should be used as the field delimiter and that the sixth field, the actual message, should match the specified regular expression pattern. If the pattern is matched, then fields 1, 4, and 6 from each logged message are printed as output. The fflush() is important to force the command's buffered content to be flushed as each line of text is processed so that the sed command can identify a string of characters matching a query parameter &token= and replace the entire string with a few literal dots (simplifying the overall string).
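To see the command's moving parts without a live tail, here is the same awk/sed pipeline run against a single fabricated log line (six pipe-delimited parts, with a dummy token value):

```shell
# One fabricated karaf.log line: timestamp | level | thread | class | bundle | message
line='2019-11-07T18:06:56,608 | DEBUG | qtp-thread-42 | Http | 245 - com.esri.ges.framework.httpclient | Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json&token=aBc123 HTTP/1.1'
printf '%s\n' "$line" \
  | awk -F\| '$6 ~ /rest.*services.*MyGeofences/ { print $1 $4 $6; fflush() }' \
  | sed 's/[&]token[=][0-9a-zA-Z_-]*/.../'
# Fields 1 (timestamp), 4 (class), and 6 (message) survive; the token is redacted.
```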

There is a lot of power packed into this command. It enables us to apply a dynamic if / then evaluation to each logged message as the message is committed to the system log file, discard any message when a specific field does not match a specific pattern, and reformat messages on-the-fly to simplify their display. Wow.

You can read all about the power of sed and awk online. O’Reilly Media has an entire book dedicated to using sed and awk as power tools for text processing and data extraction.

Determining which component logger(s) to watch

The following illustration shows the output produced when the command above is used to filter the large volume of messages logged by all components when debug logging is requested at the ROOT level. For this example, assume that the command was run just before the Synchronize button is clicked to force a geofence synchronization rule to perform a set of queries against the feature service.

One pattern that stands out immediately is that there appear to be four requests made. Different component loggers represent these requests in their own way, but we see key phrases repeated, such as "Executing following request" and "Main Client Exec ...Executing request", along with the request's outgoing headers and the actual request as it goes out over the HTTP wire.


We certainly don't need to see each request represented four different ways, and a quick search of the karaf.log for the keyword "MainClientExec" shows the raw (unprocessed) log messages are associated with a particular class and bundle. These are clues to loggers we can interrogate further.


If we are careful to leave DEBUG logging turned on at the ROOT level for only as long as it takes to navigate to the GeoFence Synchronization Rules and click Synchronize, then return to the Logs and change the settings back to WARN for the ROOT level, we can use the cached logged messages to generate a list of possible component loggers we might be interested in looking at more closely.

Two loggers that seem specifically appropriate are org.apache.http.impl.execchain.MainClientExec (because "MainClientExec" was identified as a class name of interest) and com.esri.ges.httpclient.Http (because the bundle identifier "com.esri.ges.framework.httpclient" was part of each logged message).

Requesting DEBUG logging on the HTTP Client logger will still produce a large number of logged messages. By targeting a single logger, however, we reduce the number of messages being logged overall; we are not interested in examining debug messages from the header or wire components for example. Also, we can tailor our sed and awk command to help further identify messages of particular interest. If we run our text extraction and format command on an active tail of the karaf.log – and take care to start and end the tail around the time that we navigate to GeoFence Synchronization Rules and click Synchronize – the number of logged messages is surprisingly manageable.

I have included the 24 lines extracted and formatted by the sample command below which is looking specifically for the key phrases "Executing request" and "Got response":

$ tail -0f karaf.log |awk -F\| '$6 ~ /(Executing request|Got response)/ { print $1 $6; fflush() }' |sed 's/[&]token[=][0-9a-zA-Z_-]*/.../'

2019-11-07T18:06:40,294  Executing request POST /arcgis/admin/machines/localhost/status HTTP/1.1
2019-11-07T18:06:40,342  Got response from HTTP request: <html lang="en">
2019-11-07T18:06:43,629  Executing request POST /arcgis/admin/system/configstore HTTP/1.1
2019-11-07T18:06:43,646  Got response from HTTP request: <html lang="en">
2019-11-07T18:06:46,622  Executing request GET /arcgis/help/en/geoevent HTTP/1.1
2019-11-07T18:06:46,626  Executing request GET /arcgis/help/en/geoevent/ HTTP/1.1
2019-11-07T18:06:47,250  Executing request GET /arcgis/rest/info?f=json HTTP/1.1
2019-11-07T18:06:47,253  Got response from HTTP request: {"currentVersion":10.8,"fullVersion":"10.8.0","soapUrl":"https://localhost:6443/arcgis/services","secureSoapUrl":null,"authInfo":{"isTokenBasedSecurity":true,"tokenServicesUrl":"https://localhost:6443/arcgis/tokens/","shortLivedTokenValidity":900}}.
2019-11-07T18:06:47,710  Executing request GET /arcgis/rest/services/?f=json..... HTTP/1.1
2019-11-07T18:06:47,720  Got response from HTTP request: {"currentVersion":10.8,"folders":["System","Utilities"],"services":[{"name":"AffectedTransLines-Buffers","type":"StreamServer"},{"name":"AffectedTransLines-Intersections","type":"StreamServer"},{"name":"CriticalInfrastructure","type":"FeatureServer"},{"name":"CriticalInfrastructure","type":"MapServer"},{"name":"Geofence_Stream","type":"StreamServer"},{"name":"MyGeofences","type":"FeatureServer"},{"name":"MyGeofences","type":"MapServer"},{"name":"SampleWorldCities","type":"MapServer"},{"name":"TropicalStormPolygons","type":"StreamServer"}]}.
2019-11-07T18:06:48,060  Executing request GET /arcgis/rest/services/?f=json..... HTTP/1.1
2019-11-07T18:06:48,068  Got response from HTTP request: {"currentVersion":10.8,"folders":["System","Utilities"],"services":[{"name":"AffectedTransLines-Buffers","type":"StreamServer"},{"name":"AffectedTransLines-Intersections","type":"StreamServer"},{"name":"CriticalInfrastructure","type":"FeatureServer"},{"name":"CriticalInfrastructure","type":"MapServer"},{"name":"Geofence_Stream","type":"StreamServer"},{"name":"MyGeofences","type":"FeatureServer"},{"name":"MyGeofences","type":"MapServer"},{"name":"SampleWorldCities","type":"MapServer"},{"name":"TropicalStormPolygons","type":"StreamServer"}]}.
2019-11-07T18:06:56,608  Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json.....&where=1%3D1&outFields=gf_name%2Cgf_category&outSR=4326 HTTP/1.1
2019-11-07T18:06:56,635  Got response from HTTP request: {"objectIdFieldName":"objectid","globalIdFieldName":"","geometryType":"esriGeometryPolygon","spatialReference":{"wkid":4326,"latestWkid":4326},"fields":[{"name":"gf_name","alias":"gf_name","type":"esriFieldTypeString","length":50},{"name":"gf_category","alias":"gf_category","type":"esriFieldTypeString","length":50}],"features":[{"attributes":{"gf_name":"Alpha_003","gf_category":"Alpha"},"geometry":{"rings":[[[-120.252028,30.944518],[-119.784204,29.644623],[-120.566595,29.483390],[-121.447948,30.461197],[-121.275115,30.841066],[-120.252028,30.944518]]]}},{"attributes":{"gf_name":"Alpha_005","gf_category":"Alpha"},"geometry":{"rings":[[[-120.943999,33.487086],[-121.032831,32.575755],[-121.690575,31.901015],[-122.421752,32.119503],[-122.245335,33.447119],[-120.943999,33.487086]]]}},{"attributes":{"gf_name":"Alpha_006","gf_category":"Alpha"},"geometry":{"rings":[[[-122.691280,29.516679],[-123.226533,29.802332],[-122.749277,31.805495],[-122.429246,32.118518],[-122.421752,32.119503],[-121.690575,31.901015],[-121.275115,30.841066],[-121.447948,30.461197],[-122.691280,29.516679]]]}},{"attributes":{"gf_name":"Alpha_008","gf_category":"Alpha"},"geometry":{"rings":[[[-120.851764,33.649721],[-120.165423,33.593953],[-119.397317,32.932664],[-120.074747,32.236847],[-121.032831,32.575755],[-120.943999,33.487086],[-120.851764,33.649721]]]}},{"attributes":{"gf_name":"Alpha_010","gf_category":"Alpha"},"geometry":{"rings":[[[-116.132660,31.584451],[-116.047511,31.421341],[-115.681330,29.828279],[-116.996163,30.707680],[-116.957586,31.120668],[-116.132660,31.584451]]]}},{"attributes":{"gf_name":"Alpha_011","gf_category":"Alpha"},"geometry":{"rings":[[[-117.397404,29.178025],[-117.888219,29.222649],[-118.816738,30.272808],[-118.787736,30.371437],[-118.275610,30.321049],[-117.406325,29.315306],[-117.397404,29.178025]]]}},{"attributes":{"gf_name":"Alpha_013","gf_category":"Alpha"},"geometry":{"rings":[[[-118.461679,30.835274],[-118.0
17691,30.803414],[-118.017804,30.732340],[-118.275610,30.321049],[-118.787736,30.371437],[-118.830849,30.513622],[-118.712642,30.726205],[-118.461679,30.835274]]]}},{"attributes":{"gf_name":"Alpha_014","gf_category":"Alpha"},"geometry":{"rings":[[[-118.017804,30.732340],[-117.331629,29.805684],[-117.406325,29.315306],[-118.275610,30.321049],[-118.017804,30.732340]]]}},{"attributes":{"gf_name":"Alpha_018","gf_category":"Alpha"},"geometry":{"rings":[[[-118.291482,32.187915],[-118.236504,32.105900],[-118.461679,30.835274],[-118.712642,30.726205],[-118.910999,31.691403],[-118.291482,32.187915]]]}},{"attributes":{"gf_name":"Alpha_021","gf_category":"Alpha"},"geometry":{"rings":[[[-118.236504,32.105900],[-117.610715,31.368293],[-118.017691,30.803414],[-118.461679,30.835274],[-118.236504,32.105900]]]}},{"attributes":{"gf_name":"Alpha_022","gf_category":"Alpha"},"geometry":{"rings":[[[-118.415540,32.957686],[-118.306950,32.857367],[-118.241221,32.789177],[-118.291482,32.187915],[-118.910999,31.691403],[-119.764803,31.398264],[-120.074747,32.236847],[-119.397317,32.932664],[-118.415540,32.957686]]]}},{"attributes":{"gf_name":"Alpha_023","gf_category":"Alpha"},"geometry":{"rings":[[[-116.802682,34.081536],[-115.652745,33.453531],[-115.641477,32.336068],[-115.871176,32.188122],[-117.476538,32.953794],[-116.802682,34.081536]]]}},{"attributes":{"gf_name":"Alpha_024","gf_category":"Alpha"},"geometry":{"rings":[[[-122.562145,38.611239],[-122.186957,38.336612],[-122.195569,38.107073],[-123.151426,37.051286],[-122.951406,38.539622],[-122.562145,38.611239]]]}},{"attributes":{"gf_name":"Alpha_026","gf_category":"Alpha"},"geometry":{"rings":[[[-121.521585,38.401753],[-121.317484,37.884274],[-121.542840,36.899873],[-121.646568,36.833617],[-122.195569,38.107073],[-122.186957,38.336612],[-121.521585,38.401753]]]}},{"attributes":{"gf_name":"Alpha_028","gf_category":"Alpha"},"geometry":{"rings":[[[-122.340072,35.533098],[-121.150046,34.948623],[-120.851764,33.649721],[-120.943999,33.487086]
,[-122.245335,33.447119],[-122.980897,34.359173],[-122.340072,35.533098]]]}},{"attributes":{"gf_name":"Alpha_030","gf_category":"Alpha"},"geometry":{"rings":[[[-122.195569,38.107073],[-121.646568,36.833617],[-122.497375,35.977360],[-123.316198,36.487165],[-123.151426,37.051286],[-122.195569,38.107073]]]}},{"attributes":{"gf_name":"Alpha_033","gf_category":"Alpha"},"geometry":{"rings":[[[-120.462003,35.205002],[-119.434621,34.271019],[-120.165423,33.593953],[-120.851764,33.649721],[-121.150046,34.948623],[-120.462003,35.205002]]]}},{"attributes":{"gf_name":"Alpha_034","gf_category":"Alpha"},"geometry":{"rings":[[[-120.314583,38.136693],[-119.722418,38.072922],[-120.702231,36.942569],[-121.542840,36.899873],[-121.317484,37.884274],[-120.314583,38.136693]]]}},{"attributes":{"gf_name":"Alpha_036","gf_category":"Alpha"},"geometry":{"rings":[[[-121.315802,38.800687],[-120.314583,38.136693],[-121.317484,37.884274],[-121.521585,38.401753],[-121.315802,38.800687]]]}},{"attributes":{"gf_name":"Alpha_038","gf_category":"Alpha"},"geometry":{"rings":[[[-117.414676,35.037767],[-116.873703,34.464625],[-116.878218,34.445776],[-117.452834,34.349222],[-117.874553,34.354357],[-117.900979,34.521991],[-117.414676,35.037767]]]}},{"attributes":{"gf_name":"Alpha_039","gf_category":"Alpha"},"geometry":{"rings":[[[-118.276782,35.277533],[-118.210805,35.269057],[-118.075170,35.188227],[-117.900979,34.521991],[-117.874553,34.354357],[-118.597699,33.869008],[-118.895173,34.292433],[-118.827343,34.677451],[-118.276782,35.277533]]]}},{"attributes":{"gf_name":"Alpha_041","gf_category":"Alpha"},"geometry":{"rings":[[[-119.837637,35.784433],[-118.827343,34.677451],[-118.895173,34.292433],[-119.434621,34.271019],[-120.462003,35.205002],[-119.837637,35.784433]]]}},{"attributes":{"gf_name":"Alpha_042","gf_category":"Alpha"},"geometry":{"rings":[[[-117.922812,36.598206],[-117.057992,36.394203],[-118.210805,35.269057],[-118.276782,35.277533],[-118.493063,35.699818],[-118.285579,36.557670],[-117.922812,36
.598206]]]}},{"attributes":{"gf_name":"Alpha_043","gf_category":"Alpha"},"geometry":{"rings":[[[-118.075170,35.188227],[-117.434176,35.060678],[-117.414676,35.037767],[-117.900979,34.521991],[-118.075170,35.188227]]]}},{"attributes":{"gf_name":"Alpha_045","gf_category":"Alpha"},"geometry":{"rings":[[[-119.400803,37.662407],[-118.363534,37.428185],[-118.320407,36.580012],[-119.191764,36.798038],[-119.400803,37.662407]]]}},{"attributes":{"gf_name":"Alpha_046","gf_category":"Alpha"},"geometry":{"rings":[[[-119.629592,38.114023],[-119.400803,37.662407],[-119.191764,36.798038],[-119.814395,36.016233],[-120.702231,36.942569],[-119.722418,38.072922],[-119.629592,38.114023]]]}},{"attributes":{"gf_name":"Alpha_049","gf_category":"Alpha"},"geometry":{"rings":[[[-118.645011,38.788713],[-116.375726,38.414554],[-117.879285,37.762548],[-117.981593,37.812142],[-118.645011,38.788713]]]}},{"attributes":{"gf_name":"Alpha_050","gf_category":"Alpha"},"geometry":{"rings":[[[-117.981593,37.812142],[-117.879285,37.762548],[-117.850493,37.692040],[-117.922812,36.598206],[-118.285579,36.557670],[-118.320407,36.580012],[-118.363534,37.428185],[-117.981593,37.812142]]]}},{"attributes":{"gf_name":"Bravo_003","gf_category":"Bravo"},"geometry":{"rings":[[[-114.966407,29.010553],[-115.556070,29.010613],[-115.565899,29.576853],[-114.877388,30.652564],[-113.973903,31.144828],[-113.874728,31.024001],[-114.966407,29.010553]]]}},{"attributes":{"gf_name":"Bravo_004","gf_category":"Bravo"},"geometry":{"rings":[[[-116.047511,31.421341],[-114.877388,30.652564],[-115.565899,29.576853],[-115.681330,29.828279],[-116.047511,31.421341]]]}},{"attributes":{"gf_name":"Bravo_005","gf_category":"Bravo"},"geometry":{"rings":[[[-113.665880,33.089569],[-113.578775,33.084994],[-113.595481,32.109917],[-113.642988,31.702545],[-113.997771,31.250836],[-115.038456,32.239077],[-114.828726,32.423734],[-113.665880,33.089569]]]}},{"attributes":{"gf_name":"Bravo_007","gf_category":"Bravo"},"geometry":{"rings":[[[-116.842159,36.2
61753],[-116.346604,35.977788],[-116.271652,35.913982],[-116.010855,34.931918],[-116.723780,34.610674],[-116.842159,36.261753]]]}},{"attributes":{"gf_name":"Bravo_009","gf_category":"Bravo"},"geometry":{"rings":[[[-115.760073,34.464355],[-115.288817,34.426023],[-115.232718,33.750965],[-115.651894,33.454681],[-115.760073,34.464355]]]}},{"attributes":{"gf_name":"Bravo_012","gf_category":"Bravo"},"geometry":{"rings":[[[-112.309934,34.738007],[-111.899280,34.156477],[-111.896265,34.000485],[-112.259220,33.812066],[-112.559940,33.882035],[-112.785387,34.665098],[-112.309934,34.738007]]]}},{"attributes":{"gf_name":"Bravo_013","gf_category":"Bravo"},"geometry":{"rings":[[[-113.866163,31.017642],[-112.821893,30.803882],[-112.208051,30.141331],[-112.201103,29.892425],[-113.605198,29.548920],[-113.866163,31.017642]]]}},{"attributes":{"gf_name":"Bravo_014","gf_category":"Bravo"},"geometry":{"rings":[[[-111.159012,32.522856],[-110.824649,32.494792],[-110.141501,31.918549],[-110.403977,31.052692],[-111.259674,30.945511],[-111.315692,30.965117],[-111.474622,31.590875],[-111.467117,31.957874],[-111.159012,32.522856]]]}},{"attributes":{"gf_name":"Bravo_016","gf_category":"Bravo"},"geometry":{"rings":[[[-113.946703,29.010449],[-114.966407,29.010553],[-113.874728,31.024001],[-113.866163,31.017642],[-113.605198,29.548920],[-113.946703,29.010449]]]}},{"attributes":{"gf_name":"Bravo_018","gf_category":"Bravo"},"geometry":{"rings":[[[-112.935806,32.530579],[-112.905623,32.022493],[-113.642988,31.702545],[-113.595481,32.109917],[-112.935806,32.530579]]]}},{"attributes":{"gf_name":"Bravo_019","gf_category":"Bravo"},"geometry":{"rings":[[[-111.317819,34.348952],[-110.978438,33.833734],[-111.375070,33.198857],[-111.896265,34.000485],[-111.899280,34.156477],[-111.317819,34.348952]]]}},{"attributes":{"gf_name":"Bravo_020","gf_category":"Bravo"},"geometry":{"rings":[[[-112.909116,33.056288],[-112.454372,32.971595],[-112.423551,31.937676],[-112.551553,31.875448],[-112.905623,32.022493],[-112.935
806,32.530579],[-112.909116,33.056288]]]}},{"attributes":{"gf_name":"Bravo_021","gf_category":"Bravo"},"geometry":{"rings":[[[-112.150814,33.001880],[-111.414706,32.914951],[-111.159012,32.522856],[-111.467117,31.957874],[-112.423551,31.937676],[-112.454372,32.971595],[-112.150814,33.001880]]]}},{"attributes":{"gf_name":"Bravo_022","gf_category":"Bravo"},"geometry":{"rings":[[[-114.546436,34.566336],[-114.000237,34.215944],[-114.203561,33.611652],[-114.485905,33.625307],[-114.892467,33.864714],[-114.546436,34.566336]]]}},{"attributes":{"gf_name":"Bravo_028","gf_category":"Bravo"},"geometry":{"rings":[[[-116.878218,34.445776],[-116.802682,34.081536],[-117.476538,32.953794],[-118.241221,32.789177],[-118.306950,32.857367],[-117.452834,34.349222],[-116.878218,34.445776]]]}},{"attributes":{"gf_name":"Bravo_029","gf_category":"Bravo"},"geometry":{"rings":[[[-116.010855,34.931918],[-115.974970,34.925304],[-115.760073,34.464355],[-115.651894,33.454681],[-115.652745,33.453531],[-116.802682,34.081536],[-116.878218,34.445776],[-116.873703,34.464625],[-116.723780,34.610674],[-116.010855,34.931918]]]}},{"attributes":{"gf_name":"Bravo_035","gf_category":"Bravo"},"geometry":{"rings":[[[-113.322646,36.492659],[-112.866662,35.823773],[-113.157328,34.983246],[-113.614524,35.263022],[-113.911112,36.188262],[-113.322646,36.492659]]]}},{"attributes":{"gf_name":"Bravo_037","gf_category":"Bravo"},"geometry":{"rings":[[[-115.337130,36.999392],[-114.286610,36.369889],[-114.190413,36.202457],[-114.581011,35.827259],[-114.984871,35.568119],[-115.038457,35.629228],[-115.332937,36.457187],[-115.360534,36.658888],[-115.337130,36.999392]]]}},{"attributes":{"gf_name":"Bravo_039","gf_category":"Bravo"},"geometry":{"rings":[[[-114.581011,35.827259],[-113.881352,35.215058],[-114.599067,34.811896],[-114.604860,34.809841],[-114.934130,35.107656],[-115.025696,35.248019],[-114.984871,35.568119],[-114.581011,35.827259]]]}},{"attributes":{"gf_name":"Bravo_040","gf_category":"Bravo"},"geometry":{"rings":[[[
-115.635838,37.895516],[-114.244290,37.688867],[-114.286610,36.369889],[-115.337130,36.999392],[-115.635838,37.895516]]]}},{"attributes":{"gf_name":"Bravo_041","gf_category":"Bravo"},"geometry":{"rings":[[[-116.236850,38.430295],[-115.893363,38.243036],[-115.830905,38.042138],[-116.762045,36.853773],[-117.850493,37.692040],[-117.879285,37.762548],[-116.375726,38.414554],[-116.236850,38.430295]]]}},{"attributes":{"gf_name":"Bravo_043","gf_category":"Bravo"},"geometry":{"rings":[[[-113.032255,37.207939],[-112.165492,36.820693],[-111.999148,36.398822],[-112.273275,35.883587],[-112.866662,35.823773],[-113.322646,36.492659],[-113.032255,37.207939]]]}},{"attributes":{"gf_name":"Bravo_044","gf_category":"Bravo"},"geometry":{"rings":[[[-113.614524,35.263022],[-113.157328,34.983246],[-113.059816,34.730975],[-113.114472,34.672716],[-114.599067,34.811896],[-113.881352,35.215058],[-113.614524,35.263022]]]}},{"attributes":{"gf_name":"Bravo_045","gf_category":"Bravo"},"geometry":{"rings":[[[-113.372466,34.311458],[-113.327958,33.182361],[-113.578775,33.084994],[-113.665880,33.089569],[-114.203561,33.611652],[-114.000237,34.215944],[-113.372466,34.311458]]]}},{"attributes":{"gf_name":"Bravo_046","gf_category":"Bravo"},"geometry":{"rings":[[[-112.273275,35.883587],[-111.846116,35.154557],[-112.309934,34.738007],[-112.785387,34.665098],[-113.059816,34.730975],[-113.157328,34.983246],[-112.866662,35.823773],[-112.273275,35.883587]]]}},{"attributes":{"gf_name":"Bravo_047","gf_category":"Bravo"},"geometry":{"rings":[[[-111.265839,37.278314],[-111.175263,37.246313],[-110.752701,36.701053],[-110.999562,36.098601],[-111.999148,36.398822],[-112.165492,36.820693],[-111.896582,37.016041],[-111.265839,37.278314]]]}},{"attributes":{"gf_name":"Charlie_004","gf_category":"Charlie"},"geometry":{"rings":[[[-111.990476,39.623930],[-111.672383,39.412941],[-111.919637,38.212856],[-111.999787,38.197742],[-112.565550,38.183798],[-112.769221,38.858292],[-112.622809,39.071739],[-111.990476,39.623930]]]}}
,{"attributes":{"gf_name":"Charlie_005","gf_category":"Charlie"},"geometry":{"rings":[[[-112.811170,38.874945],[-112.769221,38.858292],[-112.565550,38.183798],[-113.120292,37.481661],[-113.857748,37.760542],[-113.852376,37.928864],[-113.462539,38.716036],[-112.811170,38.874945]]]}},{"attributes":{"gf_name":"Charlie_007","gf_category":"Charlie"},"geometry":{"rings":[[[-111.919637,38.212856],[-111.687861,38.152417],[-111.265839,37.278314],[-111.896582,37.016041],[-111.999787,38.197742],[-111.919637,38.212856]]]}},{"attributes":{"gf_name":"Charlie_008","gf_category":"Charlie"},"geometry":{"rings":[[[-111.315692,30.965117],[-111.259674,30.945511],[-110.796944,30.198316],[-110.956477,29.722505],[-112.072384,29.643578],[-112.201103,29.892425],[-112.208051,30.141331],[-111.315692,30.965117]]]}},{"attributes":{"gf_name":"Charlie_009","gf_category":"Charlie"},"geometry":{"rings":[[[-110.141501,31.918549],[-109.185745,31.877346],[-108.663232,31.118540],[-108.398365,30.600397],[-108.351497,29.968889],[-109.843830,30.505716],[-110.403977,31.052692],[-110.141501,31.918549]]]}},{"attributes":{"gf_name":"Charlie_012","gf_category":"Charlie"},"geometry":{"rings":[[[-110.403977,31.052692],[-109.843830,30.505716],[-110.466160,30.249463],[-110.796944,30.198316],[-111.259674,30.945511],[-110.403977,31.052692]]]}},{"attributes":{"gf_name":"Charlie_016","gf_category":"Charlie"},"geometry":{"rings":[[[-111.251511,38.338078],[-110.907121,38.232546],[-110.744795,37.758057],[-111.175263,37.246313],[-111.265839,37.278314],[-111.687861,38.152417],[-111.251511,38.338078]]]}},{"attributes":{"gf_name":"Charlie_017","gf_category":"Charlie"},"geometry":{"rings":[[[-110.848814,35.410481],[-110.395022,35.218355],[-110.629724,33.983079],[-110.978438,33.833734],[-111.317819,34.348952],[-111.227231,35.202718],[-110.848814,35.410481]]]}},{"attributes":{"gf_name":"Charlie_018","gf_category":"Charlie"},"geometry":{"rings":[[[-108.957760,35.823204],[-108.582120,34.448969],[-108.695642,34.413530],[-109.61689
4,35.160616],[-109.861048,35.391741],[-109.580750,35.671064],[-108.957760,35.823204]]]}},{"attributes":{"gf_name":"Charlie_019","gf_category":"Charlie"},"geometry":{"rings":[[[-109.616894,35.160616],[-108.695642,34.413530],[-109.209776,34.009453],[-109.616894,35.160616]]]}},{"attributes":{"gf_name":"Charlie_020","gf_category":"Charlie"},"geometry":{"rings":[[[-109.580423,33.485896],[-109.019515,32.893275],[-109.185745,31.877346],[-110.141501,31.918549],[-110.824649,32.494792],[-109.600599,33.481973],[-109.580423,33.485896]]]}},{"attributes":{"gf_name":"Charlie_026","gf_category":"Charlie"},"geometry":{"rings":[[[-107.448109,32.465305],[-107.543777,31.824440],[-108.398365,30.600397],[-108.663232,31.118540],[-108.172089,32.338211],[-107.448109,32.465305]]]}},{"attributes":{"gf_name":"Charlie_027","gf_category":"Charlie"},"geometry":{"rings":[[[-107.000647,33.131979],[-106.927124,33.119847],[-105.707650,32.307470],[-105.776106,31.834468],[-105.924658,31.432486],[-106.583294,31.970314],[-107.000647,33.131979]]]}},{"attributes":{"gf_name":"Charlie_028","gf_category":"Charlie"},"geometry":{"rings":[[[-106.583294,31.970314],[-105.924658,31.432486],[-105.907446,31.138703],[-106.150128,30.782825],[-107.095091,31.764133],[-106.583294,31.970314]]]}},{"attributes":{"gf_name":"Charlie_029","gf_category":"Charlie"},"geometry":{"rings":[[[-107.359704,33.554365],[-107.142280,33.201815],[-107.448109,32.465305],[-108.172089,32.338211],[-108.685156,32.962142],[-107.359704,33.554365]]]}},{"attributes":{"gf_name":"Charlie_030","gf_category":"Charlie"},"geometry":{"rings":[[[-105.994738,33.337477],[-105.436816,32.627620],[-105.422219,32.568667],[-105.707650,32.307470],[-106.927124,33.119847],[-105.994738,33.337477]]]}},{"attributes":{"gf_name":"Charlie_034","gf_category":"Charlie"},"geometry":{"rings":[[[-105.313424,33.867144],[-105.436816,32.627620],[-105.994738,33.337477],[-105.313424,33.867144]]]}},{"attributes":{"gf_name":"Charlie_036","gf_category":"Charlie"},"geometry":{"rings":[[[
-110.055886,36.915457],[-110.000338,36.888631],[-109.580750,35.671064],[-109.861048,35.391741],[-110.224385,35.222138],[-110.395022,35.218355],[-110.848814,35.410481],[-110.999562,36.098601],[-110.752701,36.701053],[-110.055886,36.915457]]]}},{"attributes":{"gf_name":"Charlie_039","gf_category":"Charlie"},"geometry":{"rings":[[[-109.580447,36.961026],[-108.825944,35.920920],[-108.957760,35.823204],[-109.580750,35.671064],[-110.000338,36.888631],[-109.580447,36.961026]]]}},{"attributes":{"gf_name":"Charlie_040","gf_category":"Charlie"},"geometry":{"rings":[[[-108.381504,36.137964],[-106.993809,34.882178],[-107.053820,34.603020],[-107.376441,34.146431],[-108.254492,34.413094],[-108.396412,36.127488],[-108.381504,36.137964]]]}},{"attributes":{"gf_name":"Charlie_043","gf_category":"Charlie"},"geometry":{"rings":[[[-109.532017,37.942050],[-109.235501,37.903205],[-109.059044,37.837445],[-108.846176,37.328161],[-109.580447,36.961026],[-110.000338,36.888631],[-110.055886,36.915457],[-110.093643,37.604001],[-109.532017,37.942050]]]}},{"attributes":{"gf_name":"Charlie_045","gf_category":"Charlie"},"geometry":{"rings":[[[-110.295214,38.567151],[-110.205307,37.706183],[-110.744795,37.758057],[-110.907121,38.232546],[-110.295214,38.567151]]]}},{"attributes":{"gf_name":"Charlie_046","gf_category":"Charlie"},"geometry":{"rings":[[[-111.672383,39.412941],[-111.040607,39.348197],[-111.251511,38.338078],[-111.687861,38.152417],[-111.919637,38.212856],[-111.672383,39.412941]]]}},{"attributes":{"gf_name":"Charlie_047","gf_category":"Charlie"},"geometry":{"rings":[[[-108.586684,38.599815],[-107.751418,38.385245],[-107.882355,37.022654],[-108.096641,36.914409],[-108.846176,37.328161],[-109.059044,37.837445],[-108.586684,38.599815]]]}},{"attributes":{"gf_name":"Charlie_049","gf_category":"Charlie"},"geometry":{"rings":[[[-110.089960,38.906828],[-109.532017,37.942050],[-110.093643,37.604001],[-110.205307,37.706183],[-110.295214,38.567151],[-110.089960,38.906828]]]}},{"attributes":{"gf_name
":"Charlie_050","gf_category":"Charlie"},"geometry":{"rings":[[[-109.171210,39.584714],[-108.586684,38.599815],[-109.059044,37.837445],[-109.235501,37.903205],[-109.527535,39.304970],[-109.171210,39.584714]]]}}]}.
2019-11-07T18:06:56,895  Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json.....&where=1%3D1&outFields=gf_name%2Cgf_category&outSR=4326&returnIdsOnly=true HTTP/1.1
2019-11-07T18:06:56,906  Got response from HTTP request: {"objectIdFieldName":"objectid","objectIds":[3,5,6,8,10,11,13,14,18,21,22,23,24,26,28,30,33,34,36,38,39,41,42,43,45,46,49,50,53,54,55,57,59,62,63,64,66,68,69,70,71,72,78,79,85,87,89,90,91,93,94,95,96,97,104,105,107,108,109,112,116,117,118,119,120,126,127,128,129,130,134,136,139,140,143,145,146,147,149,150]}.
2019-11-07T18:06:57,294  Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json.....&where=1%3D1&outFields=objectid%2Cgf_name%2Cgf_category&returnGeometry=false HTTP/1.1
2019-11-07T18:06:57,305  Got response from HTTP request: {"objectIdFieldName":"objectid","globalIdFieldName":"","geometryType":"esriGeometryPolygon","spatialReference":{"wkid":4326,"latestWkid":4326},"fields":[{"name":"objectid","alias":"OBJECTID","type":"esriFieldTypeOID"},{"name":"gf_name","alias":"gf_name","type":"esriFieldTypeString","length":50},{"name":"gf_category","alias":"gf_category","type":"esriFieldTypeString","length":50}],"features":[{"attributes":{"objectid":3,"gf_name":"Alpha_003","gf_category":"Alpha"}},{"attributes":{"objectid":5,"gf_name":"Alpha_005","gf_category":"Alpha"}},{"attributes":{"objectid":6,"gf_name":"Alpha_006","gf_category":"Alpha"}},{"attributes":{"objectid":8,"gf_name":"Alpha_008","gf_category":"Alpha"}},{"attributes":{"objectid":10,"gf_name":"Alpha_010","gf_category":"Alpha"}},{"attributes":{"objectid":11,"gf_name":"Alpha_011","gf_category":"Alpha"}},{"attributes":{"objectid":13,"gf_name":"Alpha_013","gf_category":"Alpha"}},{"attributes":{"objectid":14,"gf_name":"Alpha_014","gf_category":"Alpha"}},{"attributes":{"objectid":18,"gf_name":"Alpha_018","gf_category":"Alpha"}},{"attributes":{"objectid":21,"gf_name":"Alpha_021","gf_category":"Alpha"}},{"attributes":{"objectid":22,"gf_name":"Alpha_022","gf_category":"Alpha"}},{"attributes":{"objectid":23,"gf_name":"Alpha_023","gf_category":"Alpha"}},{"attributes":{"objectid":24,"gf_name":"Alpha_024","gf_category":"Alpha"}},{"attributes":{"objectid":26,"gf_name":"Alpha_026","gf_category":"Alpha"}},{"attributes":{"objectid":28,"gf_name":"Alpha_028","gf_category":"Alpha"}},{"attributes":{"objectid":30,"gf_name":"Alpha_030","gf_category":"Alpha"}},{"attributes":{"objectid":33,"gf_name":"Alpha_033","gf_category":"Alpha"}},{"attributes":{"objectid":34,"gf_name":"Alpha_034","gf_category":"Alpha"}},{"attributes":{"objectid":36,"gf_name":"Alpha_036","gf_category":"Alpha"}},{"attributes":{"objectid":38,"gf_name":"Alpha_038","gf_category":"Alpha"}},{"attributes":{"objectid":39,"gf_name":"Alpha_039","
gf_category":"Alpha"}},{"attributes":{"objectid":41,"gf_name":"Alpha_041","gf_category":"Alpha"}},{"attributes":{"objectid":42,"gf_name":"Alpha_042","gf_category":"Alpha"}},{"attributes":{"objectid":43,"gf_name":"Alpha_043","gf_category":"Alpha"}},{"attributes":{"objectid":45,"gf_name":"Alpha_045","gf_category":"Alpha"}},{"attributes":{"objectid":46,"gf_name":"Alpha_046","gf_category":"Alpha"}},{"attributes":{"objectid":49,"gf_name":"Alpha_049","gf_category":"Alpha"}},{"attributes":{"objectid":50,"gf_name":"Alpha_050","gf_category":"Alpha"}},{"attributes":{"objectid":53,"gf_name":"Bravo_003","gf_category":"Bravo"}},{"attributes":{"objectid":54,"gf_name":"Bravo_004","gf_category":"Bravo"}},{"attributes":{"objectid":55,"gf_name":"Bravo_005","gf_category":"Bravo"}},{"attributes":{"objectid":57,"gf_name":"Bravo_007","gf_category":"Bravo"}},{"attributes":{"objectid":59,"gf_name":"Bravo_009","gf_category":"Bravo"}},{"attributes":{"objectid":62,"gf_name":"Bravo_012","gf_category":"Bravo"}},{"attributes":{"objectid":63,"gf_name":"Bravo_013","gf_category":"Bravo"}},{"attributes":{"objectid":64,"gf_name":"Bravo_014","gf_category":"Bravo"}},{"attributes":{"objectid":66,"gf_name":"Bravo_016","gf_category":"Bravo"}},{"attributes":{"objectid":68,"gf_name":"Bravo_018","gf_category":"Bravo"}},{"attributes":{"objectid":69,"gf_name":"Bravo_019","gf_category":"Bravo"}},{"attributes":{"objectid":70,"gf_name":"Bravo_020","gf_category":"Bravo"}},{"attributes":{"objectid":71,"gf_name":"Bravo_021","gf_category":"Bravo"}},{"attributes":{"objectid":72,"gf_name":"Bravo_022","gf_category":"Bravo"}},{"attributes":{"objectid":78,"gf_name":"Bravo_028","gf_category":"Bravo"}},{"attributes":{"objectid":79,"gf_name":"Bravo_029","gf_category":"Bravo"}},{"attributes":{"objectid":85,"gf_name":"Bravo_035","gf_category":"Bravo"}},{"attributes":{"objectid":87,"gf_name":"Bravo_037","gf_category":"Bravo"}},{"attributes":{"objectid":89,"gf_name":"Bravo_039","gf_category":"Bravo"}},{"attributes":{"objectid":9
0,"gf_name":"Bravo_040","gf_category":"Bravo"}},{"attributes":{"objectid":91,"gf_name":"Bravo_041","gf_category":"Bravo"}},{"attributes":{"objectid":93,"gf_name":"Bravo_043","gf_category":"Bravo"}},{"attributes":{"objectid":94,"gf_name":"Bravo_044","gf_category":"Bravo"}},{"attributes":{"objectid":95,"gf_name":"Bravo_045","gf_category":"Bravo"}},{"attributes":{"objectid":96,"gf_name":"Bravo_046","gf_category":"Bravo"}},{"attributes":{"objectid":97,"gf_name":"Bravo_047","gf_category":"Bravo"}},{"attributes":{"objectid":104,"gf_name":"Charlie_004","gf_category":"Charlie"}},{"attributes":{"objectid":105,"gf_name":"Charlie_005","gf_category":"Charlie"}},{"attributes":{"objectid":107,"gf_name":"Charlie_007","gf_category":"Charlie"}},{"attributes":{"objectid":108,"gf_name":"Charlie_008","gf_category":"Charlie"}},{"attributes":{"objectid":109,"gf_name":"Charlie_009","gf_category":"Charlie"}},{"attributes":{"objectid":112,"gf_name":"Charlie_012","gf_category":"Charlie"}},{"attributes":{"objectid":116,"gf_name":"Charlie_016","gf_category":"Charlie"}},{"attributes":{"objectid":117,"gf_name":"Charlie_017","gf_category":"Charlie"}},{"attributes":{"objectid":118,"gf_name":"Charlie_018","gf_category":"Charlie"}},{"attributes":{"objectid":119,"gf_name":"Charlie_019","gf_category":"Charlie"}},{"attributes":{"objectid":120,"gf_name":"Charlie_020","gf_category":"Charlie"}},{"attributes":{"objectid":126,"gf_name":"Charlie_026","gf_category":"Charlie"}},{"attributes":{"objectid":127,"gf_name":"Charlie_027","gf_category":"Charlie"}},{"attributes":{"objectid":128,"gf_name":"Charlie_028","gf_category":"Charlie"}},{"attributes":{"objectid":129,"gf_name":"Charlie_029","gf_category":"Charlie"}},{"attributes":{"objectid":130,"gf_name":"Charlie_030","gf_category":"Charlie"}},{"attributes":{"objectid":134,"gf_name":"Charlie_034","gf_category":"Charlie"}},{"attributes":{"objectid":136,"gf_name":"Charlie_036","gf_category":"Charlie"}},{"attributes":{"objectid":139,"gf_name":"Charlie_039","gf_cate
gory":"Charlie"}},{"attributes":{"objectid":140,"gf_name":"Charlie_040","gf_category":"Charlie"}},{"attributes":{"objectid":143,"gf_name":"Charlie_043","gf_category":"Charlie"}},{"attributes":{"objectid":145,"gf_name":"Charlie_045","gf_category":"Charlie"}},{"attributes":{"objectid":146,"gf_name":"Charlie_046","gf_category":"Charlie"}},{"attributes":{"objectid":147,"gf_name":"Charlie_047","gf_category":"Charlie"}},{"attributes":{"objectid":149,"gf_name":"Charlie_049","gf_category":"Charlie"}},{"attributes":{"objectid":150,"gf_name":"Charlie_050","gf_category":"Charlie"}}]}.
2019-11-07T18:06:57,563  Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json.....&where=1%3D1&outFields=objectid%2Cgf_name%2Cgf_category&returnGeometry=false&returnIdsOnly=true HTTP/1.1
2019-11-07T18:06:57,573  Got response from HTTP request: {"objectIdFieldName":"objectid","objectIds":[3,5,6,8,10,11,13,14,18,21,22,23,24,26,28,30,33,34,36,38,39,41,42,43,45,46,49,50,53,54,55,57,59,62,63,64,66,68,69,70,71,72,78,79,85,87,89,90,91,93,94,95,96,97,104,105,107,108,109,112,116,117,118,119,120,126,127,128,129,130,134,136,139,140,143,145,146,147,149,150]}.
2019-11-07T18:07:00,360  Executing request POST /arcgis/admin/machines/localhost/status HTTP/1.1
2019-11-07T18:07:00,414  Got response from HTTP request: <html lang="en">
2019-11-07T18:07:03,673  Executing request POST /arcgis/admin/system/configstore HTTP/1.1
2019-11-07T18:07:03,688  Got response from HTTP request: <html lang="en">


There is quite a bit of JSON data embedded in the results above, which can help you identify exactly what a feature service returns to a client when the client queries the service. The timestamps also help if you need to return to the full karaf.log and look for messages logged just before or just after a line matching the command's search patterns, to see whether additional information not captured by the command might help debug an issue.
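One way to make the embedded JSON easier to review is a small script. The helper below is hypothetical, written purely for this illustration; it pulls the JSON payload out of a "Got response" message like those above and pretty-prints it:

```python
import json
import re

# Match the JSON payload in a "Got response" message; the logger appends a
# trailing period after the payload, so allow for it and discard it.
PAYLOAD = re.compile(r"Got response from HTTP request: (\{.*\})\.?\s*$")

def pretty_payload(log_line):
    """Return the indented JSON payload of a logged response, or '' if none."""
    match = PAYLOAD.search(log_line)
    if not match:
        return ""
    return json.dumps(json.loads(match.group(1)), indent=2)

# A shortened version of one of the logged responses shown above
line = ('2019-11-07T18:06:57,573  Got response from HTTP request: '
        '{"objectIdFieldName":"objectid","objectIds":[3,5,6]}.')
print(pretty_payload(line))
```

Running the same function over every matching line extracted from karaf.log yields the kind of formatted output included in the attached PDF.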

The timestamps on each logged message can also provide empirical evidence of exactly how long it takes to get a response back from the feature service each time an HTTP request is made. Computing the delta between the date/time a request is logged and the date/time its response is logged can be valuable if you suspect latency introduced by geofence synchronization is causing a problem. Remember, nothing happens in zero time, and frequent queries every few seconds against a large feature record set can impact overall GeoEvent Server operations.
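As a quick illustration, the delta for the request / response pair logged at 18:06:56,895 and 18:06:56,906 above can be computed with a few lines of Python. The only assumption is the timestamp format string, which matches the log excerpt:

```python
from datetime import datetime, timedelta

# karaf.log timestamps use a comma before the milliseconds field
FMT = "%Y-%m-%dT%H:%M:%S,%f"

def latency_ms(request_ts, response_ts):
    """Elapsed milliseconds between two logged timestamps."""
    delta = datetime.strptime(response_ts, FMT) - datetime.strptime(request_ts, FMT)
    return delta / timedelta(milliseconds=1)

# The "Executing request" / "Got response" pair from the log excerpt above
print(latency_ms("2019-11-07T18:06:56,895", "2019-11-07T18:06:56,906"))  # 11.0
```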

Also, keep in mind that a feature service may be configured to return a maximum number of feature records for any given query. GeoEvent Server may have to make several queries to page through a complete feature record set when, for example, more than 1,000 feature records are being imported to update geofences.
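The returnIdsOnly=true queries in the log above hint at one way this paging can work: retrieve the full list of object ids first, then request the corresponding features in fixed-size batches. A minimal sketch of that id-batching pattern (an illustration only, not GeoEvent Server's actual implementation):

```python
def id_batches(object_ids, batch_size=1000):
    """Split a list of object ids into query-sized batches."""
    for i in range(0, len(object_ids), batch_size):
        yield object_ids[i:i + batch_size]

# Pretend the returnIdsOnly query came back with 2500 object ids; a client
# honoring a 1000-record limit would issue three follow-up feature queries.
object_ids = list(range(1, 2501))
print([len(batch) for batch in id_batches(object_ids)])  # [1000, 1000, 500]
```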

The techniques I have described provide a way to delve deeply into geofence synchronization and examine the REST requests and responses exchanged with a feature service. You can use them to measure request latency as well as observe implementation details such as how GeoEvent Server pages through large feature record sets or how a feature service handles a series of queries. I have attached a PDF illustrating the two dozen formatted log messages above, with additional formatting I applied manually to make the JSON in each logged message easy to read. I hope you find the combination of debug logging with scripted text extraction and string formatting a helpful debugging technique.

– RJ

This blog is one in a series of blogs discussing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration.

In this blog I will discuss GeoEvent Manager's user interface for viewing logged messages, the location of the actual log file on disk, and how logging can be configured -- specifically how to control the size of the log file and its rollover properties.

The GeoEvent Manager Logging Interface

ArcGIS GeoEvent Server uses Apache Karaf, a lightweight, flexible container, to support its Java runtime environment. Apache Karaf includes a powerful logging system based on OPS4J Pax Logging.

The GeoEvent Manager web application includes a simple user interface for the OPS4J logging system. You can use this interface to see the most recent messages logged by different components of ArcGIS GeoEvent Server. The UI illustrated below caches up to 500 logged messages and allows you to scroll through them, specify how many messages are listed per page, filter on a specific level of logged message (e.g. DEBUG, INFO, WARN, or ERROR), and perform keyword searches.

GeoEvent Manager Logging User Interface

A significant limitation of this logging interface is that only the most recent 500 logged messages are maintained in its cache, so any review or keyword search you perform is limited to recently logged messages. This means the velocity and volume of event records being processed, as well as the number of GeoEvent Services, inputs, and outputs you have configured, can affect (and limit) your ability to isolate logged messages of interest. A valuable debugging technique is to locate the actual log file on disk and open it in a text editor.

Location of the log file on disk

On a Windows platform, assuming your ArcGIS GeoEvent Server has been installed in the default folder beneath C:\Program Files, you should be able to locate the following system folder which contains the actual system log files.

C:\Program Files\ArcGIS\Server\GeoEvent\data\log

In this folder you will find one or more files with a base name karaf.log – these files can be opened in a text editor of your choice for content review and search. You can also use command-line utilities like tail, string processing utilities like sed, grep, and awk, as well as regular expressions to help isolate logged messages. Examples using these are included in other blogs in this series.
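For example, here are a couple of one-liners of the kind I mean. The sample file and its pipe-delimited layout are fabricated below so the commands are runnable as-is; in practice you would point them at the real karaf.log:

```shell
# Fabricate a tiny log excerpt following the default six-part layout
cat > karaf_sample.log <<'EOF'
2019-11-07T18:06:56,895 | DEBUG | qtp1028713843-317 | FeatureServiceOutboundTransport | 247 - com.esri.ges.framework - 10.7.0 | Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query
2019-11-07T18:06:56,906 | DEBUG | qtp1028713843-317 | FeatureServiceOutboundTransport | 247 - com.esri.ges.framework - 10.7.0 | Got response from HTTP request: {"objectIds":[3,5,6]}
2019-11-07T18:07:00,360 | WARN  | qtp1028713843-317 | ZKSerializer | 134 - com.esri.ges.fabric.core - 10.7.0 | Unrelated warning
EOF

# Isolate the feature service request/response pairs
grep -E 'Executing request|Got response' karaf_sample.log

# Keep only the timestamp and the message (fields 1 and 6 of the
# pipe-delimited layout)
awk -F'|' '{print $1, $6}' karaf_sample.log

# To watch the live log as it is written (press Ctrl+C to stop):
# tail -f karaf.log
```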

Only one log file, the file named karaf.log, is actively being written at any one time. When this file grows as large as the system configuration allows, it automatically rolls over and a new karaf.log file is created. Log files which have rolled over have a numeric suffix (e.g. karaf.log.1), and each such file's last-updated date/time will be older than that of the karaf.log currently being written.

If you open the karaf.log in a text editor, treat the file as read-only, since the logging system is actively writing to it. Be sure to periodically reload the file's content in your text editor so you are reviewing the latest messages.

How to specify an allowed log file size and rollover properties

Locate the org.ops4j.pax.logging.cfg configuration file in the ArcGIS GeoEvent Server's \etc folder:

C:\Program Files\ArcGIS\Server\GeoEvent\etc

Because the file is located beneath C:\Program Files, you will need to run your text editor as an administrator. You can then edit properties of the system log, such as the default logging level for all loggers (a "logger" in this context is any of the several components actively logging messages, such as the outbound feature adapter or the inbound TCP transport).

For example, at the 10.7 release a change was made to quiet the system logs by reducing the ROOT logging level from INFO to WARN so that only warnings are logged by default. You can see this specified in the following line in the org.ops4j.pax.logging.cfg configuration file:

# Root logger

log4j2.rootLogger.level = WARN

Searching the configuration file for the keyword "rolling" you will find lines which specify the karaf.log file's allowed size and rollover policy. Be careful -- not all of the lines specifying the rollover policy are necessarily in the same section of the configuration file; some may be located deeper in the file:

# Rolling file appender

log4j2.appender.rolling.type = RollingRandomAccessFile

log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log

log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i

log4j2.appender.rolling.append = true

log4j2.appender.rolling.layout.type = PatternLayout

log4j2.appender.rolling.layout.pattern = ${log4j2.pattern}

log4j2.appender.rolling.policies.type = Policies

log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy

log4j2.appender.rolling.policies.size.size = 16MB

log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy

log4j2.appender.rolling.strategy.max = 10

The settings above reflect defaults for the 10.7 release, which specify that the karaf.log should roll over when it reaches 16MB and that up to 10 indexed files will be used to archive older logged messages.
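If, for example, you wanted to retain roughly twice as much log history (about 320MB rather than 160MB), you might double the allowed file size while keeping the archive count the same. This is an illustrative change, not an Esri recommendation:

log4j2.appender.rolling.policies.size.size = 32MB

log4j2.appender.rolling.strategy.max = 10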

The anatomy of a logged message

Before we conclude our discussion of configuring the application logger, I would like to briefly cover the format of logged messages. The format is configurable; by default, a logged message has six parts, each separated by a pipe ( | ) character.

Logged messages have six parts

The thread identifier's default specification (see illustration below) sets a minimum width of 16 characters but no maximum length; some thread identifiers can be quite long. The class identifier's specification includes a precision which limits the identifier to the most significant part of the class name. In the illustration above, the fully-qualified class identifier com.esri.ges.fabric.core.ZKSerializer has been shortened to simply ZKSerializer. We will discuss the impact of this in a later blog.
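Putting the six parts together, a message following the default pattern looks something like the line below (the values are representative, not copied from a real log): a date/time, the message level, the thread identifier, the shortened class identifier, the bundle identifier / name / version, and finally the message text.

2019-11-07T18:06:56,906 | DEBUG | qtp1028713843-317 | ZKSerializer | 134 - com.esri.ges.fabric.core - 10.7.0 | Got response from HTTP request: {...}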

You can edit the org.ops4j.pax.logging.cfg configuration file to specify different patterns for the appender. You should refer to the Apache Logging Services online help before modifying the default appender pattern layout illustrated below.

# Common pattern layout for appenders

log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %geoeventBundleID - %geoeventBundleName - %geoeventBundleVersion | %m%n

log4j2.out.pattern = \u001b[90m%d{HH:mm:ss\.SSS}\u001b[0m %highlight{%-5level}{FATAL=${color.fatal}, ERROR=${color.error}, WARN=${color.warn}, INFO=${color.info}, DEBUG=${color.debug}, TRACE=${color.trace}} \u001b[90m[%t]\u001b[0m %msg%n%throwable


Using the logging interface provided by GeoEvent Manager is a quick, simple way of reviewing logged messages recently produced by system components as they ingest, process, and disseminate event data. Event record velocity and volume can of course increase the number of messages being logged, and raising the logging verbosity from ERROR or WARN to INFO or DEBUG can drastically increase the volume of logged messages. If running components are frequently logging messages, only the most recent messages will be displayed in the GeoEvent Manager user interface. Messages which have been pushed out of the cache can be reviewed by opening the karaf.log in a text editor. This is a key debugging technique, but you must be aware that the karaf.log is actively being written and will roll over as it grows beyond a specified size.

As you make and save changes to the system logging, for example, to request DEBUG logging on a specific logger, the changes will immediately be reflected in the org.ops4j.pax.logging.cfg configuration file. You can edit this file as an administrator and any changes you save will be picked up immediately; you do not have to stop and restart the ArcGIS GeoEvent Server service.

This blog is one in a series of blogs discussing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration.

In a client / server context ArcGIS GeoEvent Server sometimes acts as a client and at other times acts as a server. When an Add a Feature or an Update a Feature output is configured to add / update feature records in a geodatabase feature class through a feature service, ArcGIS GeoEvent Server is a client making requests on an ArcGIS Server feature service. In this blog I will show how you can isolate requests GeoEvent Server sends to an ArcGIS Server service and how to use the JSON from the request to debug issues you are potentially encountering.


A customer reports that an input connector they have configured appears to be successfully receiving and adapting data from a provider and event records appear to be processed as expected through a GeoEvent Service. The event record count on their output increments, but they are not seeing some – or any – features displayed by a feature layer they have added to a web map.

Request DEBUG logs for the outbound feature service transport

Components in the ArcGIS GeoEvent Server runtime log messages to provide information as well as note warnings and/or errors. Each component uses a logger, an object responsible for logging messages in the system's log file, which can be configured to generate different levels of messages (e.g. DEBUG, INFO, WARN, or ERROR).

In this case we want to request the com.esri.ges.transport.featureService.FeatureServiceOutboundTransport component log DEBUG messages to help us identify the problem. To enable DEBUG logging for a single component's logger:

  • In GeoEvent Manager, navigate to the Logs page and click Settings
  • Enter the name of the logging component in the text field Logger and select the DEBUG log level
  • Click Save

As you type the name of a logger, if the GeoEvent Manager's cache of logged messages contains a message from a particular component's logger, IntelliSense will help you identify the logger's name.


Querying for additional information

When a processed event record is routed to an Update a Feature output the data is first reformatted as Esri Feature JSON so that it can be incorporated into a map/feature service request. A request is then made using the ArcGIS REST API to either Add Features or Update Features.

An Add a Feature output connector has the easier job – it doesn't care whether a feature record already exists since it is not going to request an update. An Update a Feature output connector on the other hand needs to know the objectid or row identifier of the feature record it should update.

If the output has previously received an event record with the same TRACK_ID, it has likely already sent a request to the targeted map/feature service, querying for feature records using the attribute field specified as the Unique Feature Identifier Field to identify which feature records to update. The output maintains a cache mapping every event record's TRACK_ID to the object or row identifier of a corresponding feature record.
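The lookup-then-fail-over flow described above can be sketched by parsing the truncated query response captured in the DEBUG log; this is a minimal illustration of the cache logic, not the connector's actual implementation:

```python
import json

# Minimal sketch (not the connector's actual code) of the TRACK_ID cache an
# Update a Feature output maintains. The query response is the truncated
# feature service reply from the DEBUG log: an empty 'features' array.
query_response = json.loads('{"exceededTransferLimit": false, "features": []}')

track_to_oid = {}   # cache: TRACK_ID -> objectid / row identifier
for feature in query_response["features"]:
    attributes = feature["attributes"]
    track_to_oid[attributes["track_id"]] = attributes["objectid"]

# No cached identifier and no matching feature record: an output configured
# with Update Only = 'No' fails over to an addFeatures request instead.
fail_over_to_add = "8SKS617" not in track_to_oid
print(fail_over_to_add)   # True
```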

Here is what the logged DEBUG messages look like when an Update a Feature output queries to discover an object or row identifier associated with a feature record:


2019-06-05T15:12:34,324 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Querying for missing track id '8SKS617'


2019-06-05T15:12:34,489 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Posting to URL: https : // with parameters: f=json&token=QNv27Ov9...&where=track_id IN ('8SKS617')



2019-06-05T15:12:34,674 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"exceededTransferLimit":false,"features":[ ],"fields"...

Notice a few key values highlighted in the logged message's text above:

  • Line 1:  The output has recognized that it has not previously seen an event record with the TRACK_ID 8SKS617 (so it must query the map/feature service to see if it can find a matching feature record).
  • Line 2:  This is the actual query sent to the SampleRecord feature service's query endpoint requesting a feature record whose track_id attribute is one of several in a specified list (8SKS617 is actually the only value in the list). The query requests that the response include only the track_id attribute and an object identifier value.
  • Line 3:  The ArcGIS Server service responds with an empty array features[ ]. This indicates that there are no features whose track_id attribute matches any of the values in the query's list.

The output was configured with its Update Only parameter set to 'No' (the default). So, given that there is no existing record whose track_id attribute matches the event record's tagged TRACK_ID field, the output connector fails over to add a new feature record instead:


2019-06-05T15:12:34,769 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Posting to URL: https : // with parameters: f=json&token=QNv27Ov9...&rollbackOnFailure=true features=[{"geometry":{"x":-115.625,"y":32.125, "spatialReference":{"wkid":4326}},"attributes":{"track_id":"8SKS617","reported_dt":1559772754211}}].


2019-06-05T15:12:34,935 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"addResults":[{"objectId":1,"globalId":"{B1384CE2-7501-4753-983B-F6640AB63816}", "success":true}]}.

Again, take a moment to examine the highlighted text:

  • Line 4:  The ArcGIS REST API endpoint to which the request is sent is the Add Features endpoint. The request's features parameter carries an Esri Feature JSON representation of the event data.
  • Line 5:  The ArcGIS Server service responds with a block of JSON indicating that it successfully added a feature record, assigning the new record the object identifier '1' and a globally unique identifier (the feature service I'm using in this example is actually one hosted by my ArcGIS Enterprise portal).

The debug logs include the Esri Feature JSON constructed by the output connector. You can actually copy and paste this JSON into the feature service's web page in the ArcGIS REST Services Directory. This is an excellent way to abstract ArcGIS GeoEvent Server from your debugging workflow and determine if there are problems with how the JSON is formatted or reasons why a feature service might reject a client's request.

Add Features using ArcGIS REST Services web form

I used this technique once to demonstrate that a polygon geometry created by a Create Buffer processor in a GeoEvent Service had several dozen vertices, allowing the geometry to approximate a circular area. When the polygon was committed to the geodatabase as a feature record, however, its geometry had been generalized such that it only had a few vertices. Web maps were displaying very rough approximations of the area of interest, not circular buffers. But it wasn't ArcGIS GeoEvent Server that had failed to produce a geometry representing a circular area. The problem was somewhere in the back-end relational database configuration.

Rollback on Failure?

There is a query parameter on Line 4 in the illustration above which is easily overlooked: rollbackOnFailure=true

The default action for both the Add a Feature and Update a Feature outputs is to request that the geodatabase roll back the feature record transaction if a problem is encountered. In many cases this is why customers are not seeing all of the feature records they expect updated in a feature layer they have added to a web map. Consider the following fields specification for the targeted feature service's feature layer:

    track_id ( alias: track_id, type: esriFieldTypeString, length: 512, editable: true, nullable: true )
    reported_dt ( alias: reported_dt, type: esriFieldTypeDate, length: 29, editable: true, nullable: true )
    objectid ( alias: objectid, type: esriFieldTypeOID, length: 8, editable: false, nullable: false )
    globalid ( alias: globalid, type: esriFieldTypeGlobalID, length: 38, editable: false, nullable: false )

Suppose for a moment that the esriFieldTypeString specification for the track_id attribute specified that the string should not exceed seven characters. If a web application (client) were to send the feature service a request with a value for the track_id which was longer than seven characters, the data would not comply with the feature layer's specification and the feature service would be expected to reject the request.

Likewise, if attribute fields other than esriFieldTypeOID or esriFieldTypeGlobalID were specified as not allowing null values, and a client request was made whose attribute values were null, the data would not be compliant with the feature layer's specification; the feature service should reject the request.
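The two compliance checks just described – string length and nullability – can be sketched as a small validation routine. The field names and the seven-character limit follow the hypothetical specification above; the check itself is my own illustration, not the feature service's actual code:

```python
# Hypothetical sketch of the validation a feature service applies before
# accepting a record. Field names follow the layer specification above;
# the seven-character track_id limit is the supposition from the text.
fields = {
    "track_id":    {"type": "esriFieldTypeString", "length": 7, "nullable": True},
    "reported_dt": {"type": "esriFieldTypeDate", "nullable": True},
}

def violates_spec(attributes: dict) -> bool:
    """Return True if any attribute breaks a length or nullability rule."""
    for name, spec in fields.items():
        value = attributes.get(name)
        if value is None:
            if not spec["nullable"]:
                return True
            continue
        if spec["type"] == "esriFieldTypeString" and len(value) > spec["length"]:
            return True
    return False

print(violates_spec({"track_id": "8SKS617", "reported_dt": 1559772754211}))  # False
print(violates_spec({"track_id": "TOO-LONG-ID", "reported_dt": None}))       # True
```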

By default, both the Add a Feature and Update a Feature output connectors work through a cache of event records they have formatted as Esri Feature JSON, placing the formatted data in one or more requests sent to the targeted feature service's feature layer. Each request, again by default, is allowed to contain up to 500 event / feature records.

Update a Feature default properties

It only takes one bad apple to spoil a batch. If even one processed event record in a transaction containing ten, fifty, or a hundred feature records is not compliant with string length restrictions, value nullability restrictions – or any other restriction enforced by an ArcGIS Server feature service – the entire transaction will roll back and none of the feature records associated with that batch of processed event records will be updated.

Reduce the Maximum Features Per Transaction

You cannot change the rollback on failure behavior. The outbound connectors interfacing with ArcGIS Server feature services do not implement a mechanism to retry an add/update feature record operation because one or more feature records in a batch do not comply with a feature layer's specification.

You can change the number of processed event records an Add a Feature or Update a Feature output connector will include in each transaction. If you configure your output to allow a maximum of one feature record per transaction you can begin to work around the issue of one bad record spoiling an entire transaction. If bad data or null values occasionally creep into processed event records, only the bad records will fail to update a corresponding feature record, and the rollback on failure won't suppress any valid feature record updates.

The downside to this is that REST requests are inherently expensive. If it were to take as little as 20 milliseconds to make a round-trip to the database and receive a response to a transaction request you could effectively cut your event throughput to less than 50 event records per second if you throttle feature record updating by allowing only one processed event record per transaction. The upside to reducing, at least temporarily, the number of records allowed in a transaction is that it makes the messages being logged much, much easier to read. It also guarantees that each success / fail response from the ArcGIS Server feature service can be traced back to a single add / update feature request.
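The throughput ceiling described above is simple arithmetic; a quick back-of-the-envelope calculation, assuming the 20 millisecond round trip and one event record per transaction from the text:

```python
# Assumed round-trip latency for one transaction (the text's example figure).
round_trip_ms = 20
records_per_transaction = 1

# With one record per transaction, throughput is capped by transaction rate:
# 1000 ms / 20 ms per round trip = 50 transactions (records) per second.
max_records_per_second = (1000 / round_trip_ms) * records_per_transaction
print(max_records_per_second)   # 50.0
```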

Timestamps – another benefit to logging DEBUG messages for the outbound transport

Every logged message includes a timestamp with millisecond precision. This can be very useful when debugging unexpected latency when interacting with a geodatabase's feature class through an ArcGIS Server's REST interface.

Looking back at the two tables above with the logged DEBUG messages, the time difference between the messages on Line 1 and Line 2 is 165 milliseconds (489 - 324 = 165). That tells us it took over a tenth of a second for the output to formulate its query for "missing" object identifiers needed to request updates for specific feature records. It takes another 185 milliseconds (674 - 489 = 185) to actually query for the needed identifiers and discover that there are no feature records with those track_id values.
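You can verify this arithmetic by parsing the timestamps straight out of the logged messages; a quick sketch:

```python
from datetime import datetime, timedelta

# Millisecond gaps between the three DEBUG messages logged above.
fmt = "%Y-%m-%dT%H:%M:%S,%f"
t1 = datetime.strptime("2019-06-05T15:12:34,324", fmt)  # query for missing track id
t2 = datetime.strptime("2019-06-05T15:12:34,489", fmt)  # query posted to the service
t3 = datetime.strptime("2019-06-05T15:12:34,674", fmt)  # response received

print((t2 - t1) / timedelta(milliseconds=1))   # 165.0
print((t3 - t2) / timedelta(milliseconds=1))   # 185.0
```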

To be fair, you should expect this latency to drop as ArcGIS Server and/or your RDBMS begin caching information about the requests being made by clients. But it is important to be able to measure the latency ArcGIS GeoEvent Server is experiencing. If every time an Add a Feature output connector's timer expires (once every second by default) it takes a couple hundred milliseconds to complete a transaction, you should have a pretty good idea how many transactions you can make in one second. You might need to increase your output's Update Interval so that it holds its cache of processed event records longer before starting a series of transactions. If you do this, know that as updates arrive for a given tracked asset older records will be purged from the cache. When updating feature records the cache is managed to contain only one processed event record for each unique TRACK_ID.


Taking the time to analyze the DEBUG messages logged by the outbound feature service transport can provide you a wealth of information. You can immediately see if values obtained from an event record's tagged TRACK_ID field are reasonably expected to be found in whatever feature layer's attribute field is being used to query for feature records that correlate to processed event records. You can check to see if any values in a processed event record are unexpectedly null, have strings which are longer than the feature layer will accept, or – my favorite – contain what ArcGIS Server suspects is HTML or SQL code resulting in a service rejecting the transaction to prevent a suspected injection attack.

ArcGIS GeoEvent Server, when interfacing with an RDBMS through a map / feature service's REST interface, is acting as any other web mapping application client would act in making requests on a service it assumes is available. You can eliminate GeoEvent Server entirely from your debugging workflow if you copy / paste information like the Esri Feature JSON from a DEBUG message logged by the outbound transport into an HTML page in the ArcGIS REST Services Directory. I did exactly this to prove, once, that polygon geometries with hundreds of vertices modeling a circular area were somehow being generalized as they were committed into a SQL Server back-end geodatabase.

If a customer reports that some – or all – of the features they expect should be getting added or updated in a feature layer are not displayed by a web map's feature layer, take a close look at the requests the configured output is sending to the feature service.


In this blog I will illustrate a couple of techniques I use to identify more granular component loggers, rather than requesting that the ROOT component produce DEBUG messages for all component loggers. I will also introduce a couple of command-line utilities I frequently use to interrogate the ArcGIS GeoEvent Server's system log file. I'll consider a specific scenario and show how to isolate logged messages that provide information about an output's requests to a feature service, identifying the criteria used to discover and delete feature records.


A customer has configured the Delete Old Features capability on an Add a Feature output connector and reports feature records are being deleted from the geodatabase earlier than expected. Following advice from the blog Add/Update Feature Output Connectors they have captured a few logged messages from the outbound feature transport but are not seeing any information about criteria the connector is using to determine which feature records should be deleted or when the records should be deleted.

Feature Transport - Delete Features

What is the outbound feature transport telling us?

The illustration above does not give us much information. It confirms that an Add a Feature output is periodically, once a minute, making requests on a feature service to delete old feature records and that, for the three intervals shown, no feature records were deleted (the JSON array in the response from the feature service is empty).

If one or more existing feature records had satisfied criteria included in the delete features request, then the logged messages would contain feature record identifiers to confirm which feature records had been deleted. Hypothetically, looking at the raw logged messages in the karaf.log file, we would expect to see a message similar to the following:

2019-06-03T16:42:41,474 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord][0][FeatureServer] | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"deleteResults":[{"objectid":3, ... "success":true},{"objectid":4, ... "success": true}]}.

The outbound feature transport is only confirming what has been deleted, not criteria used to determine what should be deleted. The information we need, hopefully, is being logged by a different component logger.

How to determine which component logger to watch

As I mentioned in the blog Configuring the application logger, the logging system implemented by ArcGIS GeoEvent Server logs messages from the Java runtime. The messages being logged generally contain good information for software developers, but are rather hard for a GIS analyst to review and interpret. If someone from the product team has not identified a component logger from which you should request more detailed log messages, your only option is to request DEBUG logging on the ROOT component.

If you elect to do this you must know that the karaf.log will quickly grow very large and will roll over as described in the aforementioned blog.

All hope is not lost, however. One technique I have found helpful is to turn off as many of my running inputs and outputs as I can to quiet ArcGIS GeoEvent Server's activity and then briefly, for perhaps a minute or two, request DEBUG level messages be produced by setting the debugging level on the ROOT component. GeoEvent Manager's logging user interface will quickly cache up to 500 messages and you can use the built-in IntelliSense to at least get an idea of which components are actively running and producing log messages.

IntelliSense illustration

Once you understand that both the Add a Feature and Update a Feature output connectors use endpoints exposed through the ArcGIS REST Services Directory to interface with their targeted feature services, one component logger should stand out – the HTTP Client component logger highlighted in the illustration above. The information we need on the criteria used to identify feature records to delete is probably being logged as part of an HTTP REST request.

Request DEBUG logs for the HTTP Client

In this case we want to request the com.esri.ges.httpclient.Http component log DEBUG messages to help us identify the problem. To enable DEBUG logging for the identified component's logger:

  • Navigate to the Logs page in GeoEvent Manager and click the Settings button.
  • Restore the ROOT component logger to its default level WARN and click Save.
  • Specify the name of the HTTP Client component logger, select the DEBUG log level, and Save again.

ArcGIS GeoEvent Server is fundamentally RESTful, which means you will still have a high volume of messages being logged to the karaf.log – but not as many as if you had left DEBUG logging set on the ROOT component logger.

Useful command-line utilities for interrogating karaf.log

I operate almost exclusively on a Windows platform, but Cygwin is one of the first things I install whenever I get a new machine. Cygwin is a free, open-source environment which provides a native Windows integrated command-line shell from which I can execute some of my favorite Unix utilities like sed, grep, awk, and tail. There are probably other packages available which provide similar utilities and tools, but I like Cygwin.

If I open a Cygwin command-line shell I can change directory to where the karaf.log file is being written and generate an active tail of the log so that I don't have to open the log file in a text editor and frequently re-load the file as its content is updated. I am also able to pipe the streaming content from tail through grep to limit the logged messages displayed to those which contain specific keywords or phrases. For example:


rsunderman@localhost //localhost/C$/Program Files/ArcGIS/Server/GeoEvent/data/log


$ tail -0f karaf.log |grep --line-buffered 'where.*reported_dt'


2019-06-07T16:33:19,545 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:33:19').


2019-06-07T16:34:20,269 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:34:20').


2019-06-07T16:35:20,433 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:35:20').

The above quickly reduces all the noise logged by the HTTP Client component logger to only those messages which include the name of the attribute field reported_dt which the Add a Feature output was configured to use when identifying feature records older than a specified number of minutes. The criteria we are looking for is clearly identified as a parameter the HTTP Client is adding to the request it is constructing to send to the feature service to identify and delete old feature records.

The system I am running is in California, which is seven hours behind GMT (UTC-07:00) while daylight savings time is observed. The date/time values in the reported_dt attribute of each feature record in my feature service are expressed as epoch long integers and represent GMT values. My output is configured to query every 60 seconds and delete feature records which are more than six hours old. The logged messages above bear timestamps which are roughly 60 seconds apart, and each where clause identifies any feature record whose date/time is older than "now" + 07:00 hours (the UTC offset) - 06:00 hours (the age at which a feature record is considered "old").
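The where clause arithmetic can be double-checked with a few lines of code; a sketch assuming a server observing PDT (UTC-07:00) and the six-hour age threshold described above:

```python
from datetime import datetime, timedelta, timezone

# Re-creating the cutoff in the first logged where clause. Assumptions: the
# server's local time is PDT (UTC-7) and records older than 6 hours are deleted.
local_now = datetime(2019, 6, 7, 16, 33, 19, tzinfo=timezone(timedelta(hours=-7)))

# reported_dt values are stored as UTC, so the cutoff is UTC "now" minus 6 hours.
cutoff_utc = local_now.astimezone(timezone.utc) - timedelta(hours=6)
print(cutoff_utc.strftime("%Y-%m-%d %H:%M:%S"))   # 2019-06-07 17:33:19
```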

Using the ArcGIS REST Services Directory to query feature records from the feature service, I can quickly see that feature records which are not yet six hours old (relative to GMT) remain but those I add or update with a reported_dt value which is at least six hours old get deleted every 60 seconds.

What if the above had not yielded the information we needed?

We could always fall back to setting the ROOT logger to DEBUG so that all component loggers produce debug messages. While this is extremely verbose, the technique using the tail and grep command-line utilities can still be used to find anything which mentions our particular feature service's REST endpoint.

In this case my feature service's name was New_SampleRecord, so I can reasonably expect to find logged messages which include references to:  New_SampleRecord/FeatureServer/0/deleteFeatures

A grep command, using a regular expression pattern match like the following, should find only those logged messages which appear to be attempting to delete features from the feature layer in question:
tail -0f karaf.log |grep --line-buffered 'SampleRecord.*FeatureServer.*deleteFeatures'

Tests using the above grep log message filter reveal about 75 messages logged every 60 seconds which include a reference to the deleteFeatures endpoint for the feature layer my output is targeting. Copying and pasting these lines into a text editor I can review them to discover that only one message contains a SQL WHERE clause. Such a clause would be required to identify records with a date/time value which should be considered "old".

While the date/time value in this logged message is URL-encoded – this particular message depicts text ready to be sent out over the HTTP wire – we can still use the logged message to understand the criteria being applied by the ArcGIS GeoEvent Server's output.

2019-06-07T18:10:06,956 | DEBUG | HttpRequest Worker Thread: /server/rest/services/New_SampleRecord/FeatureServer/0/deleteFeatures | wire | 60 - com.esri.ges.framework.httpclient - 10.7.0 | http-outgoing-27360 >> "f=json&token=HM85k4E...&rollbackOnFailure=true&where=reported_dt+%3C+timestamp+%272019-06-07+19%3A10%3A06%27"

When someone asks you, "What time is it?", you are probably assuming he or she wants to know the local time where the two of you are right now. As I write this, the time now is Tuesday, March 12, 2019 at about 2:25 PM in Redlands, California, USA.

Typically, we do not qualify our answers so explicitly. We say "It's 2 o'clock" and assume it's understood that this is the time right now in Redlands, California. But that is sort of like answering a query about length or distance by simply saying "36". Is that feet, meters, miles, or kilometers?

Last weekend, here in California, we set our clocks ahead one hour to honor daylight savings time (DST). California is now observing Pacific Daylight Time (PDT) which is equal to UTC-7:00 hours. When we specify the time at which an event was observed, we should include the time zone in which the observation is made as well as whether or not the time reflects a local convention honoring daylight savings time.

When ArcGIS GeoEvent Server receives data for processing, event records usually include a date/time value with each observation. Often the date/time value is expressed as a string and does not specify the time zone in which the date/time is expressed or whether the value reflects a daylight savings time offset. These are sort of like the "units" (e.g. feet, meters, miles, or kilometers) which qualify a date/time value.

The intent of this blog is to identify when GeoEvent Server assumes a date/time value is expressed in Coordinated Universal Time (UTC) versus when it is assumed that a date/time expresses a value consistent with the system's locale. We'll explore a couple situations where this might be important and the steps you can take to configure how date/time values are handled and displayed.

Event data ingest should generally assume date/time values are expressed as UTC values

There are several reasons for this. In the interest of brevity, I'll simply note that GeoEvent Server is running in a "server" context. The assumption is that the server machine is not necessarily located in the same time zone as the sensors from which it is receiving data and that clients interested in visualizing the data are likewise not necessarily in the same time zone as the server or the sensors. UTC is the time standard commonly used around the world. The world's timing centers have agreed to synchronize, or coordinate, their date/time values -- hence the name Coordinated Universal Time.(1)

If you have ever used the ArcGIS REST Services Directory to examine the JSON representation of feature records which include a date/time field whose data type is esriFieldTypeDate, you have probably noticed that the value is not a string but a number: an epoch long integer representing the number of milliseconds since the UNIX Epoch (January 1, 1970, midnight UTC). The default is to express the value in UTC.(2)(3)
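For example, converting one of these epoch long integers back to a readable UTC date/time takes only a couple of lines:

```python
from datetime import datetime, timezone

# An esriFieldTypeDate value is epoch milliseconds, expressed in UTC by default.
millis = 1552400730000
utc_value = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(utc_value.isoformat())   # 2019-03-12T14:25:30+00:00
```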

When does GeoEvent Server assume the date/time values it receives are UTC values?

Out-of-the-box, GeoEvent Server supports the ISO 8601 standard for representing date/time values.(4)

It is unusual, however, to find sensor data which expresses the date/time value "March 12, 2019, 2:25:30 pm PDT" as 2019-03-12T14:25:30-07:00. So when a GeoEvent Definition specifies that a particular attribute should be handled as a Date, inbound adapters used by GeoEvent Server inputs will compare received string values to see if they match one of a few commonly used date/time patterns.

For example, GeoEvent Server, out-of-the-box, will recognize the following date/time values as Date values:

  • Tue Mar 12 14:25:30 PDT 2019
  • 03/12/2019 02:25:30 PM
  • 03/12/2019 14:25:30
  • 1552400730000

When one of the above date/time values is handled, and the input's Expected Date Format parameter does not specify a Java SimpleDateFormat expression / pattern, GeoEvent Server will assume the date/time value represents a Coordinated Universal Time (UTC) value.

When will GeoEvent Server assume a date/time value is expressed in the server machine's locale?

When a GeoEvent Server input is configured with a Java SimpleDateFormat expression / pattern the assumption is the input should convert date/time values it receives into an epoch long integer, but treat the value as a local time, not a UTC value.

For example, if your event data represents its date/time values as "Mar 12 2019 14:25:30" and you configure a new Receive JSON on a REST Endpoint  input to use the pattern matching expression MMM dd yyyy HH:mm:ss as its Expected Date Format property, then GeoEvent Server will assume the event record's date/time expresses a value consistent with the system's locale and will convert the date/time to the long integer value 1552425930000.
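You can reproduce this local-time interpretation yourself; a minimal sketch, assuming a fixed PDT (UTC-07:00) offset in place of the server's actual locale:

```python
from datetime import datetime, timezone, timedelta

# Sketch of how a date/time string without time zone "units" changes meaning
# depending on the interpretation. A fixed PDT offset (UTC-7) stands in for
# the server machine's locale to keep the example reproducible.
PDT = timezone(timedelta(hours=-7))

received = "Mar 12 2019 14:25:30"                 # no time zone specified
naive = datetime.strptime(received, "%b %d %Y %H:%M:%S")

as_local = naive.replace(tzinfo=PDT)              # treated as server-local time
as_utc = naive.replace(tzinfo=timezone.utc)       # treated as a UTC value

print(int(as_local.timestamp() * 1000))           # 1552425930000
print(int(as_utc.timestamp() * 1000))             # 1552400730000
```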

You can use the EpochConverter online utility to show equivalent date/time string values for this long integer value. Notice in the illustration below that the value 1552425930000 (expressed in epoch milliseconds) is equivalent to both the 12th of March, 2019, at 9:25 PM Greenwich Mean Time (GMT) and 2:25 PM Pacific Daylight Time (PDT):

EpochConverter online utility

The utility's conversion notes that clocks in my time zone are currently seven hours behind GMT and that daylight savings time is currently being observed. You should note that while GMT and UTC are often used interchangeably, they are not the same.(5)


What if I have to use a SimpleDateFormat expression, because my date/time values are not in a commonly recognized format, but my client applications expect date/time values will be expressed as UTC values?

You have a couple of options. First, if you have the ability to work with your data provider, you could request that the date/time values sent to you specify a time zone as well as the month, day, year, hour, minute, second (etc.).

For example, suppose the event data you want to process could be changed to specify "Mar 12 2019 14:25:30 GMT". This would enable you to configure a Receive JSON on a REST Endpoint  input to use the pattern matching expression MMM dd yyyy HH:mm:ss zzz as its Expected Date Format property since information on the time zone is now included in the date/time string. The input will convert the date/time string to 1552400730000 which is a long integer equivalent of the received date/time string value.

Using the EpochConverter online utility to show the equivalent date/time string values for this long integer value, you can see that the Date value GeoEvent Server is using is a GMT/UTC value:

If the data feed from your data provider cannot be modified you can use GeoEvent Server to compute the proper UTC offset for the ingested "local" date/time value within a GeoEvent Service.

Because GeoEvent Server handles Date attribute values as long integers, in epoch milliseconds, you can use a Field Calculator to add (or subtract) a number of milliseconds equal to the number of hours you need to offset a date/time value to change its representation from "local" time to UTC.

The problem, for a long time, was that you had to use a hard-coded constant value in your Field Calculator's expression, which left your GeoEvent Service vulnerable twice a year to time changes if your community started and later stopped observing daylight saving time. Beginning with ArcGIS GeoEvent Server 10.5.1, the Field Calculator supports a new wrapper function that helps address this: currentOffsetUTC()

A Field Calculator, running within a GeoEvent Service on my local server, evaluates currentOffsetUTC() and returns the value -25200000, the millisecond difference between my local system's current date/time and UTC. Currently, here in California, we are observing Pacific Daylight Time (PDT) which is equal to UTC-7:00.

Even though GeoEvent Server assumes date/time values such as "Mar 12 2019 14:25:30" (received without any time zone "units") represent local time values -- because a pattern matching expression MMM dd yyyy HH:mm:ss must be used to interpret the received date/time string values -- I was able to calculate a new date/time value using a dynamic offset and output a value which represents the received date/time as a UTC value. All I had to do was route the event record, with its attribute value ReportedDT (data type: Date) through a Field Calculator configured with the expression:  ReportedDT + currentOffsetUTC()
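A rough Python equivalent of that Field Calculator expression is sketched below. The function offset_utc_ms is a hypothetical stand-in for currentOffsetUTC(), and the time zone is assumed to be Pacific; it is not how GeoEvent Server implements the function:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def offset_utc_ms(epoch_ms: int, tz_name: str) -> int:
    """Millisecond difference between the named zone and UTC at the given
    instant (negative west of Greenwich, daylight-saving aware)."""
    local = datetime.fromtimestamp(epoch_ms / 1000, tz=ZoneInfo(tz_name))
    return int(local.utcoffset().total_seconds() * 1000)

# "Mar 12 2019 14:25:30" as interpreted by the input (a local Pacific value):
reported_dt = 1552425930000

offset = offset_utc_ms(reported_dt, "America/Los_Angeles")  # -25200000 during PDT
adjusted = reported_dt + offset                              # 1552400730000
print(adjusted)
```

Because the offset is computed from the current rules for the time zone, the expression keeps working across daylight saving transitions, which is exactly what the hard-coded constant could not do.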

How do I configure a web map to display local time rather than UTC time values?

A frequent complaint about the general recommendation to express date/time values as UTC is that, when feature records updated by GeoEvent Server are visualized on a web map, the web map's pop-ups display the date/time values in UTC rather than local time.

It is true that, generally, we do not want to assume that a server machine and sensor network are both located in the same time zone as the localized client applications querying the feature record data. That does not mean that folks in different time zones want to perform the mental arithmetic needed to convert a date/time value displayed by a web map's pop-up from UTC to their local time.

In the past I have recommended data administrators work around this issue using a Field Calculator to offset the date/time, as I've shown above, by a number of hours to "falsely" represent date/time values in their database as local time values. I say "falsely" because most map/feature services are not configured to use a specified time zone. For a long time it wasn't even possible to change the time zone a map/feature service used to represent its temporal data values. There are web pages in the ArcGIS REST API which still specify that feature services return date/time values only as epoch long integers whose UTC values represent the number of milliseconds since the UNIX Epoch (January 1, 1970, midnight). So even if a map/feature service is configured to use a specific time zone, we should not expect all client applications to honor the service's specification.

For now, let's assume our published feature service's JSON specification follows the default and client apps expect UTC values returned when they query the map/feature service. If we use GeoEvent Server to falsely offset the date/time values to local time, the data values in our geodatabase are effectively a lie. Sure, it is easy to say that all client applications have been localized, and assume all server machines, client applications, and reporting sensors are all in one time zone; all we are trying to do is get a web map to stop displaying date/time values in UTC.

But there is a better way to handle this problem. Testing the latest public release (10.6.1) of the Enterprise portal web map and ArcGIS Online web map I found that pop-ups can be configured with custom expressions which dynamically calculate new values from existing feature record attributes. These new values can then be selected as the attributes to show in a web map's pop-up rather than the "raw" values from the feature service.

Below are the basic steps necessary to accomplish this:

  1. In your web map, from the Content tab, expand the feature layer's context menu and click Configure Pop-up.
  2. On the lower portion of the Configure Pop-up panel, beneath Attribute Expressions, click Add.
  3. Search the available functions for date functions and build an expression like the one illustrated below.

Web Map | Custom Attributes

Assign the new custom attribute a descriptive name (e.g. localDateTime) and save the attribute calculation. You should now be able to select the dynamic attribute to display along with any other "raw" attributes from the feature layer.
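The effect of such a pop-up expression is simply an epoch-to-local rendering of the stored UTC instant. Here is the same idea sketched in Python, with the viewer's time zone assumed to be Pacific (the real pop-up expression would use the web map's date functions, not Python):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

epoch_ms = 1552425930000  # UTC instant stored by the feature service

# Render the instant in the viewer's (assumed) local time zone for display.
local = datetime.fromtimestamp(epoch_ms / 1000, tz=ZoneInfo("America/Los_Angeles"))
print(local.strftime("%b %d %Y %H:%M:%S"))  # Mar 12 2019 14:25:30
```

The underlying feature record keeps its honest UTC value; only the displayed text is localized.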

Web Map | Custom Pop-up



(1)  UTC – Coordinated Universal Time

(2)  ArcGIS for Developers | ArcGIS REST API

(3)  ArcGIS for Developers | Common Data Types | Feature object

(4)  World Wide Web Consortium | Date and Time Formats

(5) - The Difference Between GMT and UTC

(6)  ArcGIS for Developers | ArcGIS REST API | Enterprise Administration | Server | Service Types



One of the first contributions I made to the GeoEvent space on GeoNet was a blog titled Understanding GeoEvent Definitions. Technical workshops and best practice discussions for years have recommended that, when you want to use data from event records to add or update feature records in a geodatabase, you start by importing a GeoEvent Definition from the targeted feature service. This allows you to explicitly map an event record’s structure as the last processing step before an add / update feature output. The field mapping guarantees that service requests made by GeoEvent Server match the schema expected by the feature service.

In this blog I would like to expand upon this recommendation and introduce flexibility you may not realize you have when working with feature records in both feature services and stream services. Let's begin by considering a relatively simple GeoEvent Definition describing the structure of a "sample" event record:

GeoEvent Definition


Different types of services will have different schemas

I could use GeoEvent Manager and the event definition above to publish several different types of services:

  • A traditional feature service using my GIS Server's managed geodatabase (a relational database).
  • A hosted feature service using a spatiotemporal big data store configured with my ArcGIS Enterprise.
  • A stream service without any feature record persistence and no associated geodatabase.


Following the best practice recommendation, a Field Mapper Processor should be used to explicitly map an event record structure and ensure that event records routed to a GeoEvent Server output match the schema expected by the service. The GeoEvent Service illustrated below can be used to successfully store feature records in my GIS Server's managed geodatabase. The same feature records can be stored in my ArcGIS Enterprise's spatiotemporal big data store with copies of the feature records broadcast by a stream service:

GeoEvent Service


But if you compare the feature records broadcast by the stream service with feature records queried from the different feature services and data stores you should notice some subtle differences. The schemas of the various feature records are not the same:


Feature Records


You might notice that the stream service's geometry is "complete". It has both the coordinate values for the point geometry and the geometry's spatial reference, but this is not what I want to highlight. The feature services also have the spatial reference, they just record it as part of the overall service's metadata rather than including the spatial reference as part of each feature record.

What I want to highlight are the attribute values in the relational data store's feature record and spatiotemporal big data store's feature record which are not in the stream service's feature record. These additional identifier values are created and maintained by the geodatabase and you cannot use GeoEvent Server to update them.

Recall that the SampleRecord GeoEvent Definition illustrated at the top of this article was successfully used to add and update feature records in the different data stores. If new GeoEvent Definitions were imported from each feature service, however, the imported event definitions would reflect the actual schema of their respective feature classes:

GeoEvent Definition

Since the highlighted attribute fields are created and maintained by the geodatabase and cannot be updated, the best practice recommendation is to delete them from the imported GeoEvent Definitions. Even if event records you ingest for processing happen to have string values you think appropriate to use as a globalid for a spatiotemporal feature record, altering the database's assigned identifier would be very bad.

But if I delete the fields from the imported GeoEvent Definitions ...

Exactly. The simplest way to convey the best practice recommendation to import a GeoEvent Definition from a feature service is to say that this ensures event records mapped to the imported event definition will exactly match the structure expected by the feature service. In service-oriented architecture (SOA) terminology this is "honoring the service's contract."

Maybe you did not know that the identifier fields could be safely deleted from the imported GeoEvent Definition, and so chose to keep them, but leave them unmapped when configuring your final Field Mapper Processor. The processor will assign null values to any unmapped attribute fields, and the feature service knows to ignore attempts to update the values that are created and maintained by the geodatabase, so there is really no harm in retaining the unneeded fields. But unless you want a Field Mapper Processor to place a null value in an attribute field, it is best not to leave attribute fields unmapped.
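The behavior described above can be sketched in a few lines of Python. The function, field names, and mapping below are illustrative only, not GeoEvent Server's actual implementation:

```python
def field_map(event, mapping, target_fields):
    """Copy mapped values into the target schema; any target field without a
    mapping comes out as None -- the analogue of an unmapped attribute field
    receiving a null value."""
    return {field: event.get(mapping.get(field)) for field in target_fields}

event = {"name": "sensor-07", "temp": 71.3}
mapping = {"SensorName": "name", "Temperature": "temp"}   # globalid left unmapped
target = ["SensorName", "Temperature", "globalid"]

result = field_map(event, mapping, target)
print(result)
# {'SensorName': 'sensor-07', 'Temperature': 71.3, 'globalid': None}
```

The unmapped globalid comes through as a null, which the feature service will ignore for geodatabase-maintained fields but would happily write into any ordinary field.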

Is it OK to use a partial GeoEvent Definition when adding or updating feature records?

Yes, though you generally only do this when updating existing feature records, not when adding new feature records.

Say, for example, you had published a feature service which specified the codeword attribute could not be null. While such a restriction cannot be placed on a feature service published using GeoEvent Manager, you could use ArcGIS Desktop or ArcGIS Pro to place a restriction nullable: false on a feature class's attribute field to specify that the field's value may not be assigned a null value.

If you were using GeoEvent Server to add new feature records to the feature class and left one or more attribute fields unmapped in the final Field Mapper, and those attribute values are not allowed to be null, requests from GeoEvent Server would be rejected by the feature service -- the add record request does not include sufficient data to satisfy all the restrictions specified by the feature service.

Feature services which have nullable: false restrictions on attribute fields normally also specify a default value to use when a data value is not specified. Assuming the event record you were processing did not have a valid codeword, you could simply delete that attribute field from the Target GeoEvent Definition used by your final Field Mapper and allow the feature service to supply a default value for the missing, yet required, attribute. If the feature service spec does not include default values for required fields, well then, the processing you do within your GeoEvent Service will have to come up with a codeword value.

The point is, if you do not want to attempt to update a particular attribute value in a feature record, either because you do not have a meaningful value, or you do not want to push a null value into the feature record, you can simply not include that attribute field in the structure or schema of event records you route to an output.
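To make the idea concrete, here is a hedged sketch of the kind of partial update a client could send to a feature service's applyEdits endpoint. The layer URL, key field, and attribute names are hypothetical; the point is simply that omitted fields are not touched:

```python
import json

# Only the attributes we actually want to change are included in the update.
# Fields omitted from the payload are left untouched by the feature service
# (the update is keyed on an existing feature's identifier).
partial_update = {
    "attributes": {
        "objectid": 42,       # hypothetical key identifying the existing feature
        "codeword": "bravo",  # the one attribute value being updated
    }
}

# Form parameters for a POST to .../FeatureServer/0/applyEdits (not sent here).
payload = {"updates": json.dumps([partial_update]), "f": "json"}
print(payload["updates"])
```

GeoEvent Server builds analogous requests for you; using a partial GeoEvent Definition is what keeps the unwanted attributes out of the payload.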

Examples where feature record flexibility might be useful

I have worked with customers who use feature services to compile attribute data collected from different sensors. One type of sensor might provide barometric pressure and relative humidity. Another type of sensor might provide ambient temperature and yet another a measure of the amount of rainfall. No single sensor is supplying all the weather data, so no single event record will have all the attribute values you want to include in a single feature record. Presumably, the different sensor types are all associated with a single weather station, whose name could be used as the TRACK_ID for adding and updating feature records, so we can create partial GeoEvent Definitions supporting each type of sensor and update only the specific attribute fields of a feature record with the data provided by a particular type of sensor installed at the weather station.

Another example might be when data records arrive with different frequency. Consider an automated vehicle location (AVL) solution which receives data every two minutes reporting a vehicle's last observed position and speed. A different data feed might provide information for that same vehicle when the vehicle's brakes are pressed particularly hard (signaling, perhaps, an aggressive driving incident). You do not receive "hard brake" event records as frequently as you receive "vehicle position" event records, and you do not want to push null values for speed or location into a feature record whenever an event record signaling aggressive driving is received, so you prepare a partial GeoEvent Definition for the "hard brake" event records and only update that portion of a vehicle's feature record when that type of data is received.

A third example where using a GeoEvent Definition which either deliberately includes or excludes an attribute value may be helpful is described in the thread Find new entries when streaming real-time data.

Are stream services as flexible as feature services?

They did not use to be, no, but changes made to stream services in the ArcGIS 10.6 release relaxed their event record schema requirements. You should still use a Field Mapper Processor to make sure that the spelling and case sensitivity of your event record's attribute fields match those in the stream service's specification. Stream services cannot transfer an attribute value from an event field named codeWord into a field named codeword for example, but you can now send event records whose structure is a subset of the stream service's schema to a Send Features to a Stream Service output. The output will attempt to handle any necessary data conversions, broadcasting a long integer value when a short integer is received, or broadcasting a string equivalent when a date value is received. The output will also omit any attribute value(s) from the feature record(s) it broadcasts when it does not receive a data value for a particular attribute.


Hopefully the additional detail and examples in this discussion illustrate the flexibility you have when working with feature records in both feature services and stream services, and help clarify the best practice recommendation to use a Field Mapper Processor to ensure that event records sent to either a feature service or stream service output have a schema compatible with the service's specification. You can use partial GeoEvent Definitions which model a subset of a feature record's complete schema to avoid pushing null values into a data record and/or avoid attempting to update attribute values you do not want to update (or are not allowed to update).

- RJ

The GeoEvent Server team maintains sample servers which expose both simulated and live data via stream services. For this write-up I will use publicly available services from the following ArcGIS REST Services Directory:

This write-up assumes you have set up a base ArcGIS Enterprise and have included ArcGIS GeoEvent Server as an additional server role in your solution architecture. I will use a deployment which has the base ArcGIS Enterprise and GeoEvent Server installed on a single machine.

Your goal is to receive feature records, formatted as Esri Feature JSON, from an ArcGIS Server stream service. You could, of course, simply add the stream service to an ArcGIS Enterprise portal web map as a stream layer. For this write-up, however, we will look at the steps a custom client must perform to discover the WebSocket associated with a stream service and subscribe to begin receiving data broadcast by the service.

Stream Service Discovery

It is important to recognize that the GIS server hosting a stream service may be on a different server machine than GeoEvent Server. A stream service is discoverable via the ArcGIS Server REST Services Directory, but the WebSocket used to broadcast feature records is run from within the JVM (Java Virtual Machine) used to run GeoEvent Server. If your ArcGIS Enterprise portal and GeoEvent Server have been deployed on separate machines, client applications will need to be able to access both servers to discover the stream service and subscribe to the stream service's WebSocket.

If you browse to the ArcGIS REST Services Directory mentioned above you should see a list of available services highlighted below:

GeoEvent Sample Server - stream services

Let’s examine how a client application might subscribe to the LABus stream service. First, the client will need to acquire a token which it will append to its request to subscribe to the stream service’s WebSocket. The WebSocket’s base endpoint is shown on the stream service’s properties page. The token you need is included in the stream service’s JSON specification.

  • Click the LABus stream service to open the service's properties page.
  • In the upper-left corner of  the LABus properties page, click the JSON link
    to open the stream service's JSON specification.

Stream service properties page

  • Scroll to the bottom of the LABus stream service’s JSON specification page and locate
    the stream service’s subscription token.


Stream service JSON specification


Client applications will need to construct a subscription request which includes both the WebSocket URL and the stream service’s subscription token as a query parameter. The format of the request is illustrated below; make sure to include subscribe in the request:
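Following that format, a client might assemble its subscription request as shown below. The host, port, and token values are placeholders, not real values; take the actual WebSocket base URL from the stream service's properties page and the subscription token from its JSON specification:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute the WebSocket base URL and subscription
# token discovered from the stream service's REST pages.
ws_base = "wss://sample.server.example:6143/arcgis/ws/services/LABus/StreamServer"
token = "SAMPLE_SUBSCRIPTION_TOKEN"

# Append "subscribe" to the WebSocket URL and pass the token as a query parameter.
subscription_url = f"{ws_base}/subscribe?{urlencode({'token': token})}"
print(subscription_url)
```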



Client Subscription Examples

The website offers a connection test you can use to verify the subscription request your client application will need to construct. Browse to the site and select DEMOS > Echo Test from the menu. Paste the subscription request, with the stream service’s WebSocket URL and token, into the Location field and click Connect. The client should be able to reach the GeoEvent Server sample server and successfully subscribe to the service’s WebSocket. Esri feature records for the Los Angeles Metro buses will be displayed in the Log window.


You can also configure an input connector in GeoEvent Server to subscribe to the LABus stream service.

  • Log in to GeoEvent Manager.
  • Add a new Subscribe to an External WebSocket for JSON input.
  • Enter a name for the input.
  • Paste the constructed subscription request to the Remote server WebSocket URI property.
  • Allow the input to create a GeoEvent Definition for you.

Subscribe to an External WebSocket for JSON

Do not configure the input to use event attribute values to build a geometry. The records being broadcast by the stream service are Esri feature records, formatted as Esri Feature JSON, which include attributes and geometry as separate values in an event record hierarchy.

Save the new input and navigate to the Monitor page in GeoEvent Manager – you should see your input’s event count increase as event records are received.

You can now incorporate the input into a GeoEvent Service and use filters and/or processors to apply real-time analytics on the event records being ingested. You might, for example, create a GeoEvent Definition with a simpler structure, tag the id field as the TRACK_ID, and use a Field Mapper Processor to flatten the hierarchical structure of each event record received so that you can send them to a TCP/Text output for display using GeoEvent Logger.

Hopefully the examples and illustrations in this write-up are helpful in guiding you through the discovery of stream services, their properties, and how you can use external clients – or configure GeoEvent Server inputs – to receive the feature records that are being broadcast.

In a separate blog, JSON Data Structures - Working with Hierarchy and Multicardinality, I wrote about how data can be organized in a JSON structure, how to recognize data hierarchy and cardinality from a GeoEvent Definition, and how to access data values given a hierarchical, multi-cardinal, data structure.

In this blog, we'll explore XML, another self-describing data format which -- like JSON -- has a specific syntax that organizes data using key/value pairs. XML is similar to JSON, but the two data formats are not interchangeable.

What does XML support that JSON does not?

One difference is that XML supports both attribute and element values whereas JSON really only supports key/value pairs. With JSON you generally expect data values will be associated with named fields. Consider the two examples below (credit:

<person sex="female">
    <firstname>Anna</firstname>
    <lastname>Smith</lastname>
</person>

The XML in this first example above provides information on a person, "Anna". Her first and last name are provided as elements whereas her gender is provided as an attribute value.

<person>
    <sex>female</sex>
    <firstname>Anna</firstname>
    <lastname>Smith</lastname>
</person>

The XML in this second example above provides the same information, except now all of the data is provided using element values.

Both XML structures are valid, but if you have any influence with your data provider, it is probably better to avoid attribute values and instead use elements exclusively when ingesting XML data into GeoEvent Server. This is only a recommendation, not a requirement. As you will see in the following examples, GeoEvent Server can successfully adapt XML which contains attribute values.

Here's a little secret:  GeoEvent Server does not actually handle XML data at all.

GeoEvent Server uses third party libraries to translate XML it receives to JSON. The JSON adapter is used to interpret the data and create event records from the translated data. Because JSON does not support attribute values, all data values in an XML structure must be translated as elements. Consider the following illustration which shows how a block of XML data might be translated to JSON by GeoEvent Server:


Notice the JSON on the right in this example organizes each event record as separate elements in a JSON array. Also notice the first line of the XML on the left which declares the version and encoding being used. The libraries GeoEvent Server uses to translate the XML to JSON really like seeing this information as part of the XML data. Finally, sometimes XML will include non-visible characters such as a BOM (byte-order mark). If the XML you are trying to ingest is not being recognized by an input you've configured, try copying the XML into a text editor and saving a text-only version to strip out any hidden characters.
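The translation can be approximated with the Python standard library. This is only a sketch of the idea -- it promotes attributes to ordinary keys but ignores repeated child tags -- and is not the third-party library GeoEvent Server actually uses:

```python
import json
import xml.etree.ElementTree as ET

def element_to_dict(el):
    """Promote XML attributes to plain keys and recurse into child elements --
    roughly how attribute values end up as elements in the translated JSON."""
    d = dict(el.attrib)
    for child in el:
        d[child.tag] = element_to_dict(child) if (len(child) or child.attrib) else child.text
    return d

xml = '<?xml version="1.0" encoding="utf-8"?><person sex="female"><firstname>Anna</firstname></person>'
record = element_to_dict(ET.fromstring(xml))
print(json.dumps(record))
# {"sex": "female", "firstname": "Anna"}
```

Note how the sex attribute and the firstname element become indistinguishable key/value pairs once the data is expressed as JSON.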


Other limitations to consider when ingesting XML

There are several other limitations to consider when ingesting XML data into GeoEvent Server. Sometimes a block of JSON might pass an online JSON validator such as the one provided by JSON Lint but still not be ingested into GeoEvent Server. The JSON syntax rules, for example, do not require that every nested element have a name; yet without a name, it is impossible to construct a GeoEvent Definition since every event attribute must have a name to create a complete GeoEvent Definition.

Similarly, there are XML structures which are perfectly valid which GeoEvent Server may have trouble ingesting. Consider the following block of XML data as an example:

<?xml version="1.0" encoding="utf-8"?>
<report>
    <vehicles>
        <vehicle make="Ford" model="Explorer"/>
        <vehicle make="Toyota" model="Prius"/>
    </vehicles>
    <personnel>
        <person fname="James" lname="Albert"/>
        <person fname="Mary" lname="Smith"/>
    </personnel>
</report>

The XML data illustrated above contains a mix of both "vehicles" and "personnel". The self-describing nature of the XML makes it apparent to a reader which data elements are which, but an input in GeoEvent Server may still have trouble identifying the multiple occurrences of the different data items if the inbound adapter's XML Object Name property is not specified.

Here is the GeoEvent Definition the inbound adapter generates when its XML Object Name property is left unspecified and the XML data sample above is ingested into GeoEvent Server:

GeoEvent Definition

In testing, the very first time the XML with the combination of "vehicles" and "personnel" was received and written out as JSON to a system text file, I observed only one person and one vehicle were written to the output file. Worse yet, without changing the generated GeoEvent Definition or any of the input connector's properties, sending the exact same XML a second time produced an output file with "vehicles" and "personnel" elements that were empty.

We know from the JSON Data Structures - Working with Hierarchy and Multicardinality blog that, at the very least, the cardinality specified by the generated GeoEvent Definition is not correct. The GeoEvent Definition also implies a nesting of groups within groups, which is probably not correct.

Working around the issue

Let's explore how you might work around the issue identified above using the configurable properties available in GeoEvent Server. First, ensure the XML input connector specifies which node in the XML should be treated as the root node by setting the XML Object Name property accordingly as illustrated below:

GeoEvent Input

Second, verify the GeoEvent Definition has the correct cardinality for the data sub-structure beneath the specified root node as illustrated below:

GeoEvent Definition

By configuring these properties as described, GeoEvent Server will only consider data within a sub-structure found beneath a "vehicles" root node and should make allowances that the sub-structure may contain more than one "vehicle".

XML Sample

With this approach, there are two ramifications you might want to consider. First, the inbound adapter is literally throwing half of the received data away by excluding data from any sub-structure found beneath the "personnel" nodes. This can be addressed by making a copy of the existing Receive XML on a REST Endpoint input and configuring this copy to use "personnel" as its XML Object Name. The copied input should also use a different GeoEvent Definition -- one which specifies "person" as an event attribute with cardinality Many and the attributes of a "person" (rather than a "vehicle") as illustrated below.

Copied Input Configuration

Second, the event record being ingested has multiple vehicles (or people) as items in an array. You'll likely want to process each vehicle (or person) as individual event records. To address this, it's recommended you use a processor available on the ArcGIS GeoEvent Server Gallery, specifically the Multicardinal Field Splitter Processor. There are two different field splitter processors provided in the download, so make sure to use the processor that handles multicardinal data structures.

A Multicardinal Field Splitter Processor, added to a GeoEvent Service illustrated below, will clone event records it receives and split the event record so that each record output has only one vehicle (or person). Notice that each event record output from the Multicardinal Field Splitter Processor includes an index at which the element was found in the original array.
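Conceptually, the processor does something like the following sketch. The record and field names are made up for illustration, and this is not the processor's actual implementation:

```python
def split_multicardinal(record, field):
    """Emit one record per element of a multi-cardinal field, tagging each
    output record with the index at which the element was found."""
    for index, item in enumerate(record.get(field, [])):
        out = {key: value for key, value in record.items() if key != field}
        out[field] = item
        out["index"] = index
        yield out

report = {"vehicles": [{"make": "Ford"}, {"make": "Toyota"}]}
for rec in split_multicardinal(report, "vehicles"):
    print(rec)
# {'vehicles': {'make': 'Ford'}, 'index': 0}
# {'vehicles': {'make': 'Toyota'}, 'index': 1}
```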

GeoEvent Service


The examples I've referenced in this blog are obviously academic. There's no good reason why a data provider would mash up people and vehicles this way in the same XML data structure. However, you might come across data structures which are not homogeneous and need to use one or more of the approaches highlighted in this blog to extract a portion of the data out of a data structure. Or you might need to debug your input connector's configuration to figure out why attribute or element values you know to exist in the XML being received are not coming through in the event records that are output. Or maybe you expect multiple event records to be ingested from the data you're receiving and end up observing only a few -- or maybe only one. Hopefully the information provided will help you address these challenges when you encounter them.

To summarize, below are the tips I highlighted in this article:

  • Use the GeoEvent Definition as a clue to the hierarchy and cardinality GeoEvent Server is using to define each event record's structure.
  • Specify the root node or element when ingesting XML or JSON; don't let the inbound adapter assume which node should be considered the root. If necessary, specify an interior node as the root node so only a subset of the data is actually considered.
  • Avoid XML data which uses attributes. If you must use XML data with attributes, know that an attempt will be made to promote these as elements when the XML is translated to JSON.
  • Encourage your data providers to design data structures whose records are homogeneous. This can run counter to database normalization instincts where data common to all records is included in a sub-section above each of the actual records. Sometimes simple is better, even when "simple" makes individual data records verbose.
  • Make sure the XML you ingest includes a header specifying its version and encoding -- the libraries GeoEvent Server is using really like seeing this metadata. Also, watch out for hidden characters which are sometimes present in the data.

GeoEvent Server Automatic Configuration Backup Files

It is possible, and in fact preferred, to create XML snapshots of your ArcGIS GeoEvent Server configuration using GeoEvent Manager (Site > GeoEvent > Configuration Store > Export Configuration).

But what if something has gone sideways and you cannot access GeoEvent Manager? Before you delete GeoEvent Server’s ZooKeeper distributed configuration store, you will want to locate a recent XML configuration and see if recent changes to inputs, outputs, GeoEvent Definitions, and GeoEvent Services are in the configuration file.

Beginning with GeoEvent Server 10.5, a copy of the configuration is exported automatically for you, daily, at 00:00:00 hours (local time).

  • Automatic backup files, by default, are written to the following folder:
  • You can change the folder used by editing the folder registered for 'Automatic Backups':
    Site > GeoEvent > Data Stores > Register Folder
  • You can change when and how often snapshots of your configuration are taken:
    Site > Settings > Configure Global Settings > Automatic Backup Settings


GeoEvent Server ZooKeeper Files

At the 10.5 / 10.5.1 release – GeoEvent Server uses the “synchronization service” platform service in ArcGIS Server, which is running an Apache ZooKeeper behind the scenes. Since this is an ArcGIS Server service, the application files are found in the ArcGIS Server 'local' folder (e.g. C:\arcgisserver\local).

If a system administrator wanted to administratively clear a configuration of GeoEvent Server they could stop the ArcGIS Server platform service -- using the Administrative API -- or stop the ArcGIS Server Windows service and delete the files and folders found beneath C:\arcgisserver\local\zookeeper\.

  • You should leave the parent folder, C:\arcgisserver\local\zookeeper intact.
  • You should also confirm with Esri Technical Support that patches, service packs, or hot-fixes you may have installed have not changed how the “synchronization service” platform service is used by other ArcGIS Enterprise components before administratively deleting files from beneath the ArcGIS Server directories. (ArcGIS GeoAnalytics Server, for example, uses the platform service to elect a machine participating in a multiple-machine analytic as the "leader" for an operation.)

Beginning with the 10.6 release – GeoEvent Server is running its own Apache ZooKeeper instance within the ArcGIS GeoEvent Gateway Windows service. If a system administrator wanted to administratively clear a 10.6 configuration of GeoEvent Server they could stop the ArcGIS GeoEvent Gateway Windows service – which will also stop the dependent ArcGIS GeoEvent Server Windows service – and then delete the files and folders found beneath: C:\ProgramData\Esri\GeoEvent-Gateway\zookeeper-data.

GeoEvent Server Kafka Files

NOTE: The following only applies to 10.6 and later releases of GeoEvent Server.

Beginning with the 10.6 release – GeoEvent Server is running an Apache Kafka instance as an event message broker within the ArcGIS GeoEvent Gateway Windows service. The message broker uses on-disk topic queues to manage event records. The event records which have been sent from the message broker to a GeoEvent Server instance for processing are recorded within the broker's associated configuration store (e.g. Apache ZooKeeper).

The Kafka message broker provides a transactional message guarantee that the RabbitMQ message broker (used in 10.5.1 and earlier releases) does not provide. If the GeoEvent Gateway on a machine is stopped and restarted, the configuration store will have recorded where event message processing was suspended and will use indexes into the topic queues to resume processing previously received event records.

The topic queue files are closed, new files created, and old files deleted according to a configurable data retention strategy. However, if the GeoEvent Gateway were stopped and its ZooKeeper configuration were deleted, the Kafka topic queues would likely be orphaned and potentially large message log files might not be deleted from disk according to the data retention strategy. In this case, a system administrator might need to locate and delete the topic queue files from beneath C:\ProgramData\Esri\GeoEvent-Gateway\kafka.


GeoEvent Server Runtime Files

When GeoEvent Server is initially launched, following a new product installation, a number of files are created as the system framework is built. These files, referred to as “cached bundles”, are written into a \data folder in the GeoEvent Server installation directory (e.g. C:\Program Files\ArcGIS\Server\GeoEvent\data). Again, if something has gone sideways, a system administrator might want to try deleting these files, forcing the system framework to be rebuilt, before deciding to uninstall and then reinstall GeoEvent Server.

This might be necessary if, for example, you continue to see the message "No Services Found" displayed in a browser window (after several minutes and a browser refresh) when attempting to launch GeoEvent Manager. In this case, deleting the runtime files from the \data folder to force the system framework to be rebuilt may remedy an issue which prevented GeoEvent Server from launching correctly the first time.

Another reason a system administrator may need to force the system framework to be rebuilt is a message that the ArcGIS GeoEvent Server Windows service could not be stopped “in a timely fashion” (when selecting to stop the service using the Windows Task Manager). In this case, an administrator should ensure the process identified in the C:\Program Files\ArcGIS\Server\GeoEvent\instances\ file has been stopped. Administratively terminating this process to stop GeoEvent Server can leave the system framework in a bad state, requiring the \data files be deleted so the framework can be rebuilt.


Administratively Reset GeoEvent Server

Deleting the Apache ZooKeeper files (to administratively clear the GeoEvent Server configuration), deleting the product’s runtime files (to force the system framework to be rebuilt), and removing previously received event messages (by deleting Kafka topic queues from disk) is how system administrators reset a GeoEvent Server instance so it looks like the product has just been installed. Below are the steps and system folders you need to access to administratively reset GeoEvent Server at the 10.5.x and 10.6.x releases.


If you have custom components in the C:\Program Files\ArcGIS\Server\GeoEvent\deploy folder, move these from the \deploy folder to a local temporary folder, while GeoEvent Server is running, to prevent the component from being restored (from the distributed configuration store) when GeoEvent Server is restarted. Also, make sure you have a copy of the most recent XML export of your GeoEvent Server configuration if you want to save the elements you have created.


  You should confirm with Esri Technical Support the system folders and files you plan to delete before executing the steps below. Files you delete following the steps below are irrecoverable.

  1. Stop the ArcGIS Server Windows service.
    (This will also stop the GeoEvent Server Windows service)
  2. Locate and delete the files and folders beneath C:\Program Files\ArcGIS\Server\GeoEvent\data
    (Leave the \data folder intact)
  3. Locate and delete the files and folders beneath C:\arcgisserver\local\zookeeper
    (Leave the \zookeeper folder intact)
  4. Locate and delete the files and folders beneath C:\ProgramData\Esri\GeoEvent
    (Leave the \GeoEvent folder intact)
  5. Start the ArcGIS Server Windows service.
    (Confirm you can log in to the ArcGIS Server Manager web application)
  6. Start the ArcGIS GeoEvent Server Windows service.
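Steps 2 through 4 all follow the same pattern: delete everything beneath a folder while leaving the parent folder itself intact. A small helper sketching that pattern is shown below; the function name is my own and the example runs against a scratch directory rather than the real \data, \zookeeper, and \GeoEvent folders named in the steps above:

```python
import shutil
import tempfile
from pathlib import Path

def clear_folder(folder):
    """Delete everything beneath `folder`, leaving the folder itself intact.
    Returns the names of the entries removed. (Hypothetical helper.)"""
    removed = []
    for entry in Path(folder).iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)   # remove sub-folder and all of its contents
        else:
            entry.unlink()         # remove a single file
        removed.append(entry.name)
    return removed

# Demonstration against a disposable scratch directory:
scratch = Path(tempfile.mkdtemp())
(scratch / "config.dat").write_text("x")
(scratch / "sub").mkdir()
print(sorted(clear_folder(scratch)))   # → ['config.dat', 'sub']
print(scratch.exists())                # → True (parent left intact)
```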


  Note that the lifecycle of the ArcGIS GeoEvent Gateway service is intended to mirror that of the operating system.
  You can administratively reset GeoEvent Server (e.g. deleting its runtime files from its \data folder) without stopping the ArcGIS GeoEvent Gateway service -- unless you also want to administratively delete the ZooKeeper files from the configuration store (which at the 10.6.x releases are maintained as part of the ArcGIS GeoEvent Gateway service).

  1. Stop the ArcGIS GeoEvent Server Windows service.
  2. Locate and delete the files and folders beneath the following directories (leaving the parent folders intact):
    C:\Program Files\ArcGIS\Server\GeoEvent\data\
  3. Stop the ArcGIS GeoEvent Gateway Windows service.
    This will also stop the ArcGIS GeoEvent Server Windows service if it is running.
  4. Locate and delete the files and folders beneath the following directories.
    Leave the parent folders intact:
    C:\Program Files\ArcGIS\Server\GeoEvent\gateway\log
    C:\ProgramData\Esri\GeoEvent-Gateway\zookeeper-data
  5. If you delete the zookeeper-data files, you should remove any orphaned topic queues
    by deleting the on-disk Kafka logs (delete the 'logs' sub-folder beneath
    C:\ProgramData\Esri\GeoEvent-Gateway\kafka, leave the 'kafka' folder intact).
  6. Locate and delete the GeoEvent Gateway configuration file (a new file will be rebuilt).
    C:\Program Files\ArcGIS\Server\GeoEvent\etc\com.esri.ges.gateway.cfg
  7. Start the ArcGIS GeoEvent Server Windows service.
    This will start the ArcGIS GeoEvent Gateway service if it has been stopped.
    Confirm you can log in to GeoEvent Manager.

At this point you can also review the contents of the rebuilt com.esri.ges.gateway.cfg file. The GeoEvent Gateway will record its message broker and configuration store port configurations in this file if it was able to launch successfully:





When speaking with customers who want to get started with ArcGIS GeoEvent Server, I'm often asked if GeoEvent Server has an input connector for a specific data vendor or type of device. My answer is almost always that we prefer to integrate via REST and the question you should be asking is: "Does the vendor or device offer a RESTful API whose endpoints a GeoEvent Server input can be configured to query?"

Ideally, you want to be able to answer two integration questions:

  1. How is the data being sent to a GeoEvent Server input?
  2. How is the data formatted; what does the data's structure look like?

For example, an input can be configured to accept data sent to a GeoEvent Server hosted REST endpoint. That answers the first question - integration will occur via REST with the vendor sending data as an HTTP/POST request to a GeoEvent Server endpoint. The second question, how is the data formatted, is the focus of this blog.

What does a typical JSON data record look like?

Typically, when a data vendor sends event data formatted as JSON, there will be multiple event records organized within a list such as this:

    "items": [{
                  "id": 3201,
                  "status": "",
                  "calibrated": 1521135120000,
                  "location": {
                         "latitude": -117.125,
                         "longitude": 31.125
                  "id": 5416,
                  "status": "offline",
                  "calibrated": 1521638100000,
                  "location": {
                         "latitude": -113.325,
                         "longitude": 33.325
                  "id": 9823,
                  "status": "error",
                  "calibrated": 1522291320000,
                  "location": {
                         "latitude": -111.625,
                         "longitude": 35.625


There are three elements, or objects, in the block of JSON data illustrated above. It would be natural to think of each element as an event record with its own "id", "status", and "location". Each event record also has a date/time the item was last "calibrated" (expressed as an epoch long integer in milliseconds).
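Parsing the sample with a standard JSON library confirms this reading. The snippet below reassembles the article's block of data (same ids, statuses, timestamps, and coordinates) into one document and counts the elements in the array:

```python
import json

# The block of JSON from the article, reassembled as a single document.
raw = """
{
  "items": [
    {"id": 3201, "status": "",        "calibrated": 1521135120000,
     "location": {"latitude": -117.125, "longitude": 31.125}},
    {"id": 5416, "status": "offline", "calibrated": 1521638100000,
     "location": {"latitude": -113.325, "longitude": 33.325}},
    {"id": 9823, "status": "error",   "calibrated": 1522291320000,
     "location": {"latitude": -111.625, "longitude": 35.625}}
  ]
}
"""
doc = json.loads(raw)
print(len(doc["items"]))       # → 3 objects in the array
print(doc["items"][0]["id"])   # → 3201
```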


What do we mean when we refer to a "multi-cardinal" JSON structure?

The JSON data illustrated above is multi-cardinal because the data has been organized within an array. We say the data structure is multi-cardinal because its cardinality, in a mathematical sense of the number of elements in a group, is more than one. The array is enclosed within a pair of square brackets:  "items": [ ... ]

If the array were a list of simple integers the data would look something like:  "values": [ 1, 3, 5, 7, 9 ]

The data elements in the illustration above are not simple integers. Each item is bracketed within curly braces, which is how JSON identifies an object. For GeoEvent Server, it is important both that the array have a name and that each object within the array have a homogeneous structure, meaning that every event record should, generally speaking, use a common schema or collection of name/value pairs to communicate the item's data.
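The homogeneity requirement is easy to check programmatically. The helper below is purely illustrative (not a product API): every object in the array should expose the same set of attribute names if all of the records are to map onto a single GeoEvent Definition:

```python
def is_homogeneous(items: list) -> bool:
    """True when every object in the list shares the same attribute names."""
    schemas = {frozenset(item.keys()) for item in items}
    return len(schemas) <= 1

records = [
    {"id": 1, "status": "ok", "location": {"latitude": 0, "longitude": 0}},
    {"id": 2, "status": "ok", "location": {"latitude": 1, "longitude": 1}},
]
print(is_homogeneous(records))                # → True
print(is_homogeneous(records + [{"id": 3}]))  # → False (third record missing fields)
```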

What do we mean when we refer to a "hierarchical" JSON structure?

The data elements in the array are themselves hierarchical. Values associated with "id", "status", and "calibrated" are simple numeric, string, or Boolean values. The "location" value, on the other hand, is an object which encapsulates two child values -- "latitude" and "longitude". Because "location" organizes its data within a sub-structure the overall structure of each data element in the array is considered hierarchical.

It should be noted that the coordinate values within the "location" sub-structure can be used to create a point geometry, but "location" itself is not a geometry. This is evident by examining how a GeoEvent Definition is used to represent the data contained in the illustrated block of JSON.

Different ways of viewing this data using a GeoEvent Definition

In GeoEvent Server, if you were to configure a new Receive JSON on a REST Endpoint input, leaving the JSON Object Name property unspecified, selecting to have a GeoEvent Definition created for you, and specifying that the inbound adapter not attempt to construct a geometry from received attribute values, the GeoEvent Definition created would match the one illustrated below:

GeoEvent Definition

Notice the cardinality of "items" is specified as Many (the infinity sign simply means "more than one"). Also, when the block of JSON data illustrated above is sent to the input via HTTP/POST, the input's event count only increments by one, indicating that only one event record was received.

Also notice that, in this configuration, "items" is a Group element type. This implies that in addition to the structure being multi-cardinal, it's also organized as a group of elements, which in JSON is typically an array.

Finally, notice that the "location" is also a Group element type. The cardinality of "location", however, is One not Many. This tells you that the value is a single element, not an array of elements or values.

Accessing data values

Working with the structure specified in the GeoEvent Definition illustrated above, if you wanted to access the coordinate values for "latitude" or "longitude" you would have to specify which latitude and longitude you wanted. Remember, the data was received as a single event record and "items" is a list or array of elements. Each element in the array has its own set of coordinate values. Consider the following expressions:

    items[2].location.latitude
    items[2].location.longitude



The expressions above specify that the third element in the "items" list is the one in which you are interested. You cannot refer to items.location.latitude because you have not specified an index to select one of the three elements in the "items" array. The array's index is zero-based, which means the first item is at index 0, the second is at index 1, and so on.

Ingesting this data as a single event record is probably not what you would want to do. It is unlikely that an arbitrary choice to use the third element's coordinates, rather than the first or second element in the list, would appropriately represent the items in the list. These three items have significantly different geographic locations, so we should find a way to ingest them as three separate event records.
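The zero-based indexing described above can be mirrored directly in code. The sketch below accesses the third element's coordinates from the parsed event record, using the sample values from the article:

```python
# Zero-based indexing into the "items" array: items[2] selects the third element.
event = {
    "items": [
        {"id": 3201, "location": {"latitude": -117.125, "longitude": 31.125}},
        {"id": 5416, "location": {"latitude": -113.325, "longitude": 33.325}},
        {"id": 9823, "location": {"latitude": -111.625, "longitude": 35.625}},
    ]
}
lat = event["items"][2]["location"]["latitude"]   # third element, index 2
lon = event["items"][2]["location"]["longitude"]
print(lat, lon)   # → -111.625 35.625
```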

Re-configuring the data ingest

When I first mentioned configuring a Receive JSON on a REST Endpoint input to allow the illustrated block of JSON to be ingested into GeoEvent Server for processing, I indicated that the JSON Object Name property should be left unspecified. This was done to support a discussion of the data's structure.

If the illustrated JSON data were representative of data you wanted to ingest, you should specify an explicit value for the JSON Object Name parameter when configuring the GeoEvent Server input. In this case, you would specify "items" as the root node of the data structure.

Specifying "items" as the JSON Object Name tells the input to handle the data as an array of values and to ingest each item from the array as its own event record. If you make this change to our input, and delete the GeoEvent Definition it created the last time the JSON data was received, you will get a slightly different GeoEvent Definition generated as illustrated below:

 GeoEvent Definition

The first thing you should notice, when the illustrated block of JSON data is sent to the input, is the input's event count increments by three -- indicating that three event records were received by GeoEvent Server. Looking at the new GeoEvent Definition, notice there is no attribute named "items" -- the elements in the array have been split out so that the event records could be ingested separately. Also notice the cardinality of each of the event record attributes is now One. There are no lists or arrays of multiple elements in the structure specified by this GeoEvent Definition. The "location" is still a Group which is fine; each event record should have (one) location and the coordinate values can legitimately be organized as children within a sub-structure.
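What specifying "items" as the JSON Object Name effectively does can be sketched in a few lines. The helper name below is my own; GeoEvent Server performs this split internally:

```python
def split_records(document: dict, object_name: str) -> list:
    """Treat the named array as the root; each element becomes its own event record."""
    return list(document[object_name])

document = {"items": [{"id": 3201}, {"id": 5416}, {"id": 9823}]}
records = split_records(document, "items")
print(len(records))        # → 3 separate event records
print(records[1]["id"])    # → 5416
```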

The updates to the structure specified in the GeoEvent Definition change how the coordinate values are accessed. Now that the event records have been separated, you can access each record's attributes without specifying one of several element indices to select an element from a list.

You should now be ready to re-configure the input to construct a geometry as well as make some minor updates to the data types of each attribute in the GeoEvent Definition in order to handle "id" as a Long and "calibrated" as a Date. You also need to add a new field of type Geometry to the GeoEvent Definition to hold the geometry being constructed.
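The type adjustments can be sketched as follows. The field names come from the sample data above; the conversion code itself is only an illustration, since GeoEvent Server performs these conversions once the GeoEvent Definition's types are updated and the input is configured to build a geometry:

```python
from datetime import datetime, timezone

record = {"id": 9823, "calibrated": 1522291320000,
          "location": {"latitude": -111.625, "longitude": 35.625}}

# "calibrated" arrives as an epoch long integer in milliseconds; convert to a date.
calibrated = datetime.fromtimestamp(record["calibrated"] / 1000, tz=timezone.utc)

# The "location" sub-structure holds the coordinates for a point geometry
# (illustrated here as an Esri JSON point with a WGS84 spatial reference).
point = {"x": record["location"]["longitude"],
         "y": record["location"]["latitude"],
         "spatialReference": {"wkid": 4326}}

print(calibrated.year)   # → 2018
```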

GeoEvent Input

GeoEvent Definition

Hopefully this blog provided some additional insight on working with hierarchical and multi-cardinal JSON data structures in GeoEvent Server. If you have ideas for future blog posts, let me know; the team is always looking for ways to make you more successful with the Real-Time & Big Data GIS capabilities of ArcGIS.

When a GeoEvent Service processes an event record, the processing is generally atomic. In other words, a filter or processor considering an event record's attributes and geometry has no information on other event records previously processed and will not cache or save the current event record's attributes or geometry for later consideration by an event record not yet received.


There are a few exceptions - monitor processors such as the Incident Detector or Track Gap Detector necessarily cache some information in order to monitor ongoing conditions. And filters configured with ENTER or EXIT criteria need to know something about the position of the last reported event with a given TRACK_ID.


So how do you configure real-time analytics to compare an event's geometry against some other geometry?  You use geofences. Christopher Dufault has collected some best practices for importing, synchronizing, and using geofences in GeoEvent Server. Check out his blog Geofence Best Practices and comment with tips and tricks with geofences you've found useful in analytics you've designed.


- RJ

This article is the second of two articles examining enhancements made to the HTTP transport for the GeoEvent Server 10.5 release. This article examines the outbound transport. The previous article examining the inbound transport can be found here.


In this article, I would like to provide detail for an enhancement made to the HTTP outbound transport for the GeoEvent Server 10.5 release. The following capability is listed on the What's new in ArcGIS GeoEvent Server web help page:

  • HTTP outbound transport now supports field value substitutions in the HTTP GET mode


Beginning with the 10.5 product release, an output leveraging the HTTP transport can be configured to substitute event attribute values into the URL of a request GeoEvent Server will send to an external server. The attribute values are incorporated as query parameters (as opposed to the request’s content body).

The new capabilities of the HTTP transport will be described below with exercise steps you can follow to demonstrate the capabilities.


When you want to send data from event records to an external server or application you typically configure an outbound connector – such as the Push JSON to an External Website output. GeoEvent Server will incorporate the event data into the content body of a REST request and send the request to the external server as an HTTP/POST. This capability has been available in the last several releases.

A device on the edge of the Internet of Things, however, might prefer to receive requests with event data organized as query parameters rather than in a request's content body. This way the entire data payload is in the URL of the request -- leaving the content body of the request empty.

It might seem a little odd for a GeoEvent Server output, which is not intended to receive or process any type of response, to make an HTTP/GET request. But the capability was introduced to enable GeoEvent Server to issue activation requests to devices which require data values be sent using query parameters.


Exercise 2A – Use HTTP/GET to send event data as query parameters to an external server


Why exactly are we configuring a custom outbound connector?

How is it different from the Push JSON to an External Website connector available out-of-the-box?


For this exercise:

  1. Configure the following GeoEvent Server output connector.
    Browse to Site > GeoEvent > Connectors and select to create a new outbound connector. Default values for the "Shown", "Advanced", and "Hidden" properties are included beneath the illustration.


    Shown Properties                    Default Value
    URL                                 [ no default value defined ]


    Advanced Properties                 Default Value
    Use URL Proxy                       False
    URL Proxy                           [ no default value defined ]
    HTTP Timeout (in seconds)           30


    Hidden Properties                   Default Value
    Formatted JSON                      False
    MIME Type                           text/plain
    Acceptable MIME Types               text/plain
    Post/Put body MIME Type             text/plain
    Parameters                          [ no default value defined ]
    Header Parameter Name:Value List    ( blank )
    HTTP Method                         Get
  2. Save your newly configured custom outbound connector.
  3. Navigate to Services > Outputs and select to create a new (Custom) HTTP/GET request with event data as query parameters output. Configure the output as illustrated below, replacing yourServer and yourDomain with a valid server and domain for your organization.

    Note the URL specified in the illustration:

    https://yourServer.yourDomain/server/rest/services/SampleWorldCities/MapServer/0/query?where=city_name='${Origin}'&f=json

    The format of the URL assumes that an ArcGIS web adapter (named 'server') has been configured and that an external server or client application receiving this URL could use it to query the "Sample World Cities" map service on your ArcGIS Server. GeoEvent Server will substitute the variable ${Origin} in the URL's query parameter with an actual attribute value from a received event record, enabling the external server or client application to make a more specific query based on real-time events.
  4. Save your updated output, then publish a GeoEvent Service which incorporates your output and an input of your choice. You can use any type of input, so long as the GeoEvent Definition associated with event records received by the input includes an attribute field named Origin.

    Queries through a web adapter to a Portal secured web service from an unauthenticated source will return an error. Since the Sample World Cities web service is secured by Portal in my current deployment, I expect the request made by GeoEvent Server will generate an error. In order to complete the demonstration we will use the GeoEvent Server's debug logs to confirm that the output has constructed a valid query and sent the request to the ArcGIS Server map service.
  5. Navigate to the Logs page in GeoEvent Manager. Click 'Settings' and enable DEBUG logging for the HTTP outbound transport logger (com.esri.ges.transport.http.HttpOutboundTransport).
  6. Send an event record to your GeoEvent Server input whose Origin attribute is the name of one of the cities in the Sample World Cities map service (e.g. Chicago). Refresh the Logs page in GeoEvent Manager and you should see log messages with information similar to the following:


The first message shows that 'Chicago' was indeed substituted into the query parameters by the GeoEvent Server output and a request was made. The error may or may not be displayed; as indicated above, the map service in my case is Portal secured and this request did not include a token authenticating the request.
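The substitution the output performs can be sketched with a small helper. The template string is the URL from step 3; the substitute function and its URL-encoding of values are my own illustration, not GeoEvent Server code:

```python
from urllib.parse import quote

template = ("https://yourServer.yourDomain/server/rest/services/"
            "SampleWorldCities/MapServer/0/query?where=city_name='${Origin}'&f=json")

def substitute(url_template: str, event: dict) -> str:
    """Replace each ${field} variable with the event's URL-encoded attribute value."""
    result = url_template
    for field, value in event.items():
        result = result.replace("${%s}" % field, quote(str(value)))
    return result

print(substitute(template, {"Origin": "Chicago"}))
# → …/query?where=city_name='Chicago'&f=json
```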


There are a couple of things you'll want to keep in mind. The URL you use to configure the output must URL-encode its query parameters to make them HTTP safe. But the value being substituted by GeoEvent Server is based on a string received from a real-time data source. This means you may have some work to do to make sure that "San Francisco" is represented as San%20Francisco, not San Francisco, before an event record is sent to an output.


Also, the enhancement being introduced in this article was designed specifically for HTTP/GET since those requests do not include a JSON payload in the request’s body. However, some rudimentary testing suggests that you can use HTTP/POST as well; I suppose it would be up to the external server receiving the request whether or not to honor an HTTP/POST and either ignore the request’s JSON payload or potentially consider its content in addition to the values in the query parameter.


Finally, you do have some freedom in how the request’s query string is specified. For example, you could construct a parameterized string with multiple variables; GeoEvent Server will handle the substitution of the multiple parameter values:

where=city_name%20IN%20('${CityA}','${CityB}')&f=json



If you send the string highlighted above through a URL decoder you'll see that it is equivalent to:

where=city_name IN ('${CityA}','${CityB}')&f=json
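As a quick check, percent-decoding an encoded form of that clause yields the plain-text version. The encoded string below is my own reconstruction (spaces encoded as %20); the original screenshot is not reproduced here:

```python
from urllib.parse import unquote

encoded = "where=city_name%20IN%20('${CityA}','${CityB}')&f=json"
print(unquote(encoded))
# → where=city_name IN ('${CityA}','${CityB}')&f=json
```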


I hope these two blogs were helpful.  Please comment below with questions and I'll do my best to answer them.


-- RJ

This article is the first of two articles examining enhancements made to the HTTP transport for the GeoEvent Server 10.5 release. This article examines the inbound transport. The second article examining the outbound transport can be found here.


In this article, I would like to provide detail for an enhancement made to the HTTP inbound transport for the GeoEvent Server 10.5 release. The following capability is listed on the What's new in ArcGIS GeoEvent Server web help page:

  • HTTP inbound transport now accepts GET requests in the query parameters


Beginning with the 10.5 product release, an input leveraging the HTTP transport can be configured to support an external server or application which incorporates its data payload in the URL of the request (as opposed to the request’s content body).

The new capabilities of the HTTP transport will be described below with exercise steps you can follow to demonstrate the capabilities.


When you want to receive event records as an HTTP/POST request from an external server or application you typically configure an inbound connector – such as the Receive JSON on a REST Endpoint input. GeoEvent Server will create a REST endpoint to which the external server can post its event data with the event data included in the content body of the request. This capability has been available in the last several releases.

A device on the edge of the Internet of Things, however, might prefer to organize the event data as query parameters and incorporate its data payload in the URL of the request -- leaving the content body of the request empty. For example:

  • http://localhost:6080/geoevent/rest/receiver/http-receiver?field1=v1&field2=v2&field3=v3
  • http://localhost:6080/geoevent/rest/receiver/http-receiver?data=v1,v2,v3

Beginning with the 10.5 product release an input pairing either the out-of-the-box JSON or TEXT adapter with the HTTP inbound transport can be configured to support the use cases above with an HTTP/GET request.
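Building the two example GET requests shown above is straightforward with a standard URL library. The receiver path is the one from the article; the field values are placeholders:

```python
from urllib.parse import urlencode

base = "http://localhost:6080/geoevent/rest/receiver/http-receiver"

# Case 1: each attribute conveyed as its own name/value query parameter.
as_pairs = base + "?" + urlencode({"field1": "v1", "field2": "v2", "field3": "v3"})

# Case 2: one parameter carrying the whole payload as comma-delimited values
# (safe="," keeps the commas from being percent-encoded).
as_raw = base + "?" + urlencode({"data": "v1,v2,v3"}, safe=",")

print(as_pairs)  # → …/http-receiver?field1=v1&field2=v2&field3=v3
print(as_raw)    # → …/http-receiver?data=v1,v2,v3
```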


Exercise 1A – Use HTTP/GET requests to send event data to GeoEvent Server as query parameters

  1. Create the following GeoEvent Definition

  2. Configure the following GeoEvent Server input connector

    Note the new 10.5 parameter:  Get Request Contains Raw Data

    Review the help tip provided for this parameter. When the inbound connector is running in SERVER mode and receives an HTTP/GET request whose content body is empty and whose URL includes query parameters, the default (‘No’) will treat each name/value pair as a separate attribute value in an event record. If the default is changed to ‘Yes’, you are expected to specify the one query parameter whose value will be treated as the event’s raw data.

  3. Configure a GeoEvent Server output connector and publish a GeoEvent Service

    You can use any outbound connector which supports JSON event record displays. Recommended output connectors are ‘Send Features to a Stream Service’ or ‘Write to a JSON File’.

  4. Send the following HTTP/GET request to your input connector’s endpoint



You should observe the event count of your ‘Receive JSON on a REST Endpoint’ input increment as HTTP/GET requests are made on your input’s REST endpoint.
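The two behaviors described by the Get Request Contains Raw Data help tip can be sketched as follows. The function and parameter names are illustrative, not product code:

```python
from urllib.parse import parse_qsl

def parse_get(query, raw_data_param=None):
    """Mimic the two GET-handling modes: attribute pairs vs. one raw-data parameter."""
    pairs = dict(parse_qsl(query))
    if raw_data_param is None:
        return pairs                  # 'No': each name/value pair is an attribute
    return pairs[raw_data_param]      # 'Yes': one named parameter holds the raw data

print(parse_get("field1=v1&field2=v2"))    # → {'field1': 'v1', 'field2': 'v2'}
print(parse_get("data=v1,v2,v3", "data"))  # → v1,v2,v3
```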


Exercise 1B – Explore HTTP/GET requests whose query parameters include comma delimited values

Rather than incorporating the event data into a series of key/value pairs, the event data can be conveyed using a single query parameter whose value is a set of comma delimited values. The delimited text values will require an inbound connector which leverages the TEXT adapter (rather than the JSON adapter used in the previous exercise).

GeoEvent Server does not include a “Receive TEXT on a REST Endpoint” inbound connector out-of-the-box, so you will need to configure one for this exercise.

  1. Configure the following GeoEvent Server input connector.
    Browse to Site > GeoEvent > Connectors and select to create a new inbound connector. Default values for the "Shown", "Advanced", and "Hidden" properties are included beneath the illustration.

    Shown Properties                              Default Value
    Event Separator                               \n (newline)
    Field Separator                               , (comma)
    Incoming Data Contains GeoEvent Definition    False
    Create Unrecognized Event Definitions         False
    Create Fixed GeoEvent Definitions             False
    GeoEvent Definition Name (New)                [ no default value defined ]
    GeoEvent Definition Name (Existing)           [ no default value defined ]
    Language for Number Formatting                [ no default value defined ]

    Advanced Properties                           Default Value
    Acceptable MIME Types (Server Mode)           text/plain
    Expected Date Format                          [ no default value defined ]
    Build Geometry From Fields                    False
    X Geometry Field                              [ no default value defined ]
    Y Geometry Field                              [ no default value defined ]
    Z Geometry Field                              [ no default value defined ]
    Well Known Text Geometry Field                [ no default value defined ]
    wkid Geometry Field                           [ no default value defined ]
    Get Request Contains Raw Data                 True
    Parameter Name for the Raw Data               data

    Hidden Properties                             Default Value
    Use Long Polling                              False
    Frequency (in seconds)                        [ no default value defined ]
    Receive New Data Only                         False
    Post/Put body MIME Type                       [ no default value defined ]
    HTTP Method                                   Get
    Header Parameter Name:Value List              ( blank )
    Post/Put From                                 Parameters
    Post/Put Parameters                           ( blank )
    Content Body                                  [ no default value defined ]
    Parameters                                    [ no default value defined ]
    URL                                           [ no default value defined ]
    URL Proxy                                     [ no default value defined ]
    Use URL Proxy                                 False
    Acceptable MIME Types (Client Mode)           [ no default value defined ]
    HTTP Timeout (in seconds)                     30
  2. Save your newly configured custom inbound connector.
  3. Navigate to Services > Inputs and select to create a new (Custom) Receive TEXT on a REST Endpoint input.
    Configure the input as illustrated below. Use the GeoEvent Definition you created for the last exercise.

  4. Publish a GeoEvent Service which incorporates your newly configured input and any outbound connector which supports JSON event record displays. You can use the outputs configured for the previous exercise if you wish.
  5. Send the following HTTP/GET request to your input connector’s endpoint (note the endpoint's name has changed):



You should observe the event count of your ‘(Custom) Receive TEXT on a REST Endpoint’ input increment as HTTP/GET requests are made on your input’s REST endpoint.