
I sent the following to one of our contractors today. The information on configuring SSL certificates, administrative tips for multi-machine deployments following a 'site' model, and things to check when GeoEvent Server fails to load its ArcGIS Server's configured certificates and instead uses its own SelfSignedCertificate might be of more general use, so I'll leave this here in case it helps someone working with GeoEvent Server deployments.

With a multi-machine ‘site’ configuration it is critical that all machines trust one another. That means that not only do I have to configure an SSL certificate on Box#1 and configure that machine’s ArcGIS Server to use that certificate as its Web Server Certificate … I have to import certificates for Box#2, Box#3, … Box#N into the ArcGIS Server so that it trusts all the other machines participating in the site. I have to do this “fan-out” on every server, setting *that* server’s Web Server Certificate and importing certificates from all the *other* machines onto that server.

I’ve captured what I do that works for me when setting up a couple of machines. But to be honest, SSL certificate configuration is not something I understand at a deep, technical level. Likely there is a “better” way of doing what I propose in the attached, maybe using a wild-card certificate, but I don’t know how to set that up.

I’d also like to break the problem you’re seeing into two pieces. The first being SSL certificate configuration, for which I’ll capture some screenshots (see attached PDF). The second piece involves things I look at when GeoEvent Server seems unable to locate and load the certificates its ArcGIS Server is configured to use.

The second part probably has more to do with why GeoEvent Server fails over to use its SelfSignedCertificate rather than the certificate its ArcGIS Server is configured to use. I’ll apologize if anything I share is overly pedestrian … like I said, SSL certificates are not my cup of tea, so all I can do is show you what works for me and hope that your experience will allow you to iterate and adapt what I have to share.

The first part, SSL certificate configuration, is attached.

For the second part … I would caution against opening the Java Keystore using a command like keytool.  I’ve watched developers do this, but I’ve never seen administratively editing the JKS do anything to resolve a problem. GeoEvent Server, when it launches for the first time, interrogates its ArcGIS Server for information on its site and SSL certificates. If you would like to see some evidence for this, you can request DEBUG logging on the com.esri.ges.security.arcgis.sslconfig GeoEvent Server logger component. GeoEvent Server will attempt to copy the certificate configuration of the ArcGIS Server it is running beneath. If GeoEvent Server cannot obtain the certificates from the ArcGIS Server configuration, it will fail over to use its own SelfSignedCertificate. The failover is intended to at least allow GeoEvent Server to complete its startup – but if GeoEvent Server does not trust machines the same way its ArcGIS Server does, lots of stuff is probably not going to work.
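
If you want to watch for those messages as they are logged, something like the following, run from a bash-capable shell, will follow the live karaf.log and show only lines mentioning that logger component (the path assumes a default installation; adjust to match your deployment):

cd "/c/Program Files/ArcGIS/Server/GeoEvent/data/log"
# follow the live log and keep only messages from the sslconfig component logger
tail -0f karaf.log | grep -i "com.esri.ges.security.arcgis.sslconfig"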

By the way, it is precisely because GeoEvent Server interrogates its ArcGIS Server for information that it is best to have your ArcGIS Enterprise (Portal for ArcGIS, hosting ArcGIS Server, ArcGIS Data Store) fully configured – with a site created, federation complete, and all SSL certificates configured – before you introduce GeoEvent Server to the Enterprise. Installing, or at least starting, the GeoEvent Gateway and GeoEvent Server before ArcGIS Server and Portal for ArcGIS are fully configured means that the initial interrogation fails. Your security topology may also change later … you may decide to federate, for example, or SSL certificates may have to change … in which case resetting your GeoEvent Server configuration from within GeoEvent Manager (e.g. not an “administrative reset”) should force GeoEvent Server to pick up changes made to the Enterprise configuration. Worst case, you have to stop and restart GeoEvent Server after resetting its configuration, then re-import your inputs, outputs, etc. You don’t always have to re-install, but installation order can make your life easier administratively when deploying all this s/w for the first time.

There are a few things I check when I find that GeoEvent Server is using its own SelfSignedCertificate rather than the certificate its ArcGIS Server specifies as its Web server SSL certificate.

  • Did I accurately follow the certificate configuration laid out in the attached PDF?

Sometimes a machine gets re-imaged, or something else invalidates a certificate I had previously generated, applied, and imported using the attached procedure. That is when I have to walk through that whole process again. Sometimes it is just that a certificate has expired. They do that, and rarely when it’s convenient.

  • ArcGIS Server maintains two different certificate stores – do their contents match?

Seriously, this has bitten us more than once. There’s a certificate store beneath …\ArcGIS\Server\framework used, I think, by web clients. ArcGIS Server maintains a copy of these certificates in its configuration store for each machine in the site. This second key store is used, I think, by thick client applications.
  • C:\Program Files\ArcGIS\Server\framework\etc\certificates
  • C:\arcgisserver\config-store\machines\MYMACHINE.ESRI.COM

The two certificate stores should be identical. I’ve found once or twice that files had not been copied from the Server framework into its configuration store. When this happened I had to stop ArcGIS Server, manually create the folder named for the machine (e.g. CARMON.ESRI.COM beneath …\config-store\machines) and copy the files from the framework into the configuration store folder. When I restarted ArcGIS Server and administratively reset GeoEvent Server, it adopted its Server’s certificates and began working as expected.
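
A quick way to sanity-check that the two certificate stores really do contain the same files is to compare the two folders from a bash-capable shell. The machine folder name below is just the example used above, so substitute your own machine’s fully qualified name:

# no output means the two certificate stores contain identical files
diff -rq "/c/Program Files/ArcGIS/Server/framework/etc/certificates" \
        "/c/arcgisserver/config-store/machines/MYMACHINE.ESRI.COM"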

  • ArcGIS Server maintains both JSON and XML copies of its SSL configuration – do they match?

When debugging we’ve found a couple of times that the SSL configuration reported by ArcGIS Server through its Admin API did not match the content of an XML file that GeoEvent Server was using to retrieve certificate information. Specifically, a file D:\arcgisserver\config-store\machines\10.0.0.131.json specified a webServerCertificateAlias which did not match what should have been the same information in a C:\Program Files\ArcGIS\Server\framework\etc\machine-config.xml file.

When this happens you might try stopping GeoEvent Server (and GeoEvent Gateway) and reconfiguring the ArcGIS Server’s certificates. If the files match after ArcGIS Server completes a restart, then you can administratively reset GeoEvent Server and it should pick up the correct certificate configuration.
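
When I suspect this mismatch I pull the alias out of both files and compare them by eye. Something along these lines works from a bash-capable shell; the file paths are the ones from the example above (yours will differ), and I grep the XML loosely because I am not certain of the exact element name it uses:

# the alias the machine's JSON file in the config-store specifies
grep -o '"webServerCertificateAlias"[^,}]*' "/d/arcgisserver/config-store/machines/10.0.0.131.json"
# look for the corresponding alias entry in the XML file GeoEvent Server reads
grep -i "alias" "/c/Program Files/ArcGIS/Server/framework/etc/machine-config.xml"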

  • Does the GeoEvent Gateway have its correct hostname / IP Address in its com.esri.ges.gateway.cfg file?

Part of the GeoEvent Server administrative reset is to delete this file and make sure that it gets regenerated automatically when GeoEvent Gateway (or maybe it’s when GeoEvent Server) comes up for the “first” time.

If you look at the file’s content in a text editor you’ll see that it instructs the Gateway as to which server and port it should use when connecting to the ZooKeeper distributed configuration store which manages your GeoEvent Server’s configuration. It also specifies the Apache Kafka topic partitions, replication, and how to reach the broker. If the machine information in this file designates a machine which does not exist – which can happen when you use cloud imaging utilities to push a machine image out to multiple virtual machine instances – then GeoEvent Gateway never reaches a stable state when it launches and cannot support its GeoEvent Server.
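
A quick check I make, before resorting to an administrative reset, is to confirm that the host names or IP addresses recorded in that file actually describe the machine I am on. From a bash-capable shell (searching for the file rather than assuming its exact location):

# print this machine's name so you have something to compare against
hostname
# find the Gateway configuration file beneath the GeoEvent installation and
# show any lines mentioning ZooKeeper, Kafka, or an IP address
find "/c/Program Files/ArcGIS/Server/GeoEvent" -name "com.esri.ges.gateway.cfg" \
  -exec grep -iE "zookeeper|kafka|[0-9]{1,3}(\.[0-9]{1,3}){3}" {} +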

The procedures to administratively reset GeoEvent Server are in a blog:  Administratively Reset GeoEvent Server

You can follow the procedures for 10.6.x as they will be the same for 10.7.x and 10.8 deployments. These are the steps, by the way, that you have to run on each server in a multi-machine deployment with a ‘site’ configuration when one of the machines drops out of the configuration and does not automatically re-integrate.

Resetting a multi-machine ‘site’ configuration is both tedious and error prone. You basically have to work as if you’re installing all of the s/w for the first time:

  • Install ArcGIS Server, create site, configure certificates, install GeoEvent Server
  • Install ArcGIS Server, join site, configure certificates, install GeoEvent Server
  • Install ArcGIS Server, join site, configure certificates, install GeoEvent Server (lather, rinse, repeat)

When you already have an ArcGIS Server site with, say, three machines, things get messy. I think what you do is use ArcGIS Server Manager to ‘STOP’ two of the machines – you’ll want to stop GeoEvent Gateway and GeoEvent Server on those machines first. The idea is that as far as the ArcGIS Server site is concerned it only has one machine. Complete the admin reset for GeoEvent Server on that machine then start its Gateway, wait a couple minutes, then start its GeoEvent Server.

Then, back in ArcGIS Server Manager, ‘START’ a second machine. The site now thinks it has two machines, only one of which is running GeoEvent Server. Complete the admin reset for GeoEvent Server on the second machine then start its Gateway, wait a couple minutes, then start its GeoEvent Server. As the GeoEvent Gateway and GeoEvent Server come up they’ll discover and coordinate with the running GeoEvent Server, through the AGS site, and work out among themselves how to balance the Kafka topics and brokers.

Finally, in ArcGIS Server Manager, ‘START’ the third machine. The site now thinks it has three machines, only two of which are running GeoEvent Server. Complete the admin reset for GeoEvent Server on the third machine then start its Gateway, wait a couple minutes, then start its GeoEvent Server. As the GeoEvent Gateway and GeoEvent Server come up on this final machine they’ll integrate with the other two.

If you try to bring all three machines on-line at the same time and they were not properly integrated / balanced when they were taken down … they’ll likely not integrate correctly with one another. You have to stage their startup so that the ArcGIS Server site never has more than one machine ‘STARTED’ which does not have a fully initialized and integrated GeoEvent Server. When two or more GeoEvent Servers try to integrate at the same time things tend to fail. It is precisely this sort of fragility, and the fact that it is so administratively difficult to determine whether the machines were properly integrated / balanced in the first place, that leads me to feel a ‘site’ configuration really doesn’t provide the resiliency it was designed to provide. Sure, when everything is working it works beautifully. But when a machine falls out of configuration … getting the ‘site’ back to nominal is difficult (to say the least).

 

Hope this information is helpful –
RJ

Have you ever had to employ a file-based output connector such as Write to CSV File to better understand and troubleshoot what is happening with your real-time data as it's processed in a GeoEvent Service? Or rather, have you ever had to publish a pseudo stream service or feature service just to verify the geometry of your event data is as expected? What happens when you forget to configure an output connector while creating a new GeoEvent Service? What do you do when you realize you need to edit an input in a GeoEvent Service before you're ready to publish it?

 

If you're like me, these are just a few hurdles you've encountered when working with different elements in the service designer of ArcGIS GeoEvent Manager. It can take time, patience, and a certain level of familiarity to be successful. We've recognized how many steps you must go through to simply and quickly determine what's happening with your real-time data as it's processed in a GeoEvent Service. Additionally, having to navigate to multiple pages in GeoEvent Manager to create new or edit existing inputs, outputs, site settings, and more can take time. Given these realities, I'm excited to announce many new usability enhancements as well as a new sampling utility available in the service designer at 10.8. These new capabilities will help you be more efficient and effective when defining your real-time services, making the hurdles mentioned above a thing of the past.

 

So, let's explore some of these new capabilities in more detail!

 

Working with elements and settings in the service designer

 

First, we’ve made some exciting functionality and usability enhancements to the service designer in GeoEvent Manager. Many of you are probably familiar with the New Element list where you could previously only add new filters and processors to a GeoEvent Service. With 10.8, you now have the ability to create new inputs and outputs as well as copy existing inputs/outputs and configure them directly in the service designer.

 

So if you're in a GeoEvent Service, and you forget to create an input or output connector, there's no longer a need to leave the current GeoEvent Service or open another browser tab just to create it. Simply add an input element from the New Element list to the canvas, configure the properties, and save it. It's the same workflow you've become familiar with when adding and configuring both filters and processors.

 

Create and add Input and Output Connectors from the service tray in ArcGIS GeoEvent Server 10.8

 

You may be thinking, “what do I do if I need to edit an input or start or stop that input?” Well, there's now the ability to start, stop, and even edit existing connectors from inside the service designer. Simply right-click an input/output you've added to the canvas or one that exists in the element list to access the action menu. For example, in the illustration below, by right-clicking the Receive Flights Data input, you can choose to edit the input's properties, delete the input, fit the input's bounding box to the text, as well as start or stop the input to control the flow of data. By choosing to edit the input properties, a new window will appear allowing you access to all the familiar properties of the input. Should you choose to stop the input, a new status icon on the element will change from green to gray to reflect the stopped state.

 

Start, stop, or edit input and output connectors while editing a GeoEvent Service in ArcGIS GeoEvent Server 10.8

Quickly edit, delete, start, and stop an input or output by double clicking the element in the canvas.

 

While input and output connectors are certainly important, they're only half the story when it comes to publishing a working GeoEvent Service. There are also several GeoEvent Server site settings that need to be considered and set appropriately, oftentimes before successfully publishing a GeoEvent Service. That leads to our next exciting enhancement, the ability to access and edit several key GeoEvent Server site settings directly in the service designer. These settings include access to your GeoEvent Definitions, GeoFences, Data Stores, and Spatiotemporal Big Data Stores.

 

Have you ever realized halfway through configuring a spatial filter that you forgot to configure your GeoFences? Or, maybe you forgot to create the target GeoEvent Definition for a Field Mapper Processor? Rather than leave your GeoEvent Service, and potentially lose the work you put into configuring it, simply access those settings directly in the service designer. From the Site Settings list, you can double click any of the settings available to open and access those particular settings. When you're finished making any necessary updates, just save your changes and continue configuring your GeoEvent Service.

 

Site settings are available in the service designer at ArcGIS GeoEvent Server 10.8+

 

Sample and view real-time data

 

In addition to the above enhancements to the service designer, I'd next like to introduce you to our newest utility in ArcGIS GeoEvent Server 10.8: the GeoEvent Sampler. As the first of its kind, the GeoEvent Sampler is an embedded utility in the GeoEvent Manager's service designer that allows you to quickly sample, review, and even visualize processed data in real-time as it flows through routes and elements in a GeoEvent Service. No longer is it necessary to spend time and effort configuring different types of ephemeral outputs just to review, visualize, or verify your processed data is as expected.

 

Unlike GeoEvent Logger or GeoEvent Simulator, which are separate Windows applications, GeoEvent Sampler is embedded in GeoEvent Manager. Therefore, you'll be able to use this new utility in both Windows and Linux environments.

 

Let’s explore how the GeoEvent Sampler can be used to help you build and/or troubleshoot a GeoEvent Service.

 

Verifying your schema

Let’s say you want to ensure your Field Mapper Processor is altering the schema of your processed event data correctly. Prior to 10.8, you could write the data emitted from the Field Mapper Processor to an external JSON file. This meant first creating and configuring a separate Write to a JSON File Output Connector. Next, you would need to add that output to your existing GeoEvent Service. Once this was set up, and your outbound data was writing to a JSON file, you would then need to open the JSON file in a text editor to review the data as formatted JSON. While this workflow isn't necessarily difficult to accomplish, it can be time consuming.

 

Using the new GeoEvent Sampler at 10.8, you can simply select the route that connects the Field Mapper Processor to your next element (e.g. an Output Connector as shown in the illustration below) and sample the event records (formatted in JSON) that are being emitted in real-time on that route.

 

Verify the schema of processed GeoEvents with the GeoEvent Sampler at ArcGIS GeoEvent Server 10.8

A sampled event record allows you to confirm the definition and schema are correct.

 

By never having to leave the service designer, you can quickly confirm whether or not the schema is being updated as expected. If you happen to notice that one of your target definition fields is spelled incorrectly, or that the data type of a field is a string instead of an integer, simply edit the target GeoEvent Definition without leaving the service designer. Remember, GeoEvent Definitions can now be edited directly in the service designer at 10.8 as mentioned above. After making your edits, want to double check that the changes are correct? Just publish the GeoEvent Service and sample the event data on that route again!

 

Another useful scenario where GeoEvent Sampler could come in handy is if you wanted to compare event data on two routes in a GeoEvent Service. For instance, if you wanted to compare your event data's original schema to the schema after it's emitted from a Field Mapper Processor. First, select and sample the first route that's sending the data into the Field Mapper Processor. This will sample the data right before its schema is transformed by the processor. Next, select and sample the route that's emitting data from the Field Mapper Processor for comparison. This will sample the data after the schema has been transformed. With just a few clicks, you can see the data changing in front of you, in real-time.

 

You can compare GeoEvents on up to two routes using the GeoEvent Sampler at ArcGIS GeoEvent Server 10.8.

Comparison sampling confirms the source and target fields are mapped correctly. For example, the MPH field (left) was renamed to Speed (right).

 

Verify attribute values

GeoEvent Sampler can also be used to verify the attribute values of your real-time data. Continuing with the flight data example in the illustration above, there is a new field called Speed whose value is in miles per hour (e.g. 500 mph). Let's say you want to change the mph value to kilometers per hour (kph). To do this, you need to use a Field Calculator Processor to multiply the miles-per-hour value in the Speed field by 1.609344 to get the kilometers per hour (i.e. Speed * 1.609344). While this is easy enough, how can we quickly verify the conversion is happening before configuring the rest of the GeoEvent Service?
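
As a quick sanity check on the arithmetic itself, independent of GeoEvent Server, you can run the same expression with plain awk on the command line:

awk 'BEGIN { printf "%.2f\n", 500 * 1.609344 }'     # prints 804.67 for the 500 mph example
awk 'BEGIN { printf "%.2f\n", 502.0 * 1.609344 }'   # prints 807.89, the converted Speed value shown in the comparison further below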

 

Prior to 10.8, one option would be to create and configure a Push Text to an External TCP Socket Output Connector and view the output in GeoEvent Logger. Data from before and after the Field Calculator Processor could be routed to the output and viewed in GeoEvent Logger. While this is relatively simple to set up, it does take time, just like the previous JSON file example above. Something else worth considering with this type of workflow is data velocity. Depending on the rate at which your real-time data is being received and processed, it could be a challenge to review the data in GeoEvent Logger since it could be constantly updating with new information. You could close the TCP connection in order to allow you to review the data in GeoEvent Logger, but that's just another factor to consider. 

 

Now at 10.8, you can use GeoEvent Sampler and select the route before the Field Calculator Processor to observe the speed data in miles per hour (mph) and then select the route after the processor to view the speed data after it's been converted to kilometers per hour (kph). It's important to note that GeoEvent Sampler is not a logging utility; you can only sample a fixed number of GeoEvents on a route (1, 10, or 100 at a time). So, in the case of the flights example, you could quickly sample a single GeoEvent on each route to verify the processor is correctly calculating the speed in kph. There's no need to comb through hundreds of processed GeoEvents to verify the same thing. The graphic below illustrates this.

 

The attributes of processed GeoEvents can be quickly reviewed using the GeoEvent Sampler at ArcGIS GeoEvent Server 10.8

Comparison sampling of two non-adjacent routes confirms the original MPH field (left) and its value of 502.0 were correctly converted to kph in the Speed field (right) with a new value of 807.89.

 

Verify geometry

So far we've covered how GeoEvent Sampler can be used to review and validate your schema and attributes, but what about the geometry of your processed data? Well, included with GeoEvent Sampler is a capability called the Event Viewer. It can be used to display the geometry of processed event data whose GeoEvent Definition geometry field has been tagged with GEOMETRY.

 

Let's say you're tracking airplanes and using a Buffer Creator Processor to create a buffer for each airplane. You want to ensure the airplanes are being buffered by the correct distance before proceeding with the configuration of the GeoEvent Service. In this case, you need to make sure the point geometry is being changed to polygon geometry. Prior to 10.8, you could verify this a few different ways. First, you could write the buffered GeoEvent to a JSON file and review the geometry object for rings. But even then, how do you know that this geometry will be displayed correctly in a feature or stream service? Unless you're an avid fan of deciphering JSON syntax, you'll likely try to send the data to a feature or stream service to see if it takes. After all, if you can see the geometry of the data display in a web map, you can be confident that buffering is happening, and that the geometry is what you expect. Of course, checking the geometry of a GeoEvent by using a temporary feature or stream service first means creating and configuring those services. Before that can be done, there's GeoEvent Definitions, data store connections, and other factors to consider.

 

Using GeoEvent Sampler at 10.8, you can now sample the GeoEvents emitted from the Buffer Creator Processor. Once you have a sampled GeoEvent, you can then simply open the Event Viewer to display the geometry. If you choose to sample a single route, the Event Viewer will display the geometry of the sampled records from that route. If you choose to sample two routes for comparison, the Event Viewer will display two map views that show the geometry of the sampled data from both routes respectively. In the illustration below, you can see the point geometry of the source flight data and the polygon geometry of the buffered data output from the Buffer Creator Processor.

 

The Event Viewer can be used to display the geometry of sampled GeoEvents in ArcGIS GeoEvent Server 10.8.

The Event Viewer in GeoEvent Sampler allows you to confirm the valid point geometry of the source event data and the valid polygon geometry of the buffered event data.

 

These are just a couple ways GeoEvent Sampler can be used to assist with the creation or troubleshooting of your GeoEvent Services. Other example use cases of GeoEvent Sampler not covered here include (but are not limited to) checking datetime values, filtering, regular expressions, constructing point geometries, and more.

 

Hope you enjoy using these new enhancements! If you have ideas for future enhancements, please submit those on the Real-Time GIS place on ArcGIS Ideas.


Back in June of 2019, at the 10.7.1 release of ArcGIS GeoEvent Server, we shared the news of new documentation for the available out-of-the-box input and output connectors. Today, we are happy to announce new and expanded documentation for each of the available processors in GeoEvent Server 10.8. Like the connector documentation, each processor now has expanded information to help you properly configure the processor. Access the new processor documentation by visiting What are Processors? where you can access links to each individual processor you may be interested in.

 

Processors at ArcGIS GeoEvent Server 10.8

 

Alternatively, you can access the new processor help directly in ArcGIS GeoEvent Manager when editing or configuring a new processor. Simply click the Help button in the processor property dialog to expand the embedded help. It's important to note that even though this new documentation is available with the 10.8 release, the information is still applicable to previous versions as well.

 

The new processor documentation follows the same format and style as the input and output connector documentation that you may already be familiar with. First, you’ll see a general summary as well as some example use cases for each processor. Beneath that you'll find detailed usage notes. These notes are intended to provide some extra contextual information for successful configuration of each processor.

 

New processor summary and examples with the 10.8 release of ArcGIS GeoEvent Server.

 

Following the usage notes is a parameters table that includes detailed information about the parameters for each processor, what each parameter does, options to configure it, and more. The parameter table lists all of the available parameters, meaning you can learn about the parameters that are shown by default in addition to all of the conditional parameters that are initially hidden due to their dependencies on other parameters.

 

New processor parameter help at ArcGIS GeoEvent Server 10.8

 

We recognize that many of the available processors have their own nuances and quirks, so we’ve included a final section that details various considerations and limitations. This information is intended to provide some additional context about how the processor fundamentally works, and therefore, will hopefully help guide how you approach incorporating it into your GeoEvent Services.

 

New processor considerations and limitations documentation at the 10.8 release of ArcGIS GeoEvent Server

 

As with our input and output connectors, you can find step-by-step tutorials on how to configure many of the available processors on the ArcGIS GeoEvent Server Gallery.

Overview

Polygons which model areas of interest – counties, national parks, or property boundaries for example – are generally static. A new area of interest might be established requiring a geofence to be added, or an existing area’s geographic extent might occasionally change requiring a geofence to be updated, but in general the geofences don't change very often. This scenario fits well with GeoEvent Server’s ability to synchronize its geofences with a feature record set maintained as part of a feature service. The areas of interest can be maintained as feature records and occasionally imported to establish or update the relatively static geofences. A synchronization rule can periodically poll the feature service to obtain updates.

This blog explores a different scenario. Suppose you need geofences to be created dynamically, managed for only a short period of time, and then frequently and automatically destroyed when no longer needed. Constantly polling a feature service to check and see if there have been any changes is impractical.

In a dynamic scenario, we need to push changes to GeoEvent Server immediately as the changes are received. A video attached to this blog will show how a GeoEvent Service can be used to receive attributes describing an area of interest, compute an effective date/time range during which the area of interest is considered relevant, and generate a polygon to model the area of interest. A stream service will be used to broadcast dynamically generated polygons and computed date/time values as feature records, allowing them to be registered with GeoEvent Server as new or updated geofences via a synchronization rule.

Objectives

  • Import and review a pair of GeoEvent Services configured to process a tracked asset's current location and dynamic geofences constructed for a given center point of geographic interest.
  • Review how stream services are published and how the configured outbound connectors are updated to use the published stream services to broadcast processed event records as feature records.
  • Use the GeoEvent Simulator to send simulated vehicle location observations to GeoEvent Server and display those locations, live, on a web map.
  • Configure a synchronization rule to subscribe to a stream service and receive polygon feature records as they are broadcast (rather than relying on a feature service which must be frequently polled for updates).
  • Demonstrate how information can be sent to GeoEvent Server, on demand, via HTTP/POST to drive the generation of dynamic areas of interest (e.g. geofences); a sketch of such a request follows this list.
  • Demonstrate the display and update of dynamic geofences both on a web map and in GeoEvent Server.
  • Extend a GeoEvent Service with an analytic which detects when a tracked asset's location intersects a dynamic geofence and produce an alert message which can be displayed using the GeoEvent Logger.
  • Discuss the temporal relevance of geofences, how analytics you configure will ignore geofences which are not temporally relevant, and how the GeoEvent Server AOI Manager automatically purges geofences which are no longer being used in order to clean up its registry.
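
For anyone who wants to experiment before (or after) watching the video, the HTTP/POST in the objective above amounts to sending a small block of JSON to the REST endpoint exposed by the HTTP/JSON input in the AOI_Centerpoint GeoEvent Service. The command below is only a sketch: the port and /geoevent/rest/receiver/… URL pattern are what I would expect for this kind of input, and the endpoint name and payload field names are hypothetical, so check the input's properties in the imported configuration for the actual endpoint and expected attributes.

# hypothetical example – substitute the real endpoint name and attribute names from the imported configuration
curl -k -X POST "https://localhost:6143/geoevent/rest/receiver/aoi-centerpoint-in" \
  -H "Content-Type: application/json" \
  -d '{ "name": "AOI_001", "x": -117.19, "y": 34.05, "bufferDistanceMeters": 5000 }'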

Demo Resources

I have included demonstration resources with this blog post so you can recreate this demonstration in your own environment. An attached ZIP archive includes an XML snapshot of a GeoEvent Server configuration which includes a couple of GeoEvent Services as well as inbound and outbound connectors. The configuration file was taken from a 10.7.1 deployment, but should work in the upcoming 10.8 and 10.8.1 releases.

  • A pre-configured GeoEvent Service Trackpoints connects a TCP/TEXT input with a stream service output to broadcast point feature records and report the location of a simulated tracked asset.
  • A second pre-configured GeoEvent Service, AOI_Centerpoint, connects an HTTP/JSON input with a stream service output to buffer received point locations and broadcast each buffer as a polygon feature record suitable for use as a geofence.
  • A video will walk you through the import of the provided configuration, necessary stream service publication, and the configuration of a geofence synchronization rule to subscribe to receive feature records broadcast from the second stream service.

I hope you find the video tutorial and information useful –
RJ

This blog is one in a series of blogs discussing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration. Click any link in the quick list below to jump to another blog in the series.

In this blog I will discuss a technique I have used to perform a targeted extraction of debug messages being logged as GeoEvent Server queries a feature record set from an available polygon feature service to synchronize its geofences. The technique expands the use of command-line utilities first introduced in a previous blog. These utilities enable us to perform pattern matching on specific sections of a logged message then extract and apply string substitution and formatting to the logged messages, live, as the messages are being written, to make them easier to read.

A lot of the analysis I am about to demonstrate could be done using a static log file and a text editor, but I have come to really appreciate the power inherent in the command-line utilities I am covering in this blog. Our goal will be to find and review HTTP requests GeoEvent Server makes on a feature service resource being used as the authoritative source of area of interest polygons as well as the feature service's responses to the requests.

Scenario

A customer has published a feature service with several dozen polygons representing different areas of interest and configured a Geofence Synchronization Rule to enable the polygons to be periodically imported and synchronized to keep a set of geofences up-to-date. We know that GeoEvent Server polls the feature service to obtain a feature record set and registers the geometries with its AOI Manager – in this context AOI is short for "Area of Interest".

For this exercise we are interested in the interface between GeoEvent Server and an ArcGIS Server feature service, not the internal operations of the AOI Manager. We want to capture information on feature record requests and the responses to these requests. GeoEvent Manager does not provide an indication of when geofence synchronization occurs, only that it occurs once every 10 minutes in the customer's configuration, so the customer would like to know if enabling debug logging for a specific component logger will grant them additional visibility into the periodic geofence refresh as it takes place. Knowing when a synchronization is about to occur will allow more deterministic testing of the configured real-time analytics without resorting to aggressive synchronizations every few seconds.

Geofence Synchronization

To begin testing the scenario described above I published a feature service and added a few dozen polygon feature records to the service's feature data set. I can query the feature records via the feature service's REST endpoint:

Notice that each feature record has two attribute fields, gf_category and gf_name, which can be used to uniquely name and organize geofences when they are imported into GeoEvent Server.
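
If you want to reproduce the query outside of GeoEvent Server, something like the following (run from a bash-capable shell against my sample service URL; yours will differ, and you may need to append a token parameter) returns the same feature record set you will see GeoEvent Server request later in this blog:

curl -k "https://localhost:6443/arcgis/rest/services/MyGeofences/FeatureServer/0/query?where=1%3D1&outFields=gf_name%2Cgf_category&outSR=4326&f=json"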

Next, in GeoEvent Manager, I configure a synchronization rule that will query the feature service every 10 minutes. The feature records illustrated above will be loaded into the GeoEvent Server’s AOI Manager which handles the addition and update of geofences.

At this point I know that GeoEvent Server is periodically querying the feature service, but the GeoEvent Manager web application does not provide any indication of when the synchronizations will occur. I know the synchronization cycle starts when I click the Synchronize button and occurs every 10 minutes after that, but I do not know which component loggers would be most appropriate to watch for DEBUG messages. My only real choice, then, is to request debug logging for all component loggers by setting the level DEBUG on the ROOT component (knowing that this will cause the karaf.log file to grow very large very quickly).

In a previous blog, Application logging tips and tricks, I introduced tail and grep, a couple of command-line tools that can be used to help identify and isolate logged messages based on keywords or phrases. Using this technique to identify logged messages which include specific keywords allows me to focus on messages of particular interest.

In this case, however, using grep to search will not work as well because a pattern match may occur anywhere in a logged message's text. Using grep to look for something like MyGeofences[/]FeatureServer[/]0 is likely to match more than we are interested in, specifically because the feature service’s URL appears in both the thread identifier and the actual message portion of numerous logged messages. So we need a more discriminating technique. We need a way to apply a regular expression pattern match to a specific portion of a logged message and associate a successful pattern match with an action we can run on the text of messages as they are written to the system’s log file.

Power tools for text processing and data extraction

Consider the following command which leverages awk rather than grep and a new stream editing utility sed:

rsunderman@localhost  //localhost/C$/Program Files/ArcGIS/Server/GeoEvent/data/log
tail -0f karaf.log | awk -F\| '$6 ~ /rest.*services.*MyGeofences/ { print $1 $4 $6; fflush() }' |sed 's/[&]token[=][0-9a-zA-Z_-]*/.../'

The awk command is typically used for data extraction and reporting. It is a text processing language developed by Aho, Weinberger, and Kernighan (yes, AWK is an acronym). The sed command is a stream editor used to filter and transform text. When interpreting the command line illustrated above remember that logged messages have six parts and each part is separated by a pipe ( | ) character.

Logged messages have six parts

As new messages are added to the karaf.log each message’s text is processed by the awk script which specifies that a pipe character should be used as the field delimiter and that the sixth field, the actual message, should match the specified regular expression pattern. If the pattern is matched then fields 1, 4, and 6 from each logged message are printed as output. The fflush( ) is important to force the command's buffered content to be flushed as each line of text is processed so that the sed command can identify a string of characters matching a query parameter &token= and replace the entire string with a few literal dots (simplifying the overall string).

There is a lot of power packed into this command. It enables us to apply a dynamic if / then evaluation to each logged message as the message is committed to the system log file, discard any message when a specific field does not match a specific pattern, and reformat messages on-the-fly to simplify their display. Wow.

You can read all about the power of sed and awk online. O’Reilly Media has an entire book dedicated to using sed and awk as power tools for text processing and data extraction.

Determining which component logger(s) to watch

The following illustration shows the output produced when the command above is used to filter the large volume of messages logged by all components when debug logging is requested at the ROOT level. For this example, assume that the command was run just before the Synchronize button is clicked to force a geofence synchronization rule to perform a set of queries against the feature service.

One pattern that stands out immediately is that there appear to be four requests made. Different component loggers represent these requests in their own way, but we see key phrases repeated such as "Executing following request", "Main Client Exec ...Executing request" and the request's outgoing headers and actual request going out over the HTTP wire:



We certainly don't need to see each request represented four different ways, and a quick search of the karaf.log for the keyword "MainClientExec" shows the raw (unprocessed) log messages are associated with a particular class and bundle. These are clues to loggers we can interrogate further:


If we are careful to leave DEBUG logging turned on at the ROOT level for only as long as it takes to navigate to the GeoFence Synchronization Rules and click Synchronize, then return to the Logs and change the settings back to WARN for the ROOT level, we can use the cached logged messages to generate a list of possible component loggers we might be interested in looking at more closely.

Two loggers that seem specifically appropriate are org.apache.http.impl.execchain.MainClientExec (because "MainClientExec" was identified as a class name of interest) and com.esri.ges.httpclient.Http (because the bundle identifier "com.esri.ges.framework.httpclient" was part of each logged message).

Requesting DEBUG logging on the HTTP Client logger will still produce a large number of logged messages. By targeting a single logger, however, we reduce the number of messages being logged overall; we are not interested in examining debug messages from the header or wire components for example. Also, we can tailor our sed and awk command to help further identify messages of particular interest. If we run our text extraction and format command on an active tail of the karaf.log – and take care to start and end the tail around the time that we navigate to GeoFence Synchronization Rules and click Synchronize – the number of logged messages is surprisingly manageable.

I have included the 24 lines extracted and formatted by the sample command below which is looking specifically for the key phrases "Executing request" and "Got response":

$ tail -0f karaf.log |awk -F\| '$6 ~ /(Executing request|Got response)/ { print $1 $6; fflush() }' |sed 's/[&]token[=][0-9a-zA-Z_-]*/.../'

2019-11-07T18:06:40,294  Executing request POST /arcgis/admin/machines/localhost/status HTTP/1.1
2019-11-07T18:06:40,342  Got response from HTTP request: <html lang="en">
2019-11-07T18:06:43,629  Executing request POST /arcgis/admin/system/configstore HTTP/1.1
2019-11-07T18:06:43,646  Got response from HTTP request: <html lang="en">
2019-11-07T18:06:46,622  Executing request GET /arcgis/help/en/geoevent HTTP/1.1
2019-11-07T18:06:46,626  Executing request GET /arcgis/help/en/geoevent/ HTTP/1.1
2019-11-07T18:06:47,250  Executing request GET /arcgis/rest/info?f=json HTTP/1.1
2019-11-07T18:06:47,253  Got response from HTTP request: {"currentVersion":10.8,"fullVersion":"10.8.0","soapUrl":"https://localhost:6443/arcgis/services","secureSoapUrl":null,"authInfo":{"isTokenBasedSecurity":true,"tokenServicesUrl":"https://localhost:6443/arcgis/tokens/","shortLivedTokenValidity":900}}.
2019-11-07T18:06:47,710  Executing request GET /arcgis/rest/services/?f=json..... HTTP/1.1
2019-11-07T18:06:47,720  Got response from HTTP request: {"currentVersion":10.8,"folders":["System","Utilities"],"services":[{"name":"AffectedTransLines-Buffers","type":"StreamServer"},{"name":"AffectedTransLines-Intersections","type":"StreamServer"},{"name":"CriticalInfrastructure","type":"FeatureServer"},{"name":"CriticalInfrastructure","type":"MapServer"},{"name":"Geofence_Stream","type":"StreamServer"},{"name":"MyGeofences","type":"FeatureServer"},{"name":"MyGeofences","type":"MapServer"},{"name":"SampleWorldCities","type":"MapServer"},{"name":"TropicalStormPolygons","type":"StreamServer"}]}.
2019-11-07T18:06:48,060  Executing request GET /arcgis/rest/services/?f=json..... HTTP/1.1
2019-11-07T18:06:48,068  Got response from HTTP request: {"currentVersion":10.8,"folders":["System","Utilities"],"services":[{"name":"AffectedTransLines-Buffers","type":"StreamServer"},{"name":"AffectedTransLines-Intersections","type":"StreamServer"},{"name":"CriticalInfrastructure","type":"FeatureServer"},{"name":"CriticalInfrastructure","type":"MapServer"},{"name":"Geofence_Stream","type":"StreamServer"},{"name":"MyGeofences","type":"FeatureServer"},{"name":"MyGeofences","type":"MapServer"},{"name":"SampleWorldCities","type":"MapServer"},{"name":"TropicalStormPolygons","type":"StreamServer"}]}.
2019-11-07T18:06:56,608  Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json.....&where=1%3D1&outFields=gf_name%2Cgf_category&outSR=4326 HTTP/1.1
2019-11-07T18:06:56,635  Got response from HTTP request: {"objectIdFieldName":"objectid","globalIdFieldName":"","geometryType":"esriGeometryPolygon","spatialReference":{"wkid":4326,"latestWkid":4326},"fields":[{"name":"gf_name","alias":"gf_name","type":"esriFieldTypeString","length":50},{"name":"gf_category","alias":"gf_category","type":"esriFieldTypeString","length":50}],"features":[{"attributes":{"gf_name":"Alpha_003","gf_category":"Alpha"},"geometry":{"rings":[[[-120.252028,30.944518],[-119.784204,29.644623],[-120.566595,29.483390],[-121.447948,30.461197],[-121.275115,30.841066],[-120.252028,30.944518]]]}},{"attributes":{"gf_name":"Alpha_005","gf_category":"Alpha"},"geometry":{"rings":[[[-120.943999,33.487086],[-121.032831,32.575755],[-121.690575,31.901015],[-122.421752,32.119503],[-122.245335,33.447119],[-120.943999,33.487086]]]}},{"attributes":{"gf_name":"Alpha_006","gf_category":"Alpha"},"geometry":{"rings":[[[-122.691280,29.516679],[-123.226533,29.802332],[-122.749277,31.805495],[-122.429246,32.118518],[-122.421752,32.119503],[-121.690575,31.901015],[-121.275115,30.841066],[-121.447948,30.461197],[-122.691280,29.516679]]]}},{"attributes":{"gf_name":"Alpha_008","gf_category":"Alpha"},"geometry":{"rings":[[[-120.851764,33.649721],[-120.165423,33.593953],[-119.397317,32.932664],[-120.074747,32.236847],[-121.032831,32.575755],[-120.943999,33.487086],[-120.851764,33.649721]]]}},{"attributes":{"gf_name":"Alpha_010","gf_category":"Alpha"},"geometry":{"rings":[[[-116.132660,31.584451],[-116.047511,31.421341],[-115.681330,29.828279],[-116.996163,30.707680],[-116.957586,31.120668],[-116.132660,31.584451]]]}},{"attributes":{"gf_name":"Alpha_011","gf_category":"Alpha"},"geometry":{"rings":[[[-117.397404,29.178025],[-117.888219,29.222649],[-118.816738,30.272808],[-118.787736,30.371437],[-118.275610,30.321049],[-117.406325,29.315306],[-117.397404,29.178025]]]}},{"attributes":{"gf_name":"Alpha_013","gf_category":"Alpha"},"geometry":{"rings":[[[-118.461679,30.835274],[-118.017691,30.803414],[-118.017804,30.732340],[-118.275610,30.321049],[-118.787736,30.371437],[-118.830849,30.513622],[-118.712642,30.726205],[-118.461679,30.835274]]]}},{"attributes":{"gf_name":"Alpha_014","gf_category":"Alpha"},"geometry":{"rings":[[[-118.017804,30.732340],[-117.331629,29.805684],[-117.406325,29.315306],[-118.275610,30.321049],[-118.017804,30.732340]]]}},{"attributes":{"gf_name":"Alpha_018","gf_category":"Alpha"},"geometry":{"rings":[[[-118.291482,32.187915],[-118.236504,32.105900],[-118.461679,30.835274],[-118.712642,30.726205],[-118.910999,31.691403],[-118.291482,32.187915]]]}},{"attributes":{"gf_name":"Alpha_021","gf_category":"Alpha"},"geometry":{"rings":[[[-118.236504,32.105900],[-117.610715,31.368293],[-118.017691,30.803414],[-118.461679,30.835274],[-118.236504,32.105900]]]}},{"attributes":{"gf_name":"Alpha_022","gf_category":"Alpha"},"geometry":{"rings":[[[-118.415540,32.957686],[-118.306950,32.857367],[-118.241221,32.789177],[-118.291482,32.187915],[-118.910999,31.691403],[-119.764803,31.398264],[-120.074747,32.236847],[-119.397317,32.932664],[-118.415540,32.957686]]]}},{"attributes":{"gf_name":"Alpha_023","gf_category":"Alpha"},"geometry":{"rings":[[[-116.802682,34.081536],[-115.652745,33.453531],[-115.641477,32.336068],[-115.871176,32.188122],[-117.476538,32.953794],[-116.802682,34.081536]]]}},{"attributes":{"gf_name":"Alpha_024","gf_category":"Alpha"},"geometry":{"rings":[[[-122.562145,38.611239],[-122.186957,38.336612],[-122.195569,38.107073],[-123.151426,37.051286],[-122.951406,38.539622],[-122.562145,3
8.611239]]]}},{"attributes":{"gf_name":"Alpha_026","gf_category":"Alpha"},"geometry":{"rings":[[[-121.521585,38.401753],[-121.317484,37.884274],[-121.542840,36.899873],[-121.646568,36.833617],[-122.195569,38.107073],[-122.186957,38.336612],[-121.521585,38.401753]]]}},{"attributes":{"gf_name":"Alpha_028","gf_category":"Alpha"},"geometry":{"rings":[[[-122.340072,35.533098],[-121.150046,34.948623],[-120.851764,33.649721],[-120.943999,33.487086],[-122.245335,33.447119],[-122.980897,34.359173],[-122.340072,35.533098]]]}},{"attributes":{"gf_name":"Alpha_030","gf_category":"Alpha"},"geometry":{"rings":[[[-122.195569,38.107073],[-121.646568,36.833617],[-122.497375,35.977360],[-123.316198,36.487165],[-123.151426,37.051286],[-122.195569,38.107073]]]}},{"attributes":{"gf_name":"Alpha_033","gf_category":"Alpha"},"geometry":{"rings":[[[-120.462003,35.205002],[-119.434621,34.271019],[-120.165423,33.593953],[-120.851764,33.649721],[-121.150046,34.948623],[-120.462003,35.205002]]]}},{"attributes":{"gf_name":"Alpha_034","gf_category":"Alpha"},"geometry":{"rings":[[[-120.314583,38.136693],[-119.722418,38.072922],[-120.702231,36.942569],[-121.542840,36.899873],[-121.317484,37.884274],[-120.314583,38.136693]]]}},{"attributes":{"gf_name":"Alpha_036","gf_category":"Alpha"},"geometry":{"rings":[[[-121.315802,38.800687],[-120.314583,38.136693],[-121.317484,37.884274],[-121.521585,38.401753],[-121.315802,38.800687]]]}},{"attributes":{"gf_name":"Alpha_038","gf_category":"Alpha"},"geometry":{"rings":[[[-117.414676,35.037767],[-116.873703,34.464625],[-116.878218,34.445776],[-117.452834,34.349222],[-117.874553,34.354357],[-117.900979,34.521991],[-117.414676,35.037767]]]}},{"attributes":{"gf_name":"Alpha_039","gf_category":"Alpha"},"geometry":{"rings":[[[-118.276782,35.277533],[-118.210805,35.269057],[-118.075170,35.188227],[-117.900979,34.521991],[-117.874553,34.354357],[-118.597699,33.869008],[-118.895173,34.292433],[-118.827343,34.677451],[-118.276782,35.277533]]]}},{"attributes":{"gf_name":"Alpha_041","gf_category":"Alpha"},"geometry":{"rings":[[[-119.837637,35.784433],[-118.827343,34.677451],[-118.895173,34.292433],[-119.434621,34.271019],[-120.462003,35.205002],[-119.837637,35.784433]]]}},{"attributes":{"gf_name":"Alpha_042","gf_category":"Alpha"},"geometry":{"rings":[[[-117.922812,36.598206],[-117.057992,36.394203],[-118.210805,35.269057],[-118.276782,35.277533],[-118.493063,35.699818],[-118.285579,36.557670],[-117.922812,36.598206]]]}},{"attributes":{"gf_name":"Alpha_043","gf_category":"Alpha"},"geometry":{"rings":[[[-118.075170,35.188227],[-117.434176,35.060678],[-117.414676,35.037767],[-117.900979,34.521991],[-118.075170,35.188227]]]}},{"attributes":{"gf_name":"Alpha_045","gf_category":"Alpha"},"geometry":{"rings":[[[-119.400803,37.662407],[-118.363534,37.428185],[-118.320407,36.580012],[-119.191764,36.798038],[-119.400803,37.662407]]]}},{"attributes":{"gf_name":"Alpha_046","gf_category":"Alpha"},"geometry":{"rings":[[[-119.629592,38.114023],[-119.400803,37.662407],[-119.191764,36.798038],[-119.814395,36.016233],[-120.702231,36.942569],[-119.722418,38.072922],[-119.629592,38.114023]]]}},{"attributes":{"gf_name":"Alpha_049","gf_category":"Alpha"},"geometry":{"rings":[[[-118.645011,38.788713],[-116.375726,38.414554],[-117.879285,37.762548],[-117.981593,37.812142],[-118.645011,38.788713]]]}},{"attributes":{"gf_name":"Alpha_050","gf_category":"Alpha"},"geometry":{"rings":[[[-117.981593,37.812142],[-117.879285,37.762548],[-117.850493,37.692040],[-117.922812,36.598206],[-118.285579,36.557670],[-118.320407,36.580012]
,[-118.363534,37.428185],[-117.981593,37.812142]]]}},{"attributes":{"gf_name":"Bravo_003","gf_category":"Bravo"},"geometry":{"rings":[[[-114.966407,29.010553],[-115.556070,29.010613],[-115.565899,29.576853],[-114.877388,30.652564],[-113.973903,31.144828],[-113.874728,31.024001],[-114.966407,29.010553]]]}},{"attributes":{"gf_name":"Bravo_004","gf_category":"Bravo"},"geometry":{"rings":[[[-116.047511,31.421341],[-114.877388,30.652564],[-115.565899,29.576853],[-115.681330,29.828279],[-116.047511,31.421341]]]}},{"attributes":{"gf_name":"Bravo_005","gf_category":"Bravo"},"geometry":{"rings":[[[-113.665880,33.089569],[-113.578775,33.084994],[-113.595481,32.109917],[-113.642988,31.702545],[-113.997771,31.250836],[-115.038456,32.239077],[-114.828726,32.423734],[-113.665880,33.089569]]]}},{"attributes":{"gf_name":"Bravo_007","gf_category":"Bravo"},"geometry":{"rings":[[[-116.842159,36.261753],[-116.346604,35.977788],[-116.271652,35.913982],[-116.010855,34.931918],[-116.723780,34.610674],[-116.842159,36.261753]]]}},{"attributes":{"gf_name":"Bravo_009","gf_category":"Bravo"},"geometry":{"rings":[[[-115.760073,34.464355],[-115.288817,34.426023],[-115.232718,33.750965],[-115.651894,33.454681],[-115.760073,34.464355]]]}},{"attributes":{"gf_name":"Bravo_012","gf_category":"Bravo"},"geometry":{"rings":[[[-112.309934,34.738007],[-111.899280,34.156477],[-111.896265,34.000485],[-112.259220,33.812066],[-112.559940,33.882035],[-112.785387,34.665098],[-112.309934,34.738007]]]}},{"attributes":{"gf_name":"Bravo_013","gf_category":"Bravo"},"geometry":{"rings":[[[-113.866163,31.017642],[-112.821893,30.803882],[-112.208051,30.141331],[-112.201103,29.892425],[-113.605198,29.548920],[-113.866163,31.017642]]]}},{"attributes":{"gf_name":"Bravo_014","gf_category":"Bravo"},"geometry":{"rings":[[[-111.159012,32.522856],[-110.824649,32.494792],[-110.141501,31.918549],[-110.403977,31.052692],[-111.259674,30.945511],[-111.315692,30.965117],[-111.474622,31.590875],[-111.467117,31.957874],[-111.159012,32.522856]]]}},{"attributes":{"gf_name":"Bravo_016","gf_category":"Bravo"},"geometry":{"rings":[[[-113.946703,29.010449],[-114.966407,29.010553],[-113.874728,31.024001],[-113.866163,31.017642],[-113.605198,29.548920],[-113.946703,29.010449]]]}},{"attributes":{"gf_name":"Bravo_018","gf_category":"Bravo"},"geometry":{"rings":[[[-112.935806,32.530579],[-112.905623,32.022493],[-113.642988,31.702545],[-113.595481,32.109917],[-112.935806,32.530579]]]}},{"attributes":{"gf_name":"Bravo_019","gf_category":"Bravo"},"geometry":{"rings":[[[-111.317819,34.348952],[-110.978438,33.833734],[-111.375070,33.198857],[-111.896265,34.000485],[-111.899280,34.156477],[-111.317819,34.348952]]]}},{"attributes":{"gf_name":"Bravo_020","gf_category":"Bravo"},"geometry":{"rings":[[[-112.909116,33.056288],[-112.454372,32.971595],[-112.423551,31.937676],[-112.551553,31.875448],[-112.905623,32.022493],[-112.935806,32.530579],[-112.909116,33.056288]]]}},{"attributes":{"gf_name":"Bravo_021","gf_category":"Bravo"},"geometry":{"rings":[[[-112.150814,33.001880],[-111.414706,32.914951],[-111.159012,32.522856],[-111.467117,31.957874],[-112.423551,31.937676],[-112.454372,32.971595],[-112.150814,33.001880]]]}},{"attributes":{"gf_name":"Bravo_022","gf_category":"Bravo"},"geometry":{"rings":[[[-114.546436,34.566336],[-114.000237,34.215944],[-114.203561,33.611652],[-114.485905,33.625307],[-114.892467,33.864714],[-114.546436,34.566336]]]}},{"attributes":{"gf_name":"Bravo_028","gf_category":"Bravo"},"geometry":{"rings":[[[-116.878218,34.445776],[-116.802682,34.081536],[-117.47
6538,32.953794],[-118.241221,32.789177],[-118.306950,32.857367],[-117.452834,34.349222],[-116.878218,34.445776]]]}},{"attributes":{"gf_name":"Bravo_029","gf_category":"Bravo"},"geometry":{"rings":[[[-116.010855,34.931918],[-115.974970,34.925304],[-115.760073,34.464355],[-115.651894,33.454681],[-115.652745,33.453531],[-116.802682,34.081536],[-116.878218,34.445776],[-116.873703,34.464625],[-116.723780,34.610674],[-116.010855,34.931918]]]}},{"attributes":{"gf_name":"Bravo_035","gf_category":"Bravo"},"geometry":{"rings":[[[-113.322646,36.492659],[-112.866662,35.823773],[-113.157328,34.983246],[-113.614524,35.263022],[-113.911112,36.188262],[-113.322646,36.492659]]]}},{"attributes":{"gf_name":"Bravo_037","gf_category":"Bravo"},"geometry":{"rings":[[[-115.337130,36.999392],[-114.286610,36.369889],[-114.190413,36.202457],[-114.581011,35.827259],[-114.984871,35.568119],[-115.038457,35.629228],[-115.332937,36.457187],[-115.360534,36.658888],[-115.337130,36.999392]]]}},{"attributes":{"gf_name":"Bravo_039","gf_category":"Bravo"},"geometry":{"rings":[[[-114.581011,35.827259],[-113.881352,35.215058],[-114.599067,34.811896],[-114.604860,34.809841],[-114.934130,35.107656],[-115.025696,35.248019],[-114.984871,35.568119],[-114.581011,35.827259]]]}},{"attributes":{"gf_name":"Bravo_040","gf_category":"Bravo"},"geometry":{"rings":[[[-115.635838,37.895516],[-114.244290,37.688867],[-114.286610,36.369889],[-115.337130,36.999392],[-115.635838,37.895516]]]}},{"attributes":{"gf_name":"Bravo_041","gf_category":"Bravo"},"geometry":{"rings":[[[-116.236850,38.430295],[-115.893363,38.243036],[-115.830905,38.042138],[-116.762045,36.853773],[-117.850493,37.692040],[-117.879285,37.762548],[-116.375726,38.414554],[-116.236850,38.430295]]]}},{"attributes":{"gf_name":"Bravo_043","gf_category":"Bravo"},"geometry":{"rings":[[[-113.032255,37.207939],[-112.165492,36.820693],[-111.999148,36.398822],[-112.273275,35.883587],[-112.866662,35.823773],[-113.322646,36.492659],[-113.032255,37.207939]]]}},{"attributes":{"gf_name":"Bravo_044","gf_category":"Bravo"},"geometry":{"rings":[[[-113.614524,35.263022],[-113.157328,34.983246],[-113.059816,34.730975],[-113.114472,34.672716],[-114.599067,34.811896],[-113.881352,35.215058],[-113.614524,35.263022]]]}},{"attributes":{"gf_name":"Bravo_045","gf_category":"Bravo"},"geometry":{"rings":[[[-113.372466,34.311458],[-113.327958,33.182361],[-113.578775,33.084994],[-113.665880,33.089569],[-114.203561,33.611652],[-114.000237,34.215944],[-113.372466,34.311458]]]}},{"attributes":{"gf_name":"Bravo_046","gf_category":"Bravo"},"geometry":{"rings":[[[-112.273275,35.883587],[-111.846116,35.154557],[-112.309934,34.738007],[-112.785387,34.665098],[-113.059816,34.730975],[-113.157328,34.983246],[-112.866662,35.823773],[-112.273275,35.883587]]]}},{"attributes":{"gf_name":"Bravo_047","gf_category":"Bravo"},"geometry":{"rings":[[[-111.265839,37.278314],[-111.175263,37.246313],[-110.752701,36.701053],[-110.999562,36.098601],[-111.999148,36.398822],[-112.165492,36.820693],[-111.896582,37.016041],[-111.265839,37.278314]]]}},{"attributes":{"gf_name":"Charlie_004","gf_category":"Charlie"},"geometry":{"rings":[[[-111.990476,39.623930],[-111.672383,39.412941],[-111.919637,38.212856],[-111.999787,38.197742],[-112.565550,38.183798],[-112.769221,38.858292],[-112.622809,39.071739],[-111.990476,39.623930]]]}},{"attributes":{"gf_name":"Charlie_005","gf_category":"Charlie"},"geometry":{"rings":[[[-112.811170,38.874945],[-112.769221,38.858292],[-112.565550,38.183798],[-113.120292,37.481661],[-113.857748,37.760542],[-113.852376,
37.928864],[-113.462539,38.716036],[-112.811170,38.874945]]]}},{"attributes":{"gf_name":"Charlie_007","gf_category":"Charlie"},"geometry":{"rings":[[[-111.919637,38.212856],[-111.687861,38.152417],[-111.265839,37.278314],[-111.896582,37.016041],[-111.999787,38.197742],[-111.919637,38.212856]]]}},{"attributes":{"gf_name":"Charlie_008","gf_category":"Charlie"},"geometry":{"rings":[[[-111.315692,30.965117],[-111.259674,30.945511],[-110.796944,30.198316],[-110.956477,29.722505],[-112.072384,29.643578],[-112.201103,29.892425],[-112.208051,30.141331],[-111.315692,30.965117]]]}},{"attributes":{"gf_name":"Charlie_009","gf_category":"Charlie"},"geometry":{"rings":[[[-110.141501,31.918549],[-109.185745,31.877346],[-108.663232,31.118540],[-108.398365,30.600397],[-108.351497,29.968889],[-109.843830,30.505716],[-110.403977,31.052692],[-110.141501,31.918549]]]}},{"attributes":{"gf_name":"Charlie_012","gf_category":"Charlie"},"geometry":{"rings":[[[-110.403977,31.052692],[-109.843830,30.505716],[-110.466160,30.249463],[-110.796944,30.198316],[-111.259674,30.945511],[-110.403977,31.052692]]]}},{"attributes":{"gf_name":"Charlie_016","gf_category":"Charlie"},"geometry":{"rings":[[[-111.251511,38.338078],[-110.907121,38.232546],[-110.744795,37.758057],[-111.175263,37.246313],[-111.265839,37.278314],[-111.687861,38.152417],[-111.251511,38.338078]]]}},{"attributes":{"gf_name":"Charlie_017","gf_category":"Charlie"},"geometry":{"rings":[[[-110.848814,35.410481],[-110.395022,35.218355],[-110.629724,33.983079],[-110.978438,33.833734],[-111.317819,34.348952],[-111.227231,35.202718],[-110.848814,35.410481]]]}},{"attributes":{"gf_name":"Charlie_018","gf_category":"Charlie"},"geometry":{"rings":[[[-108.957760,35.823204],[-108.582120,34.448969],[-108.695642,34.413530],[-109.616894,35.160616],[-109.861048,35.391741],[-109.580750,35.671064],[-108.957760,35.823204]]]}},{"attributes":{"gf_name":"Charlie_019","gf_category":"Charlie"},"geometry":{"rings":[[[-109.616894,35.160616],[-108.695642,34.413530],[-109.209776,34.009453],[-109.616894,35.160616]]]}},{"attributes":{"gf_name":"Charlie_020","gf_category":"Charlie"},"geometry":{"rings":[[[-109.580423,33.485896],[-109.019515,32.893275],[-109.185745,31.877346],[-110.141501,31.918549],[-110.824649,32.494792],[-109.600599,33.481973],[-109.580423,33.485896]]]}},{"attributes":{"gf_name":"Charlie_026","gf_category":"Charlie"},"geometry":{"rings":[[[-107.448109,32.465305],[-107.543777,31.824440],[-108.398365,30.600397],[-108.663232,31.118540],[-108.172089,32.338211],[-107.448109,32.465305]]]}},{"attributes":{"gf_name":"Charlie_027","gf_category":"Charlie"},"geometry":{"rings":[[[-107.000647,33.131979],[-106.927124,33.119847],[-105.707650,32.307470],[-105.776106,31.834468],[-105.924658,31.432486],[-106.583294,31.970314],[-107.000647,33.131979]]]}},{"attributes":{"gf_name":"Charlie_028","gf_category":"Charlie"},"geometry":{"rings":[[[-106.583294,31.970314],[-105.924658,31.432486],[-105.907446,31.138703],[-106.150128,30.782825],[-107.095091,31.764133],[-106.583294,31.970314]]]}},{"attributes":{"gf_name":"Charlie_029","gf_category":"Charlie"},"geometry":{"rings":[[[-107.359704,33.554365],[-107.142280,33.201815],[-107.448109,32.465305],[-108.172089,32.338211],[-108.685156,32.962142],[-107.359704,33.554365]]]}},{"attributes":{"gf_name":"Charlie_030","gf_category":"Charlie"},"geometry":{"rings":[[[-105.994738,33.337477],[-105.436816,32.627620],[-105.422219,32.568667],[-105.707650,32.307470],[-106.927124,33.119847],[-105.994738,33.337477]]]}},{"attributes":{"gf_name":"Charlie_034","gf_categ
ory":"Charlie"},"geometry":{"rings":[[[-105.313424,33.867144],[-105.436816,32.627620],[-105.994738,33.337477],[-105.313424,33.867144]]]}},{"attributes":{"gf_name":"Charlie_036","gf_category":"Charlie"},"geometry":{"rings":[[[-110.055886,36.915457],[-110.000338,36.888631],[-109.580750,35.671064],[-109.861048,35.391741],[-110.224385,35.222138],[-110.395022,35.218355],[-110.848814,35.410481],[-110.999562,36.098601],[-110.752701,36.701053],[-110.055886,36.915457]]]}},{"attributes":{"gf_name":"Charlie_039","gf_category":"Charlie"},"geometry":{"rings":[[[-109.580447,36.961026],[-108.825944,35.920920],[-108.957760,35.823204],[-109.580750,35.671064],[-110.000338,36.888631],[-109.580447,36.961026]]]}},{"attributes":{"gf_name":"Charlie_040","gf_category":"Charlie"},"geometry":{"rings":[[[-108.381504,36.137964],[-106.993809,34.882178],[-107.053820,34.603020],[-107.376441,34.146431],[-108.254492,34.413094],[-108.396412,36.127488],[-108.381504,36.137964]]]}},{"attributes":{"gf_name":"Charlie_043","gf_category":"Charlie"},"geometry":{"rings":[[[-109.532017,37.942050],[-109.235501,37.903205],[-109.059044,37.837445],[-108.846176,37.328161],[-109.580447,36.961026],[-110.000338,36.888631],[-110.055886,36.915457],[-110.093643,37.604001],[-109.532017,37.942050]]]}},{"attributes":{"gf_name":"Charlie_045","gf_category":"Charlie"},"geometry":{"rings":[[[-110.295214,38.567151],[-110.205307,37.706183],[-110.744795,37.758057],[-110.907121,38.232546],[-110.295214,38.567151]]]}},{"attributes":{"gf_name":"Charlie_046","gf_category":"Charlie"},"geometry":{"rings":[[[-111.672383,39.412941],[-111.040607,39.348197],[-111.251511,38.338078],[-111.687861,38.152417],[-111.919637,38.212856],[-111.672383,39.412941]]]}},{"attributes":{"gf_name":"Charlie_047","gf_category":"Charlie"},"geometry":{"rings":[[[-108.586684,38.599815],[-107.751418,38.385245],[-107.882355,37.022654],[-108.096641,36.914409],[-108.846176,37.328161],[-109.059044,37.837445],[-108.586684,38.599815]]]}},{"attributes":{"gf_name":"Charlie_049","gf_category":"Charlie"},"geometry":{"rings":[[[-110.089960,38.906828],[-109.532017,37.942050],[-110.093643,37.604001],[-110.205307,37.706183],[-110.295214,38.567151],[-110.089960,38.906828]]]}},{"attributes":{"gf_name":"Charlie_050","gf_category":"Charlie"},"geometry":{"rings":[[[-109.171210,39.584714],[-108.586684,38.599815],[-109.059044,37.837445],[-109.235501,37.903205],[-109.527535,39.304970],[-109.171210,39.584714]]]}}]}.
2019-11-07T18:06:56,895  Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json.....&where=1%3D1&outFields=gf_name%2Cgf_category&outSR=4326&returnIdsOnly=true HTTP/1.1
2019-11-07T18:06:56,906  Got response from HTTP request: {"objectIdFieldName":"objectid","objectIds":[3,5,6,8,10,11,13,14,18,21,22,23,24,26,28,30,33,34,36,38,39,41,42,43,45,46,49,50,53,54,55,57,59,62,63,64,66,68,69,70,71,72,78,79,85,87,89,90,91,93,94,95,96,97,104,105,107,108,109,112,116,117,118,119,120,126,127,128,129,130,134,136,139,140,143,145,146,147,149,150]}.
2019-11-07T18:06:57,294  Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json.....&where=1%3D1&outFields=objectid%2Cgf_name%2Cgf_category&returnGeometry=false HTTP/1.1
2019-11-07T18:06:57,305  Got response from HTTP request: {"objectIdFieldName":"objectid","globalIdFieldName":"","geometryType":"esriGeometryPolygon","spatialReference":{"wkid":4326,"latestWkid":4326},"fields":[{"name":"objectid","alias":"OBJECTID","type":"esriFieldTypeOID"},{"name":"gf_name","alias":"gf_name","type":"esriFieldTypeString","length":50},{"name":"gf_category","alias":"gf_category","type":"esriFieldTypeString","length":50}],"features":[{"attributes":{"objectid":3,"gf_name":"Alpha_003","gf_category":"Alpha"}},{"attributes":{"objectid":5,"gf_name":"Alpha_005","gf_category":"Alpha"}},{"attributes":{"objectid":6,"gf_name":"Alpha_006","gf_category":"Alpha"}},{"attributes":{"objectid":8,"gf_name":"Alpha_008","gf_category":"Alpha"}},{"attributes":{"objectid":10,"gf_name":"Alpha_010","gf_category":"Alpha"}},{"attributes":{"objectid":11,"gf_name":"Alpha_011","gf_category":"Alpha"}},{"attributes":{"objectid":13,"gf_name":"Alpha_013","gf_category":"Alpha"}},{"attributes":{"objectid":14,"gf_name":"Alpha_014","gf_category":"Alpha"}},{"attributes":{"objectid":18,"gf_name":"Alpha_018","gf_category":"Alpha"}},{"attributes":{"objectid":21,"gf_name":"Alpha_021","gf_category":"Alpha"}},{"attributes":{"objectid":22,"gf_name":"Alpha_022","gf_category":"Alpha"}},{"attributes":{"objectid":23,"gf_name":"Alpha_023","gf_category":"Alpha"}},{"attributes":{"objectid":24,"gf_name":"Alpha_024","gf_category":"Alpha"}},{"attributes":{"objectid":26,"gf_name":"Alpha_026","gf_category":"Alpha"}},{"attributes":{"objectid":28,"gf_name":"Alpha_028","gf_category":"Alpha"}},{"attributes":{"objectid":30,"gf_name":"Alpha_030","gf_category":"Alpha"}},{"attributes":{"objectid":33,"gf_name":"Alpha_033","gf_category":"Alpha"}},{"attributes":{"objectid":34,"gf_name":"Alpha_034","gf_category":"Alpha"}},{"attributes":{"objectid":36,"gf_name":"Alpha_036","gf_category":"Alpha"}},{"attributes":{"objectid":38,"gf_name":"Alpha_038","gf_category":"Alpha"}},{"attributes":{"objectid":39,"gf_name":"Alpha_039","gf_category":"Alpha"}},{"attributes":{"objectid":41,"gf_name":"Alpha_041","gf_category":"Alpha"}},{"attributes":{"objectid":42,"gf_name":"Alpha_042","gf_category":"Alpha"}},{"attributes":{"objectid":43,"gf_name":"Alpha_043","gf_category":"Alpha"}},{"attributes":{"objectid":45,"gf_name":"Alpha_045","gf_category":"Alpha"}},{"attributes":{"objectid":46,"gf_name":"Alpha_046","gf_category":"Alpha"}},{"attributes":{"objectid":49,"gf_name":"Alpha_049","gf_category":"Alpha"}},{"attributes":{"objectid":50,"gf_name":"Alpha_050","gf_category":"Alpha"}},{"attributes":{"objectid":53,"gf_name":"Bravo_003","gf_category":"Bravo"}},{"attributes":{"objectid":54,"gf_name":"Bravo_004","gf_category":"Bravo"}},{"attributes":{"objectid":55,"gf_name":"Bravo_005","gf_category":"Bravo"}},{"attributes":{"objectid":57,"gf_name":"Bravo_007","gf_category":"Bravo"}},{"attributes":{"objectid":59,"gf_name":"Bravo_009","gf_category":"Bravo"}},{"attributes":{"objectid":62,"gf_name":"Bravo_012","gf_category":"Bravo"}},{"attributes":{"objectid":63,"gf_name":"Bravo_013","gf_category":"Bravo"}},{"attributes":{"objectid":64,"gf_name":"Bravo_014","gf_category":"Bravo"}},{"attributes":{"objectid":66,"gf_name":"Bravo_016","gf_category":"Bravo"}},{"attributes":{"objectid":68,"gf_name":"Bravo_018","gf_category":"Bravo"}},{"attributes":{"objectid":69,"gf_name":"Bravo_019","gf_category":"Bravo"}},{"attributes":{"objectid":70,"gf_name":"Bravo_020","gf_category":"Bravo"}},{"attributes":{"objectid":71,"gf_name":"Bravo_021","gf_category":"Bravo"}},{"attributes":{"objectid":72,"gf
_name":"Bravo_022","gf_category":"Bravo"}},{"attributes":{"objectid":78,"gf_name":"Bravo_028","gf_category":"Bravo"}},{"attributes":{"objectid":79,"gf_name":"Bravo_029","gf_category":"Bravo"}},{"attributes":{"objectid":85,"gf_name":"Bravo_035","gf_category":"Bravo"}},{"attributes":{"objectid":87,"gf_name":"Bravo_037","gf_category":"Bravo"}},{"attributes":{"objectid":89,"gf_name":"Bravo_039","gf_category":"Bravo"}},{"attributes":{"objectid":90,"gf_name":"Bravo_040","gf_category":"Bravo"}},{"attributes":{"objectid":91,"gf_name":"Bravo_041","gf_category":"Bravo"}},{"attributes":{"objectid":93,"gf_name":"Bravo_043","gf_category":"Bravo"}},{"attributes":{"objectid":94,"gf_name":"Bravo_044","gf_category":"Bravo"}},{"attributes":{"objectid":95,"gf_name":"Bravo_045","gf_category":"Bravo"}},{"attributes":{"objectid":96,"gf_name":"Bravo_046","gf_category":"Bravo"}},{"attributes":{"objectid":97,"gf_name":"Bravo_047","gf_category":"Bravo"}},{"attributes":{"objectid":104,"gf_name":"Charlie_004","gf_category":"Charlie"}},{"attributes":{"objectid":105,"gf_name":"Charlie_005","gf_category":"Charlie"}},{"attributes":{"objectid":107,"gf_name":"Charlie_007","gf_category":"Charlie"}},{"attributes":{"objectid":108,"gf_name":"Charlie_008","gf_category":"Charlie"}},{"attributes":{"objectid":109,"gf_name":"Charlie_009","gf_category":"Charlie"}},{"attributes":{"objectid":112,"gf_name":"Charlie_012","gf_category":"Charlie"}},{"attributes":{"objectid":116,"gf_name":"Charlie_016","gf_category":"Charlie"}},{"attributes":{"objectid":117,"gf_name":"Charlie_017","gf_category":"Charlie"}},{"attributes":{"objectid":118,"gf_name":"Charlie_018","gf_category":"Charlie"}},{"attributes":{"objectid":119,"gf_name":"Charlie_019","gf_category":"Charlie"}},{"attributes":{"objectid":120,"gf_name":"Charlie_020","gf_category":"Charlie"}},{"attributes":{"objectid":126,"gf_name":"Charlie_026","gf_category":"Charlie"}},{"attributes":{"objectid":127,"gf_name":"Charlie_027","gf_category":"Charlie"}},{"attributes":{"objectid":128,"gf_name":"Charlie_028","gf_category":"Charlie"}},{"attributes":{"objectid":129,"gf_name":"Charlie_029","gf_category":"Charlie"}},{"attributes":{"objectid":130,"gf_name":"Charlie_030","gf_category":"Charlie"}},{"attributes":{"objectid":134,"gf_name":"Charlie_034","gf_category":"Charlie"}},{"attributes":{"objectid":136,"gf_name":"Charlie_036","gf_category":"Charlie"}},{"attributes":{"objectid":139,"gf_name":"Charlie_039","gf_category":"Charlie"}},{"attributes":{"objectid":140,"gf_name":"Charlie_040","gf_category":"Charlie"}},{"attributes":{"objectid":143,"gf_name":"Charlie_043","gf_category":"Charlie"}},{"attributes":{"objectid":145,"gf_name":"Charlie_045","gf_category":"Charlie"}},{"attributes":{"objectid":146,"gf_name":"Charlie_046","gf_category":"Charlie"}},{"attributes":{"objectid":147,"gf_name":"Charlie_047","gf_category":"Charlie"}},{"attributes":{"objectid":149,"gf_name":"Charlie_049","gf_category":"Charlie"}},{"attributes":{"objectid":150,"gf_name":"Charlie_050","gf_category":"Charlie"}}]}.
2019-11-07T18:06:57,563  Executing request GET /arcgis/rest/services/MyGeofences/FeatureServer/0/query?f=json.....&where=1%3D1&outFields=objectid%2Cgf_name%2Cgf_category&returnGeometry=false&returnIdsOnly=true HTTP/1.1
2019-11-07T18:06:57,573  Got response from HTTP request: {"objectIdFieldName":"objectid","objectIds":[3,5,6,8,10,11,13,14,18,21,22,23,24,26,28,30,33,34,36,38,39,41,42,43,45,46,49,50,53,54,55,57,59,62,63,64,66,68,69,70,71,72,78,79,85,87,89,90,91,93,94,95,96,97,104,105,107,108,109,112,116,117,118,119,120,126,127,128,129,130,134,136,139,140,143,145,146,147,149,150]}.
2019-11-07T18:07:00,360  Executing request POST /arcgis/admin/machines/localhost/status HTTP/1.1
2019-11-07T18:07:00,414  Got response from HTTP request: <html lang="en">
2019-11-07T18:07:03,673  Executing request POST /arcgis/admin/system/configstore HTTP/1.1
2019-11-07T18:07:03,688  Got response from HTTP request: <html lang="en">

 

There is quite a bit of JSON data embedded in the results above, which can be helpful in identifying exactly what a feature service returns to a client when the client queries the service. The timestamps also help if you need to return to the full karaf.log and look for messages logged just before or just after a line matching the command's search patterns, in case there is additional information the command did not capture that might help debug an issue.

Information provided by the timestamps on each logged message can also provide empirical evidence of exactly how long it takes to get a response back from the feature service each time an HTTP request is made. Computing a delta between the date/time a request is logged and the date/time its response is logged can be valuable if you suspect latency introduced by geofence synchronization is causing a problem. Remember, nothing happens in zero time, and frequent queries every few seconds against a large feature record set can impact overall GeoEvent Server operations.
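
If you want to compute those deltas over many request / response pairs rather than by hand, a short script can help. The sketch below is only a minimal illustration, assuming timestamped lines like the ones shown above (the default ISO8601-style timestamp) and the literal 'Executing request' / 'Got response' phrases; adjust the patterns and file name to match your own extracted log output.

from datetime import datetime

# Minimal sketch: pair each "Executing request" line with the next
# "Got response" line and print the elapsed milliseconds between them.
# Assumes timestamps like 2019-11-07T18:06:56,895 at the start of each line.
pending = None
with open('karaf.log', encoding='utf-8', errors='replace') as log:
    for line in log:
        stamp = line[:23].replace(',', '.')            # e.g. 2019-11-07T18:06:56.895
        try:
            when = datetime.strptime(stamp, '%Y-%m-%dT%H:%M:%S.%f')
        except ValueError:
            continue                                   # not a timestamped line
        if 'Executing request' in line:
            pending = (when, line.strip())
        elif 'Got response' in line and pending:
            delta_ms = (when - pending[0]).total_seconds() * 1000
            print(f'{delta_ms:7.0f} ms  {pending[1][:120]}')
            pending = None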

Also, keep in mind that a feature service may be configured to return a maximum number of feature records for any given query. GeoEvent Server may have to make several queries to page through a complete feature record set when there are more than 1000 feature records, for example, being imported to update geofences.
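
To see what that paging looks like from a client's perspective, you can reproduce it yourself against the feature layer's query endpoint. The sketch below is only an illustration: the service URL is a placeholder patterned after the MyGeofences layer in the log excerpt above, a secured service would also need a token parameter, and the resultOffset / resultRecordCount parameters assume the layer supports pagination.

import json
import urllib.parse
import urllib.request

# Placeholder layer URL modeled on the log excerpt above; substitute your own.
layer_url = 'https://myserver.example.com/arcgis/rest/services/MyGeofences/FeatureServer/0/query'

def fetch_page(offset, page_size=1000):
    params = urllib.parse.urlencode({
        'f': 'json',
        'where': '1=1',
        'outFields': 'gf_name,gf_category',
        'returnGeometry': 'false',
        'resultOffset': offset,
        'resultRecordCount': page_size,
    })
    with urllib.request.urlopen(f'{layer_url}?{params}') as resp:
        return json.load(resp)

offset, total = 0, 0
while True:
    page = fetch_page(offset)
    features = page.get('features', [])
    total += len(features)
    # exceededTransferLimit tells the client another page is available
    if not page.get('exceededTransferLimit'):
        break
    offset += len(features)
print(f'{total} feature records retrieved')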

The techniques I have described provide a way to delve deeply into geofence synchronization to examine the REST requests and responses when interfacing with a feature service. You can use these techniques to obtain information on request latency as well as implementation details such as how GeoEvent Server pages through large feature record sets or how a feature service handles a number of queries sent in a series. I have attached a PDF illustration of the above two dozen formatted log messages with additional formatting I applied manually to make the JSON in each logged message easy to read. I hope that you find the combination of debug logging with scripted text extraction and string formatting a helpful debugging technique.

– RJ

For the 10.7.1 release of ArcGIS GeoEvent Server, we are excited to announce new documentation for the existing out-of-the-box input and output connectors. A separate documentation page has been provided for each connector that includes a summary, unique usage notes, a list of properties with help, and known limitations.

 

To access this content, you are welcome to visit the existing Available input connectors and Available output connectors landing pages, where you'll notice that the 10.7 version of the documentation includes links for each of the existing connectors in place of the original text-based list. Clicking on any of these links will bring you to the new documentation for the specified connector. Additionally, you can view the new material as a list by accessing the Input connectors and Output connectors topics under Connect to Data and Send Updates and Alerts.

 

Example of new input connector documentation landing page.

 

As mentioned before, the new documentation for each input and output connector includes unique usage notes. These usage notes are intended to help provide additional information about each connector. You'll find information regarding best practices, tips-and-tricks, expected behavior, references to additional documentation, and configuration considerations.

 

Example of the usage notes for the new connector documentation.

 

Below the usage notes for each input and output connector is a complete list of available parameters. It is worth noting that the list includes the parameters shown by default as well as those which are hidden because they are "conditional" (or dependent) on other parameters being configured a certain way before they appear. You'll find that each parameter is paired with a unique description that explains what the parameter is for, what configurable options are available, what the expected input value(s) may be, and in some cases what the default value is.

 

Example of parameters and descriptions in the new connector documentation.

 

As always, step-by-step documentation on how to configure various input and output connectors can be found in our existing tutorial-based documentation here: ArcGIS GeoEvent Server Gallery.

This blog is one in a series of blogs discussing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration. Click any link in the quick list below to jump to another blog in the series.

In this blog I will discuss GeoEvent Manager's user interface for viewing logged messages, the location of the actual log file on disk, and how logging can be configured -- specifically how to control the size of the log file and its rollover properties.

The GeoEvent Manager Logging Interface

ArcGIS GeoEvent Server uses Apache Karaf, a lightweight, flexible container that supports its Java runtime environment. A powerful logging system, based on OPS4J Pax Logging, is included with Apache Karaf.

The GeoEvent Manager web application includes a simple user interface for the OPS4J logging system. You can use this interface to see the most recent messages logged by different components of ArcGIS GeoEvent Server. The UI illustrated below caches up to 500 logged messages and allows you to scroll through logged messages, specify how many messages should be listed on a page, select a specific type of logged message (e.g. DEBUG, INFO, WARN, or ERROR), and perform keyword searches.

GeoEvent Manager Logging User Interface

A significant limitation of this logging interface is that only the most recent 500 logged messages are maintained in its cache, so review and keyword searches you perform are limited to recently logged messages. This means that the velocity and volume of event records being processed as well as the number of GeoEvent Services, inputs, and outputs you have configured can affect (and limit) your ability to isolate logged messages of interest. A valuable debugging technique is to locate the actual log file on disk and open it in a text editor.

Location of the log file on disk

On a Windows platform, assuming your ArcGIS GeoEvent Server has been installed in the default folder beneath C:\Program Files, you should be able to locate the following system folder which contains the actual system log files.

C:\Program Files\ArcGIS\Server\GeoEvent\data\log

In this folder you will find one or more files with a base name karaf.log – these files can be opened in a text editor of your choice for content review and search. You can also use command-line utilities like tail, string processing utilities like sed, grep, and awk, as well as regular expressions to help isolate logged messages. Examples using these are included in other blogs in this series.

Only one log file, the file named karaf.log, is actively being written at any one time. When this file's size has grown as large as the system configuration allows, the file will automatically roll over and a new karaf.log file will be created. Log files which have rolled over will have a numeric suffix (e.g. karaf.log.1) and their last updated date/time will be older than that of the karaf.log currently being written.

If you open the karaf.log in a text editor you should treat the file as read-only as the logging system is actively writing to this file. Be sure to periodically reload the file's content in your text editor to make sure you are reviewing the latest messages.
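
If you are ever unsure which file is the active karaf.log and which files have already rolled over, a listing sorted by last-modified time makes it obvious. A minimal sketch, assuming the default log folder noted above:

from pathlib import Path
from datetime import datetime

log_dir = Path(r'C:\Program Files\ArcGIS\Server\GeoEvent\data\log')

# The file with the newest modification time is the karaf.log currently being
# written; files with a numeric suffix (karaf.log.1, ...) have already rolled over.
for f in sorted(log_dir.glob('karaf.log*'), key=lambda p: p.stat().st_mtime, reverse=True):
    print(datetime.fromtimestamp(f.stat().st_mtime), f'{f.stat().st_size:>12,} bytes', f.name)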

How to specify an allowed log file size and rollover properties

Locate the org.ops4j.pax.logging.cfg configuration file in the ArcGIS GeoEvent Server's \etc folder:

C:\Program Files\ArcGIS\Server\GeoEvent\etc

Because the file is located beneath C:\Program Files, you will need to run your text editor as an administrator. You can then edit properties of the system log, such as the default logging level for all loggers (a "logger" in this context is any of several components that are actively logging messages, such as the outbound feature adapter or the inbound TCP transport).

For example, at the 10.7 release a change was made to quiet the system logs by reducing the ROOT logging level from INFO to WARN so that only warnings are logged by default. You can see this specified in the following line in the org.ops4j.pax.logging.cfg configuration file:

# Root logger

log4j2.rootLogger.level = WARN

Searching the configuration file for the keyword "rolling" you will find lines which specify the karaf.log file's allowed size and rollover policy. Be careful -- not all of the lines specifying the rollover policy are necessarily in the same section of the configuration file; some may be located deeper in the file:

# Rolling file appender

log4j2.appender.rolling.type = RollingRandomAccessFile

log4j2.appender.rolling.name = RollingFile

log4j2.appender.rolling.fileName = ${karaf.data}/log/karaf.log

log4j2.appender.rolling.filePattern = ${karaf.data}/log/karaf.log.%i

log4j2.appender.rolling.append = true

log4j2.appender.rolling.layout.type = PatternLayout

log4j2.appender.rolling.layout.pattern = ${log4j2.pattern}

log4j2.appender.rolling.policies.type = Policies

log4j2.appender.rolling.policies.size.type = SizeBasedTriggeringPolicy

log4j2.appender.rolling.policies.size.size = 16MB

log4j2.appender.rolling.strategy.type = DefaultRolloverStrategy

log4j2.appender.rolling.strategy.max = 10

The settings above reflect defaults for the 10.7 release which specify that the karaf.log should rollover when it reaches 16MB and up to 10 indexed files will be used to archive older logged messages.

The anatomy of a logged message

Before we conclude our discussion on configuring the application logger I would like to briefly discuss the format of logged messages. The logged message format is configurable and logged messages by default have six parts. Each part is separated by a pipe ( | ) character.

Logged messages have six parts

The thread identifier default specification (see illustration below) has a minimum of 16 characters but no maximum length; some thread identifiers can be quite long. The class identifier spec includes a precision which limits the identifier to the most significant part of the class name. In the illustration above the fully-qualified class identifier com.esri.ges.fabric.core.ZKSerializer has been shortened to simply ZKSerializer. We will discuss the impact of this more in a later blog.
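
Because the parts are pipe-delimited, a logged message is easy to pull apart programmatically. A minimal sketch, using one of the DEBUG messages shown later in this series as sample input, simply splits the line into its six parts and labels them:

# A representative logged message; the six pipe-delimited parts are the
# timestamp, level, thread, class, bundle information, and the message itself.
sample = ("2019-06-05T15:12:34,324 | DEBUG | "
          "FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | "
          "FeatureServiceOutboundTransport | "
          "91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | "
          "Querying for missing track id '8SKS617'")

labels = ['timestamp', 'level', 'thread', 'class', 'bundle', 'message']
for label, part in zip(labels, sample.split(' | ', 5)):
    print(f'{label:>9}: {part.strip()}')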

You can edit the org.ops4j.pax.logging.cfg configuration file to specify different patterns for the appender. You should refer to https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout in the Apache logging services on-line help before modifying the default appender pattern layout illustrated below.

# Common pattern layout for appenders

log4j2.pattern = %d{ISO8601} | %-5p | %-16t | %-32c{1} | %geoeventBundleID - %geoeventBundleName - %geoeventBundleVersion | %m%n

log4j2.out.pattern = \u001b[90m%d{HH:mm:ss\.SSS}\u001b[0m %highlight{%-5level}{FATAL=${color.fatal}, ERROR=${color.error}, WARN=${color.warn}, INFO=${color.info}, DEBUG=${color.debug}, TRACE=${color.trace}} \u001b[90m[%t]\u001b[0m %msg%n%throwable

Conclusion

Using the logging interface provided by GeoEvent Manager is a quick, simple way of reviewing logged messages recently produced by system components as they ingest, process, and disseminate event data. Event record velocity and volume can of course increase the number of messages being logged. Increasing the logging level from ERROR or WARN to INFO or DEBUG can drastically increase the volume of logged messages. If running components are frequently logging messages in the system's log file, only the most recent messages will be displayed in the GeoEvent Manager user interface. Messages which have been pushed out of the cache can be reviewed by opening the karaf.log in a text editor. This is a key debugging technique, but you must be aware that the karaf.log is actively being written and will roll over as it grows beyond a specified size.

As you make and save changes to the system logging, for example, to request DEBUG logging on a specific logger, the changes will immediately be reflected in the org.ops4j.pax.logging.cfg configuration file. You can edit this file as an administrator and any changes you save will be picked up immediately; you do not have to stop and restart the ArcGIS GeoEvent Server service.

This blog is one in a series of blogs discussing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration. Click any link in the quick list below to jump to another blog in the series.

In a client / server context ArcGIS GeoEvent Server sometimes acts as a client and at other times acts as a server. When an Add a Feature or an Update a Feature output is configured to add / update feature records in a geodatabase feature class through a feature service, ArcGIS GeoEvent Server is a client making requests on an ArcGIS Server feature service. In this blog I will show how you can isolate requests GeoEvent Server sends to an ArcGIS Server service and how to use the JSON from the request to debug issues you are potentially encountering.

Scenario

A customer reports that an input connector they have configured appears to be successfully receiving and adapting data from a provider and event records appear to be processed as expected through a GeoEvent Service. The event record count on their output increments, but they are not seeing some – or any – features displayed by a feature layer they have added to a web map.

Request DEBUG logs for the outbound feature service transport

Components in the ArcGIS GeoEvent Server runtime log messages to provide information as well as note warnings and/or errors. Each component uses a logger, an object responsible for logging messages in the system's log file, which can be configured to generate different levels of messages (e.g. DEBUG, INFO, WARN, or ERROR).

In this case we want to request that the com.esri.ges.transport.featureService.FeatureServiceOutboundTransport component log DEBUG messages to help us identify the problem. To enable DEBUG logging for a single component's logger:

  • In GeoEvent Manager, navigate to the Logs page and click Settings
  • Enter the name of the logging component in the text field Logger and select the DEBUG log level
  • Click Save

As you type the name of a logger, if the GeoEvent Manager's cache of logged messages contains a message from a particular component's logger, IntelliSense will help you identify the logger's name.

IntelliSense

Querying for additional information

When a processed event record is routed to an Update a Feature output the data is first reformatted as Esri Feature JSON so that it can be incorporated into a map/feature service request. A request is then made using the ArcGIS REST API to either Add Features or Update Features.

An Add a Feature output connector has the easier job – it doesn't care whether a feature record already exists since it is not going to request an update. An Update a Feature output connector on the other hand needs to know the objectid or row identifier of the feature record it should update.

If the output has previously received an event record with the same TRACK_ID, it has likely already sent a request to the targeted map/feature service to query for feature records, using the attribute field specified as the Unique Feature Identifier Field to identify which feature records to update. The output maintains a cache mapping every event record's TRACK_ID to the corresponding object or row identifier of a feature record.

Here is what the logged DEBUG messages look like when an Update a Feature output queries to discover an object or row identifier associated with a feature record:

1

2019-06-05T15:12:34,324 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Querying for missing track id '8SKS617'

2

2019-06-05T15:12:34,489 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Posting to URL: https://localhost.esri.com/server/rest/services/SampleRecord/FeatureServer/0/query with parameters: f=json&token=QNv27Ov9...&where=track_id IN ('8SKS617')&outFields=track_id,objectid.

3

2019-06-05T15:12:34,674 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"exceededTransferLimit":false,"features":[ ],"fields"...

Notice a few key values highlighted in the logged message's text above:

  • Line 1:  The output has recognized that it has not previously seen an event record with the TRACK_ID 8SKS617 (so it must query the map/feature service to see if it can find a matching feature record).
  • Line 2:  This is the actual query sent to the SampleRecord feature service's query endpoint requesting a feature record whose track_id attribute is one of several in a specified list (8SKS617 is actually the only value in the list). The query requests that the response include only the track_id attribute and an object identifier value.
  • Line 3:  The ArcGIS Server service responds with an empty array features[ ]. This indicates that there are no features whose track_id attribute matches any of the values in the query's list.

The output was configured with its Update Only parameter set to 'No' (the default). So, given that there is no existing record whose track_id attribute matches the event record's tagged TRACK_ID field, the output connector fails over to add a new feature record instead:

4

2019-06-05T15:12:34,769 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Posting to URL: https://localhost.esri.com/server/rest/services/SampleRecord/FeatureServer/0/addFeatures with parameters: f=json&token=QNv27Ov9...&rollbackOnFailure=true&features=[{"geometry":{"x":-115.625,"y":32.125, "spatialReference":{"wkid":4326}},"attributes":{"track_id":"8SKS617","reported_dt":1559772754211}}].

5

2019-06-05T15:12:34,935 | DEBUG | FeatureJsonOutboundAdapter-FlushingThread-com.esri.ges.adapter.outbound/JSON/10.7.0 | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"addResults":[{"objectId":1,"globalId":"{B1384CE2-7501-4753-983B-F6640AB63816}", "success":true}]}.

Again, take a moment to examine the highlighted text:

  • Line 4:  The ArcGIS REST API endpoint to which the request is sent is the Add Features endpoint. An Esri Feature JSON representation of the event data is highlighted in green.
  • Line 5:  The ArcGIS Server service responds with a block of JSON indicating that it successfully added a feature record, assigning the new record the object identifier '1' and a globally unique identifier (the feature service I'm using in this example is actually one hosted by my ArcGIS Enterprise portal).

The debug logs include the Esri Feature JSON constructed by the output connector. You can actually copy and paste this JSON into the feature service's web page in the ArcGIS REST Services Directory. This is an excellent way to abstract ArcGIS GeoEvent Server from your debugging workflow and determine if there are problems with how the JSON is formatted or reasons why a feature service might reject a client's request.
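
If you prefer to script that test rather than use the web form, the same request can be sent outside of GeoEvent Server. The sketch below is only an illustration: it reuses the service URL and Esri Feature JSON from the DEBUG messages above, and the token value is a placeholder for one you generate yourself.

import json
import urllib.parse
import urllib.request

# URL and feature JSON taken from the DEBUG messages above; the token is a placeholder.
add_features_url = 'https://localhost.esri.com/server/rest/services/SampleRecord/FeatureServer/0/addFeatures'
features = [{
    'geometry': {'x': -115.625, 'y': 32.125, 'spatialReference': {'wkid': 4326}},
    'attributes': {'track_id': '8SKS617', 'reported_dt': 1559772754211}
}]

payload = urllib.parse.urlencode({
    'f': 'json',
    'token': '<your generated token>',
    'rollbackOnFailure': 'true',
    'features': json.dumps(features),
}).encode()

# POST the request and print the response; expect something like
# {"addResults":[{"objectId": ..., "success": true}]} on success.
with urllib.request.urlopen(add_features_url, data=payload) as resp:
    print(json.load(resp))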

Add Features using ArcGIS REST Services web form

I used this technique once to demonstrate that a polygon geometry created by a Create Buffer processor in a GeoEvent Service had several dozen vertices, allowing the geometry to approximate a circular area. When the polygon was committed to the geodatabase as a feature record, however, its geometry had been generalized such that it only had a few vertices. Web maps were displaying very rough approximations of the area of interest, not circular buffers. But it wasn't ArcGIS GeoEvent Server that had failed to produce a geometry representing a circular area. The problem was somewhere in the back-end relational database configuration.

Rollback on Failure?

There is a query parameter on Line 4 in the illustration above which is easily overlooked: rollbackOnFailure=true

The default action for both the Add a Feature and Update a Feature outputs is to request that the geodatabase roll back the feature record transaction if a problem is encountered. In many cases this is why customers are not seeing all of the feature records they expect updated in a feature layer they have added to a web map. Consider the following fields specification for the targeted feature service's feature layer:

Fields:
    track_id ( alias: track_id, type: esriFieldTypeString, length: 512, editable: true, nullable: true )
    reported_dt ( alias: reported_dt, type: esriFieldTypeDate, length: 29, editable: true, nullable: true )
    objectid ( alias: objectid, type: esriFieldTypeOID, length: 8, editable: false, nullable: false )
    globalid ( alias: globalid, type: esriFieldTypeGlobalID, length: 38, editable: false, nullable: false )

Suppose for a moment that the esriFieldTypeString specification for the track_id attribute specified that the string should not exceed seven characters. If a web application (client) were to send the feature service a request with a value for the track_id which was longer than seven characters, the data would not comply with the feature layer's specification and the feature service would be expected to reject the request.

Likewise, if attribute fields other than esriFieldTypeOID or esriFieldTypeGlobalID were specified as not allowing null values, and a client request was made whose attribute values were null, the data would not be compliant with the feature layer's specification; the feature service should reject the request.
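
One way to catch this kind of non-compliant data before it ever reaches the feature service is to check each record against the feature layer's fields specification. The sketch below is only an illustration of the idea, using a hypothetical specification that mirrors the scenarios just described (a seven-character track_id and a non-nullable reported_dt).

# Hypothetical specification mirroring the scenarios above:
# track_id limited to seven characters, reported_dt not allowing nulls.
fields = {
    'track_id':    {'type': 'esriFieldTypeString', 'length': 7,  'nullable': True},
    'reported_dt': {'type': 'esriFieldTypeDate',   'length': 29, 'nullable': False},
}

def compliance_problems(attributes):
    """Return the reasons a record would violate the layer's field specification."""
    problems = []
    for name, spec in fields.items():
        value = attributes.get(name)
        if value is None:
            if not spec['nullable']:
                problems.append(f"{name} may not be null")
        elif spec['type'] == 'esriFieldTypeString' and len(str(value)) > spec['length']:
            problems.append(f"{name} exceeds {spec['length']} characters")
    return problems

print(compliance_problems({'track_id': '8SKS617-LONG', 'reported_dt': None}))
# -> ['track_id exceeds 7 characters', 'reported_dt may not be null']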

By default both the Add a Feature and Update a Feature output connectors begin working through a cache of event records they have formatted as Esri Feature JSON, placing the formatted data in one or more requests that are sent to the targeted feature service's feature layer. Each request, again by default, is allowed to contain up to 500 event / feature records.

Update a Feature default properties

It only takes one bad apple to spoil a batch. If even one processed event record's data in a transaction request containing ten, fifty, or a hundred feature records is not compliant with string length restrictions, value nullability restrictions – or any other restriction enforced by an ArcGIS Server feature service – the entire transaction will roll back and none of the feature records associated with that batch of processed event records will be updated.

Reduce the Maximum Features Per Transaction

You cannot change the rollback on failure behavior. The outbound connectors interfacing with ArcGIS Server feature services do not implement a mechanism to retry an add/update feature record operation when one or more feature records in a batch do not comply with a feature layer's specification.

You can change the number of processed event records an Add a Feature or Update a Feature output connector will include in each transaction. If you configure your output to specify a maximum number of one feature record per transaction you can begin to work around the issue of one bad record spoiling an entire transaction. If bad data or null values were to occasionally creep into processed event records then only the bad records will fail to update a corresponding feature record and the rollback on failure won't suppress any valid feature record updates.

The downside to this is that REST requests are inherently expensive. If it were to take as little as 20 milliseconds to make a round trip to the database and receive a response to a transaction request, you could effectively cut your event throughput to less than 50 event records per second by allowing only one processed event record per transaction. The upside to reducing, at least temporarily, the number of records allowed in a transaction is that it makes the messages being logged much, much easier to read. It also guarantees that each success / fail response from the ArcGIS Server feature service can be traced back to a single add / update feature request.
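
The arithmetic is worth making explicit. A rough back-of-the-envelope sketch, assuming a fixed 20 millisecond round trip per transaction and ignoring any server-side processing time, shows how the Maximum Features Per Transaction setting trades throughput for readability:

# Rough estimate only: transactions are assumed to run sequentially and each
# transaction is assumed to cost a fixed round-trip latency.
round_trip_ms = 20

for batch_size in (1, 50, 500):
    transactions_per_second = 1000 / round_trip_ms
    records_per_second = transactions_per_second * batch_size
    print(f"{batch_size:>3} record(s) per transaction -> ~{records_per_second:,.0f} event records per second")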

Timestamps – another benefit to logging DEBUG messages for the outbound transport

Every logged message includes a timestamp with millisecond precision. This can be very useful when debugging unexpected latency when interacting with a geodatabase's feature class through an ArcGIS Server's REST interface.

Looking back at the two tables above with the logged DEBUG messages, the time difference between the messages on Line 1 and Line 2 is 165 milliseconds (489 - 324 = 165). That tells us it took over a tenth of a second for the output to formulate its query for "missing" object identifiers needed to request updates for specific feature records. It takes another 185 milliseconds (674 - 489 = 185) to actually query for the needed identifiers and discover that there are no feature records with those track_id values.

To be fair, you should expect this latency to drop as ArcGIS Server and/or your RDBMS begin caching information about the requests being made by clients. But it is important to be able to measure the latency ArcGIS GeoEvent Server is experiencing. If every time an Add a Feature output connector's timer expires (which is once every second by default) it takes a couple hundred milliseconds to complete a transaction, you should have a pretty good idea how many transactions you can make in one second. You might need to increase your output's Update Interval so that it holds its cache of processed event records longer before starting a series of transactions. If you do this, know that as updates arrive for a given tracked asset, older records will be purged from the cache. When updating feature records the cache will be managed to contain only one processed event record for each unique TRACK_ID.

Conclusion

Taking the time to analyze the DEBUG messages logged by the outbound feature service transport can provide you a wealth of information. You can immediately see if values obtained from an event record's tagged TRACK_ID field are reasonably expected to be found in whatever feature layer's attribute field is being used to query for feature records that correlate to processed event records. You can check to see if any values in a processed event record are unexpectedly null, have strings which are longer than the feature layer will accept, or – my favorite – contain what ArcGIS Server suspects is HTML or SQL code resulting in a service rejecting the transaction to prevent a suspected injection attack.

ArcGIS GeoEvent Server, when interfacing with an RDBMS through a map / feature service's REST interface, is acting as any other web mapping application client would act in making requests on a service it assumes is available. You can eliminate GeoEvent Server entirely from your debugging workflow if you copy / paste information like the ESRI Feature JSON from a DEBUG message logged by the outbound transport into an HTML page in the ArcGIS REST Services Directory. I did exactly this to prove, once, that polygon geometries with hundreds of vertices modeling a circular area were somehow being generalized as they were committed into a SQL Server back-end geodatabase.

If a customer reports that some – or all – of the features they expect should be getting added or updated in a feature layer are not displayed by a web map's feature layer, take a close look at the requests the configured output is sending to the feature service.

This blog is one in a series of blogs discussing debugging techniques you can use when working to identify the root cause of an issue with a GeoEvent Server deployment or configuration. Click any link in the quick list below to jump to another blog in the series.

In this blog I will illustrate a couple of techniques I use to request more granular component logging, rather than asking the ROOT component to produce DEBUG messages for all component loggers. I will also introduce a couple of command-line utilities I frequently use to interrogate the ArcGIS GeoEvent Server's system log file. I'll consider a specific scenario and show how to isolate logged messages about an output's requests to a feature service that identify the criteria used to discover and delete feature records.

Scenario

A customer has configured the Delete Old Features capability on an Add a Feature output connector and reports feature records are being deleted from the geodatabase earlier than expected. Following advice from the blog Add/Update Feature Output Connectors they have captured a few logged messages from the outbound feature transport but are not seeing any information about criteria the connector is using to determine which feature records should be deleted or when the records should be deleted.

Feature Transport - Delete Features

What is the outbound feature transport telling us?

The illustration above does not give us much information. It confirms that an Add a Feature output is periodically, once a minute, making requests on a feature service to delete old feature records and that, for the three intervals shown, no feature records were deleted (the JSON array in the response from the feature service is empty).

If one or more existing feature records had satisfied criteria included in the delete features request, then the logged messages would contain feature record identifiers to confirm which feature records had been deleted. Hypothetically, looking at the raw logged messages in the karaf.log file, we would expect to see a message similar to the following:

2019-06-03T16:42:41,474 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][SampleRecord][0][FeatureServer] | FeatureServiceOutboundTransport | 91 - com.esri.ges.framework.transport.featureservice-transport - 10.7.0 | Response was {"deleteResults":[{"objectid":3, ... "success":true},{"objectid":4, ... "success": true}]}.

The outbound feature transport is only confirming what has been deleted, not criteria used to determine what should be deleted. The information we need, hopefully, is being logged by a different component logger.

How to determine which component logger to watch

As I mentioned in the blog Configuring the application logger, the logging system implemented by ArcGIS GeoEvent Server logs messages from the Java runtime. The messages being logged generally contain good information for software developers, but are rather hard for a GIS analyst to review and interpret. If someone from the product team has not identified a component logger from which you should request more detailed log messages, your only option is to request DEBUG logging on the ROOT component.

If you elect to do this you must know that the karaf.log will quickly grow very large and will roll over as described in the aforementioned blog.

All hope is not lost, however. One technique I have found helpful is to turn off as many of my running inputs and outputs as I can to quiet ArcGIS GeoEvent Server's activity and then briefly, for perhaps a minute or two, request DEBUG level messages be produced by setting the debugging level on the ROOT component. GeoEvent Manager's logging user interface will quickly cache up to 500 messages and you can use built-in IntelliSense to at least get an idea of which components are actively running and producing log messages.

IntelliSense illustration

Once you understand that both the Add a Feature and Update a Feature output connectors use endpoints exposed through the ArcGIS REST Services Directory to interface with their targeted feature services, one component logger should stand out – the HTTP Client component logger highlighted in the illustration above. The information we need on the criteria used to identify feature records to delete is probably being logged as part of an HTTP REST request.

Request DEBUG logs for the HTTP Client

In this case we want to request that the com.esri.ges.httpclient.Http component log DEBUG messages to help us identify the problem. To enable DEBUG logging for the identified component's logger:

  • Navigate to the Logs page in GeoEvent Manager and click the Settings button.
  • Restore the ROOT component logger to its default level WARN and click Save.
  • Specify the name of the HTTP Client component logger, select the DEBUG log level, and Save again.

ArcGIS GeoEvent Server is fundamentally RESTful, which means you will still have a high volume of messages being logged to the karaf.log – but not as many as if you had left DEBUG logging set on the ROOT component logger.

Useful command-line utilities for interrogating karaf.log

I operate almost exclusively on a Windows platform, but Cygwin is one of the first things I install whenever I get a new machine. Cygwin is a free, open-source environment which provides a native, Windows-integrated command-line shell from which I can execute some of my favorite Unix utilities like sed, grep, awk, and tail. There are probably other packages available which provide similar utilities and tools, but I like Cygwin.

If I open a Cygwin command-line shell I can change directory to where the karaf.log file is being written and generate an active tail of the log so that I don't have to open the log file in a text editor and frequently re-load the file as its content is updated. I am also able to pipe the streaming content from tail through grep to limit the logged messages displayed to those which contain specific keywords or phrases. For example:

1

rsunderman@localhost //localhost/C$/Program Files/ArcGIS/Server/GeoEvent/data/log

2

$ tail -0f karaf.log |grep --line-buffered 'where.*reported_dt'

3

2019-06-07T16:33:19,545 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:33:19').

4

2019-06-07T16:34:20,269 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:34:20').

5

2019-06-07T16:35:20,433 | DEBUG | OutboundFeatureServiceCleanerThread-[Default][/][New_SampleRecord][0][FeatureServer] | Http | 60 - com.esri.ges.framework.httpclient - 10.7.0 | Adding parameter (where/reported_dt < timestamp '2019-06-07 17:35:20').

The above quickly reduces all the noise logged by the HTTP Client component logger to only those messages which include the name of the attribute field reported_dt which the Add a Feature output was configured to use when identifying feature records older than a specified number of minutes. The criteria we are looking for is clearly identified as a parameter the HTTP Client is adding to the request it is constructing to send to the feature service to identify and delete old feature records.

The system I am running is in California, which is 07:00 hours behind GMT. The date/time values in the reported_dt attribute of each feature record in my feature class are expressed as epoch long integers and represent GMT values. My output is configured to query every 60 seconds and delete feature records which are more than six hours old. The logged messages above bear timestamps which are roughly 60 seconds apart, and the where clause identifies any feature record whose date/time is "now" + 07:00 hours (the UTC offset) - 06:00 hours (the number of hours at which a feature record is considered "old").
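
You can verify the timestamp in that where clause with a quick calculation. A minimal sketch, assuming the same configuration described above (a machine seven hours behind GMT and a six-hour threshold for "old" feature records):

from datetime import datetime, timedelta

utc_offset_hours = 7    # local machine time is UTC-07:00
old_after_hours = 6     # feature records older than six hours should be deleted

# Local "now", taken from the timestamp on the first logged message above.
local_now = datetime(2019, 6, 7, 16, 33, 19)

# The where clause compares reported_dt against: now + UTC offset - "old" threshold.
cutoff = local_now + timedelta(hours=utc_offset_hours) - timedelta(hours=old_after_hours)
print(cutoff)   # 2019-06-07 17:33:19, matching the timestamp in the logged where clause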

Using the ArcGIS REST Services Directory to query feature records from the feature service, I can quickly see that feature records which are not yet six hours old (relative to GMT) remain but those I add or update with a reported_dt value which is at least six hours old get deleted every 60 seconds.

What if the above had not yielded the information we needed?

We could always fall back to setting the ROOT logger to DEBUG so that all component loggers produce debug messages. While this is extremely verbose, the technique using the tail and grep command-line utilities can still be used to find anything which mentions our particular feature service's REST endpoint.

In this case my feature service's name was New_SampleRecord, so I can reasonably expect to find logged messages which include references to:  New_SampleRecord/FeatureServer/0/deleteFeatures

A grep command using a regular expression pattern match like the following should find only those logged messages which appear to be attempting to delete features from the feature layer in question:
tail -0f karaf.log |grep --line-buffered 'SampleRecord.*FeatureServer.*deleteFeatures'

Tests using the above grep log message filter reveal about 75 messages logged every 60 seconds which include a reference to the deleteFeatures endpoint for the feature layer my output is targeting. Copying and pasting these lines into a text editor I can review them to discover that only one message contains a SQL WHERE clause. Such a clause would be required to identify records with a date/time value which should be considered "old".

While the date/time value in this logged message is URL encoded, because this particular message depicts text ready to be sent out over the HTTP wire, we can still use the logged message to understand the criteria being applied by the ArcGIS GeoEvent Server's output.

2019-06-07T18:10:06,956 | DEBUG | HttpRequest Worker Thread: https://localhost.esri.com/server/rest/services/New_SampleRecord/FeatureServer/0/deleteFeatures | wire | 60 - com.esri.ges.framework.httpclient - 10.7.0 | http-outgoing-27360 >> "f=json&token=HM85k4E...&rollbackOnFailure=true&where=reported_dt+%3C+timestamp+%272019-06-07+19%3A10%3A06%27"

Introduction

Often computers think they are smarter than humans, but since it is a human who programs the computer to perform a repetitive task, we know there are times when additional tweaking is needed for a workflow to succeed. XML data structures with namespaces are no exception.

 

If you have not started your XML quest off by reading the blog XML Data Structures - Characteristics and Limitations, written by RJ Sunderman, I highly recommend starting there. It provides a solid foundation for working with XML data structures. What we will explore in this blog is XML data structures that include namespaces, in particular those of a Web Feature Service (WFS). The first question here might be: what exactly is a "namespace"? The namespace refers to the prefix of an XML element, for example, <wfs:WFS_Capabilities>. When working with XML data that includes namespaces, there will be an XML <schema> element with one or more attributes containing URLs describing the XML structure and all namespaces used in the document. This schema declaration often looks something like:

The XML Schema Declaration results from WFS getCapabilities request.

The xmlns:wfs="http://www.opengis.net/wfs/2.0" attribute in the illustration above declares a namespace: it indicates that elements and data types prefixed with wfs come from the "http://www.opengis.net/wfs/2.0" namespace. For more information, see XSD - The <schema> Element.
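
The prefix itself is only shorthand for the full namespace URI, which matters once you start selecting elements programmatically. The short sketch below illustrates the idea with Python's standard xml.etree.ElementTree module and a trimmed-down stand-in for a WFS response; the wfs URI is the one shown in the declaration above, the gml URI is the standard GML 3.2 namespace, and the coordinates are sample values.

import xml.etree.ElementTree as ET

# Trimmed-down stand-in for a WFS GetFeature response; real responses are much larger.
xml_text = """<wfs:FeatureCollection xmlns:wfs="http://www.opengis.net/wfs/2.0"
                                      xmlns:gml="http://www.opengis.net/gml/3.2"
                                      numberReturned="1">
  <wfs:member>
    <gml:pos>34.05 -117.19</gml:pos>
  </wfs:member>
</wfs:FeatureCollection>"""

# The prefixes (wfs, gml) are shorthand for these namespace URIs.
ns = {'wfs': 'http://www.opengis.net/wfs/2.0', 'gml': 'http://www.opengis.net/gml/3.2'}

root = ET.fromstring(xml_text)
for member in root.findall('wfs:member', ns):
    pos = member.find('gml:pos', ns)
    print('member position:', pos.text)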

 

At this point, it should be noted that WFS services in ArcGIS Server use Geography Markup Language (GML) to encode the feature data. GML is the XML grammar used to represent geographic information. The GML used in ArcGIS Server WFS services is the Simple Features profile. For more information, see the technical notes in Why use a WFS service?.

 

Explore a WFS service

To begin our adventure, you will need an existing WFS service published that ArcGIS GeoEvent Server can ingest. You might not be aware, but ArcMap provides sample data that can be accessed, by default, in the following location: C:\Program Files (x86)\ArcGIS\Desktop<version>\TemplateData\TemplateData.gdb. Keep in mind you are working with the actual features; therefore, the feature class must reside in a registered enterprise geodatabase before proceeding (see Data sources for ArcGIS Server for more information). For this blog, I have added the USA Cities feature class to ArcMap (ArcGIS Pro works too!) and published it as a service to ArcGIS Server.

 

NOTE: Avoid using special characters in the layer name represented in the Table of Contents in ArcMap or ArcGIS Pro.

 

During the publishing process the following capabilities were enabled in ArcMap.

ArcMap Service Editor dialog box during publishing process.

If you are working with an existing service, you can use ArcGIS Server Manager to ensure you have the appropriate capabilities enabled on the service.

Select and configure capabilities page of published service from within ArcGIS Server's Manager page.

Once the service has finished publishing, it should be shared with Everyone if your ArcGIS environment is a federated ArcGIS Enterprise deployment; otherwise, the workflow below might not work as expected. In the ArcGIS REST Services Directory, browse to the endpoint for the published service and click the WFS link, which performs a GetCapabilities request on the WFS service:

 

Results from WFS getCapabilities request.

Okay, so far so good, but you will need to work with the features of the WFS service, which requires sending a GetFeature request. To accomplish this, you need to know the name of the feature element. You can use the DescribeFeatureType parameter, which describes the field information for one or more feature types in the WFS service. In this case, you are working with Cities, which is returned by this request.

 

The request resembles:

 

URL example for DescribeFeatureType

 

And returns the following XML information:

 

Results from WFS Describe Feature Type request.

 

For additional assistance on this and other parameters, see Communicating with a WFS service in a web browser. Now that you have all of the parameters for the WFS services, you can go ahead and request those features.

 

Your request will look something like:

 

URL example for getFeature
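If you would rather issue this request from a script than type it into a browser's address bar, a minimal sketch using Python's requests library might look like the following. The server, folder, and service names are placeholders for your own deployment, and note that some servers expect typeNames rather than typeName for WFS 2.0:

import requests

# Hypothetical WFS endpoint -- substitute your own server, folder, and service name.
wfs_url = "https://myserver.mydomain.com/arcgis/services/USA/Cities/MapServer/WFSServer"

# GetFeature returns the features themselves as GML-encoded XML.
response = requests.get(wfs_url, params={
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeName": "USA_USA:Cities",
})
print(response.text[:1000])   # preview the first part of the GML response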

 

And the features returned will look like the following:

 

One entire feature returned from the WFS getFeatures request for Cities.

 

The above illustration shows one feature from the GetFeature request. Depending on how many features your service contains, the request might take anywhere from a few seconds to several minutes, and the browser may briefly flash a blob of unformatted text. Be patient and wait for the GetFeature request to perform its magic; all features will be returned as formatted XML. The sample data used here contains 3,159 cities in the USA dataset, and this count is returned within the first XML element of the GetFeature response. Although it is not displayed here, just look for the XML attribute numberReturned="3159". Note that, since the XML data structure for the WFS service also contains GML data, the ever important X and Y location information is listed under the <gml:pos> element. So, enough about WFS services, let’s get to the fun that is GeoEvent Server...

 

Working with XML namespaces in GeoEvent Server

The XML/GML namespace and hierarchy found in a WFS service can get in the way when using default values to configure a new “Poll an External Website for XML” Input Connector in GeoEvent Server. For example, if we specify the GetFeature URL above as the WFS service query parameter, leave the XML Object Name unspecified, and allow GeoEvent Server to auto-generate a GeoEvent Definition for us, the resulting GeoEvent Definition looks like the following:

 

Auto-generated GeoEvent Definition from the WFS service getFeatures request.

 

If we compare the GetFeature request to the GeoEvent Definition, they appear to match up perfectly at first glance. However, notice that all of the namespaces have been stripped from the attribute names. Upon further observation, there is no need for the “metadata” above each “member” attribute (e.g. numberMatched, numberReturned, etc.). Also, we know that each “member” should be processed as a separate event record; therefore, a value for the input connector’s XML Object Name needs to be specified somehow. Looking back at the screenshot above of the GetFeature request and the GeoEvent Definition, the logical choice in this workflow would be to use wfs:member to tell the input connector to look in that list for individual event records.

 

However, when wfs:member is entered as the input connector’s XML Object Name, the event count for the input connector does not increment. Even if the modified input connector attempts to create a new auto-generated GeoEvent Definition with the XML Object Name specified, the count does not increment. Further, if you stop the input, update its properties again, save the input, and then restart it, an ERROR is logged from the com.esri.ges.adapter.xml.XmlInboundAdapter indicating that it is Unable to parse input '' into spatial reference. This is, more than likely, where the GML/XML namespaces are getting in the way.

 

There are two paths the GeoEvent Definition can take from here. If you are lucky, you might find your XML data structure does not contain a double nested hierarchy like the one above. In that case the existing GeoEvent Definition can be modified to include the XML namespaces and you can carry on. With a WFS service, however, manually creating the GeoEvent Definition is necessary. To do so, you will need to specify USA_USA:Cities as a "Group" element, specifically calling out each attribute and element beneath that group (prefixing the namespace designation) while taking care to also map the nested hierarchy for the shape element. Once you create a GeoEvent Definition with these changes applied, you should be able to successfully ingest event data into GeoEvent Server.

 

Below is the GeoEvent Definition created to include the attributes and the corresponding XML namespaces. Take note of the USA_USA:Shape group, with its nested element gml:Point, which is also a group and contains an element gml:pos. Also notice that each field's data type can be specified in the GeoEvent Definition; these data types can be obtained from the WFS DescribeFeatureType results.

 

GeoEvent Definition with namespaces on the left side and the WFS GetFeature response on the right.

 

You may be thinking the namespace for this cities feature looks a little strange, so let me explain. When this data was published to ArcGIS Server, it was placed in a folder named "USA". So, just like the folder name is reflected in the REST URL, it is also added to the XML namespace as USA_USA.

 

Below is the configured “Poll an External Website for XML” Input connector along with an initial GeoEvent Service.

 

On the left is the configured “Poll an External Website for XML” inbound connector and on the right is the start of the GeoEvent Service.

 

As a best practice, GeoEvent Definitions should be a flat representation of the data being ingested. Therefore, it is recommended you re-map those ingested event records. This magic is possible using the Field Mapper Processor. To start, you will need to create an additional GeoEvent Definition, without any of the group elements or namespaces, as the new schema to which you want to map the received data. In addition to flattening the structure, this provides you an opportunity to rename all of the attribute fields if you choose; I did not in this case. You will, however, want to remove the ":" and the unnecessary XML namespace prefixes, which in the example above are USA_USA and gml. The ":" will cause problems later in your GeoEvent Service if you do not remove them. Go ahead and create the flat definition at this point.

With the flat GeoEvent Definition created, you should now have the auto-generated GeoEvent Definition from the XML data structure of the WFS service, which was modified to include the attributes and XML namespaces. In addition, you should also have a second GeoEvent Definition that is flat and includes a Geometry field, with the structure illustrated below.

 

On the left is the Field Mapper processor and on the right is the GeoEvent Service that includes the Field Mapper processor.

 

If you are wondering how we are going to get the Latitude and Longitude from that pos string field, read on.

 

Working with Geometry

 

The finish line is close; we just have one more thing to address: the coordinates in the pos string field need to be converted into a point geometry. The key to this conversion is to recognize that pos is actually a single string containing two coordinate values separated by a space. In this case, the Field Calculator Processor can be used with the expression '{"x":'+ replace( pos, ' ', ',"y":') + ',"spatialReference":{"wkid":4269}}' to convert this string into a JSON string representation of an Esri point feature.

 

The above expression targets the literal space between the first coordinate and the second coordinate and replaces it with the literal string ,"y":. The expression also prepends the literal string {"x": and appends the literal string ,"spatialReference":{"wkid":4269}}, which completes the geometry string with a spatial reference. Remember, the spatial reference can be found in the srsName attribute.
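If it helps to see the transformation outside of GeoEvent Server, here is a rough Python equivalent of the Field Calculator expression above, run against a made-up coordinate pair:

# Hypothetical coordinate pair pulled from a <gml:pos> element.
pos = "34.0522 -117.1825"

geometry_json = '{"x":' + pos.replace(" ", ',"y":') + ',"spatialReference":{"wkid":4269}}'
print(geometry_json)
# {"x":34.0522,"y":-117.1825,"spatialReference":{"wkid":4269}}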

 

Now, let’s explore how this would look in a configured Field Calculator Processor along with the completed GeoEvent Service.

 

On the left is the Field Calculator processor and on the right is the final GeoEvent Service showing the Field Mapper and Field Calculator processors.

 

So far, the GeoEvent Service has been writing out to JSON files. To bring this full circle, let’s compare the JSON output from the auto-generated GeoEvent Definition to the JSON file produced after the Field Mapper and Field Calculator Processors have processed the manually created GeoEvent Definition.

 

On the left is the JSON output from the auto-generated GeoEvent Definition and on the right is the JSON output from the manually created GeoEvent Definition and both processors.

Conclusion and References

I hope the information presented above is useful and provides insight into working with WFS services in GeoEvent Server.

 

As you may have noticed, a lot of the work was related to XML data structures. An additional resource I find useful when working with XML data is the free program, Microsoft - XML Notepad 2007. For help with regular expressions try regex101.

 

You can read more about creating JSON string representations for Esri feature geometry in the How to switch positions on coordinates GeoNet post. There’s also a slightly different approach discussed in the Appendix of the Introduction to GeoEvent Server; however, you would have to slice the two coordinate values out of the string and save them as separate attribute values to use that approach.

 

I cannot finish this blog without also mentioning another great blog written by RJ, JSON Data Structures - Working with Hierarchy and Multicardinality. His discussion on hierarchy ties in nicely with the XML data structures discussed here.

When someone asks you, "What time is it?", you are probably assuming he or she wants to know the local time where the two of you are right now. As I write this, the time now is Tuesday, March 12, 2019 at about 2:25 PM in Redlands, California, USA.

Typically, we do not qualify our answers so explicitly. We say "It's 2 o'clock" and assume it's understood that this is the time right now in Redlands, California. But that is sort of like answering a query about length or distance by simply saying "36". Is that feet, meters, miles, or kilometers?

Last weekend, here in California, we set our clocks ahead one hour to honor daylight savings time (DST). California is now observing Pacific Daylight Time (PDT) which is equal to UTC-7:00 hours. When we specify the time at which an event was observed, we should include the time zone in which the observation is made as well as whether or not the time reflects a local convention honoring daylight savings time.

When ArcGIS GeoEvent Server receives data for processing, event records usually include a date/time value with each observation. Often the date/time value is expressed as a string and does not specify the time zone in which the date/time is expressed or whether the value reflects a daylight savings time offset. These are sort of like the "units" (e.g. feet, meters, miles, or kilometers) which qualify a date/time value.

The intent of this blog is to identify when GeoEvent Server assumes a date/time value is expressed in Coordinated Universal Time (UTC) versus when it is assumed that a date/time expresses a value consistent with the system's locale. We'll explore a couple situations where this might be important and the steps you can take to configure how date/time values are handled and displayed.

Event data ingest should generally assume date/time values are expressed as UTC values

There are several reasons for this. In the interest of brevity, I'll simply note that GeoEvent Server is running in a "server" context. The assumption is that the server machine is not necessarily located in the same time zone as the sensors from which it is receiving data and that clients interested in visualizing the data are likewise not necessarily in the same time zone as the server or the sensors. UTC is the time standard commonly used around the world. The world's timing centers have agreed to synchronize, or coordinate, their date/time values -- hence the name Coordinated Universal Time.(1)

If you have ever used the ArcGIS REST Services Directory to examine the JSON representation of feature records which include a date/time field whose data type is esriFieldTypeDate, you have probably noticed that the value is not a string, it is a number; an epoch long integer representing the number of milliseconds since the UNIX Epoch (January 1, 1970, midnight). The default is to express the value in UTC.(2)(3)
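As a quick sanity check, you can convert one of these epoch values back into a readable date/time with a few lines of Python; the value below is the same epoch value that appears in the examples later in this article:

from datetime import datetime, timezone

# esriFieldTypeDate values are epoch milliseconds; divide by 1000 to get seconds.
epoch_ms = 1552400730000
print(datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc).isoformat())
# 2019-03-12T14:25:30+00:00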

When does GeoEvent Server assume the date/time values it receives are UTC values?

Out-of-the-box, GeoEvent Server supports the ISO 8601 standard for representing date/time values.(4)

It is unusual, however, to find sensor data which expresses the date/time value "March 12, 2019, 2:25:30 pm PDT" as 2019-03-12T14:25:30-07:00. So when a GeoEvent Definition specifies that a particular attribute should be handled as a Date, inbound adapters used by GeoEvent Server inputs will compare received string values to see if they match one of a few commonly used date/time patterns.

For example, GeoEvent Server, out-of-the-box, will recognize the following date/time values as Date values:

  • Tue Mar 12 14:25:30 PDT 2019
  • 03/12/2019 02:25:30 PM
  • 03/12/2019 14:25:30
  • 1552400730000

When one of the above date/time values is handled, and the input's Expected Date Format parameter does not specify a Java SimpleDateFormat expression / pattern, GeoEvent Server will assume the date/time value represents a Coordinated Universal Time (UTC) value.

When will GeoEvent Server assume a date/time value is expressed in the server machine's locale?

When a GeoEvent Server input is configured with a Java SimpleDateFormat expression / pattern the assumption is the input should convert date/time values it receives into an epoch long integer, but treat the value as a local time, not a UTC value.

For example, if your event data represents its date/time values as "Mar 12 2019 14:25:30" and you configure a new Receive JSON on a REST Endpoint  input to use the pattern matching expression MMM dd yyyy HH:mm:ss as its Expected Date Format property, then GeoEvent Server will assume the event record's date/time expresses a value consistent with the system's locale and will convert the date/time to the long integer value 1552425930000.

You can use the EpochConverter online utility to show equivalent date/time string values for this long integer value. Notice in the illustration below that the value 1552425930000 (expressed in epoch milliseconds) is equivalent to both the 12th of March, 2019, at 9:25 PM Greenwich Mean Time (GMT) and 2:25 PM Pacific Daylight Time (PDT):

EpochConverter online utility

The utility's conversion notes that clocks in my time zone are currently seven hours behind GMT and that daylight savings time is currently being observed. You should note that while GMT and UTC are often used interchangeably, they are not the same.(5)
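If you want to reproduce the conversion above outside of GeoEvent Server, here is a rough Python analogy (GeoEvent Server itself uses Java's SimpleDateFormat, not Python). It parses the same date/time string, treats it as a Pacific local time, and arrives at the same epoch value shown in the EpochConverter illustration:

from datetime import datetime
from zoneinfo import ZoneInfo

# Parse the string with a pattern analogous to MMM dd yyyy HH:mm:ss ...
parsed = datetime.strptime("Mar 12 2019 14:25:30", "%b %d %Y %H:%M:%S")

# ... then interpret it as a local (here, Pacific) time rather than UTC.
local = parsed.replace(tzinfo=ZoneInfo("America/Los_Angeles"))
print(int(local.timestamp() * 1000))   # 1552425930000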

 

What if I have to use a SimpleDateFormat expression, because my date/time values are not in a commonly recognized format, but my client applications expect date/time values will be expressed as UTC values?

You have a couple of options. First, if you have the ability to work with your data provider, you could request that the date/time values sent to you specify a time zone as well as the month, day, year, hour, minute, second (etc.).

For example, suppose the event data you want to process could be changed to specify "Mar 12 2019 14:25:30 GMT". This would enable you to configure a Receive JSON on a REST Endpoint  input to use the pattern matching expression MMM dd yyyy HH:mm:ss zzz as its Expected Date Format property since information on the time zone is now included in the date/time string. The input will convert the date/time string to 1552400730000 which is a long integer equivalent of the received date/time string value.

Using the EpochConverter online utility to show the equivalent date/time string values for this long integer value, you can see that the Date value GeoEvent Server is using is a GMT/UTC value:

If the data feed from your data provider cannot be modified you can use GeoEvent Server to compute the proper UTC offset for the ingested "local" date/time value within a GeoEvent Service.

Because GeoEvent Server handles Date attribute values as long integers, in epoch milliseconds, you can use a Field Calculator to add (or subtract) a number of milliseconds equal to the number of hours you need to offset a date/time value to change its representation from "local" time to UTC.

The problem, for a long time, was that you had to use a hard-coded constant value in your Field Calculator's expression, which rendered your GeoEvent Service vulnerable twice a year to time changes if your community started and later stopped observing daylight savings time. Beginning with the ArcGIS GeoEvent Server 10.5.1 release, the Field Calculator supports a new wrapper function that helps address this: currentOffsetUTC()

A Field Calculator, running within a GeoEvent Service on my local server, evaluates currentOffsetUTC() and returns the value -25200000, the millisecond difference between my local system's current date/time and UTC. Currently, here in California, we are observing Pacific Daylight Time (PDT) which is equal to UTC-7:00.

Even though GeoEvent Server assumes date/time values such as "Mar 12 2019 14:25:30" (received without any time zone "units") represent local time values -- because a pattern matching expression MMM dd yyyy HH:mm:ss must be used to interpret the received date/time string values -- I was able to calculate a new date/time value using a dynamic offset and output a value which represents the received date/time as a UTC value. All I had to do was route the event record, with its attribute value ReportedDT (data type: Date) through a Field Calculator configured with the expression:  ReportedDT + currentOffsetUTC()
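For reference, here is a rough Python sketch of the same idea -- compute the local system's current offset from UTC in milliseconds and add it to the ingested date/time value. This is not GeoEvent Server's implementation, just an illustration of the arithmetic:

from datetime import datetime

# Millisecond offset between the local system's time zone and UTC
# (negative when the local zone is behind UTC, e.g. -25200000 for PDT).
offset_ms = int(datetime.now().astimezone().utcoffset().total_seconds() * 1000)

# Mirror the Field Calculator expression ReportedDT + currentOffsetUTC().
reported_dt = 1552425930000            # "Mar 12 2019 14:25:30" ingested as a local time
print(reported_dt + offset_ms)         # 1552400730000 when the offset is -25200000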

How do I configure a web map to display local time rather than UTC time values?

While I recommend that date/time values generally be expressed as UTC values, a frequent complaint when feature records updated by GeoEvent Server are visualized on a web map is that the web map's pop-ups show the date/time values in UTC rather than local time.

It is true that, generally, we do not want to assume that a server machine and sensor network are both located in the same time zone as the localized client applications querying the feature record data. That does not mean that folks in different time zones want to perform the mental arithmetic needed to convert a date/time value displayed by a web map's pop-up from UTC to their local time.

In the past I have recommended data administrators work around this issue using a Field Calculator to offset the date/time, as I've shown above, by a number of hours to "falsely" represent date/time values in their database as local time values. I say "falsely" because most map/feature services are not configured to use a specified time zone. For a long time it wasn't even possible to change the time zone a map/feature service used to represent its temporal data values. There are web pages in the ArcGIS REST API which still specify that feature services return date/time values only as epoch long integers whose UTC values represent the number of milliseconds since the UNIX Epoch (January 1, 1970, midnight). So even if a map/feature service is configured to use a specific time zone, we should not expect all client applications to honor the service's specification.

For now, let's assume our published feature service's JSON specification follows the default and client apps expect UTC values returned when they query the map/feature service. If we use GeoEvent Server to falsely offset the date/time values to local time, the data values in our geodatabase are effectively a lie. Sure, it is easy to say that all client applications have been localized, and assume all server machines, client applications, and reporting sensors are all in one time zone; all we are trying to do is get a web map to stop displaying date/time values in UTC.

But there is a better way to handle this problem. Testing the latest public release (10.6.1) of the Enterprise portal web map and the ArcGIS Online web map, I found that pop-ups can be configured with custom expressions which dynamically calculate new values from existing feature record attributes. These new values can then be selected as the attributes to show in a web map's pop-up rather than the "raw" values from the feature service.

Below are the basic steps necessary to accomplish this:

  1. In your web map, from the Content tab, expand the feature layer's context menu and click Configure Pop-up.
  2. On the lower portion of the Configure Pop-up panel, beneath Attribute Expressions, click Add.
  3. Search the available functions for date functions and build an expression like the one illustrated below.

Web Map | Custom Attributes

Assign the new custom attribute a descriptive name (e.g. localDateTime) and save the attribute calculation. You should now be able to select the dynamic attribute to display along with any other "raw" attributes from the feature layer.

Web Map | Custom Pop-up

 

References:

(1)  UTC – Coordinated Universal Time

(2)  ArcGIS for Developers | ArcGIS REST API

(3)  ArcGIS for Developers | Common Data Types | Feature object

(4)  World Wide Web Consortium | Date and Time Formats

(5)  timeanddate.com - The Difference Between GMT and UTC

(6)  ArcGIS for Developers | ArcGIS REST API | Enterprise Administration | Server | Service Types


 

One of the first contributions I made to the GeoEvent space on GeoNet was a blog titled Understanding GeoEvent Definitions. Technical workshops and best practice discussions have for years recommended that, when you want to use data from event records to add or update feature records in a geodatabase, you start by importing a GeoEvent Definition from the targeted feature service. This allows you to explicitly map an event record’s structure as the last processing step before an add / update feature output. The field mapping guarantees that service requests made by GeoEvent Server match the schema expected by the feature service.

In this blog I would like to expand upon this recommendation and introduce flexibility you may not realize you have when working with feature records in both feature services and stream services. Let's begin by considering a relatively simple GeoEvent Definition describing the structure of a "sample" event record:

GeoEvent Definition

 

Different types of services will have different schema

I could use GeoEvent Manager and the event definition above to publish several different types of services:

  • A traditional feature service using my GIS Server's managed geodatabase (a relational database).
  • A hosted feature service using a spatiotemporal big data store configured with my ArcGIS Enterprise.
  • A stream service without any feature record persistence and no associated geodatabase.

 

Following the best practice recommendation, a Field Mapper Processor should be used to explicitly map an event record structure and ensure that event records routed to a GeoEvent Server output match the schema expected by the service. The GeoEvent Service illustrated below can be used to successfully store feature records in my GIS Server's managed geodatabase. The same feature records can be stored in my ArcGIS Enterprise's spatiotemporal big data store with copies of the feature records broadcast by a stream service:

GeoEvent Service

 

But if you compare the feature records broadcast by the stream service with feature records queried from the different feature services and data stores you should notice some subtle differences. The schema of the various feature records is not the same:

 

Feature Records

 

You might notice that the stream service's geometry is "complete". It has both the coordinate values for the point geometry and the geometry's spatial reference, but this is not what I want to highlight. The feature services also have the spatial reference, they just record it as part of the overall service's metadata rather than including the spatial reference as part of each feature record.

What I want to highlight are the attribute values in the relational data store's feature record and spatiotemporal big data store's feature record which are not in the stream service's feature record. These additional identifier values are created and maintained by the geodatabase and you cannot use GeoEvent Server to update them.

Recall that the SampleRecord GeoEvent Definition illustrated at the top of this article was successfully used to add and update feature records in the different data stores. If new GeoEvent Definitions were imported from each feature service, however, the imported event definitions would reflect the actual schema of their respective feature classes:

GeoEvent Definition

Since the highlighted attribute fields are created and maintained by the geodatabase and cannot be updated, the best practice recommendation is to delete them from the imported GeoEvent Definitions. Even if event records you ingest for processing happen to have string values you think appropriate to use as a globalid for a spatiotemporal feature record, altering the database's assigned identifier would be very bad.

But if I delete the fields from the imported GeoEvent Definitions ...

Exactly. The simplest way to convey the best practice recommendation to import a GeoEvent Definition from a feature service is to say that this ensures event records mapped to the imported event definition will exactly match the structure expected by the feature service. In service-oriented architecture (SOA) terminology this is "honoring the service's contract."

Maybe you did not know that the identifier fields could be safely deleted from the imported GeoEvent Definition, and so chose to keep them, but leave them unmapped when configuring your final Field Mapper Processor. The processor will assign null values to any unmapped attribute fields, and the feature service knows to ignore attempts to update the values that are created and maintained by the geodatabase, so there is really no harm in retaining the unneeded fields. But unless you want a Field Mapper Processor to place a null value in an attribute field, it is best not to leave attribute fields unmapped.

Is it OK to use a partial GeoEvent Definition when adding or updating feature records?

Yes, though you generally only do this when updating existing feature records, not when adding new feature records.

Say, for example, you had published a feature service which specified the codeword attribute could not be null. While such a restriction cannot be placed on a feature service published using GeoEvent Manager, you could use ArcGIS Desktop or ArcGIS Pro to place a restriction nullable: false on a feature class's attribute field to specify that the field's value may not be assigned a null value.

If you use GeoEvent Server to add new feature records to the feature class, leave one or more attribute fields unmapped in the final Field Mapper, and those attribute values are not allowed to be null, requests from GeoEvent Server will be rejected by the feature service -- the add record request does not include sufficient data to satisfy all the restrictions specified by the feature service.

Feature services which have nullable: false restrictions on attribute fields normally also specify a default value to use when a data value is not specified. Assuming the event record you were processing did not have a valid codeword, you could simply delete that attribute field from the Target GeoEvent Definition used by your final Field Mapper and allow the feature service to supply a default value for the missing, yet required, attribute. If the feature service spec does not include default values for required fields, well then, the processing you do within your GeoEvent Service will have to come up with a codeword value.

The point is, if you do not want to attempt to update a particular attribute value in a feature record, either because you do not have a meaningful value, or you do not want to push a null value into the feature record, you can simply not include that attribute field in the structure or schema of event records you route to an output.

Examples where feature record flexibility might be useful

I have worked with customers who use feature services to compile attribute data collected from different sensors. One type of sensor might provide barometric pressure and relative humidity. Another type of sensor might provide ambient temperature and yet another a measure of the amount of rainfall. No single sensor is supplying all the weather data, so no single event record will have all the attribute values you want to include in a single feature record. Presumably, the different sensor types are all associated with a single weather station, whose name could be used as the TRACK_ID for adding and updating feature records, so we can create partial GeoEvent Definitions supporting each type of sensor and update only the specific attribute fields of a feature record with the data provided by a particular type of sensor installed at the weather station.

Another example might be when data records arrive with different frequency. Consider an automated vehicle location (AVL) solution which receives data every two minutes reporting a vehicle's last observed position and speed. A different data feed might provide information for that same vehicle when the vehicle's brakes are pressed particularly hard (signaling, perhaps, an aggressive driving incident). You do not receive "hard brake" event records as frequently as you receive "vehicle position" event records, and you do not want to push null values for speed or location into a feature record whenever an event record signaling aggressive driving is received, so you prepare a partial GeoEvent Definition for the "hard brake" event records and only update that portion of a vehicle's feature record when that type of data is received.

A third example, where using a GeoEvent Definition which either deliberately includes or excludes an attribute value may be helpful, is described in the thread Find new entries when streaming real-time data.

Are stream services as flexible as feature services?

They were not always, no, but changes made to stream services in the ArcGIS 10.6 release relaxed their event record schema requirements. You should still use a Field Mapper Processor to make sure that the spelling and case sensitivity of your event record's attribute fields match those in the stream service's specification. Stream services cannot transfer an attribute value from an event field named codeWord into a field named codeword, for example, but you can now send event records whose structure is a subset of the stream service's schema to a Send Features to a Stream Service output. The output will attempt to handle any necessary data conversions, broadcasting a long integer value when a short integer is received, or broadcasting a string equivalent when a date value is received. The output will also omit any attribute value(s) from the feature record(s) it broadcasts when it does not receive a data value for a particular attribute.

 

Hopefully the additional detail and examples in this discussion illustrate the flexibility you have when working with feature records in both feature services and stream services, and help clarify the best practice recommendation to use a Field Mapper Processor to ensure the structure of event records sent to either a feature service or stream service output has a schema compatible with the service's specification. You can use partial GeoEvent Definitions which model a subset of a feature record's complete schema to avoid pushing null values into a data record and/or avoid attempting to update attribute values you do not want to update (or are not allowed to update).

- RJ

The GeoEvent Server team maintains sample servers which expose both simulated and live data via stream services. For this write-up I will use publicly available services from the following ArcGIS REST Services Directory:

This write-up assumes you have set up a base ArcGIS Enterprise and have included ArcGIS GeoEvent Server as an additional server role in your solution architecture. I will use a deployment which has the base ArcGIS Enterprise and GeoEvent Server installed on a single machine.

Your goal is to receive feature records, formatted as Esri Feature JSON, from an ArcGIS Server stream service. You could, of course, simply add the stream service to an ArcGIS Enterprise portal web map as a stream layer. For this write-up, however, we will look at the steps a custom client must perform to discover the WebSocket associated with a stream service and subscribe to begin receiving data broadcast by the service.

Stream Service Discovery

It is important to recognize that the GIS server hosting a stream service may be on a different server machine than GeoEvent Server. A stream service is discoverable via the ArcGIS Server REST Services Directory, but the WebSocket used to broadcast feature records is run from within the JVM (Java Virtual Machine) used to run GeoEvent Server. If your ArcGIS Enterprise portal and GeoEvent Server have been deployed on separate machines client applications will need to be able to access both servers to discover the stream service and subscribe to the stream service's WebSocket.

If you browse to the ArcGIS REST Services Directory mentioned above you should see a list of available services highlighted below:

GeoEvent Sample Server - stream services

Let’s examine how a client application might subscribe to the LABus stream service. First, the client will need to acquire a token which it will append to its request to subscribe to the stream service’s WebSocket. The WebSocket’s base endpoint is shown on the stream service’s properties page. The token you need is included in the stream service’s JSON specification.

  • Click the LABus stream service to open the service's properties page.
  • In the upper-left corner of  the LABus properties page, click the JSON link
    to open the stream service's JSON specification.

Stream service properties page

  • Scroll to the bottom of the LABus stream service’s JSON specification page and locate
    the stream service’s subscription token.

 

Stream service JSON specification

 

Client applications will need to construct a subscription request which includes both the WebSocket URL and the stream service’s subscription token as a query parameter. The format of the request is illustrated below; make sure to include subscribe in the request:

wss://geoeventsample1.esri.com:6143/arcgis/ws/services/LABus/StreamServer/subscribe?token=some_value

 

Client Subscription Examples

The website websocket.org offers a connection test you can use to verify the subscription request you expect your client application will need to construct. Browse to http://websocket.org and select DEMOS > Echo Test from the menu. Paste the subscription request with the stream service’s WebSocket URL and token into the Location field and click Connect. The websocket.org client should be able to reach the GeoEvent Server sample server and successfully subscribe to the service’s WebSocket. Esri feature records for the Los Angeles Metro buses will be displayed in the Log window.

WebSocket.org

websocket.org homepage

 

WebSocket.org Echo Test

websocket.org Echo Test
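If you would rather test the subscription from a script than a browser, below is a minimal sketch using the third-party Python websockets package (pip install websockets). The token value is a placeholder -- copy the current subscription token from the stream service's JSON specification as described above:

import asyncio
import websockets

# Placeholder token -- use the subscription token from the stream service's JSON specification.
URL = ("wss://geoeventsample1.esri.com:6143/arcgis/ws/services/"
       "LABus/StreamServer/subscribe?token=some_value")

async def listen():
    async with websockets.connect(URL) as ws:
        while True:
            print(await ws.recv())   # each message is one Esri feature record (Feature JSON)

asyncio.run(listen())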

 

You can also configure an input connector in GeoEvent Server to subscribe to the LABus stream service.

  • Log in to GeoEvent Manager.
  • Add a new Subscribe to an External WebSocket for JSON input.
  • Enter a name for the input.
  • Paste the constructed subscription request to the Remote server WebSocket URI property.
  • Allow the input to create a GeoEvent Definition for you.


Subscribe to an External WebSocket for JSON

Do not configure the input to use event attribute values to build a geometry. The records being broadcast by the stream service are Esri feature records, formatted as Esri Feature JSON, which include attributes and geometry as separate values in an event record hierarchy.

Save the new input and navigate to the Monitor page in GeoEvent Manager – you should see your input’s event count increase as event records are received.

You can now incorporate the input into a GeoEvent Service and use filters and/or processors to apply real-time analytics on the event records being ingested. You might, for example, create a GeoEvent Definition with a simpler structure, tag the id field as the TRACK_ID, and use a Field Mapper Processor to flatten the hierarchical structure of each event record received so that you can send them to a TCP/Text output for display using GeoEvent Logger.


Hopefully the examples and illustrations in this write-up are helpful in guiding you through the discovery of stream services, their properties, and how you can use external clients – or configure GeoEvent Server inputs – to receive the feature records that are being broadcast.

In a separate blog, JSON Data Structures - Working with Hierarchy and Multicardinality, I wrote about how data can be organized in a JSON structure, how to recognize data hierarchy and cardinality from a GeoEvent Definition, and how to access data values given a hierarchical, multi-cardinal, data structure.

In this blog, we'll explore XML, another self-describing data format which -- like JSON -- has a specific syntax that organizes data using key/value pairs. XML is similar to JSON, but the two data formats are not interchangeable.

What does XML support that JSON does not?

One difference is that XML supports both attribute and element values whereas JSON really only supports key/value pairs. With JSON you generally expect data values will be associated with named fields. Consider the two examples below (credit: w3schools.com):

<person sex="female">
  <firstname>Anna</firstname>
  <lastname>Smith</lastname>
</person>

The XML in this first example above provides information on a person, "Anna". Her first and last name are provided as elements whereas her gender is provided as an attribute value.

<person>
  <sex>female</sex>
  <firstname>Anna</firstname>
  <lastname>Smith</lastname>
</person>

The XML in this second example above provides the same information, except now all of the data is provided using element values.

Both XML structures are valid, but if you have any influence with your data provider, it is probably better to avoid attribute values and instead use elements exclusively when ingesting XML data into GeoEvent Server. This is only a recommendation, not a requirement. As you will see in the following examples, GeoEvent Server can successfully adapt XML which contains attribute values.

Here's a little secret:  GeoEvent Server does not actually handle XML data at all.

GeoEvent Server uses third party libraries to translate the XML it receives to JSON. The JSON adapter is then used to interpret the data and create event records from the translated data. Because JSON does not support attribute values, all data values in an XML structure must be translated as elements. Consider the following illustration which shows how a block of XML data might be translated to JSON by GeoEvent Server:

XML vs. JSON

Notice the JSON on the right in this example organizes each event record as separate elements in a JSON array. Also notice the first line of the XML on the left which declares the version and encoding being used. The libraries GeoEvent Server uses to translate the XML to JSON really like seeing this information as part of the XML data. Finally, sometimes XML will include non-visible characters such as a BOM (byte-order mark). If the XML you are trying to ingest is not being recognized by an input you've configured, try copying the XML into a text editor and saving a text-only version to strip out any hidden characters.
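To get a feel for the attribute-to-element promotion described above, here is a small sketch using the third-party Python xmltodict package (pip install xmltodict). This is not the library GeoEvent Server actually uses, but it illustrates how an attribute value gets carried over as a key alongside the element values:

import json
import xmltodict

xml_text = """<?xml version="1.0" encoding="utf-8"?>
<person sex="female">
  <firstname>Anna</firstname>
  <lastname>Smith</lastname>
</person>"""

# Attributes are carried over as keys (prefixed with '@' by this particular library).
print(json.dumps(xmltodict.parse(xml_text), indent=2))
# {
#   "person": {
#     "@sex": "female",
#     "firstname": "Anna",
#     "lastname": "Smith"
#   }
# }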

 

Other limitations to consider when ingesting XML

There are several other limitations to consider when ingesting XML data into GeoEvent Server. Sometimes a block of JSON might pass an online JSON validator such as the one provided by JSON Lint but still not be ingested into GeoEvent Server. The JSON syntax rules, for example, do not require that every nested element have a name; yet without a name, it is impossible to construct a GeoEvent Definition since every event attribute must have a name to create a complete GeoEvent Definition.

Similarly, there are XML structures which are perfectly valid which GeoEvent Server may have trouble ingesting. Consider the following block of XML data as an example:

<?xml version="1.0" encoding="utf-8"?>
<data>
  <vehicles>
    <vehicle make="Ford" model="Explorer">
      <license_plate>4GHG892</license_plate>
    </vehicle>
    <vehicle make="Toyota" model="Prius">
      <license_plate>6KLM153</license_plate>
    </vehicle>
  </vehicles>
  <personnel>
    <person fname="James" lname="Albert">
      <employee_number>1234</employee_number>
    </person>
    <person fname="Mary" lname="Smith">
      <employee_number>7890</employee_number>
    </person>
  </personnel>
</data>

The XML data illustrated above contains a mix of both "vehicles" and "personnel". The self-describing nature of the XML makes it apparent to a reader which data elements are which, but an input in GeoEvent Server may still have trouble identifying the multiple occurrences of the different data items if the inbound adapter's XML Object Name property is not specified.

Here is the GeoEvent Definition the inbound adapter generates when its XML Object Name property is left unspecified and the XML data sample above is ingested into GeoEvent Server:

GeoEvent Definition

In testing, the very first time the XML with the combination of "vehicles" and "personnel" was received and written out as JSON to a system text file, I observed only one person and one vehicle were written to the output file. Worse yet, without changing the generated GeoEvent Definition or any of the input connector's properties, sending the exact same XML a second time produced an output file with "vehicles" and "personnel" elements that were empty.

We know from the JSON Data Structures - Working with Hierarchy and Multicardinality blog that, at the very least, the cardinality specified by the generated GeoEvent Definition is not correct. The GeoEvent Definition also implies a nesting of groups within groups, which is probably not correct.

Working around the issue

Let's explore how you might work around the issue identified above using the configurable properties available in GeoEvent Server. First, ensure the XML input connector specifies which node in the XML should be treated as the root node by setting the XML Object Name property accordingly as illustrated below:

GeoEvent Input

Second, verify the GeoEvent Definition has the correct cardinality for the data sub-structure beneath the specified root node as illustrated below:

GeoEvent Definition

By configuring the above properties accordingly, GeoEvent Server will only consider data within the sub-structure found beneath a "vehicles" root node and should make allowances that the sub-structure may contain more than one "vehicle".

XML Sample
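As an aside, the effect of specifying a root node can be illustrated with a few lines of Python using the standard xml.etree.ElementTree library and a trimmed copy of the sample XML above -- only the sub-structure beneath the chosen node is considered, and it is expected to contain many child records:

import xml.etree.ElementTree as ET

# Trimmed copy of the sample XML shown earlier.
xml_text = """<?xml version="1.0" encoding="utf-8"?>
<data>
  <vehicles>
    <vehicle make="Ford" model="Explorer"><license_plate>4GHG892</license_plate></vehicle>
    <vehicle make="Toyota" model="Prius"><license_plate>6KLM153</license_plate></vehicle>
  </vehicles>
  <personnel>
    <person fname="James" lname="Albert"><employee_number>1234</employee_number></person>
  </personnel>
</data>"""

# Roughly analogous to setting XML Object Name to "vehicles": look only beneath that node.
for vehicle in ET.fromstring(xml_text).find("vehicles").findall("vehicle"):
    print(vehicle.get("make"), vehicle.get("model"), vehicle.findtext("license_plate"))
# Ford Explorer 4GHG892
# Toyota Prius 6KLM153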

With this approach, there are two ramifications you might want to consider. First, the inbound adapter is literally throwing half of the received data away by excluding data from any sub-structure found beneath the "personnel" nodes. This can be addressed by making a copy of the existing Receive XML on a REST Endpoint input and configuring this copy to use "personnel" as its XML Object Name. The copied input should also use a different GeoEvent Definition -- one which specifies "person" as an event attribute with cardinality Many and the attributes of a "person" (rather than a "vehicle") as illustrated below.

Copied Input Configuration

Second, the event record being ingested has multiple vehicles (or people) as items in an array. You'll likely want to process each vehicle (or person) as individual event records. To address this, it's recommended you use a processor available on the ArcGIS GeoEvent Server Gallery, specifically the Multicardinal Field Splitter Processor. There are two different field splitter processors provided in the download, so make sure to use the processor that handles multicardinal data structures.

A Multicardinal Field Splitter Processor, added to a GeoEvent Service illustrated below, will clone event records it receives and split the event record so that each record output has only one vehicle (or person). Notice that each event record output from the Multicardinal Field Splitter Processor includes an index at which the element was found in the original array.

GeoEvent Service

Conclusion

The examples I've referenced in this blog are obviously academic. There's no good reason a data provider would mash up people and vehicles this way in the same XML data structure. However, you might come across data structures which are not homogeneous and need to use one or more of the approaches highlighted in this blog to extract a portion of the data out of a data structure. Or you might need to debug your input connector's configuration to figure out why attribute or element values you know to exist in the XML being received are not coming through in the event records being output. Or maybe you expect multiple event records to be ingested from the data you're receiving and end up observing only a few -- or maybe only one. Hopefully the information provided will help you address these challenges when you encounter them.

To summarize, below are the tips I highlighted in this article:

  • Use the GeoEvent Definition as a clue to the hierarchy and cardinality GeoEvent Server is using to define each event record's structure.
  • Specify the root node or element when ingesting XML or JSON; don't let the inbound adapter assume which node should be considered the root. If necessary, specify an interior node as the root node so only a subset of the data is actually considered.
  • Avoid XML data which uses attributes. If you must use XML data with attributes, know that an attempt will be made to promote these as elements when the XML is translated to JSON.
  • Encourage your data providers to design data structures whose records are homogeneous. This can run counter to database normalization instincts where data common to all records is included in a sub-section above each of the actual records. Sometimes simple is better, even when "simple" makes individual data records verbose.
  • Make sure the XML you ingest includes a header specifying its version and encoding -- the libraries GeoEvent Server is using really like seeing this metadata. Also, watch out for hidden characters which are sometimes present in the data.

GeoEvent Server Automatic Configuration Backup Files

It is possible, and in fact preferred, to create XML snapshots of your ArcGIS GeoEvent Server configuration using GeoEvent Manager (Site > GeoEvent > Configuration Store > Export Configuration).

But what if something has gone sideways and you cannot access GeoEvent Manager? Before you delete GeoEvent Server’s ZooKeeper distributed configuration store, you will want to locate a recent XML configuration and see if recent changes to inputs, outputs, GeoEvent Definitions, and GeoEvent Services are in the configuration file.

Beginning with GeoEvent Server 10.5, a copy of the configuration is exported automatically for you, daily, at 00:00:00 hours (local time).

  • Automatic backup files, by default, are written to the following folder:
    C:\ProgramData\Esri\GeoEvent
  • You can change the folder used by editing the folder registered for 'Automatic Backups':
    Site > GeoEvent > Data Stores > Register Folder
  • You can change when and how often snapshots of your configuration are taken:
    Site > Settings > Configure Global Settings > Automatic Backup Settings

 

GeoEvent Server ZooKeeper Files

At the 10.5 / 10.5.1 release – GeoEvent Server uses the “synchronization service” platform service in ArcGIS Server, which runs Apache ZooKeeper behind the scenes. Since this is an ArcGIS Server service, the application files are found in the ArcGIS Server 'local' folder (e.g. C:\arcgisserver\local).

If a system administrator wanted to administratively clear a configuration of GeoEvent Server they could stop the ArcGIS Server platform service -- using the Administrative API -- or stop the ArcGIS Server Windows service and delete the files and folders found beneath C:\arcgisserver\local\zookeeper\.

  • You should leave the parent folder, C:\arcgisserver\local\zookeeper intact.
  • You should also confirm with Esri Technical Support that patches, service packs, or hot-fixes you may have installed have not changed how the “synchronization service” platform service is used by other ArcGIS Enterprise components before administratively deleting files from beneath the ArcGIS Server directories. (ArcGIS GeoAnalytics Server, for example, uses the platform service to elect a machine participating in a multiple-machine analytic as the "leader" for an operation.)

Beginning with the 10.6 release – GeoEvent Server runs its own Apache ZooKeeper instance within the ArcGIS GeoEvent Gateway Windows service. If a system administrator wanted to administratively clear a 10.6 configuration of GeoEvent Server they could stop the ArcGIS GeoEvent Gateway Windows service – which also stops the dependent ArcGIS GeoEvent Server Windows service – and then delete the files and folders found beneath: C:\ProgramData\Esri\GeoEvent-Gateway\zookeeper-data.


GeoEvent Server Kafka File

NOTE: The following only applies to 10.6 and later releases of GeoEvent Server.

Beginning with the 10.6 release – GeoEvent Server is running an Apache Kafka instance as an event message broker within the ArcGIS GeoEvent Gateway Windows service. The message broker uses on-disk topic queues to manage event records. The event records which have been sent from the message broker to a GeoEvent Server instance for processing are recorded within the broker's associated configuration store (e.g. Apache ZooKeeper).

The Kafka message broker provides a transactional message guarantee that the RabbitMQ message broker (used in 10.5.1 and earlier releases) does not provide. If the GeoEvent Gateway on a machine were stopped and restarted, the configuration store will have recorded where event message processing was suspended and will use indexes into the topic queues to resume processing previously received event records.

The topic queue files are closed, new files created, and old files deleted according to a configurable data retention strategy. However, if the GeoEvent Gateway were stopped and its ZooKeeper configuration were deleted, the Kafka topic queues would likely be orphaned, and potentially large message log files might not be deleted from disk according to the data retention strategy. In this case, a system administrator might need to locate and delete the topic queue files from beneath C:\ProgramData\Esri\GeoEvent-Gateway\kafka.

 

GeoEvent Server Runtime Files

When GeoEvent Server is initially launched, following a new product installation, a number of files are created as the system framework is built. These files, referred to as “cached bundles”, are written into a \data folder in the GeoEvent Server installation directory (e.g. C:\Program Files\ArcGIS\Server\GeoEvent\data). Again, if something has gone sideways, a system administrator might want to try deleting these files, forcing the system framework to be rebuilt, before deciding to uninstall and then reinstall GeoEvent Server.

This might be necessary if, for example, you continue to see the message "No Services Found" displayed in a browser window (after several minutes and a browser refresh) when attempting to launch GeoEvent Manager. In this case, deleting the runtime files from the \data folder to force the system framework to be rebuilt may remedy an issue which prevented GeoEvent Server from launching correctly the first time.

Another reason a system administrator may need to force the system framework to be rebuilt might be observing a message that the ArcGIS GeoEvent Server Windows service could not be stopped “in a timely fashion” (when selecting to stop the service using the Windows Task Manager). In this case, an administrator should ensure the process identified in the C:\Program Files\ArcGIS\Server\GeoEvent\instances\instance.properties file has been stopped. Administratively terminating this process to stop GeoEvent Server can leave the system framework in a bad state, requiring the \data files be deleted so the framework can be rebuilt.

 

Administratively Reset GeoEvent Server

Deleting the Apache ZooKeeper files (to administratively clear the GeoEvent Server configuration), the product’s runtime files (to force the system framework to be rebuilt), and removing previously received event messages (by deleting Kafka topic queues from disk) is how system administrators reset a GeoEvent Server instance to look like the product has just been installed. Below are the steps and system folders you need to access to administratively reset GeoEvent Server at the 10.5.x and 10.6.x releases.

 

If you have custom components in the C:\Program Files\ArcGIS\Server\GeoEvent\deploy folder, move these from the \deploy folder to a local temporary folder, while GeoEvent Server is running, to prevent the component from being restored (from the distributed configuration store) when GeoEvent Server is restarted. Also, make sure you have a copy of the most recent XML export of your GeoEvent Server configuration if you want to save the elements you have created.

10.5.x

  You should confirm with Esri Technical Support the system folders and files you plan to delete before executing the steps below. Files you delete following the steps below are irrecoverable.

  1. Stop the ArcGIS Server Windows service.
    (This will also stop the GeoEvent Server Windows service)
  2. Locate and delete the files and folders beneath C:\Program Files\ArcGIS\Server\GeoEvent\data
    (Leave the \data folder intact)
  3. Locate and delete the files and folders beneath C:\arcgisserver\local\zookeeper
    (Leave the \zookeeper folder intact)
  4. Locate and delete the files and folders beneath C:\ProgramData\Esri\GeoEvent
    (Leave the \GeoEvent folder intact)
  5. Start the ArcGIS Server Windows service.
    (Confirm you can log in to the ArcGIS Server Manager web application)
  6. Start the ArcGIS GeoEvent Server Windows service.

10.6.x

  Note that the lifecycle of the ArcGIS GeoEvent Gateway service is intended to mirror that of the operating system.
  You can administratively reset GeoEvent Server (e.g. deleting its runtime files from its \data folder) without stopping the ArcGIS GeoEvent Gateway service -- unless you also want to administratively delete the ZooKeeper files from the configuration store (which in the 10.6.x are maintained as part of the ArcGIS GeoEvent Gateway service).

  1. Stop the ArcGIS GeoEvent Server Windows service.
  2. Locate and delete the files and folders beneath the following directories (leaving the parent folders intact):
    C:\Program Files\ArcGIS\Server\GeoEvent\data\
    C:\ProgramData\Esri\GeoEvent\
  3. Stop the ArcGIS GeoEvent Gateway Windows service.
    This will also stop the ArcGIS GeoEvent Server Windows service if it is running.
  4. Locate and delete the files and folders beneath the following directories.
    Leave the parent folders (highlighted) intact:
    C:\ProgramData\Esri\GeoEvent-Gateway\zookeeper-data
    C:\Program Files\ArcGIS\Server\GeoEvent\gateway\log
  5. If you delete the zookeeper-data files, you should remove any orphaned topic queues
    by deleting the on-disk Kafka logs (delete the 'logs' sub-folder, leave the 'kafka' folder intact):
    C:\ProgramData\Esri\GeoEvent-Gateway\kafka\logs
  6. Locate and delete the GeoEvent Gateway configuration file (a new file will be rebuilt).
    C:\Program Files\ArcGIS\Server\GeoEvent\etc\com.esri.ges.gateway.cfg
  7. Start the ArcGIS GeoEvent Server Windows service.
    This will start the ArcGIS GeoEvent Gateway service if it has been stopped.
    Confirm you can log in to GeoEvent Manager.

At this point you can also review the contents of the rebuilt com.esri.ges.gateway.cfg file. The GeoEvent Gateway will record its message broker and configuration store port configurations in this file if it was able to launch successfully:

gateway.zookeeper.connect=MY-MACHINE.MY-DOMAIN:4181

gateway.kafka.brokers=MY-MACHINE.MY-DOMAIN:9192

gateway.kafka.topic.partitions=3

gateway.kafka.topic.replication.factor=3