
When someone asks you, "What time is it?", you probably assume they want to know the local time where the two of you are right now. As I write this, it is Tuesday, March 12, 2019 at about 2:25 PM in Redlands, California, USA.

Typically, we do not qualify our answers so explicitly. We say "It's 2 o'clock" and assume it's understood that this is the time right now in Redlands, California. But that is sort of like answering a query about length or distance by simply saying "36". Is that feet, meters, miles, or kilometers?

Last weekend, here in California, we set our clocks ahead one hour to observe daylight saving time (DST). California is now observing Pacific Daylight Time (PDT), which is equal to UTC-7:00 hours. When we specify the time at which an event was observed, we should include the time zone in which the observation was made as well as whether or not the time reflects a local convention observing daylight saving time.

When ArcGIS GeoEvent Server receives data for processing, event records usually include a date/time value with each observation. Often the date/time value is expressed as a string and does not specify the time zone in which the date/time is expressed or whether the value reflects a daylight savings time offset. These are sort of like the "units" (e.g. feet, meters, miles, or kilometers) which qualify a date/time value.

The intent of this blog is to identify when GeoEvent Server assumes a date/time value is expressed in Coordinated Universal Time (UTC) versus when it is assumed that a date/time expresses a value consistent with the system's locale. We'll explore a couple of situations where this might be important and the steps you can take to configure how date/time values are handled and displayed.

Event data ingest should generally assume date/time values are expressed as UTC values

There are several reasons for this. In the interest of brevity, I'll simply note that GeoEvent Server is running in a "server" context. The assumption is that the server machine is not necessarily located in the same time zone as the sensors from which it is receiving data and that clients interested in visualizing the data are likewise not necessarily in the same time zone as the server or the sensors. UTC is the time standard commonly used around the world. The world's timing centers have agreed to synchronize, or coordinate, their date/time values -- hence the name Coordinated Universal Time.(1)

If you have ever used the ArcGIS REST Services Directory to examine the JSON representation of feature records which include a date/time field whose data type is esriFieldTypeDate, you have probably noticed that the value is not a string, it is a number; an epoch long integer representing the number of milliseconds since the UNIX Epoch (January 1, 1970, midnight UTC). The default is to express the value in UTC.(2)(3)
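For example, here is a minimal Java sketch, using the standard java.time API, that maps such an epoch value back to a human-readable UTC date/time. The value 1552400730000 is borrowed from the examples later in this post:

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class EpochToUtc {
    public static void main(String[] args) {
        long epochMillis = 1552400730000L; // a value a feature service might return

        // Interpret the epoch value as an instant on the UTC timeline
        Instant instant = Instant.ofEpochMilli(epochMillis);

        // Format the instant as an ISO 8601 string in UTC
        String utc = DateTimeFormatter.ISO_OFFSET_DATE_TIME
            .withZone(ZoneOffset.UTC)
            .format(instant);

        System.out.println(utc); // prints 2019-03-12T14:25:30Z
    }
}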

When does GeoEvent Server assume the date/time values it receives are UTC values?

Out-of-the-box, GeoEvent Server supports the ISO 8601 standard for representing date/time values.(4)

It is unusual, however, to find sensor data which expresses the date/time value "March 12, 2019, 2:25:30 pm PDT" as 2019-03-12T14:25:30-07:00. So when a GeoEvent Definition specifies that a particular attribute should be handled as a Date, inbound adapters used by GeoEvent Server inputs will compare received string values to see if they match one of a few commonly used date/time patterns.

For example, GeoEvent Server, out-of-the-box, will recognize the following date/time values as Date values:

  • Tue Mar 12 14:25:30 PDT 2019
  • 03/12/2019 02:25:30 PM
  • 03/12/2019 14:25:30
  • 1552400730000

When one of the above date/time values is handled, and the input's Expected Date Format parameter does not specify a Java SimpleDateFormat expression / pattern, GeoEvent Server will assume the date/time value represents a Coordinated Universal Time (UTC) value.

When will GeoEvent Server assume a date/time value is expressed in the server machine's locale?

When a GeoEvent Server input is configured with a Java SimpleDateFormat expression / pattern, the assumption is that the input should convert date/time values it receives into an epoch long integer, but treat each value as a local time, not a UTC value.

For example, if your event data represents its date/time values as "Mar 12 2019 14:25:30" and you configure a new Receive JSON on a REST Endpoint input to use the pattern matching expression MMM dd yyyy HH:mm:ss as its Expected Date Format property, then GeoEvent Server will assume the event record's date/time expresses a value consistent with the system's locale and will convert the date/time to the long integer value 1552425930000.
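You can reproduce this behavior with a short Java sketch using SimpleDateFormat. On a machine whose default time zone is Pacific Daylight Time (an assumption matching the example above), parsing the string with the machine's locale yields one epoch value, while explicitly setting the formatter's time zone to UTC yields another:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.TimeZone;

public class ExpectedDateFormatDemo {
    public static void main(String[] args) throws ParseException {
        String observed = "Mar 12 2019 14:25:30";
        SimpleDateFormat fmt = new SimpleDateFormat("MMM dd yyyy HH:mm:ss", Locale.US);

        // Parsed using the machine's default time zone (assumed PDT, UTC-7:00)
        System.out.println(fmt.parse(observed).getTime()); // 1552425930000

        // Parsed as a UTC value instead
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(fmt.parse(observed).getTime()); // 1552400730000
    }
}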

You can use the EpochConverter online utility to show equivalent date/time string values for this long integer value. Notice in the illustration below that the value 1552425930000 (expressed in epoch milliseconds) is equivalent to both the 12th of March, 2019, at 9:25 PM Greenwich Mean Time (GMT) and 2:25 PM Pacific Daylight Time (PDT):

EpochConverter online utility

The utility's conversion notes that clocks in my time zone are currently seven hours behind GMT and that daylight saving time is currently being observed. You should note that while GMT and UTC are often used interchangeably, they are not the same.(5)

 

What if I have to use a SimpleDateFormat expression, because my date/time values are not in a commonly recognized format, but my client applications expect date/time values to be expressed as UTC values?

You have a couple of options. First, if you have the ability to work with your data provider, you could request that the date/time values sent to you specify a time zone as well as the month, day, year, hour, minute, second (etc.).

For example, suppose the event data you want to process could be changed to specify "Mar 12 2019 14:25:30 GMT". This would enable you to configure a Receive JSON on a REST Endpoint input to use the pattern matching expression MMM dd yyyy HH:mm:ss zzz as its Expected Date Format property since information on the time zone is now included in the date/time string. The input will convert the date/time string to 1552400730000 which is a long integer equivalent of the received date/time string value.

Using the EpochConverter online utility to show the equivalent date/time string values for this long integer value, you can see that the Date value GeoEvent Server is using is a GMT/UTC value.

If the data feed from your data provider cannot be modified, you can use GeoEvent Server to compute the proper UTC offset for the ingested "local" date/time value within a GeoEvent Service.

Because GeoEvent Server handles Date attribute values as long integers, in epoch milliseconds, you can use a Field Calculator to add (or subtract) a number of milliseconds equal to the number of hours you need to offset a date/time value to change its representation from "local" time to UTC.

The problem, for a long time, was that you had to use a hard-coded constant value in your Field Calculator's expression, which left your GeoEvent Service vulnerable to the twice-yearly time changes in communities observing daylight saving time. Beginning with ArcGIS GeoEvent Server 10.5.1, the Field Calculator supports a new wrapper function that helps address this: currentOffsetUTC()

A Field Calculator, running within a GeoEvent Service on my local server, evaluates currentOffsetUTC() and returns the value -25200000, the millisecond difference between my local system's current date/time and UTC. Currently, here in California, we are observing Pacific Daylight Time (PDT) which is equal to UTC-7:00.

Even though GeoEvent Server assumes date/time values such as "Mar 12 2019 14:25:30" (received without any time zone "units") represent local time values -- because a pattern matching expression MMM dd yyyy HH:mm:ss must be used to interpret the received date/time string values -- I was able to calculate a new date/time value using a dynamic offset and output a value which represents the received date/time as a UTC value. All I had to do was route the event record, with its attribute value ReportedDT (data type: Date), through a Field Calculator configured with the expression: ReportedDT + currentOffsetUTC()
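Conceptually, currentOffsetUTC() behaves like the default time zone's current offset from UTC, in milliseconds. The Java sketch below mirrors the ReportedDT + currentOffsetUTC() expression; it is an illustration of the arithmetic, not GeoEvent Server's actual implementation:

import java.util.TimeZone;

public class CurrentOffsetUtcDemo {
    public static void main(String[] args) {
        long reportedDt = 1552425930000L; // "Mar 12 2019 14:25:30" parsed as local (PDT) time

        // Offset of the local time zone from UTC right now, in milliseconds;
        // -25200000 for PDT (UTC-7:00), and it tracks daylight saving changes.
        long currentOffsetUtc = TimeZone.getDefault().getOffset(System.currentTimeMillis());

        System.out.println(reportedDt + currentOffsetUtc); // 1552400730000 in the Pacific time zone
    }
}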

How do I configure a web map to display local time rather than UTC time values?

When I recommend that date/time values generally be expressed as UTC values, a frequent complaint is that when feature records updated by GeoEvent Server are visualized on a web map, the web map's pop-ups show the date/time values in UTC rather than local time.

It is true that, generally, we do not want to assume that a server machine and sensor network are both located in the same time zone as the localized client applications querying the feature record data. That does not mean that folks in different time zones want to perform the mental arithmetic needed to convert a date/time value displayed by a web map's pop-up from UTC to their local time.

In the past I have recommended data administrators work around this issue using a Field Calculator to offset the date/time, as I've shown above, by a number of hours to "falsely" represent date/time values in their database as local time values. I say "falsely" because most map/feature services are not configured to use a specified time zone. For a long time it wasn't even possible to change the time zone a map/feature service used to represent its temporal data values. There are web pages in the ArcGIS REST API which still specify that feature services return date/time values only as epoch long integers whose UTC values represent the number of milliseconds since the UNIX Epoch (January 1, 1970, midnight). So even if a map/feature service is configured to use a specific time zone, we should not expect all client applications to honor the service's specification.

For now, let's assume our published feature service's JSON specification follows the default and client apps expect UTC values to be returned when they query the map/feature service. If we use GeoEvent Server to falsely offset the date/time values to local time, the data values in our geodatabase are effectively a lie. Sure, it is easy to say that all client applications have been localized and to assume that all server machines, client applications, and reporting sensors are in one time zone; all we are trying to do is get a web map to stop displaying date/time values in UTC.

But there is a better way to handle this problem. Testing the latest public release (10.6.1) of the Enterprise portal web map and the ArcGIS Online web map, I found that pop-ups can be configured with custom expressions which dynamically calculate new values from existing feature record attributes. These new values can then be selected as the attributes to show in a web map's pop-up rather than the "raw" values from the feature service.

Below are the basic steps necessary to accomplish this:

  1. In your web map, from the Content tab, expand the feature layer's context menu and click Configure Pop-up.
  2. On the lower portion of the Configure Pop-up panel, beneath Attribute Expressions, click Add.
  3. Search the available functions for date functions and build an expression like the one illustrated below.

Web Map | Custom Attributes
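In my testing, the attribute expression was along the lines of the following Arcade sketch. The field name ReportedDT is a hypothetical stand-in for your layer's date field, and the format string is just one example of what Arcade's Text() function accepts:

// ReportedDT is a hypothetical date field name; substitute your own.
// ToLocal() shifts the UTC date into the viewer's local time zone and
// Text() formats the result for display in the pop-up.
Text(ToLocal($feature.ReportedDT), 'Y-MM-DD HH:mm')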

Assign the new custom attribute a descriptive name (e.g. localDateTime) and save the attribute calculation. You should now be able to select the dynamic attribute to display along with any other "raw" attributes from the feature layer.

Web Map | Custom Pop-up

 

References:

(1)  UTC – Coordinated Universal Time

(2)  ArcGIS for Developers | ArcGIS REST API

(3)  ArcGIS for Developers | Common Data Types | Feature object

(4)  World Wide Web Consortium | Date and Time Formats

(5)  timeanddate.com - The Difference Between GMT and UTC

(6)  ArcGIS for Developers | ArcGIS REST API | Enterprise Administration | Server | Service Types


 

One of the first contributions I made to the GeoEvent space on GeoNet was a blog titled Understanding GeoEvent Definitions. Technical workshops and best practice discussions have for years recommended that, when you want to use data from event records to add or update feature records in a geodatabase, you start by importing a GeoEvent Definition from the targeted feature service. This allows you to explicitly map an event record's structure as the last processing step before an add / update feature output. The field mapping guarantees that service requests made by GeoEvent Server match the schema expected by the feature service.

In this blog I would like to expand upon this recommendation and introduce flexibility you may not realize you have when working with feature records in both feature services and stream services. Let's begin by considering a relatively simple GeoEvent Definition describing the structure of a "sample" event record:

GeoEvent Definition

 

Different types of services will have different schemas

I could use GeoEvent Manager and the event definition above to publish several different types of services:

  • A traditional feature service using my GIS Server's managed geodatabase (a relational database).
  • A hosted feature service using a spatiotemporal big data store configured with my ArcGIS Enterprise.
  • A stream service without any feature record persistence and no associated geodatabase.

 

Following the best practice recommendation, a Field Mapper Processor should be used to explicitly map an event record structure and ensure that event records routed to a GeoEvent Server output match the schema expected by the service. The GeoEvent Service illustrated below can be used to successfully store feature records in my GIS Server's managed geodatabase. The same feature records can be stored in my ArcGIS Enterprise's spatiotemporal big data store with copies of the feature records broadcast by a stream service:

GeoEvent Service

 

But if you compare the feature records broadcast by the stream service with feature records queried from the different feature services and data stores you should notice some subtle differences. The schema of the various feature records is not the same:

 

Feature Records

 

You might notice that the stream service's geometry is "complete". It has both the coordinate values for the point geometry and the geometry's spatial reference, but this is not what I want to highlight. The feature services also have the spatial reference; they just record it as part of the overall service's metadata rather than including the spatial reference as part of each feature record.

What I want to highlight are the attribute values in the relational data store's feature record and spatiotemporal big data store's feature record which are not in the stream service's feature record. These additional identifier values are created and maintained by the geodatabase and you cannot use GeoEvent Server to update them.

Recall that the SampleRecord GeoEvent Definition illustrated at the top of this article was successfully used to add and update feature records in the different data stores. If new GeoEvent Definitions were imported from each feature service, however, the imported event definitions would reflect the actual schema of their respective feature classes:

GeoEvent Definition

Since the highlighted attribute fields are created and maintained by the geodatabase and cannot be updated, the best practice recommendation is to delete them from the imported GeoEvent Definitions. Even if event records you ingest for processing happen to have string values you think appropriate to use as a globalid for a spatiotemporal feature record, altering the database's assigned identifier would be very bad.

But if I delete the fields from the imported GeoEvent Definitions ...

Exactly. The simplest way to convey the best practice recommendation to import a GeoEvent Definition from a feature service is to say that this ensures event records mapped to the imported event definition will exactly match the structure expected by the feature service. In service-oriented architecture (SOA) terminology this is "honoring the service's contract."

Maybe you did not know that the identifier fields could be safely deleted from the imported GeoEvent Definition, and so chose to keep them, but leave them unmapped when configuring your final Field Mapper Processor. The processor will assign null values to any unmapped attribute fields, and the feature service knows to ignore attempts to update the values that are created and maintained by the geodatabase, so there is really no harm in retaining the unneeded fields. But unless you want a Field Mapper Processor to place a null value in an attribute field, it is best not to leave attribute fields unmapped.

Is it OK to use a partial GeoEvent Definition when adding or updating feature records?

Yes, though you generally only do this when updating existing feature records, not when adding new feature records.

Say, for example, you had published a feature service which specified the codeword attribute could not be null. While such a restriction cannot be placed on a feature service published using GeoEvent Manager, you could use ArcGIS Desktop or ArcGIS Pro to place a restriction nullable: false on a feature class's attribute field to specify that the field's value may not be assigned a null value.
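In the feature service's JSON specification, such a field might look something like the sketch below. The codeword field and its default value are hypothetical, but nullable and defaultValue are standard properties of the ArcGIS REST API's field object:

{
  "name": "codeword",
  "type": "esriFieldTypeString",
  "alias": "codeword",
  "length": 50,
  "nullable": false,
  "defaultValue": "UNASSIGNED"
}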

If you use GeoEvent Server to add new feature records to the feature class, leave one or more attribute fields unmapped in the final Field Mapper, and those attribute values are not allowed to be null, the requests from GeoEvent Server will be rejected by the feature service -- the add record request does not include sufficient data to satisfy all the restrictions specified by the feature service.

Feature services which have nullable: false restrictions on attribute fields normally also specify a default value to use when a data value is not specified. Assuming the event record you were processing did not have a valid codeword, you could simply delete that attribute field from the Target GeoEvent Definition used by your final Field Mapper and allow the feature service to supply a default value for the missing, yet required, attribute. If the feature service spec does not include default values for required fields, well then, the processing you do within your GeoEvent Service will have to come up with a codeword value.

The point is, if you do not want to attempt to update a particular attribute value in a feature record, either because you do not have a meaningful value, or you do not want to push a null value into the feature record, you can simply not include that attribute field in the structure or schema of event records you route to an output.

Examples where feature record flexibility might be useful

I have worked with customers who use feature services to compile attribute data collected from different sensors. One type of sensor might provide barometric pressure and relative humidity. Another type of sensor might provide ambient temperature and yet another a measure of the amount of rainfall. No single sensor is supplying all the weather data, so no single event record will have all the attribute values you want to include in a single feature record. Presumably, the different sensor types are all associated with a single weather station, whose name could be used as the TRACK_ID for adding and updating feature records, so we can create partial GeoEvent Definitions supporting each type of sensor and update only the specific attribute fields of a feature record with the data provided by a particular type of sensor installed at the weather station.

Another example might be when data records arrive with different frequency. Consider an automated vehicle location (AVL) solution which receives data every two minutes reporting a vehicle's last observed position and speed. A different data feed might provide information for that same vehicle when the vehicle's brakes are pressed particularly hard (signaling, perhaps, an aggressive driving incident). You do not receive "hard brake" event records as frequently as you receive "vehicle position" event records, and you do not want to push null values for speed or location into a feature record whenever an event record signaling aggressive driving is received, so you prepare a partial GeoEvent Definition for the "hard brake" event records and only update that portion of a vehicle's feature record when that type of data is received.

Are stream services as flexible as feature services?

Historically, no, but changes made to stream services in the ArcGIS 10.6 release relaxed their event record schema requirements. You should still use a Field Mapper Processor to make sure that the spelling and case sensitivity of your event record's attribute fields match those in the stream service's specification. Stream services cannot transfer an attribute value from an event field named codeWord into a field named codeword for example, but you can now send event records whose structure is a subset of the stream service's schema to a Send Features to a Stream Service output. The output will attempt to handle any necessary data conversions, broadcasting a long integer value when a short integer is received, or broadcasting a string equivalent when a date value is received. The output will also omit any attribute value(s) from the feature record(s) it broadcasts when it does not receive a data value for a particular attribute.

 

Hopefully the additional detail and examples in this discussion illustrate the flexibility you have when working with feature records in both feature services and stream services, and help clarify the best practice recommendation to use a Field Mapper Processor to ensure event records sent to either a feature service or stream service output have a schema compatible with the service's specification. You can use partial GeoEvent Definitions which model a subset of a feature record's complete schema to avoid pushing null values into a data record and/or avoid attempting to update attribute values you do not want to update (or are not allowed to update).

- RJ

The GeoEvent Server team maintains sample servers which expose both simulated and live data via stream services. For this write-up I will use publicly available services from the team's sample ArcGIS REST Services Directory on geoeventsample1.esri.com.

This write-up assumes you have set up a base ArcGIS Enterprise and have included ArcGIS GeoEvent Server as an additional server role in your solution architecture. I will use a deployment which has the base ArcGIS Enterprise and GeoEvent Server installed on a single machine.

Your goal is to receive feature records, formatted as Esri Feature JSON, from an ArcGIS Server stream service. You could, of course, simply add the stream service to an ArcGIS Enterprise portal web map as a stream layer. For this write-up, however, we will look at the steps a custom client must perform to discover the WebSocket associated with a stream service and subscribe to begin receiving data broadcast by the service.

Stream Service Discovery

It is important to recognize that the GIS server hosting a stream service may be on a different server machine than GeoEvent Server. A stream service is discoverable via the ArcGIS Server REST Services Directory, but the WebSocket used to broadcast feature records is run from within the JVM (Java Virtual Machine) used to run GeoEvent Server. If your ArcGIS Enterprise portal and GeoEvent Server have been deployed on separate machines, client applications will need to be able to access both servers to discover the stream service and subscribe to the stream service's WebSocket.

If you browse to the ArcGIS REST Services Directory mentioned above you should see a list of available services highlighted below:

GeoEvent Sample Server - stream services

Let’s examine how a client application might subscribe to the LABus stream service. First, the client will need to acquire a token which it will append to its request to subscribe to the stream service’s WebSocket. The WebSocket’s base endpoint is shown on the stream service’s properties page. The token you need is included in the stream service’s JSON specification.

  • Click the LABus stream service to open the service's properties page.
  • In the upper-left corner of  the LABus properties page, click the JSON link
    to open the stream service's JSON specification.

Stream service properties page

  • Scroll to the bottom of the LABus stream service’s JSON specification page and locate
    the stream service’s subscription token.

 

Stream service JSON specification

 

Client applications will need to construct a subscription request which includes both the WebSocket URL and the stream service’s subscription token as a query parameter. The format of the request is illustrated below; make sure to include subscribe in the request:

wss://geoeventsample1.esri.com:6143/arcgis/ws/services/LABus/StreamServer/subscribe?token=some_value

 

Client Subscription Examples

The website websocket.org offers a connection test you can use to verify the subscription request you expect your client application will need to construct. Browse to http://websocket.org and select DEMOS > Echo Test from the menu. Paste the subscription request with the stream service's WebSocket URL and token into the Location field and click Connect. The websocket.org client should be able to reach the GeoEvent Server sample server and successfully subscribe to the service's WebSocket. Esri feature records for the Los Angeles Metro buses will be displayed in the Log window.

websocket.org homepage

 

websocket.org Echo Test
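If you are writing your own client application, the following minimal Java sketch illustrates the same subscription using the WebSocket client built into Java 11's java.net.http package. The token is a placeholder you must replace with the value from the stream service's JSON specification:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class StreamServiceSubscriber {
    public static void main(String[] args) throws InterruptedException {
        String url = "wss://geoeventsample1.esri.com:6143/arcgis/ws/services"
                   + "/LABus/StreamServer/subscribe?token=some_value";

        WebSocket.Listener listener = new WebSocket.Listener() {
            @Override
            public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                System.out.println(data); // each message is an Esri Feature JSON record
                ws.request(1);            // ask the WebSocket for the next message
                return null;
            }
        };

        HttpClient.newHttpClient()
                  .newWebSocketBuilder()
                  .buildAsync(URI.create(url), listener)
                  .join();

        Thread.sleep(60_000); // keep the JVM alive while feature records arrive
    }
}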

 

You can also configure an input connector in GeoEvent Server to subscribe to the LABus stream service.

  • Log in to GeoEvent Manager.
  • Add a new Subscribe to an External WebSocket for JSON input.
  • Enter a name for the input.
  • Paste the constructed subscription request to the Remote server WebSocket URI property.
  • Allow the input to create a GeoEvent Definition for you.


Subscribe to an External WebSocket for JSON

Do not configure the input to use event attribute values to build a geometry. The records being broadcast by the stream service are Esri feature records, formatted as Esri Feature JSON, which include attributes and geometry as separate values in an event record hierarchy.

Save the new input and navigate to the Monitor page in GeoEvent Manager – you should see your input’s event count increase as event records are received.

You can now incorporate the input into a GeoEvent Service and use filters and/or processors to apply real-time analytics on the event records being ingested. You might, for example, create a GeoEvent Definition with a simpler structure, tag the id field as the TRACK_ID, and use a Field Mapper Processor to flatten the hierarchical structure of each event record received so that you can send them to a TCP/Text output for display using GeoEvent Logger.


Hopefully the examples and illustrations in this write-up are helpful in guiding you through the discovery of stream services, their properties, and how you can use external clients – or configure GeoEvent Server inputs – to receive the feature records that are being broadcast.

In a separate blog, JSON Data Structures - Working with Hierarchy and Multicardinality, I wrote about how data can be organized in a JSON structure, how to recognize data hierarchy and cardinality from a GeoEvent Definition, and how to access data values given a hierarchical, multi-cardinal, data structure.

In this blog, we'll explore XML, another self-describing data format which -- like JSON -- has a specific syntax that organizes data using key/value pairs. XML is similar to JSON, but the two data formats are not interchangeable.

What does XML support that JSON does not?

One difference is that XML supports both attribute and element values whereas JSON really only supports key/value pairs. With JSON you generally expect data values will be associated with named fields. Consider the two examples below (credit: w3schools.com):

<person sex="female">
  <firstname>Anna</firstname>
  <lastname>Smith</lastname>
</person>

The XML in this first example above provides information on a person, "Anna". Her first and last name are provided as elements whereas her gender is provided as an attribute value.

<person>
  <sex>female</sex>
  <firstname>Anna</firstname>
  <lastname>Smith</lastname>
</person>

The XML in this second example above provides the same information, except now all of the data is provided using element values.

Both XML structures are valid, but if you have any influence with your data provider, it is probably better to avoid attribute values and instead use elements exclusively when ingesting XML data into GeoEvent Server. This is only a recommendation, not a requirement. As you will see in the following examples, GeoEvent Server can successfully adapt XML which contains attribute values.

Here's a little secret:  GeoEvent Server does not actually handle XML data at all.

GeoEvent Server uses third party libraries to translate XML it receives to JSON. The JSON adapter is then used to interpret the data and create event records from the translated data. Because JSON does not support attribute values, all data values in an XML structure must be translated as elements. Consider the following illustration which shows how a block of XML data might be translated to JSON by GeoEvent Server:

XML vs. JSON

Notice the JSON on the right in this example organizes each event record as separate elements in a JSON array. Also notice the first line of the XML on the left which declares the version and encoding being used. The libraries GeoEvent Server uses to translate the XML to JSON really like seeing this information as part of the XML data. Finally, sometimes XML will include non-visible characters such as a BOM (byte-order mark). If the XML you are trying to ingest is not being recognized by an input you've configured, try copying the XML into a text editor and saving a text-only version to strip out any hidden characters.
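For example, the attribute-based person record shown earlier might come through the translation looking something like this. This is only a sketch; the exact key naming used for promoted attributes depends on the third party library:

{
  "person": {
    "sex": "female",
    "firstname": "Anna",
    "lastname": "Smith"
  }
}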

 

Other limitations to consider when ingesting XML

There are several other limitations to consider when ingesting XML data into GeoEvent Server. Sometimes a block of JSON might pass an online JSON validator such as the one provided by JSON Lint but still not be ingested into GeoEvent Server. The JSON syntax rules, for example, do not require that every nested element have a name; yet without a name, it is impossible to construct a GeoEvent Definition since every event attribute must have a name to create a complete GeoEvent Definition.

Similarly, there are XML structures which are perfectly valid which GeoEvent Server may have trouble ingesting. Consider the following block of XML data as an example:

<?xml version="1.0" encoding="utf-8"?>
<data>
  <vehicles>
    <vehicle make="Ford" model="Explorer">
      <license_plate>4GHG892</license_plate>
    </vehicle>
    <vehicle make="Toyota" model="Prius">
      <license_plate>6KLM153</license_plate>
    </vehicle>
  </vehicles>
  <personnel>
    <person fname="James" lname="Albert">
      <employee_number>1234</employee_number>
    </person>
    <person fname="Mary" lname="Smith">
      <employee_number>7890</employee_number>
    </person>
  </personnel>
</data>

The XML data illustrated above contains a mix of both "vehicles" and "personnel". The self-describing nature of the XML makes it apparent to a reader which data elements are which, but an input in GeoEvent Server may still have trouble identifying the multiple occurrences of the different data items if the inbound adapter's XML Object Name property is not specified.

Here is the GeoEvent Definition the inbound adapter generates when its XML Object Name property is left unspecified and the XML data sample above is ingested into GeoEvent Server:

GeoEvent Definition

In testing, the very first time the XML with the combination of "vehicles" and "personnel" was received and written out as JSON to a system text file, I observed only one person and one vehicle were written to the output file. Worse yet, without changing the generated GeoEvent Definition or any of the input connector's properties, sending the exact same XML a second time produced an output file with "vehicles" and "personnel" elements that were empty.

We know from the JSON Data Structures - Working with Hierarchy and Multicardinality blog that, at the very least, the cardinality specified by the generated GeoEvent Definition is not correct. The GeoEvent Definition also implies a nesting of groups within groups, which is probably not correct.

Working around the issue

Let's explore how you might work around the issue identified above using the configurable properties available in GeoEvent Server. First, ensure the XML input connector specifies which node in the XML should be treated as the root node by setting the XML Object Name property accordingly as illustrated below:

GeoEvent Input

Second, verify the GeoEvent Definition has the correct cardinality for the data sub-structure beneath the specified root node as illustrated below:

GeoEvent Definition

By configuring the properties above accordingly, GeoEvent Server will only consider data within a sub-structure found beneath a "vehicles" root node and will allow for that sub-structure containing more than one "vehicle".

XML Sample

With this approach, there are two ramifications you might want to consider. First, the inbound adapter is literally throwing half of the received data away by excluding data from any sub-structure found beneath the "personnel" nodes. This can be addressed by making a copy of the existing Receive XML on a REST Endpoint input and configuring this copy to use "personnel" as its XML Object Name. The copied input should also use a different GeoEvent Definition -- one which specifies "person" as an event attribute with cardinality Many and the attributes of a "person" (rather than a "vehicle") as illustrated below.

Copied Input Configuration

Second, the event record being ingested has multiple vehicles (or people) as items in an array. You'll likely want to process each vehicle (or person) as an individual event record. To address this, it's recommended you use a processor available on the ArcGIS GeoEvent Server Gallery, specifically the Multicardinal Field Splitter Processor. There are two different field splitter processors provided in the download, so make sure to use the processor that handles multicardinal data structures.

A Multicardinal Field Splitter Processor, added to a GeoEvent Service illustrated below, will clone event records it receives and split the event record so that each record output has only one vehicle (or person). Notice that each event record output from the Multicardinal Field Splitter Processor includes an index at which the element was found in the original array.

GeoEvent Service

Conclusion

The examples I've referenced in this blog are obviously academic. There's no good reason why a data provider would mash up people and vehicles this way in the same XML data structure. However, you might come across data structures which are not homogeneous and need to use one or more of the approaches highlighted in this blog to extract a portion of the data out of a data structure. Or you might need to debug your input connector's configuration to figure out why attribute or element values you know to exist in the XML being received are not coming through in the event records that are output. Or maybe you expect multiple event records to be ingested from the data you're receiving and end up observing only a few -- or maybe only one -- being ingested. Hopefully the information provided will help you address these challenges when you encounter them.

To summarize, below are the tips I highlighted in this article:

  • Use the GeoEvent Definition as a clue to the hierarchy and cardinality GeoEvent Server is using to define each event record's structure.
  • Specify the root node or element when ingesting XML or JSON; don't let the inbound adapter assume which node should be considered the root. If necessary, specify an interior node as the root node so only a subset of the data is actually considered.
  • Avoid XML data which uses attributes. If you must use XML data with attributes, know that an attempt will be made to promote these as elements when the XML is translated to JSON.
  • Encourage your data providers to design data structures whose records are homogeneous. This can run counter to database normalization instincts where data common to all records is included in a sub-section above each of the actual records. Sometimes simple is better, even when "simple" makes individual data records verbose.
  • Make sure the XML you ingest includes a header specifying its version and encoding -- the libraries GeoEvent Server is using really like seeing this metadata. Also, watch out for hidden characters which are sometimes present in the data.

GeoEvent Server Automatic Configuration Backup Files

It is possible, and in fact preferred, to create XML snapshots of your ArcGIS GeoEvent Server configuration using GeoEvent Manager (Site > GeoEvent > Configuration Store > Export Configuration).

But what if something has gone sideways and you cannot access GeoEvent Manager? Before you delete GeoEvent Server’s ZooKeeper distributed configuration store, you will want to locate a recent XML configuration and see if recent changes to inputs, outputs, GeoEvent Definitions, and GeoEvent Services are in the configuration file.

Beginning with GeoEvent Server 10.5, a copy of the configuration is exported automatically for you, daily, at 00:00:00 hours (local time).

  • Automatic backup files, by default, are written to the following folder:
    C:\ProgramData\Esri\GeoEvent
  • You can change the folder used by editing the folder registered for 'Automatic Backups':
    Site > GeoEvent > Data Stores > Register Folder
  • You can change when and how often snapshots of your configuration are taken:
    Site > Settings > Configure Global Settings > Automatic Backup Settings

 

GeoEvent Server ZooKeeper Files

At the 10.5 / 10.5.1 release – GeoEvent Server uses the “synchronization service” platform service in ArcGIS Server, which runs Apache ZooKeeper behind the scenes. Since this is an ArcGIS Server service, the application files are found in the ArcGIS Server 'local' folder (e.g. C:\arcgisserver\local).

If a system administrator wanted to administratively clear a configuration of GeoEvent Server they could stop the ArcGIS Server platform service -- using the Administrative API -- or stop the ArcGIS Server Windows service and delete the files and folders found beneath C:\arcgisserver\local\zookeeper\.

  • You should leave the parent folder, C:\arcgisserver\local\zookeeper intact.
  • You should also confirm with Esri Technical Support that patches, service packs, or hot-fixes you may have installed have not changed how the “synchronization service” platform service is used by other ArcGIS Enterprise components before administratively deleting files from beneath the ArcGIS Server directories. (ArcGIS GeoAnalytics Server, for example, uses the platform service to elect a machine participating in a multiple-machine analytic as the "leader" for an operation.)

Beginning with the 10.6 release – GeoEvent Server is running its own Apache ZooKeeper instance within the ArcGIS GeoEvent Gateway Windows service. If a system administrator wanted to administratively clear a 10.6 configuration of GeoEvent Server they could stop the ArcGIS GeoEvent Gateway Windows service – which will also stop the dependent ArcGIS GeoEvent Server Windows service – and then delete the files and folders found beneath: C:\ProgramData\Esri\GeoEvent-Gateway\zookeeper-data.


GeoEvent Server Kafka Files

NOTE: The following only applies to 10.6 and later releases of GeoEvent Server.

Beginning with the 10.6 release – GeoEvent Server is running an Apache Kafka instance as an event message broker within the ArcGIS GeoEvent Gateway Windows service. The message broker uses on-disk topic queues to manage event records. The event records which have been sent from the message broker to a GeoEvent Server instance for processing are recorded within the broker's associated configuration store (e.g. Apache ZooKeeper).

The Kafka message broker provides a transactional message guarantee that the RabbitMQ message broker (used in 10.5.1 and earlier releases) does not provide. If the GeoEvent Gateway on a machine were stopped and restarted, the configuration store will have recorded where event message processing was suspended and will use indexes into the topic queues to resume processing previously received event records.

The topic queue files are closed, new files created, and old files deleted according to a configurable data retention strategy. However, if the GeoEvent Gateway were stopped and its ZooKeeper configuration were deleted, the Kafka topic queues will likely be orphaned, and potentially large message log files may not be deleted from disk according to the data retention strategy. In this case, a system administrator might need to locate and delete the topic queue files from beneath C:\ProgramData\Esri\GeoEvent-Gateway\kafka.

 

GeoEvent Server Runtime Files

When GeoEvent Server is initially launched, following a new product installation, a number of files are created as the system framework is built. These files, referred to as “cached bundles”, are written into a \data folder in the GeoEvent Server installation directory (e.g. C:\Program Files\ArcGIS\Server\GeoEvent\data). Again, if something has gone sideways, a system administrator might want to try deleting these files, forcing the system framework to be rebuilt, before deciding to uninstall and then reinstall GeoEvent Server.

This might be necessary if, for example, you continue to see the message "No Services Found" displayed in a browser window (after several minutes and a browser refresh) when attempting to launch GeoEvent Manager. In this case, deleting the runtime files from the \data folder to force the system framework to be rebuilt may remedy an issue which prevented GeoEvent Server from launching correctly the first time.

Another reason a system administrator may need to force the system framework to be rebuilt might be observing a message that the ArcGIS GeoEvent Server Windows service could not be stopped “in a timely fashion” (when selecting to stop the service using the Windows Task Manager). In this case, an administrator should ensure the process identified in the C:\Program Files\ArcGIS\Server\GeoEvent\instances\instance.properties file has been stopped. Administratively terminating this process to stop GeoEvent Server can leave the system framework in a bad state, requiring the \data files be deleted so the framework can be rebuilt.

 

Administratively Reset GeoEvent Server

Deleting the Apache ZooKeeper files (to administratively clear the GeoEvent Server configuration), the product’s runtime files (to force the system framework to be rebuilt), and removing previously received event messages (by deleting Kafka topic queues from disk) is how system administrators reset a GeoEvent Server instance to look like the product has just been installed. Below are the steps and system folders you need to access to administratively reset GeoEvent Server at the 10.5.x and 10.6.x releases.

 

If you have custom components in the C:\Program Files\ArcGIS\Server\GeoEvent\deploy folder, move these from the \deploy folder to a local temporary folder, while GeoEvent Server is running, to prevent the component from being restored (from the distributed configuration store) when GeoEvent Server is restarted. Also, make sure you have a copy of the most recent XML export of your GeoEvent Server configuration if you want to save the elements you have created.

10.5.x

You should confirm with Esri Technical Support the system folders and files you plan to delete before executing the steps below. Files you delete following the steps below are irrecoverable.

  1. Stop the ArcGIS Server Windows service.
    (This will also stop the GeoEvent Server Windows service)
  2. Locate and delete the files and folders beneath C:\Program Files\ArcGIS\Server\GeoEvent\data
    (Leave the \data folder intact)
  3. Locate and delete the files and folders beneath C:\arcgisserver\local\zookeeper
    (Leave the \zookeeper folder intact)
  4. Locate and delete the files and folders beneath C:\ProgramData\Esri\GeoEvent
    (Leave the \GeoEvent folder intact)
  5. Start the ArcGIS Server Windows service.
    (Confirm you can log in to the ArcGIS Server Manager web application)
  6. Start the ArcGIS GeoEvent Server Windows service.

10.6.x

  Note that the lifecycle of the ArcGIS GeoEvent Gateway service is intended to mirror that of the operating system.
  You can administratively reset GeoEvent Server (e.g. deleting its runtime files from its \data folder) without stopping the ArcGIS GeoEvent Gateway service -- unless you also want to administratively delete the ZooKeeper files from the configuration store (which in the 10.6.x are maintained as part of the ArcGIS GeoEvent Gateway service).

  1. Stop the ArcGIS GeoEvent Server Windows service.
  2. Locate and delete the files and folders beneath the following directories (leaving the parent folders intact):
    C:\Program Files\ArcGIS\Server\GeoEvent\data\
    C:\ProgramData\Esri\GeoEvent\
  3. Stop the ArcGIS GeoEvent Gateway Windows service.
    This will also stop the ArcGIS GeoEvent Server Windows service if it is running.
  4. Locate and delete the files and folders beneath the following directories.
    Leave the parent folders intact:
    C:\ProgramData\Esri\GeoEvent-Gateway\zookeeper-data
    C:\Program Files\ArcGIS\Server\GeoEvent\gateway\log
  5. If you delete the zookeeper-data files, you should remove any orphaned topic queues
    by deleting the on-disk Kafka logs (delete the 'logs' sub-folder, leave the 'kafka' folder intact):
    C:\ProgramData\Esri\GeoEvent-Gateway\kafka\logs
  6. Locate and delete the GeoEvent Gateway configuration file (a new file will be rebuilt).
    C:\Program Files\ArcGIS\Server\GeoEvent\etc\com.esri.ges.gateway.cfg
  7. Start the ArcGIS GeoEvent Server Windows service.
    This will start the ArcGIS GeoEvent Gateway service if it has been stopped.
    Confirm you can log in to GeoEvent Manager.

At this point you can also review the contents of the rebuilt com.esri.ges.gateway.cfg file. The GeoEvent Gateway will record its message broker and configuration store port configurations in this file if it was able to launch successfully:

gateway.zookeeper.connect=MY-MACHINE.MY-DOMAIN:4181

gateway.kafka.brokers=MY-MACHINE.MY-DOMAIN:9192

gateway.kafka.topic.partitions=3

gateway.kafka.topic.replication.factor=3

When speaking with customers who want to get started with ArcGIS GeoEvent Server, I'm often asked if GeoEvent Server has an input connector for a specific data vendor or type of device. My answer is almost always that we prefer to integrate via REST and the question you should be asking is: "Does the vendor or device offer a RESTful API whose endpoints a GeoEvent Server input can be configured to query?"

Ideally, you want to be able to answer two integration questions:

  1. How is the data being sent to a GeoEvent Server input?
  2. How is the data formatted; what does the data's structure look like?

For example, an input can be configured to accept data sent to a GeoEvent Server hosted REST endpoint. That answers the first question - integration will occur via REST with the vendor sending data as an HTTP/POST request to a GeoEvent Server endpoint. The second question, how is the data formatted, is the focus of this blog.

What does a typical JSON data record look like?

Typically, when a data vendor sends event data formatted as JSON, there will be multiple event records organized within a list such as this:

{
    "items": [{
                  "id": 3201,
                  "status": "",
                  "calibrated": 1521135120000,
                  "location": {
                         "latitude": -117.125,
                         "longitude": 31.125
                  }
           },
           {
                  "id": 5416,
                  "status": "offline",
                  "calibrated": 1521638100000,
                  "location": {
                         "latitude": -113.325,
                         "longitude": 33.325
                  }
           },
           {
                  "id": 9823,
                  "status": "error",
                  "calibrated": 1522291320000,
                  "location": {
                         "latitude": -111.625,
                         "longitude": 35.625
                  }
           }
    ]
}

 

There are three elements, or objects, in the block of JSON data illustrated above. It would be natural to think of each element as an event record with its own "id", "status", and "location". Each event record also has a date/time the item was last "calibrated" (expressed as an epoch long integer in milliseconds).

 

What do we mean when we refer to a "multi-cardinal" JSON structure?

The JSON data illustrated above is multi-cardinal because the data has been organized within an array. We say the data structure is multi-cardinal because its cardinality, in a mathematical sense of the number of elements in a group, is more than one. The array is enclosed within a pair of square brackets:  "items": [ ... ]

If the array were a list of simple integers the data would look something like:  "values": [ 1, 3, 5, 7, 9 ]

The data elements in the illustration above are not simple integers. Each item is bracketed within curly braces, which is how JSON identifies an object. For GeoEvent Server, it is important both that the array have a name and that each object within the array have a homogeneous structure, meaning that every event record should, generally speaking, use a common schema or collection of name/value pairs to communicate the item's data.

What do we mean when we refer to a "hierarchical" JSON structure?

The data elements in the array are themselves hierarchical. Values associated with "id", "status", and "calibrated" are simple numeric, string, or Boolean values. The "location" value, on the other hand, is an object which encapsulates two child values -- "latitude" and "longitude". Because "location" organizes its data within a sub-structure the overall structure of each data element in the array is considered hierarchical.

It should be noted that the coordinate values within the "location" sub-structure can be used to create a point geometry, but "location" itself is not a geometry. This is evident by examining how a GeoEvent Definition is used to represent the data contained in the illustrated block of JSON.

Different ways of viewing this data using a GeoEvent Definition

In GeoEvent Server, if you were to configure a new Receive JSON on a REST Endpoint input, leaving the JSON Object Name property unspecified, selecting to have a GeoEvent Definition created for you, and specifying that the inbound adapter not attempt to construct a geometry from received attribute values, the GeoEvent Definition created would match the one illustrated below:

GeoEvent Definition

Notice the cardinality of "items" is specified as Many (the infinity sign simply means "more than one"). Also, when the block of JSON data illustrated above is sent to the input via HTTP/POST, the input's event count only increments by one, indicating that only one event record was received.

Also notice that, in this configuration, "items" is a Group element type. This implies that in addition to the structure being multi-cardinal, it's also organized as a group of elements, which in JSON is typically an array.

Finally, notice that the "location" is also a Group element type. The cardinality of "location", however, is One not Many. This tells you that the value is a single element, not an array of elements or values.

Accessing data values

Working with the structure specified in the GeoEvent Definition illustrated above, if you wanted to access the coordinate values for "latitude" or "longitude" you would have to specify which latitude and longitude you wanted. Remember, the data was received as a single event record and "items" is a list or array of elements. Each element in the array has its own set of coordinate values. Consider the following expressions:

  items[2].location.longitude

  items[2].location.latitude

The expressions above specify that the third element in the "items" list is the one in which you are interested. You cannot refer to items.location.latitude because you have not specified an index to select one of the three elements in the "items" array. The array's index is zero-based, which means the first item is at index 0, the second is at index 1, and so on.

Ingesting this data as a single event record is probably not what you would want to do. It is unlikely that an arbitrary choice to use the third element's coordinates, rather than the first or second element in the list, would appropriately represent the items in the list. These three items have significantly different geographic locations, so we should find a way to ingest them as three separate event records.

Re-configuring the data ingest

When I first mentioned configuring a Receive JSON on a REST Endpoint input to allow the illustrated block of JSON to be ingested into GeoEvent Server for processing, I indicated that the JSON Object Name property should be left unspecified. This was done to support a discussion of the data's structure.

If the illustrated JSON data were representative of data you wanted to ingest, you should specify an explicit value for the JSON Object Name parameter when configuring the GeoEvent Server input. In this case, you would specify "items" as the root node of the data structure.

Specifying "items" as the JSON Object Name tells the input to handle the data as an array of values and to ingest each item from the array as its own event record. If you make this change to our input, and delete the GeoEvent Definition it created the last time the JSON data was received, you will get a slightly different GeoEvent Definition generated as illustrated below:

 GeoEvent Definition

The first thing you should notice, when the illustrated block of JSON data is sent to the input, is the input's event count increments by three -- indicating that three event records were received by GeoEvent Server. Looking at the new GeoEvent Definition, notice there is no attribute named "items" -- the elements in the array have been split out so that the event records could be ingested separately. Also notice the cardinality of each of the event record attributes is now One. There are no lists or arrays of multiple elements in the structure specified by this GeoEvent Definition. The "location" is still a Group which is fine; each event record should have (one) location and the coordinate values can legitimately be organized as children within a sub-structure.
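Conceptually, the splitting the input performs when its JSON Object Name is set to "items" resembles the following Java sketch. It uses the Jackson library purely as an illustration, not as a representation of GeoEvent Server's actual implementation:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonObjectNameDemo {
    public static void main(String[] args) throws Exception {
        String payload = "{\"items\":[" +
            "{\"id\":3201,\"status\":\"\",\"calibrated\":1521135120000," +
            "\"location\":{\"latitude\":31.125,\"longitude\":-117.125}}," +
            "{\"id\":5416,\"status\":\"offline\",\"calibrated\":1521638100000," +
            "\"location\":{\"latitude\":33.325,\"longitude\":-113.325}}]}";

        JsonNode root = new ObjectMapper().readTree(payload);

        // With "items" as the root node, each element of the array is
        // handled as its own event record with cardinality One.
        for (JsonNode item : root.get("items")) {
            System.out.println("event record: id=" + item.get("id").asLong()
                + " latitude=" + item.path("location").path("latitude").asDouble()
                + " longitude=" + item.path("location").path("longitude").asDouble());
        }
    }
}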

The updates to the structure specified in the GeoEvent Definition change how the coordinate values are accessed. Now that the event records have been separated, you can access each record's attributes without specifying one of several element indices to select an element from a list.

You should now be ready to re-configure the input to construct a geometry as well as make some minor updates to the data types of each attribute in the GeoEvent Definition in order to handle "id" as a Long and "calibrated" as a Date. You also need to add a new field of type Geometry to the GeoEvent Definition to hold the geometry being constructed.

GeoEvent Input

GeoEvent Definition

Hopefully this blog provided some additional insight on working with hierarchical and multi-cardinal JSON data structures in GeoEvent Server. If you have ideas for future blog posts, let me know; the team is always looking for ways to make you more successful with the Real-Time & Big Data GIS capabilities of ArcGIS.

When a GeoEvent Service processes an event record, the processing is generally atomic. In other words, a filter or processor considering an event record's attributes and geometry has no information on other event records previously processed and will not cache or save the current event record's attributes or geometry for later consideration by an event record not yet received.

 

There are a few exceptions - monitor processors such as the Incident Detector or Track Gap Detector necessarily cache some information in order to monitor ongoing conditions. And filters configured with ENTER or EXIT criteria need to know something about the position of the last reported event with a given TRACK_ID.

 

So how do you configure real-time analytics to compare an event's geometry against some other geometry?  You use geofences. Christopher Dufault has collected some best practices for importing, synchronizing, and using geofences in GeoEvent Server. Check out his blog Geofence Best Practices and comment with any tips and tricks for geofences you've found useful in analytics you've designed.

 

- RJ

This article is the second of two articles examining enhancements made to the HTTP transport for the GeoEvent Server 10.5 release. This article examines the outbound transport. The previous article examining the inbound transport can be found here.

 

In this article, I would like to provide detail for an enhancement made to the HTTP outbound transport for the GeoEvent Server 10.5 release. The following capability is listed on the What's new in ArcGIS GeoEvent Server web help page:

  • HTTP outbound transport now supports field value substitutions in the HTTP GET mode

 

Beginning with the 10.5 product release, an output leveraging the HTTP transport can be configured to substitute event attribute values into the URL of a request GeoEvent Server will send to an external server. The attribute values are incorporated as query parameters (as opposed to the request’s content body).

The new capabilities of the HTTP transport will be described below with exercise steps you can follow to demonstrate the capabilities.

~~~

When you want to send data from event records to an external server or application you typically configure an outbound connector – such as the Push JSON to an External Website output. GeoEvent Server will incorporate the event data into the content body of a REST request and send the request to the external server as an HTTP/POST. This capability has been available in the last several releases.

A device on the edge of the Internet of Things, however, might prefer to receive requests with event data organized as query parameters rather than in a request's content body. This way the entire data payload is in the URL of the request -- leaving the content body of the request empty.

It might seem a little odd for a GeoEvent Server output, which is not intended to receive or process any type of response, to make an HTTP/GET request. But the capability was introduced to enable GeoEvent Server to issue activation requests to devices which require data values be sent using query parameters.

~~~

Exercise 2A – Use HTTP/GET to send event data as query parameters to an external server

 

Why exactly are we configuring a custom outbound connector?

How is it different from the Push JSON to an External Website connector available out-of-the-box?

 

For this exercise:

  1. Configure the following GeoEvent Server output connector.
    Browse to Site > GeoEvent > Connectors and select to create a new outbound connector. Default values for the "Shown", "Advanced", and "Hidden" properties are included beneath the illustration.


     

    Shown Properties                    Default Value
    URL                                 [ no default value defined ]

    Advanced Properties                 Default Value
    Use URL Proxy                       False
    URL Proxy                           [ no default value defined ]
    HTTP Timeout (in seconds)           30

    Hidden Properties                   Default Value
    Formatted JSON                      False
    MIME Type                           text/plain
    Acceptable MIME Types               text/plain
    Post/Put body MIME Type             text/plain
    Parameters                          [ no default value defined ]
    Header Parameter Name:Value List    ( blank )
    HTTP Method                         Get
    Mode                                Client
  2. Save your newly configured custom outbound connector.
  3. Navigate to Services > Outputs and select to create a new (Custom) HTTP/GET request with event data as query parameters output. Configure the output as illustrated below, replacing yourServer and yourDomain with a valid server and domain for your organization.


    Note the URL specified in the illustration:

    https://yourServer.yourDomain/server/rest/services/SampleWorldCities/MapServer/0/query?where=city_name='${Origin}'&f=json

    The format of the URL assumes that an ArcGIS Web Adaptor (named 'server') has been configured and that an external server or client application receiving this URL could use it to query the "Sample World Cities" map service on your ArcGIS Server. GeoEvent Server will substitute the variable ${Origin} in the URL's query parameter with an actual attribute value from a received event record, enabling the external server or client application to make a more specific query based on real-time events.
     
  4. Save your updated output, then publish a GeoEvent Service which incorporates your output and an input of your choice. You can use any type of input, so long as the GeoEvent Definition associated with event records received by the input includes an attribute field named Origin.

    Queries through a web adapter to a Portal secured web service from an unauthenticated source will return an error. Since the Sample World Cities web service is secured by Portal in my current deployment, I expect the request made by GeoEvent Server will generate an error. In order to complete the demonstration we will use the GeoEvent Server's debug logs to confirm that the output has constructed a valid query and sent the request to the ArcGIS Server map service.
  5. Navigate to the Logs page in GeoEvent Manager. Click 'Settings' and enable DEBUG logging for the HTTP outbound transport logger (com.esri.ges.transport.http.HttpOutboundTransport).
  6. Send an event record to your GeoEvent Server input whose Origin attribute is the name of one of the cities in the Sample World Cities map service (e.g. Chicago). Refresh the Logs page in GeoEvent Manager and you should see log messages with information similar to the following:

 

The first message shows that 'Chicago' was indeed substituted into the query parameters by the GeoEvent Server output and a request was made. The error may or may not be displayed; as indicated above, the map service in my case is Portal secured and this request did not include a token authenticating the request.

 

There are a couple of things you'll want to keep in mind. The URL you use to configure the output must URL-encode its query parameters to make them HTTP safe. But the value being substituted by GeoEvent Server is based on a string received from a real-time data source. This means you may have some work to do to make sure that "San Francisco" is represented as San%20Francisco rather than San Francisco before an event record is sent to the output.
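For example, here is a minimal sketch of the kind of encoding an upstream system might apply before the value reaches GeoEvent Server. The snippet uses Python 3's standard library purely for illustration:

from urllib.parse import quote

origin = "San Francisco"
encoded = quote(origin)   # makes the value safe for use in a URL query parameter
print(encoded)            # prints: San%20Francisco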

 

Also, the enhancement being introduced in this article was designed specifically for HTTP/GET since those requests do not include a JSON payload in the request’s body. However, some rudimentary testing suggests that you can use HTTP/POST as well; I suppose it would be up to the external server receiving the request whether or not to honor an HTTP/POST and either ignore the request’s JSON payload or potentially consider its content in addition to the values in the query parameters.

 

Finally, you do have some freedom in how the request’s query string is specified. For example, you could construct a parameterized string like the following; GeoEvent Server will handle the substitution of multiple parameter values:

query?where=city_name+IN+%28%27${CityA}%27%2C%27${CityB}%27%29&f=json

 

If you send the string highlighted above through a URL decoder you'll see that it is equivalent to:

where=city_name IN ('${CityA}','${CityB}')&f=json
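If you would like to verify that equivalence yourself, here is a quick check using Python's standard library (shown only as a convenience; any URL decoder will do):

from urllib.parse import unquote_plus

encoded = "query?where=city_name+IN+%28%27${CityA}%27%2C%27${CityB}%27%29&f=json"
print(unquote_plus(encoded))   # prints: query?where=city_name IN ('${CityA}','${CityB}')&f=json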

 

I hope these two blogs were helpful.  Please comment below with questions and I'll do my best to answer them.

 

-- RJ

This article is the first of two articles examining enhancements made to the HTTP transport for the GeoEvent Server 10.5 release. This article examines the inbound transport. The second article examining the outbound transport can be found here.

 

In this article, I would like to provide detail for an enhancement made to the HTTP inbound transport for the GeoEvent Server 10.5 release. The following capability is listed on the What's new in ArcGIS GeoEvent Server web help page:

  • HTTP inbound transport now accepts GET requests in the query parameters

 

Beginning with the 10.5 product release, an input leveraging the HTTP transport can be configured to support an external server or application which incorporates its data payload in the URL of the request (as opposed to the request’s content body).

The new capabilities of the HTTP transport will be described below with exercise steps you can follow to demonstrate the capabilities.

~~~

When you want to receive event records as an HTTP/POST request from an external server or application you typically configure an inbound connector – such as the Receive JSON on a REST Endpoint input. GeoEvent Server will create a REST endpoint to which the external server can post its event data, with the data included in the content body of the request. This capability has been available in the last several releases.

A device on the edge of the Internet of Things, however, might prefer to organize the event data as query parameters and incorporate its data payload in the URL of the request -- leaving the content body of the request empty. For example:

  • http://localhost:6080/geoevent/rest/receiver/http-receiver?field1=v1&field2=v2&field3=v3
  • http://localhost:6080/geoevent/rest/receiver/http-receiver?data=v1,v2,v3

Beginning with the 10.5 product release an input pairing either the out-of-the-box JSON or TEXT adapter with the HTTP inbound transport can be configured to support the use cases above with an HTTP/GET request.

~~~

Exercise 1A – Use HTTP/GET requests to send event data to GeoEvent Server as query parameters

  1. Create the following GeoEvent Definition

  2. Configure the following GeoEvent Server input connector


    Note the new 10.5 parameter:  Get Request Contains Raw Data

    Review the help tip provided for this parameter. When the inbound connector is running in SERVER mode and receives an HTTP/GET request whose content body is empty and whose URL includes query parameters, the default (‘No’) treats each name/value pair as a separate attribute value in an event record. If you change the default to ‘Yes’, you will be expected to specify the one query parameter whose value will be treated as the event’s raw data.

  3. Configure a GeoEvent Server output connector and publish a GeoEvent Service

    You can use any outbound connector which supports display of JSON event records. Recommended output connectors are ‘Send Features to a Stream Service’ or ‘Write to a JSON File’.




  4. Send the following HTTP/GET request to your input connector’s endpoint

    http://yourServer.yourDomain:6180/geoevent/rest/receiver/rest-json-in?fname=Robert&lname=Lawrenson&employee_id=123

 

You should observe the event count of your ‘Receive JSON on a REST Endpoint’ input increment as HTTP/GET requests are made on your input’s REST endpoint.
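If you prefer to script the test request rather than paste the URL into a browser, here is a minimal sketch using Python 3's standard library. Replace yourServer and yourDomain as above:

import urllib.request

url = ("http://yourServer.yourDomain:6180/geoevent/rest/receiver/"
       "rest-json-in?fname=Robert&lname=Lawrenson&employee_id=123")
with urllib.request.urlopen(url) as response:
    print(response.status)   # a 200-series status suggests the input accepted the request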

~~~

Exercise 1B – Explore HTTP/GET requests whose query parameters include comma delimited values

Rather than incorporating the event data into a series of key/value pairs, the event data can be conveyed using a single query parameter whose value is a set of comma delimited values. The delimited text values will require an inbound connector which leverages the TEXT adapter (rather than the JSON adapter used in the previous exercise).

GeoEvent Server does not include a “Receive TEXT on a REST Endpoint” inbound connector out-of-the-box, so you will need to configure one for this exercise.

  1. Configure the following GeoEvent Server input connector.
    Browse to Site > GeoEvent > Connectors and select to create a new inbound connector. Default values for the "Shown", "Advanced", and "Hidden" properties are included beneath the illustration.



    Shown Properties                               Default Value
    Event Separator                                \n (newline)
    Field Separator                                , (comma)
    Incoming Data Contains GeoEvent Definition     False
    Create Unrecognized Event Definitions          False
    Create Fixed GeoEvent Definitions              False
    GeoEvent Definition Name (New)                 [ no default value defined ]
    GeoEvent Definition Name (Existing)            [ no default value defined ]
    Language for Number Formatting                 [ no default value defined ]

    Advanced Properties                            Default Value
    Acceptable MIME Types (Server Mode)            text/plain
    Expected Date Format                           [ no default value defined ]
    Build Geometry From Fields                     False
    X Geometry Field                               [ no default value defined ]
    Y Geometry Field                               [ no default value defined ]
    Z Geometry Field                               [ no default value defined ]
    Well Known Text Geometry Field                 [ no default value defined ]
    wkid Geometry Field                            [ no default value defined ]
    Get Request Contains Raw Data                  True
    Parameter Name for the Raw Data                data

    Hidden Properties                              Default Value
    Mode                                           Server
    Use Long Polling                               False
    Frequency (in seconds)                         [ no default value defined ]
    Receive New Data Only                          False
    Post/Put body MIME Type                        [ no default value defined ]
    HTTP Method                                    Get
    Header Parameter Name:Value List               ( blank )
    Post/Put From                                  Parameters
    Post/Put Parameters                            ( blank )
    Content Body                                   [ no default value defined ]
    Parameters                                     [ no default value defined ]
    URL                                            [ no default value defined ]
    URL Proxy                                      [ no default value defined ]
    Use URL Proxy                                  False
    Acceptable MIME Types (Client Mode)            [ no default value defined ]
    HTTP Timeout (in seconds)                      30
    Append to the End of Payload                   [ no default value defined ]
  2. Save your newly configured custom inbound connector.
  3. Navigate to Services > Inputs and select to create a new (Custom) Receive TEXT on a REST Endpoint input.
    Configure the input as illustrated below. Use the GeoEvent Definition you created for the last exercise.


  4. Publish a GeoEvent Service which incorporates your newly configured input and any outbound connector which supports JSON event record displays. You can use the outputs configured for the previous exercise if you wish.
  5. Send the following HTTP/GET request to your input connector’s endpoint (note the endpoint's name has changed):

    http://yourServer.yourDomain:6180/geoevent/rest/receiver/custom-receive-text-rest-in?data=Robert,Lawrenson,123

 

You should observe the event count of your ‘(Custom) Receive TEXT on a REST Endpoint’ input increment as HTTP/GET requests are made on your input’s REST endpoint.
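The same scripted test from Exercise 1A can be adapted here; only the endpoint name and the single 'data' query parameter change:

import urllib.request

url = ("http://yourServer.yourDomain:6180/geoevent/rest/receiver/"
       "custom-receive-text-rest-in?data=Robert,Lawrenson,123")
with urllib.request.urlopen(url) as response:
    print(response.status)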

~~~

On both the Linux and Windows platforms, GeoEvent Server is run from within a Java Virtual Machine (JVM) instance. The out-of-the-box default configuration allocates only 4GB of your server's available RAM to this JVM. All GeoEvent Server operations requiring RAM draw from this allocation.

 

Some reasons you might want to increase the amount of RAM allocated to the GeoEvent Server's JVM include:

  • A need to load a large number of geofences into the GeoEvent Server's geofence manager
  • A need to process a high velocity or large volume of event records (more than a few hundred per second)
  • A need to cache a large amount of information from a secondary enrichment source for event record enrichment
  • An expectation that real-time analytics using Incident Detectors will generate a large number of concurrent incidents
  • An expectation that real-time analytics requiring state (e.g. Track Gap detection and monitoring, or spatial conditions such as ENTER / EXIT) will need to work with a large number of assets with unique track identifiers

 

System administrators who have determined that their server machine has sufficient available RAM, and that their GeoEvent Server deployment needs more of it, can follow the steps outlined below to increase the memory available to GeoEvent Server by allocating more RAM to the hosting JVM.

 

  1. Stop GeoEvent Server
    • On a Windows platform, make sure the GeoEvent Server Windows Service and its associated java.exe process are stopped.
  2. Open the ArcGISGeoEvent.cfg configuration file in a text editor
    • On a Windows platform, this file is found in the ...\ArcGIS\Server\GeoEvent\etc folder by default
    • When located beneath C:\Program Files you will need to edit this file as a user with administrative privilege
  3. Locate the block of JVM Parameters in the file
    • Note that the indexes for the JVM parameters differ from release to release; an illustrative sketch of this block follows these steps

  4. Increase the -Xmx parameter for the Java Heap Size from its default (4096m) to specify a larger allocation
    • For example:   -Xmx8192m
    • Note that the allocation is in megabytes
  5. Save your edits to the ArcGISGeoEvent.cfg file (and dismiss your text editor)
  6. Start GeoEvent Server
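For orientation, the JVM parameter block follows the Java Service Wrapper convention of indexed properties. The sketch below is illustrative only -- the index numbers, and the neighboring parameters, in your ArcGISGeoEvent.cfg will differ by release:

# JVM Parameters (index numbers below are hypothetical)
wrapper.java.additional.10=-Xms1024m
# Java Heap Size maximum, increased from the default 4096m:
wrapper.java.additional.11=-Xmx8192m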

 

Using system administrative tools you should be able to verify that the JVM instance (java.exe process) never consumes more memory than what is allocated by the ArcGISGeoEvent.cfg configuration file, and that more than the default 4GB is now available for GeoEvent Server operations.

Hello Everyone --

 

I've recently completed three short videos which illustrate how to use stream services and capabilities related to stream services -- specifically 'Store Latest' and 'Related Features', which were never covered in the product tutorial available online.

 

We are working on updating the tutorial's exercises and narrative to be consistent with these new videos, but I don't want to hold the videos until the tutorial re-write is complete. (The videos will eventually be bundled with the tutorial for download.)

 

The basic stream service capability provided by GeoEvent did not change with the ArcGIS 10.5 product release. However, some minor changes in behavior were made with regard to 'Store Latest' when working within different enterprise configurations, such as single-machine vs. multi-machine and when you have federated with a Portal for ArcGIS vs. when you have not federated with a Portal.

 

Enhancements to the 'Related Features' configuration workflow now allow you to select the feature service from which related features will be obtained (rather than having to manually enter the URL of an existing feature service).

 

Three MP4 files have been attached to this blog.  Please check out the videos -- they are each only 10 to 15 minutes. Let the team know (e-mail geoevent@esri.com) if you think bundling a short video with a less detailed tutorial is an approach which works for introducing product updates and documenting product functionality.

 

Best Regards --

RJ

A couple of times a year a script developer will ask me about using the GeoEvent Admin API to automate some administrative task - such as stopping and restarting a GeoEvent input.

 

Any user action taken through the GeoEvent Manager web application makes a request against a URL in our GeoEvent Admin API. So, in theory, once you authenticate with the GeoEvent Admin API, you should be able to script some fairly simple tasks, like stopping a running input, modifying one of the input's parameters, saving the input's new configuration and restarting the input to begin receiving event data.
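To make the idea concrete, here is a minimal sketch in Python 3 (standard library only). The ports and routes below are assumptions for illustration; consult the Admin API documentation, or the blogs linked below, for the exact URLs at your release:

import json
import urllib.parse
import urllib.request

# Hypothetical endpoints -- adjust for your own deployment and release.
TOKEN_URL = "https://yourServer.yourDomain:6443/arcgis/admin/generateToken"
GEOEVENT_ADMIN = "https://yourServer.yourDomain:6143/geoevent/admin"

def generate_token(username, password):
    # Request a short-lived token from ArcGIS Server on behalf of an admin user.
    body = urllib.parse.urlencode({
        "username": username,
        "password": password,
        "client": "requestip",
        "f": "json",
    }).encode()
    with urllib.request.urlopen(TOKEN_URL, data=body) as response:
        return json.load(response)["token"]

def stop_input(input_name, token):
    # Ask GeoEvent Server to stop a running input (route shown is hypothetical).
    url = "{0}/input/{1}/stop?token={2}".format(GEOEVENT_ADMIN, input_name, token)
    with urllib.request.urlopen(url) as response:
        return response.status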

 

I'd like to share a blog post by Andy Ommen, a solution engineer working with Esri Database Services out of our Boston regional office. Take a look and let him know if you find his information useful. I really appreciate him sharing this out through his blog.  Here's the link:  Scripting tasks using the GeoEvent Admin API

 

Update March 2019: Eric Ironside, a product engineer on the Real-Time team, has created a second blog illustrating how to update the properties of a GeoEvent Input. Much appreciated, Eric!

Here’s the link:  Scripting Tasks Using the GeoEvent Admin API – Update Inputs.

 

 RJ

When processing event records which include a large number of unique track identifiers you might notice that some spatial relationships evaluated by GeoEvent filters and processors do not behave as expected. We will focus on the “Enter Any” and “Exit Any” spatial operators as they apply to a GeoTagger Processor when more than 1000 unique TRACK_ID values are present. I will explain in detail the behavior reported to me by a few users and a potential product configuration you can make to better accommodate your data.

 

Consider, for example, a GeoTagger Processor configured to enrich event records with the name of a geofence. The processor will evaluate a set of geofences and add a new field with the name of a geofence to an event record whenever the event’s geometry enters or exits an area. As you observe the output from a GeoEvent Service, however, you notice that events are being dropped at random by the processor. Events you observed several minutes ago are removed, or are severely delayed in displaying within the GeoEvent Service, and are not included in the output. While there may be other reasons for these observations, we’ll assume that the GeoTagger Processor is the root cause.

GeoTagger Properties

 

"Enter Any" and "Exit Any" Spatial Operators maintain state

The majority of spatial operators do not require GeoEvent to maintain state information. The “Enter Any” and “Exit Any” operators are the exception, requiring GeoEvent to track both the geometry and the track identifier as events move through the processor. Maintaining state requires prior knowledge of each event’s location; each event is evaluated relative to its previously observed location. Without maintained state, the previous positions of events remain unknown to GeoEvent and all events are treated independently. This means that a GeoTagger Processor configured with the “Enter Any” or “Exit Any” operation must maintain a cache of unique track identifiers for every observed event. In short, this is what is known as a cache-aware processor node.

 

GeoEvent does enforce a maximum cache size, which is in place to maintain the state of all events as efficiently as possible; the default value for this property is 1000 events. When an event arrives at a cache-aware node and the event’s TRACK_ID is not contained in the cache, one of the previously observed TRACK_ID values must be discarded if the cache is currently full. This is done to make room for the newly observed event and respect the 1000 event limit.

What does this mean for your data? Conceptually it means that the processor will forget that it has ever encountered an event with the discarded TRACK_ID. When observing the behavior in real-time, events will spontaneously be dropped from the processor and certain events that you observed several minutes ago may not be displaying at all in the output destination.
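Conceptually -- and only conceptually, this is not GeoEvent source code -- the behavior resembles a bounded cache that must evict a previously observed track to admit a new one. The least-recently-observed eviction policy below is an assumption made purely for illustration; the product only guarantees that some previously observed TRACK_ID is discarded:

from collections import OrderedDict

MAX_CACHE_SIZE = 1000   # GeoEvent's default for each cache-aware node

class TrackCache:
    def __init__(self, max_size=MAX_CACHE_SIZE):
        self.max_size = max_size
        self._last_seen = OrderedDict()   # TRACK_ID -> last observed geometry

    def observe(self, track_id, geometry):
        if track_id in self._last_seen:
            self._last_seen.move_to_end(track_id)   # refresh this track's recency
        elif len(self._last_seen) >= self.max_size:
            self._last_seen.popitem(last=False)     # "forget" the oldest track
        self._last_seen[track_id] = geometry

    def prior_position(self, track_id):
        return self._last_seen.get(track_id)        # None if never seen, or forgotten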

 

Though the logic remains the same for both, the definitions used for “Enter Any” and “Exit Any” are in fact different. In order for GeoEvent to recognize that an event’s geometry has entered a geofence, its prior location must have been observed outside that geofence. Conversely, GeoEvent will not recognize that an event’s geometry has exited a geofence unless the geometry’s prior location was observed inside that geofence. These definitions are honored by default, but can produce different results if an additional property that GeoEvent uses to evaluate the spatial relationships is changed.

 

"First GeoEvent triggers Enter" and "First GeoEvent triggers Exit"

When a cache-aware node must make an “enter or exit” decision, it respects the settings of two properties within the GeoEvent Manager. These are the "First GeoEvent triggers Enter" and "First GeoEvent triggers Exit" properties, which determine the importance of the “enter” or “exit” operations. By default GeoEvent assumes that “entry” is more important than “exit” and that most event geometries are already outside of a geofence. The defaults for these properties are true and false respectively. You can change the value of each, but it is recommended that the change be made only with deliberate care. If you are not careful, you could easily configure GeoEvent to start generating unwanted analysis and notifications, particularly after a restart of the server machine. For example, even if event geometries are expected to move around inside a geofence such as an administrative boundary, they will be located outside every other geofence that is registered with GeoEvent. If you change the "First GeoEvent triggers Exit" property from false to true you may unexpectedly get a significant number of “exit” evaluations every time the server machine is rebooted.

 

We’ll look at an example of a default configuration with GeoEvent using “entry” as the main importance. Let’s assume that a GeoTagger is set to an “Enter Any” operation and the event TRACK_ID is not in the cache. If the event’s geometry lies inside of a geofence, then the GeoTagger will read the event as having entered. Alternately, if this same GeoTagger is set to “Exit Any” and the event’s geometry is already outside of a geofence, the processor will determine that the point did not exit. Sound tricky? Just remember that GeoEvent assumes that geofences are empty 99% of the time, and that points are expected to enter at some future point. That doesn’t mean that exits are ignored, but they are placed with less importance and require more observations to determine.
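Putting those rules into a small sketch (again, illustrative pseudologic rather than the product's implementation; prior_inside is None when the TRACK_ID is not in the cache):

FIRST_TRIGGERS_ENTER = True    # product default
FIRST_TRIGGERS_EXIT = False    # product default

def spatial_result(operation, inside_now, prior_inside):
    # operation is "enter" or "exit"; inside_now and prior_inside describe
    # the event geometry relative to a single geofence.
    if prior_inside is None:   # first observation for this TRACK_ID
        if operation == "enter":
            return FIRST_TRIGGERS_ENTER and inside_now
        return FIRST_TRIGGERS_EXIT and not inside_now
    if operation == "enter":
        return inside_now and not prior_inside   # was outside, is now inside
    return prior_inside and not inside_now       # was inside, is now outside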

 

Changing the maximum cache size for cache‑aware nodes

The default setting for the maximum cache size is exposed through a specific product configuration file.  It is important to note that this cache size is a maximum for each cache-aware node, and it is not a system-wide limit. You must have administrative privilege to edit this file. Changing the default value can result in your GeoEvent Server consuming significantly more RAM. The Java process in which the GeoEvent Server runs, by default, is limited to 4GB of system RAM. If every cache-aware node begins caching event data for significantly more than 1000 unique track identifiers, a larger portion of the 4GB will be consumed leaving less room for other more basic functions.

 

To promote system stability it is recommended that you estimate the total number of unique track identifiers expected from your event data and set the maximum cache size slightly higher than that estimate (to allow for more features to be added over time). Keep the value as small as possible and do not specify an arbitrarily high maximum cache size.

The cache value is contained within the com.esri.ges.manager.servicemanager.cfg file located in the following directory on a default system:

 

“C:\Program Files\ArcGIS\Server\GeoEvent\etc”

 

Keep in mind that this location may change if GeoEvent Server was installed to directories other than the product’s default system folder.

 

Incident Detection and the 1000 event cache limit

Within GeoEvent, you could also observe that open incidents are being dropped from the output destination in a similar manner to how the GeoTagger discards events. The Incident Detector Processor is another cache-aware node in GeoEvent, and utilizes an incident cache particularly when evaluating conditions for concurrently open incidents. Much like the GeoTagger, the Incident Detector will discard open/ongoing incidents if its incident cache is full and a new event triggers an incident to open. The opening/closing of incidents is managed by the Incident Manager program, which runs in the background to maintain state. The Incident Manager has a 1000 incident cache limit enabled by default, but it is exposed through the GeoEvent Manager rather than a configuration file. This default size can also be changed, but the same recommendations as the GeoTagger Processor apply. Obtain an estimate first of how many unique features GeoEvent will be processing, and then determine how many probable incidents could be open at one time. Set the number of Open and Closed incidents to slightly higher than the calculated maximum.

 Incident Manager Settings

The Real-Time GIS product team is thrilled to announce our newest addition to the ArcGIS GeoEvent Gallery, the Waze Connector for GeoEvent. 

 
The Waze Connector for GeoEvent allows users to receive live data from Waze, the world's largest community-based traffic and navigation app. Through the free Waze Connected Citizens Program (CCP), Waze supports a two-way data exchange with its municipal partners. This allows streamlined access to authoritative information alongside user-submitted alerts and hazards.

 

In order to utilize the Waze data in GeoEvent Extension you must first be a part of their Connected Citizens Program. Information regarding the requirements to participate and apply to join that program can be found at the following link:
 

Waze Connected Citizens Program

https://www.waze.com/ccp 

Once you've received your necessary access credentials you can begin pulling in the live user-submitted Alerts and Traffic Jams and consuming that data throughout the ArcGIS platform. The tutorial included with the connector will walk you through the process of polling the feed, filtering by data type, and writing that data out to the spatiotemporal big data store or other data sources. Also included is an optional segment that will allow users to quickly apply the Waze symbology to their newly created web maps.

 

 

For an example of the types of information being made available to Esri users, and a sneak peek of the new aggregation styles for map services using the spatiotemporal big data store with the 10.5 release, check out the screenshots from our demo application below.

 

 

 

 

It is strongly recommended that users who want to integrate the GeoEvent Extension and Portal for ArcGIS in the same environment use the most recent release of the ArcGIS product. GeoEvent’s integration with Portal is not supported prior to the 10.4 product release. Some users have successfully deployed these products using earlier releases, however, their deployments have significant functional limitations and known constraints. GeoEvent first began integrating Portal’s updated security model at the 10.4 release; integration was completed with the 10.4.1 product release. This blog clarifies GeoEvent’s integration with Portal, provides detail on how the integration works, when and why it was implemented the way it was, and suggests some considerations you should be aware of when planning a deployment.

 

 

How Integration Works

 

When we discuss integration between GeoEvent and Portal, we're really talking about two different issues.

 

The first involves security and access to the GeoEvent Manager. As the product name implies, GeoEvent is an extension for ArcGIS Server. Beginning with the 10.3 release of both products, GeoEvent uses the same security store as ArcGIS Server. What that means in practical terms is that when logging into the GeoEvent Manager to configure inputs, outputs, and services, you do so using an administrative account for ArcGIS Server. It is recommended that you use Server’s default Primary Site Administrator (PSA) account, but any administrative account – such as an Integrated Windows Authentication (IWA) admin account – can be used. Once inside the GeoEvent Manager web application the user experience is the same regardless of which administrative account was used to gain access.

 

When ArcGIS Server has been federated with Portal for ArcGIS, it gives up its own security store and relies on Portal to authenticate and authorize user access. This means that GeoEvent, in turn, will also rely on Portal’s security model since GeoEvent is using the same security model used by ArcGIS Server.

 

The second issue involves access to data. GeoEvent uses server connections, registered as GeoEvent Data Stores, to connect to server machines and request access to data. When registering a server connection a user provides the URL of the server they are trying to reach and, if necessary, credentials or a token to be used when accessing the server content. You can register server connections to the local ArcGIS Server (the one running the GeoEvent Extension), an external ArcGIS Server instance, an ArcGIS Online for Organizations instance, or an instance of Portal for ArcGIS (either on the local server or running on an external server).

 

For "FULL" Integration to be considered, both issues must be addressed. Users should be able to log into GeoEvent Manager using the Portal Security Store, AND be able to access any potential data source coming from Portal for ArcGIS.

 

When and Why Was This Implemented?

 

As indicated earlier, GeoEvent is an extension to the ArcGIS Server product. A decision was made at the 10.3 release that GeoEvent should use a token-based security model, similar to what ArcGIS Server uses, to simplify access control as well as the user login experience. This worked by passing an administrative user's credentials to ArcGIS Server and receiving back an encrypted token which verified the user's access, limited which server / site could use that token, and imposed an expiration date after which connections using the token would no longer validate. This worked well for ArcGIS Server in that GeoEvent could easily log in using those same credentials, and could use a Long Term Token to validate access to all of Server's services through the Data Store for the life of that token, up to one year.

 

When configured with built-in users, Portal's security worked the same way. Users would provide credentials at the login page for the GeoEvent Manager which GeoEvent would use to request a token from Portal on behalf of the user. When IWA had been configured this approach required users to first use Portal’s Token generation page to obtain a token (by entering credentials recognized by Portal) and then entering that token into the GeoEvent Manager’s login page.

 

This exposed a crucial difference between Server tokens and Portal tokens – specifically the maximum time a token could be used before it expired. While Server allowed tokens to be created which would not expire for a year, Portal only supported tokens with a maximum life of up to two weeks. This wasn’t a significant limitation for a user’s login to GeoEvent Manager, but it meant that GeoEvent Data Stores would need to be reconfigured with a new token every two weeks.

 

At the 10.4 release Portal updated the federation experience with ArcGIS Server, providing an opportunity for GeoEvent to improve its integration and utilize the same OAuth2 security model. Portal security integration was completed with the GeoEvent 10.4.1 product release. This removed the need to have different login workflows to authenticate built-in and IWA users, and more importantly removed restrictions inherent in the token-based access developed for the 10.3 / 10.3.1 releases of GeoEvent. Within the Data Store, users could choose to enter credentials which GeoEvent would encrypt and use to authenticate requests it made to either Server or Portal. These changes addressed both issues identified above, streamlining a user’s login to GeoEvent Manager and providing long-term access to Portal data and resources.

 

 

Deployment Considerations

 

GeoEvent deployment into a federated environment with Portal prior to 10.4 is not supported by Esri Technical Support. While it may be possible to architect a solution using these products at the 10.3.x releases, it is against our recommended best practices. Changes made for 10.4 and 10.4.1 to replace the reliance on token-based authentication cannot be ported back to earlier releases. There will be no patches or hot fixes to re-architect the token-based security as it would involve significant modification to Portal, Server, and GeoEvent.

 

Users looking to leverage GeoEvent and Portal in an enterprise solution are encouraged to use the latest public release of each product as these will incorporate improvements to both the user experience and product integration.