The Field Mapper and Field Calculator Processors are two of the most frequently used processors in ArcGIS GeoEvent Server. While the Field Mapper Processor provides the ability to map one GeoEvent Definition (schema) to another, the Field Calculator Processor allows you to use functions to manipulate field values. The Field Calculator supports many different functions for data type conversion, string manipulation, mathematics, creating geometry from fields, and more.
In GeoEvent Server 10.9, the Field Mapper now allows you to use the same Field Calculator functions inside each Field Map text box. This makes it possible for a single Field Mapper to potentially replace a series of Field Calculators when performing calculations on multiple event attribute values. Let’s walk through some examples to see the power of this enhancement.
In the examples, you will notice a pattern in which the incoming events are immediately mapped into the outgoing GeoEvent Definition. This allows the Field Calculators to calculate field values directly into existing fields. Field mapping into a desired GeoEvent Definition immediately solidifies the event schema and simplifies the configuration.
If you choose not to use this approach, each Field Calculator will need to create a new GeoEvent Definition containing the new field that is being calculated. This can lead to many temporary GeoEvent Definitions that clutter GeoEvent Server’s configuration, making it hard to manage.
Imagine you have an input providing the location of a device as a comma-separated string. This location string might be something like the following:
“Location”: “One International Way, Broomfield, CO, 80021”
For this example, let us assume the string always reports an address, city, state, and ZIP code (four parts). To split out the individual components, regular expressions can be used inside the replaceAll() and trim() functions as follows:
Field | Expression |
Address | trim(replaceAll(Location,'^(.*)[,](.*)[,](.*)[,](.*)$','$1')) |
City | trim(replaceAll(Location,'^(.*)[,](.*)[,](.*)[,](.*)$','$2')) |
State | trim(replaceAll(Location,'^(.*)[,](.*)[,](.*)[,](.*)$','$3')) |
Zip | trim(replaceAll(Location,'^(.*)[,](.*)[,](.*)[,](.*)$','$4')) |
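Outside of GeoEvent Server, the effect of these expressions can be sketched in Python, where re.sub() plays the role of replaceAll() and str.strip() plays the role of trim(). This is only an illustration of the regular expression logic, not GeoEvent Server code:

```python
import re

# The same pattern used in the expressions above: four comma-separated,
# greedily captured groups spanning the whole string.
PATTERN = r'^(.*)[,](.*)[,](.*)[,](.*)$'

def extract_part(location, group):
    # re.sub() replaces the entire match with the requested capture group,
    # mirroring replaceAll(); strip() mirrors trim().
    return re.sub(PATTERN, rf'\{group}', location).strip()

location = "One International Way, Broomfield, CO, 80021"
address = extract_part(location, 1)   # "One International Way"
city    = extract_part(location, 2)   # "Broomfield"
state   = extract_part(location, 3)   # "CO"
zipcode = extract_part(location, 4)   # "80021"
```

Because the first `(.*)` is greedy but the pattern requires three literal commas to follow, the match backtracks until each group holds exactly one of the four comma-separated parts.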
Wrapping each of the equations above into a Field Calculator for each field requires four nodes, as illustrated in the GeoEvent Service below. In addition, a Field Mapper is necessary to convert the event to a GeoEvent Definition that includes all the new fields: Address, City, State, and Zip. This is done so each Field Calculator can write derivative values into existing fields rather than each Field Calculator creating a new GeoEvent Definition as new fields are created.
Example GeoEvent Service with many processors
Field Mapper configuration mapping fields
Field Calculator configuration with expression
This is a lot of work to coax out a few sub-strings of data. The GeoEvent Service is significantly more complicated, making it harder to understand and maintain.
With GeoEvent Server 10.9, you can now greatly simplify this GeoEvent Service by moving all of the functions into a single Field Mapper as illustrated below.
Simplified GeoEvent Service incorporating expressions from multiple processors into a single Field Mapper.
Configuration of the Field Mapper incorporating multiple expressions
With the Internet of Things (IoT), it’s common to receive status information in a bit-encoded format. Fundamentally, this status information is delivered in an integer type field (short, integer, or long), and it looks like a regular number. But the underlying bits are manipulated to report binary encoded information. Typically, this is as simple as reporting whether something is on (1) or off (0). But it can also be used to report larger sets of encoded values such as off (00), low (01), medium (10), and high (11).
In this second example, we’ll draw from the world of winter snowplow operations and utilize a status value that is reported by a popular automatic vehicle location (AVL) provider. The status value is reported as an integer type field in the incoming events. The description of the status bits is provided in the table below.
Field Name | Bit Position | Description |
Reserved | 0 - 2 | Reserved |
SpreadOn | 3 | Spreading ON/OFF |
Blast | 4 | Blast ON/OFF |
SolidPause | 5 | Solid Material Pause YES/NO |
LiquidPause | 6 | Liquid Material Pause YES/NO |
Unload0 | 7 | Unload Status YES/NO |
Unload1 | 8 | Liquid Unload Status YES/NO |
Reverse | 9 | Reverse Status YES/NO |
ConvOn | 10 | Conveyor Status ON/OFF |
LiquidOn | 11 | Liquid Status ON/OFF |
PrewetOn | 12 | Pre-wet Status ON/OFF |
AntiIceOn | 13 | Anti-Ice Status ON/OFF |
ConvMode | 14 & 15 | Conveyor Mode: 0 – Off, 1 – Open Loop, 2 – Manual, 3 – Auto |
PrewetMode | 16 & 17 | Pre-wet Mode: 0 – Off, 1 – Open Loop, 2 – Manual, 3 – Auto |
AntiIceMode | 18 & 19 | Anti-Ice Mode: 0 – Off, 1 – Open Loop, 2 – Manual, 3 – Auto |
HasError | 20 | Error Status YES/NO |
Examples of status values might be:
binary 000000000000000000000 = decimal 0
binary 010101011110000011000 = decimal 703,512
To isolate the bits within this numeric field, we’ll use powers of 2 to strip away the parts of the number below and above the bits we’re interested in. The function takes the general format:
floor(Status/pow(2,<startBit>))-floor(Status/pow(2,<endBit>))*pow(2,<numBits>)
where <startBit> is the lowest bit position of interest, <endBit> is one past the highest bit position, and <numBits> is <endBit> minus <startBit>.
Entering our start and end bit locations, the functions for each of the individual state values in the status can be retrieved:
Field | Expression |
SpreadOn | floor(Status/pow(2,3))-floor(Status/pow(2,4))*pow(2,1) |
Blast | floor(Status/pow(2,4))-floor(Status/pow(2,5))*pow(2,1) |
SolidPause | floor(Status/pow(2,5))-floor(Status/pow(2,6))*pow(2,1) |
LiquidPause | floor(Status/pow(2,6))-floor(Status/pow(2,7))*pow(2,1) |
Unload0 | floor(Status/pow(2,7))-floor(Status/pow(2,8))*pow(2,1) |
Unload1 | floor(Status/pow(2,8))-floor(Status/pow(2,9))*pow(2,1) |
Reverse | floor(Status/pow(2,9))-floor(Status/pow(2,10))*pow(2,1) |
ConvOn | floor(Status/pow(2,10))-floor(Status/pow(2,11))*pow(2,1) |
LiquidOn | floor(Status/pow(2,11))-floor(Status/pow(2,12))*pow(2,1) |
PrewetOn | floor(Status/pow(2,12))-floor(Status/pow(2,13))*pow(2,1) |
AntiIceOn | floor(Status/pow(2,13))-floor(Status/pow(2,14))*pow(2,1) |
ConvMode | floor(Status/pow(2,14))-floor(Status/pow(2,16))*pow(2,2) |
PrewetMode | floor(Status/pow(2,16))-floor(Status/pow(2,18))*pow(2,2) |
AntiIceMode | floor(Status/pow(2,18))-floor(Status/pow(2,20))*pow(2,2) |
HasError | floor(Status/pow(2,20))-floor(Status/pow(2,21))*pow(2,1) |
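The floor()/pow() arithmetic above is plain integer math and can be verified outside GeoEvent Server. A minimal Python sketch, using the example status value from earlier (decimal 703,512):

```python
import math

def extract_bits(status, start_bit, end_bit):
    """Return the value encoded in bits [start_bit, end_bit) of status,
    using the same arithmetic as the Field Calculator expressions:
    floor(Status / 2^start) - floor(Status / 2^end) * 2^(end - start)."""
    num_bits = end_bit - start_bit
    return (math.floor(status / 2**start_bit)
            - math.floor(status / 2**end_bit) * 2**num_bits)

status = 703512  # binary 010101011110000011000

spread_on = extract_bits(status, 3, 4)    # bit 3      -> 1 (spreading ON)
conv_mode = extract_bits(status, 14, 16)  # bits 14-15 -> 2 (Manual)
has_error = extract_bits(status, 20, 21)  # bit 20     -> 0 (no error)
```

For very large 64-bit status values, integer division (status // 2**start_bit) avoids any floating-point rounding; the form above simply mirrors the Field Calculator functions.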
Wrapping each of the equations above into a Field Calculator for each field would require fifteen processors. An example of what a GeoEvent Service like this would look like is illustrated below. This GeoEvent Service also includes a Field Mapper that converts the event to a GeoEvent Definition that includes all the new fields.
GeoEvent Service with numerous processors, each with their own expression
Field Mapper configuration mapping the source fields to the target fields
Field Calculator configuration for the SpreadOn field.
Creating all those Field Calculators can be daunting, and it really clutters the service designer. Just imagine if this GeoEvent Service required other processing workflows, resulting in even more elements.
Luckily, GeoEvent Server 10.9 comes to the rescue: what previously took multiple Field Calculators can now be accomplished in a single Field Mapper. Your GeoEvent Services can be simplified considerably, giving you more space to incorporate other processing workflows.
Simplified GeoEvent Service at 10.9 with single Field Mapper
Field Mapper with new expression support
In this third example, we’ll explore a situation where multiple dates are being sent to an input in different formats. GeoEvent Server recognizes a few different commonly used string formats to represent a date and time (refer to the blog What time is it? Well That Depends...). As you work your way through this example, keep in mind that the same date and time can be represented in several different formats.
Suppose data being received from a sensor contains several different date values, all in different formats. A sample data record is illustrated below:
Input Field | Field Type | Example | Notes |
ReportDate | Date | 17-May-2021 | “d-MMM-yyyy” day-first format |
ReportTime | String | 21:36:42 | “H:mm:ss” common time string |
Date_1 | String | 2021-05-17T21:36:42-00:00 | ISO 8601 datetime formatted string |
Date_2 | Long | 1621287402 | Epoch measured in seconds |
Date_3 | String | 5/17/2021 | “M/d/yyyy” common date format |
In an input, you can specify only one Expected Date Format pattern to use when parsing and adapting date/time values. When specified, the pattern is applied to any field’s value the GeoEvent Definition specifies should be adapted as a date. Any field specified for adaption as a date must therefore share a common and consistent format with other date fields. Note that, when an Expected Date Format pattern is not specified, the ISO 8601 standard is the preferred format, though a few other formats commonly used to express both date and time are acceptable.
In this example, the ReportDate attribute was chosen to be adapted as a date, using a specified Expected Date Format pattern “d-MMM-yyyy”. This was done because there is no easy way, outside of an input adapter, to translate an alphabetic month to a numeric value. The input will therefore be configured to apply the pattern “d-MMM-yyyy” to adapt the value of any fields of type date.
Note that the input will not be able to adapt, as a date, any field whose value does not follow this pattern. The GeoEvent Definition must therefore specify ReportDate as the only field with a data type of Date. To work around this constraint, the other values will have to be adapted as either long integers or strings (as detailed in the table above). These values will require further processing to convert them to date values.
The data provider has specified that the reporting date and reporting time values will be recorded using two separate attribute values and that the UTC time zone is assumed. ReportDate represents a base date and ReportTime provides the UTC time. These two values should be combined to produce a single value before attempting to cast them to a date value.
Remember that we chose to allow an input connector’s adapter to adapt ReportDate as a date rather than a string. This decision was made primarily because that is the easiest way to handle the conversion of the alphabetic expression for the month. Using an Expected Date Format pattern to adapt ReportDate as a date means the adapter will have to assume values for the time and time zone. The assumed time will be midnight and the time zone will be adopted from the locale of the server on which GeoEvent Server is running.
For this example, assume the sensors reporting data and the server running GeoEvent Server both observe Mountain Daylight Time (MDT), consistent with the summertime months in Colorado, United States. The string value “17-May-2021” will be adapted to produce a millisecond epoch value 1621231200000 which is consistent with the date/time string "Mon May 17 00:00:00 MDT 2021".
To adjust the time zone and add the additional time field, you need to create an expression which will advance the adapted date forward a number of hours, minutes, and seconds consistent with the values in the ReportTime attribute field and also adjust for the time zone offset.
ReportDate + currentOffsetUTC()
+ toLong(substring(ReportTime, 0, 2))*3600000
+ toLong(substring(ReportTime, 3, 5))*60000
+ toLong(substring(ReportTime, 6, 8))*1000
The currentOffsetUTC() function returns the local time zone’s offset from UTC as a millisecond value. The UTC offset for MDT is -21600000 (negative six hours in milliseconds). Adding this value to the adapted date therefore subtracts six hours.
The remaining toLong(substring()) functions parse the hours, minutes, and seconds from ReportTime and add them (as millisecond equivalent values) to the ReportDate. For this example, the expression above works out to be:
1621231200000 + (-6 × 3600000) + (21 × 3600000) + (36 × 60000) + (42 × 1000)
Which is equivalent to the epoch 1621287402000 and the following date strings:
"May 17, 2021 21:36:42 UTC"
"Mon May 17 15:36:42 MDT 2021"
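The combined value can be sanity-checked with ordinary arithmetic outside GeoEvent Server. A Python sketch, assuming the adapted ReportDate epoch and the MDT offset from this example:

```python
report_date = 1621231200000   # "17-May-2021" adapted as Mon May 17 00:00:00 MDT 2021
utc_offset  = -6 * 3600000    # currentOffsetUTC() for MDT, in milliseconds
report_time = "21:36:42"      # ReportTime attribute value (UTC)

# Mirror the toLong(substring(...)) parsing used in the expression.
hours, minutes, seconds = (int(part) for part in report_time.split(":"))

combined = (report_date + utc_offset
            + hours * 3600000
            + minutes * 60000
            + seconds * 1000)
# combined -> 1621287402000, i.e. May 17, 2021 21:36:42 UTC
```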
The data provider has used the ISO 8601 formatting standard to express Date_1 attribute values. GeoEvent Server adapters and/or processors can easily cast an ISO 8601 formatted string to a date out-of-the-box. An expression simply names the string field whose value should be written to a date attribute field. No additional calculations are necessary for this value. Refer to the illustrations in the Incorporate date string calculations in a Field Calculator section below.
The data provider has specified that Date_2 values represent the number of seconds since midnight, January 1, 1970 (UTC). GeoEvent Server, however, uses millisecond epoch values consistent with the ArcGIS REST Services API, not epoch values measured in seconds. The received values for Date_2 should be scaled from a 10-digit value (seconds) to a 13-digit value (milliseconds) before they are mapped to a date attribute. Long integer epoch values are always assumed to be in the UTC time zone, so the only processing necessary before casting the received value to a date is to multiply by 1000:
Date_2 * 1000
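The scaling can be sanity-checked with Python's datetime module, decoding the scaled epoch in UTC to confirm the date and time it represents:

```python
from datetime import datetime, timezone

date_2 = 1621287402          # received epoch, measured in seconds
date_2_ms = date_2 * 1000    # millisecond epoch expected by GeoEvent Server

# Epoch values are always UTC; decode the scaled value accordingly.
dt = datetime.fromtimestamp(date_2_ms / 1000, tz=timezone.utc)
# dt.isoformat() -> '2021-05-17T21:36:42+00:00'
```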
Client applications will differ in how they choose to represent a millisecond epoch as a string. Most will assume the local time zone and shift the value they retrieve from the database accordingly. The following strings all represent the same date and time, epoch 1621287402000:
Monday May 17 21:36:42 GMT 2021
May 17, 2021 9:36:42 PM UTC
May 17, 2021 5:36:42 PM EDT (UTC-04:00)
May 17, 2021 3:36:42 PM MDT (UTC-06:00)
May 17, 2021 2:36:42 PM PDT (UTC-07:00)
The data provider has specified that Date_3 values represent a date with an assumed time of midnight UTC. To remove ambiguity and assumptions related to the unspecified time and time zone, you can use an expression that reformats the received string into an ISO 8601 datetime string before casting it into a date attribute field.
replaceAll(Date_3, '(\d+)[/](\d+)[/](\d+)', '$3-$1-$2T00:00:00-00:00')
The computed string is explicit; the time is expressed using the UTC standard. If the time zone offset were left off, the processors in GeoEvent Server would be free to assume the time zone of the local server when casting the string into a date field. The specified time and time zone in the expression above are consistent with the data provider’s specification.
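The substitution can be illustrated with Python's re.sub(), which behaves like replaceAll() here. Note the month and day are carried over without zero-padding, exactly as the expression produces them:

```python
import re

date_3 = "5/17/2021"  # "M/d/yyyy" formatted string, as received

# Rearrange the capture groups into an ISO 8601 style datetime string,
# pinning the time to midnight UTC as the data provider specified.
iso = re.sub(r'(\d+)[/](\d+)[/](\d+)', r'\3-\1-\2T00:00:00-00:00', date_3)
# iso -> "2021-5-17T00:00:00-00:00"
```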
It would require multiple Field Calculators to perform the calculations necessary to convert Date_1, Date_2, and Date_3 to date values and to combine ReportDate and ReportTime. An additional Field Mapper would also be necessary to cast the string type fields to date type fields. An example of a GeoEvent Service like this and its configuration is illustrated below.
GeoEvent Service with numerous processors, each with their own expression
The time-in GeoEvent Definition
The time-out GeoEvent Definition
Field Calculator configuration for the ReportDatetime
Field Calculator configuration for the Date_2
Field Calculator configuration for the Date_3
Field Mapper configuration mapping the source fields to the target fields
GeoEvent Server 10.9 makes it easy to move all those individual functions directly into a Field Mapper. This simplifies the GeoEvent Service, giving you more options for incorporating other processing workflows.
Simplified GeoEvent Service at 10.9 with single Field Mapper
Field Mapper with new expression support