POST
|
Hey @kavi88 Everything looks in order, but just to be sure, here is the expression I would use:

'{ "x":' + X + ', "y":' + Y + ', "spatialReference" : { "wkid" : 4326 } }'

An alternative would be to convert the X and Y to strings, but I don't think this is necessary:

'{"x":' + valueOf(X) + ',"y":' + valueOf(Y) + ',"spatialReference" : { "wkid" : 4326 } }'

If that doesn't work, then there is something in your schema that isn't what you expect. In that case, I would send the events to a JSON File output and inspect the structure there to see if there is any remaining grouping.
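If it helps to verify what that concatenation should produce, here is a quick Python sketch that builds the same string and confirms it parses as valid JSON. The X/Y values are hypothetical stand-ins for the event fields:

```python
import json

# Hypothetical coordinate values standing in for the X and Y event fields
x, y = -105.1, 39.9

# Same concatenation pattern as the expression above
point = '{ "x": ' + str(x) + ', "y": ' + str(y) + ', "spatialReference": { "wkid": 4326 } }'

parsed = json.loads(point)  # raises ValueError if the string is malformed
print(parsed["spatialReference"]["wkid"])  # 4326
```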
07-22-2021
11:52 AM
|
1
|
0
|
2224
|
POST
|
Hey @LuisAntonioRodriguezGonzalez Unfortunately, what you are asking for is not possible with the OOTB Field Mapper. By design, there is no way to map into a hierarchical event definition structure.
07-21-2021
07:42 PM
|
0
|
0
|
447
|
POST
|
Hey @JohnLucotch2 You might try to configure the WebSocketContextURL on your ArcGIS Server. That is a server property I have not worked with in a long time, but the following post might be helpful. https://community.esri.com/t5/arcgis-geoevent-server-questions/websocket-url-configuration-in-stream-service/td-p/122490
07-21-2021
07:40 PM
|
0
|
0
|
516
|
POST
|
Hey @parksh Please see the online documentation on changing that property: https://enterprise.arcgis.com/en/geoevent/latest/administer/kafka-on-disk-storage.htm
07-21-2021
07:23 PM
|
0
|
0
|
684
|
DOC
|
While you may think of a real-time GIS system as typically being associated with moving assets (vehicles, airplanes, and vessels) or with events that happen in our environment (accidents, crime, and weather), a vast amount of data is available from sensors that do not actually physically move. However, these fixed sensors do have a location, and the data emitted can be visualized on a web map. In this example, we will assign the current water height values to a set of stream gauge sensors that reside in the State of Florida, U.S. In addition to assigning a non-spatial value to a sensor location on a map, a historical value table will be maintained, and this data will be visualized using time series charts. The video below walks through the example and the attached document provides additional step-by-step details on implementing this example.
07-21-2021
07:16 PM
|
1
|
0
|
1015
|
DOC
|
A common practice in hurricane-prone areas is to track operational status based on the proximity of an existing hurricane. Hurricane forecasts, and their associated threat levels, can be intersected with regional districts to determine the necessary operational level of awareness for a given region. These operational levels are typically codified into lists of specific actions that must be taken to ensure readiness and avoid or minimize damage from a hurricane. In the example here, we’ll explore how the current operational status of transportation districts in the State of Florida, USA can be calculated automatically using hurricane forecast data from Hurricane Irene in 2017, available from the NOAA National Hurricane Center. The video below walks through the example and the attached document provides additional step-by-step details on implementing this example.
07-13-2021
02:41 PM
|
0
|
0
|
744
|
POST
|
Sorry, just saw your other post and I replied there: waze-data-stops-flowing-at-filter-processor-after-geoevent-srver in 10.9
06-21-2021
08:30 AM
|
0
|
0
|
581
|
POST
|
Hey @RobertSpál1 At 10.9 we have unfortunately found an issue with the filter elements not working with the built-in tags for definition name, owner, etc. One workaround is to use a Field Calculator to assign the GeoEvent Definition name to a text field and then filter on that field. Another workaround, until we can get a fix out, is to filter for a field that exists in one event but not in the other. Sorry for the inconvenience. Eric
06-21-2021
08:29 AM
|
1
|
0
|
728
|
POST
|
Hey @RobertSpál1 What version of GeoEvent did you upgrade to?
06-21-2021
08:25 AM
|
0
|
1
|
583
|
POST
|
Hey @OldManStrength Under the assumption that the updates happen relatively infrequently and don't need a high processing velocity, you could do the following (NOTE: this is not a recommended approach for high-velocity event streams).

1. Create a new 'temporary' table to hold the current value for the active field. This table will hold the ObjectID and any other fields you want to monitor for change (e.g. Status). The status fields would have the prefix 'Prev' added to them.
2. Set up a GeoEvent Service to read in all of the hydrants every so often. I would use a time filter on the where clause to only read in the events that have been edited (turn on editor tracking) within your polling rate time window (e.g. polling every 10 minutes, getting hydrants that have been edited in the last 10 minutes).
3. Field Map to a definition that adds the 'Prev' status fields to the schema so you can compare the current vs. the previous value.
4. Field Enrich with the previous values, using the ObjectID to match.
5. Filter out records that have not changed.
6. For the events that have changed: send an email, update the 'Prev' value with the current value, field map to the temporary table schema, and write out the 'Prev' values to update your previous value table.
7. You should consider having an additional table that contains your 'Changed' hydrants. This table would be a schematic mirror of the original hydrant table. The output to this table should have the option to delete old features turned on, set to something reasonable like 1-4 hours. This table can then be put into a map so end users can see where the changes are. Your email could even reference this map, zooming in to the particular hydrant.
8. Finally, you could also send the output to ArcGIS Workforce or another work order management system to create a task assignment for someone to investigate the hydrant (maybe when they switch to the inactive state).

If your data is higher velocity, you will want to avoid the Field Enricher and go with a different approach.
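The compare-current-to-previous logic in steps 3 through 6 can be sketched outside of GeoEvent in a few lines of Python. The field names and status values below are hypothetical, purely for illustration:

```python
# Sketch of the previous-value comparison (steps 3-6); field names are hypothetical
previous = {101: "Open", 102: "Open"}  # the 'temporary' table: ObjectID -> PrevStatus

incoming = [
    {"ObjectID": 101, "Status": "Open"},
    {"ObjectID": 102, "Status": "Closed"},
]

changed = []
for event in incoming:
    if event["Status"] != previous.get(event["ObjectID"]):
        changed.append(event)                          # step 6: notify / write to 'Changed' table
        previous[event["ObjectID"]] = event["Status"]  # step 6: update the previous-value table

print([e["ObjectID"] for e in changed])  # [102]
```

Only hydrant 102 changed state, so it is the only record that survives the filter.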
06-21-2021
08:24 AM
|
0
|
0
|
666
|
BLOG
|
The Field Mapper and Field Calculator Processors are two of the most often used processors in ArcGIS GeoEvent Server. While the Field Mapper Processor provides the ability to map one GeoEvent Definition (schema) to another GeoEvent Definition, the Field Calculator Processor allows you to use functions to manipulate field values. The Field Calculator supports many different functions related to data type conversion, string manipulation, mathematics, creating geometry from fields, and more. In GeoEvent Server 10.9, the Field Mapper now allows you to use the same Field Calculator functions inside each Field Map text box. This makes it possible for a single Field Mapper to potentially replace a series of Field Calculators when performing calculations on multiple event attribute values. Let’s walk through some examples to see the power of this enhancement.

In the examples, you will notice a pattern in which the incoming events are immediately mapped into the outgoing GeoEvent Definition. This allows the Field Calculators to calculate field values directly into existing fields. Field mapping into a desired GeoEvent Definition immediately solidifies the event schema and simplifies the configuration. If you choose not to use this approach, each Field Calculator will need to create a new GeoEvent Definition containing the new field that is being calculated. This can lead to many temporary GeoEvent Definitions that clutter GeoEvent Server’s configuration, making it hard to manage.

Example 1 – Parse tabular field content

Imagine you have an input providing the location of a device as a comma-separated string. This location string might be something like the following:

"Location": "One International Way, Broomfield, CO, 80021"

Isolate specific sub-strings in the received location value

For this example, let us assume the string always reports an address, city, state, and ZIP code (four parts). To split out the individual components in this list, a regular expression can be used inside of the replaceAll() and trim() functions as follows:

Address: trim(replaceAll(Location,'^(.*)[,](.*)[,](.*)[,](.*)$','$1'))
City: trim(replaceAll(Location,'^(.*)[,](.*)[,](.*)[,](.*)$','$2'))
State: trim(replaceAll(Location,'^(.*)[,](.*)[,](.*)[,](.*)$','$3'))
Zip: trim(replaceAll(Location,'^(.*)[,](.*)[,](.*)[,](.*)$','$4'))

Incorporate sub-string isolation in Field Calculators prior to 10.9

Wrapping each of the expressions above into a Field Calculator for each field requires four nodes, as illustrated in the GeoEvent Service below. In addition, a Field Mapper is necessary to convert the event to a GeoEvent Definition that includes all the new fields: Address, City, State, and Zip. This is done so each Field Calculator can write derivative values into existing fields rather than each Field Calculator creating a new GeoEvent Definition as new fields are created.

[Example GeoEvent Service with many processors] [Field Mapper configuration mapping fields] [Field Calculator configuration with expression]

This is a lot of work to coax out a few sub-strings of data. The GeoEvent Service is significantly more complicated, making it harder to understand and maintain.

Incorporate sub-string isolation in a single Field Mapper at 10.9

With GeoEvent Server 10.9, you can now greatly simplify this GeoEvent Service by moving all of the functions into a single Field Mapper as illustrated below.

[Simplified GeoEvent Service incorporating expressions from multiple processors into a single Field Mapper] [Configuration of the Field Mapper incorporating multiple expressions]

Example 2 – Bit encoded status values

With the Internet of Things (IoT), it’s common to receive status information in a bit-encoded format. Fundamentally, this status information is delivered in an integer type field (short, integer, or long) and it looks like a regular number.
But the underlying bits are being manipulated to report binary encoded information. Typically, this is as simple as reporting if something is on (1) or off (0). But it can also be used to report larger sets of encoded values such as off (00), low (01), medium (10), and high (11).

In this second example, we’ll draw from the world of winter snowplow operations and utilize a status value that is reported by a popular automatic vehicle location (AVL) provider. The status value is reported as an integer type field in the incoming events. The description of the status bits is provided in the table below.

Field Name | Bit Position | Description
Reserved | 0 - 2 | Reserved
SpreadOn | 3 | Spreading ON/OFF
Blast | 4 | Blast ON/OFF
SolidPause | 5 | Solid Material Pause YES/NO
LiquidPause | 6 | Liquid Material Pause YES/NO
Unload0 | 7 | Unload Status YES/NO
Unload1 | 8 | Liquid Unload Status YES/NO
Reverse | 9 | Reverse Status YES/NO
ConvOn | 10 | Conveyor Status ON/OFF
LiquidOn | 11 | Liquid Status ON/OFF
PrewetOn | 12 | Pre-wet Status ON/OFF
AntiIceOn | 13 | Anti-Ice Status ON/OFF
ConvMode | 14 & 15 | Conveyor Mode: 0 - Off, 1 - Open Loop, 2 - Manual, 3 - Auto
PrewetMode | 16 & 17 | Pre-wet Mode: 0 - Off, 1 - Open Loop, 2 - Manual, 3 - Auto
AntiIceMode | 18 & 19 | Anti-Ice Mode: 0 - Off, 1 - Open Loop, 2 - Manual, 3 - Auto
HasError | 20 | Error Status YES/NO

Examples of status values might be:

Everything off: binary 00000000000000000000 = decimal 0
Everything on, not reversing/unloading, nothing paused, modes in manual, no errors: binary 010101011110000011000 = decimal 703,512

Isolate specific bit sequences in the received long integer value

In order to isolate the bits within this numeric field, we’ll use powers of 2 to remove the parts of the number before and after the bits we’re interested in. The function will take the general format of:

floor(Status/pow(2,<startBit>))-floor(Status/pow(2,<endBit>))*pow(2,<numBits>)

Entering our start and end bit locations, the functions for each of the individual state values in the status can be retrieved:

SpreadOn: floor(Status/pow(2,3))-floor(Status/pow(2,4))*pow(2,1)
Blast: floor(Status/pow(2,4))-floor(Status/pow(2,5))*pow(2,1)
SolidPause: floor(Status/pow(2,5))-floor(Status/pow(2,6))*pow(2,1)
LiquidPause: floor(Status/pow(2,6))-floor(Status/pow(2,7))*pow(2,1)
Unload0: floor(Status/pow(2,7))-floor(Status/pow(2,8))*pow(2,1)
Unload1: floor(Status/pow(2,8))-floor(Status/pow(2,9))*pow(2,1)
Reverse: floor(Status/pow(2,9))-floor(Status/pow(2,10))*pow(2,1)
ConvOn: floor(Status/pow(2,10))-floor(Status/pow(2,11))*pow(2,1)
LiquidOn: floor(Status/pow(2,11))-floor(Status/pow(2,12))*pow(2,1)
PrewetOn: floor(Status/pow(2,12))-floor(Status/pow(2,13))*pow(2,1)
AntiIceOn: floor(Status/pow(2,13))-floor(Status/pow(2,14))*pow(2,1)
ConvMode: floor(Status/pow(2,14))-floor(Status/pow(2,16))*pow(2,2)
PrewetMode: floor(Status/pow(2,16))-floor(Status/pow(2,18))*pow(2,2)
AntiIceMode: floor(Status/pow(2,18))-floor(Status/pow(2,20))*pow(2,2)
HasError: floor(Status/pow(2,20))-floor(Status/pow(2,21))*pow(2,1)

Incorporate bit isolation in Field Calculators prior to 10.9

Wrapping each of the equations above into a Field Calculator for each field would require fifteen processors. An example of what a GeoEvent Service like this would look like is illustrated below. This GeoEvent Service also includes a Field Mapper that converts the event to a GeoEvent Definition that includes all the new fields.

[GeoEvent Service with numerous processors, each with their own expression] [Field Mapper configuration mapping the source fields to the target fields] [Field Calculator configuration for the SpreadOn field]

Creating all those Field Calculators can be daunting and it really clutters the service designer.
Just imagine if this GeoEvent Service required other processing workflows, resulting in even more elements.

Incorporate bit isolation in a single Field Mapper at 10.9

Luckily, GeoEvent Server 10.9 comes to the rescue: what previously took multiple Field Calculators can now be accomplished in a single Field Mapper. Your GeoEvent Services can be simplified considerably, giving you more space to incorporate other processing workflows.

[Simplified GeoEvent Service at 10.9 with single Field Mapper] [Field Mapper with new expression support]

Example 3 – Convert date and time strings to date type values

In this third example, we’ll explore a situation where multiple dates are being sent to an input in different formats. GeoEvent Server recognizes a few different commonly used string formats to represent a date and time (refer to the blog What time is it? Well That Depends...). As you work your way through this example, consider that the following four values all represent the exact same date and time:

ISO 8601 String: "2021-05-17T21:36:42-00:00"
Web Client String (Eastern time zone): "Mon May 17 17:36:42 EDT 2021"
Web Client String (Pacific time zone): "Mon May 17 14:36:42 PDT 2021"
Epoch (milliseconds): 1621287402000

Implement date string calculations

Suppose data being received from a sensor contains several different date values, all in different formats. A sample data record is illustrated below:

Input Field | Field Type | Example | Notes
ReportDate | Date | 17-May-2021 | "d-MMM-yyyy" day-first format
ReportTime | String | 21:36:42 | "H:mm:ss" common time string
Date_1 | String | 2021-05-17T21:36:42-00:00 | ISO 8601 datetime formatted string
Date_2 | Long | 1621287402 | Epoch measured in seconds
Date_3 | String | 5/17/2021 | "M/d/yyyy" common date format

Apply an expected date format pattern to received data values

In an input, you can specify only one Expected Date Format pattern to use when parsing and adapting date/time values. When specified, the pattern is applied to any field’s value the GeoEvent Definition specifies should be adapted as a date. Any field specified for adaption as a date must therefore share a common and consistent format with other date fields. Note that, when an Expected Date Format pattern is not specified, the ISO 8601 standard is the preferred format, though a few other formats commonly used to express both date and time are acceptable.

In this example, the ReportDate attribute was chosen to be adapted as a date, using a specified Expected Date Format pattern "d-MMM-yyyy". This was done because there is no easy way, outside of an input adapter, to translate an alphabetic month to a numeric value. The input will therefore be configured to apply the pattern "d-MMM-yyyy" to adapt the value of any fields of type date. Note that the input will not be able to adapt, as a date, any field’s value whose format does not follow this pattern. So the GeoEvent Definition must specify that ReportDate is the only field with a data type date. To get around this constraint, the other values will have to be adapted as either long integer values or strings (as detailed in the table above). These values will require further processing to convert them to date values.

What to do when date and time values are reported in separate fields

The data provider has specified that the reporting date and reporting time values will be recorded using two separate attribute values and that the UTC time zone is assumed. ReportDate represents a base date and ReportTime provides the UTC time. These two values should be combined to produce a single value before attempting to cast them to a date value. Remember that we chose to allow an input connector’s adapter to adapt ReportDate as a date rather than a string. This decision was made primarily because that is the easiest way to handle the conversion of the alphabetic expression for the month.
Using an Expected Date Format pattern to adapt ReportDate as a date means the adapter will have to assume values for the time and time zone. The assumed time will be midnight and the time zone will be adopted from the locale of the server on which GeoEvent Server is running. For this example, assume the sensors reporting data and the server running GeoEvent Server both observe Mountain Daylight Time (MDT), consistent with the summertime months in Colorado, United States. The string value "17-May-2021" will be adapted to produce a millisecond epoch value 1621231200000, which is consistent with the date/time string "Mon May 17 00:00:00 MDT 2021".

To adjust the time zone and add the additional time field, you need to create an expression which will advance the adapted date forward a number of hours, minutes, and seconds consistent with the values in the ReportTime attribute field and also adjust for the time zone offset:

ReportDate + currentOffsetUTC() + toLong(substring(ReportTime, 0, 2))*3600000 + toLong(substring(ReportTime, 3, 5))*60000 + toLong(substring(ReportTime, 6, 8))*1000

The currentOffsetUTC() function returns the local time zone’s offset from UTC as a millisecond value. The UTC offset for MDT is -21600000 (negative six hours in milliseconds). Adding this value to the adapted date therefore subtracts six hours. The remaining toLong(substring()) functions parse the hours, minutes, and seconds from ReportTime ("21:36:42") and add them (as millisecond equivalent values) to the ReportDate. For this example, the expression above works out to be:

1621231200000 + (-6 × 3600000) + (21 × 3600000) + (36 × 60000) + (42 × 1000)

Which is equivalent to the epoch 1621287402000 and the following date strings:

"May 17, 2021 21:36:42 UTC"
"Mon May 17 15:36:42 MDT 2021"

Date_1 – Ingested and adapted as a date

The data provider has used the ISO 8601 formatting standard to express Date_1 attribute values.
GeoEvent Server adapters and/or processors can easily cast an ISO 8601 formatted string to a date out-of-the-box. An expression simply names the string field whose value should be written to a date attribute field. No additional calculations are necessary for this value. Refer to the illustrations in the Incorporate date string calculations in Field Calculators section below.

Date_2 – Adapted as a long integer, scaled and cast to a date

The data provider has specified that Date_2 values represent the number of seconds since midnight, January 1970 (UTC). GeoEvent Server, however, uses millisecond epoch values consistent with the ArcGIS REST Services API, not epoch values measured in seconds. The received values for Date_2 should be scaled out from a 10-digit value (measuring seconds) to a 13-digit value representing milliseconds before they are mapped to a date attribute. Long integer epoch values are always assumed to be in the UTC time zone, so the only processing necessary before casting the received value to a date is to multiply by 1000:

Date_2 * 1000

Client applications will differ in how they choose to represent a millisecond epoch as a string. Most will assume the local time zone and shift the value they retrieve from the database accordingly. The following strings all represent the same epoch 1621287402000 date and time:

Monday May 17 21:36:42 GMT 2021
May 17, 2021 9:36:42 PM UTC
May 17, 2021 5:36:42 PM EDT (UTC-04:00)
May 17, 2021 3:36:42 PM MDT (UTC-06:00)
May 17, 2021 2:36:42 PM PDT (UTC-07:00)

Date_3 – Adapted as a string, then reformatted as an ISO 8601 date

The data provider has specified that Date_3 values represent a date with an assumed time of midnight UTC. To remove ambiguity and assumptions related to the unspecified time and time zone, you can use an expression which reformats the received string to produce an ISO 8601 datetime string before casting it into a date attribute field:

replaceAll(Date_3, '(\d+)[/](\d+)[/](\d+)', '$3-$1-$2T00:00:00-00:00')

The computed string is explicit; the time is expressed using the UTC standard. If the time zone offset were left off, the processors in GeoEvent Server would be free to assume the time zone of the local server when casting the string into a date field. The specified time and time zone in the expression above are consistent with the data provider’s specification.

Incorporate date string calculations in Field Calculators prior to 10.9

It would require multiple Field Calculators to perform the calculations necessary to convert Date_1, Date_2, and Date_3 to date values and to combine the ReportDate and ReportTime. An additional Field Mapper would also be necessary to cast the string type fields to date type fields. An example of a GeoEvent Service like this and its configuration is illustrated below.

[GeoEvent Service with numerous processors, each with their own expression] [The time-in GeoEvent Definition] [The time-out GeoEvent Definition] [Field Calculator configuration for the ReportDatetime] [Field Calculator configuration for the Date_2] [Field Calculator configuration for the Date_3] [Field Mapper configuration mapping the source fields to the target fields]

Incorporate date string calculations in a single Field Mapper at 10.9

GeoEvent Server 10.9 makes it easy to move all those individual functions directly into a Field Mapper. This simplifies the GeoEvent Service, giving you more options for incorporating other processing workflows.

[Simplified GeoEvent Service at 10.9 with single Field Mapper] [Field Mapper with new expression support]
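As a sanity check on the expressions in Examples 2 and 3, the same arithmetic can be reproduced in a few lines of Python, using the sample values from the tables above:

```python
import re
from math import floor

# Example 2: bit isolation using
# floor(Status/pow(2,start)) - floor(Status/pow(2,end)) * pow(2, end-start)
status = 703512  # the "everything on, modes in manual" sample value

def bits(s, start, end):
    return floor(s / 2**start) - floor(s / 2**end) * 2**(end - start)

print(bits(status, 3, 4))    # SpreadOn -> 1
print(bits(status, 14, 16))  # ConvMode -> 2 (Manual)
print(bits(status, 20, 21))  # HasError -> 0

# Example 3: Date_2 is a seconds epoch; scale to milliseconds
print(1621287402 * 1000)     # 1621287402000

# Example 3: reformat Date_3 "M/d/yyyy" into an explicit ISO 8601 string
print(re.sub(r'(\d+)[/](\d+)[/](\d+)', r'\3-\1-\2T00:00:00-00:00', '5/17/2021'))
# -> 2021-5-17T00:00:00-00:00 (month/day are not zero-padded, matching the expression)
```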
05-26-2021
11:06 AM
|
2
|
1
|
2226
|
BLOG
|
When would I use a choice element?

The choice element is intended to replace situations where multiple filter elements are deployed in parallel (see Figure 1).

Figure 1 – Parallel filtering paradigm

This parallel filtering paradigm is an anti-pattern that should be avoided when possible. The reason for this is two-fold:

1. Each filter element must get its own copy of an event for it to evaluate the conditional statement. Every filter element added to a parallel filter design requires an additional copy of each event be made. Therefore, the system will see a linear increase in event traffic in the parallel filtering section of the GeoEvent Service for every additional parallel filter added.
2. Each event is evaluated against the conditional statement of each filter independently. This creates a linear increase in the CPU load in the parallel filtering section of the GeoEvent Service for every additional parallel filter added.

Unfortunately, up until the 10.9 release, it was hard to avoid parallel filtering in many cases because of the singular nature of the filter element. Service designers were left with no options when events were required to be routed across several processing paths. These situations were prevalent in several domains, including (but not limited to):

Asset Types: processing specific assets according to their type (trucks, cars, buses, taxis, ride-share, recreational, commercial, public, etc.).
Field Values: a geometry might be non-existent, null, present, inside or outside a specified location. A field value may be above, between, or below a specified set of threshold values.
GeoEvent Definitions: an input may emit multiple GeoEvent Definitions that require separate processing paths.

Choice elements were designed to replace a filter element whenever more than one conditional statement must be evaluated in a specific route of a GeoEvent Service. In the simplest case, you may need to route events that pass a filter to one processing path, and all other events to another path. This pattern is known as an "If … Else". A more complex use case may need to route multiple different GeoEvent Definitions to their own dedicated processing path. Fundamentally, any time more than one filter element is used in parallel, it is a good candidate for replacement with a choice element.

How do choice elements work?

As mentioned above, a choice element specifies a list of routes a GeoEvent could take. Each route is evaluated by a when clause that defines the conditional statement an event must meet to pass through. If an event passes the condition defined by the when clause, it follows that route. If an event does not pass the condition defined by the when clause, it passes to the next when clause, if one exists. The when clauses in a choice element are processed in serial order, as defined in the choice element.

If an event record does not meet any of the conditions specified in the when clause(s), it can optionally be passed to an otherwise route. If an otherwise route is enabled, the event is passed to that route; there is no condition that must be met. If an otherwise route is not enabled, the event record is dropped and is not passed to any route.

Figure 2 below shows an example of a choice element as it is displayed on the GeoEvent Server service designer canvas. It also shows how events are processed inside of the choice element.

Figure 2 – Example choice element and conceptual event flow process diagram.

The implementation of the choice element does not create extra copies of an event for each conditional statement to evaluate. The original event is evaluated against each conditional statement in sequential order, eliminating extra events on the message bus. In addition, once an event passes a given conditional statement, it is no longer considered for the following statements, reducing the compute load on the system.
The combination of these changes results in a choice element that does not degrade performance of the system as more conditional statements are added (see Figure 3 below).

Figure 3 – Choice element vs filter element performance

Optimizing a choice element

One observation about a choice element that is important to understand is that an event record that meets the condition in a given when clause will not be considered for any subsequent when or otherwise routes. In the example below (Figure 4), events of type 'A' will not be considered in the second conditional statement looking for events of type 'B'.

Figure 4 – Example choice element

Sometimes Performance Cannot Be Optimized

It is important to note that the optimization strategy presented in this section may not be applicable to all choice elements. Sometimes, a when statement must be evaluated before the other statements because the conditions may overlap. An example of this (see Figure 5) would be a choice element that routes vehicle events based on vehicle type and vehicle owner. If a vehicle is owned by the city, it should be processed in a separate route from the other vehicle types. In this case, it would not be possible to move the vehicle owner condition further down the list because it would never receive any data (assuming the city only owns cars, trucks, buses, and boats).

Figure 5 – Example choice element that cannot be re-ordered/optimized

Re-ordering for Optimization

If your conditions can be re-ordered (they don't have any overlapping conditions), it is important to order the when statements in the order they are statistically likely to occur. The first conditional statement should pass the most events, the second conditional statement should pass the second-most events, and so on. For the example in Figure 4 above, a distribution of the expected event frequency is displayed in Figure 6.

Figure 6 – Optimized choice element

If the event frequencies did not match this pattern, then the choice element's when clauses should be re-ordered in such a way that the event distribution meets this pattern. Figure 7 displays an event frequency distribution that is not optimized on the left. In this case, the choice element's when clauses should be re-ordered so that they are evaluated in the order displayed on the right.

Figure 7 – Reorder to optimize a choice element

Moving the Otherwise Up Front

If you find yourself in a situation where most of the events entering a choice element are exiting via the otherwise route, you should consider devising a when clause that you can put in the first evaluation slot to identify these events before they are passed to the rest of the conditional statements. An example of this is displayed below in Figure 8.

Figure 8 – Optimizing the otherwise in a choice element

Considerations when working with choice elements

There are many considerations when working with a choice element, including:

At least one conditional statement: Every choice element must have at least one when clause. A choice element with one when clause and the optional otherwise route disabled is functionally equivalent to a filter element.

Otherwise is optional: The otherwise route is optional. If an event does not pass the provided conditional statements in the choice element's evaluation section, it is passed to the otherwise. If the otherwise is enabled, the event is allowed to pass through to the otherwise route. If the otherwise is not enabled, the event is dropped.

Each choice element must have one or more parent elements: Each choice element must have at least one parent element, and that parent element must be either an Input element, a filter element, or a Processor element. A choice element can have multiple parent elements.
A choice element’s parent cannot be another choice element A choice element cannot route to another choice element; meaning two choice elements cannot be placed in a row in a GeoEvent Service: The input to a choice element cannot be the output of another choice element. The target of a choice element cannot be another choice element. If a use case requires the target of one choice element be routed to a second choice element, use a No Operation Processor element in between the two choice elements. An alternative is to include the when clause(s) from the second choice element into the first choice element. Figure 9 shows the issue and both methods for getting around the issue. Figure 9 – Working around back-to-back choice elements Working with choice elements in a GeoEvent Service Adding a choice element to a GeoEvent Service To add and configure a choice element in a GeoEvent Service, follow the steps below. 1. In the service designer, drag and drop a choice element from the New Elements list onto the canvas. The choice element dialog will open. 2. Enter a Name for the new choice element. In a choice element, one or more conditional statements can be applied, in a specified order, to the event data. Each conditional statement has a unique name that identifies it as well as a number (starting with 1) that indicates the order in which the conditional statements will evaluate the event data. 3. Click Add to add a conditional statement. 4. In the Choice Properties dialog, follow the steps below to add a when clause. Enter a Name for the conditional statement. Click Add Expression to add and configure a when clause. Click Add Expression again to add and configure additional when clause(s). Click Ok to save the conditional statement. 5. Repeat steps 3 and 4 above to add additional conditional statements, as necessary. 6. Optionally, check the Otherwise checkbox to define an otherwise route. Checking the Otherwise checkbox enables an otherwise route. 
   All events that do not meet the criteria defined by the when clause(s) will be passed to the otherwise route. Unchecking the Otherwise checkbox means any event records not passing the defined when clause(s) will be dropped.
7. Click Ok to save and add the choice element to the service designer canvas.
8. Connect the choice element to the other elements in the GeoEvent Service.

Edit a choice element

The choice element dialog provides options to review, edit, delete, and reorder configured conditional statements.

Hover over to view the when clause(s) associated with a conditional statement.
Click to open and edit a conditional statement's when clause(s).
Click to delete a conditional statement.
Use to adjust the order the conditional statements evaluate the event data.

Using filters in when clauses

In a when clause, you have the option to use different types of filters to evaluate the event data. For details on the different types of filtering options, see the resources below.

Attribute filters
Spatial filters
Property filters
Create filters using tags
Create filters using regular expressions
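The frequency-ordering guidance above can be sketched in plain Python. This is not GeoEvent code; the event shapes, predicates, and 90/10 split below are made up for illustration. It simply shows why putting the most frequently matched when clause first reduces the total number of predicate evaluations.

```python
# Route an event through ordered when clauses, counting predicate checks.
# Clauses are evaluated top-down; the first match wins, mirroring how a
# choice element evaluates its conditional statements in order.
def route(event, clauses, otherwise_enabled=True):
    checks = 0
    for name, predicate in clauses:
        checks += 1
        if predicate(event):
            return name, checks
    # No when clause matched: pass to otherwise, or drop the event.
    return ("otherwise" if otherwise_enabled else None), checks

# 90% of events are type "A", so its when clause goes in the first slot.
optimized = [
    ("when_a", lambda e: e["type"] == "A"),
    ("when_b", lambda e: e["type"] == "B"),
]
events = [{"type": "A"}] * 9 + [{"type": "B"}]

total_optimized = sum(route(e, optimized)[1] for e in events)                  # 9*1 + 1*2 = 11
total_reversed  = sum(route(e, list(reversed(optimized)))[1] for e in events)  # 9*2 + 1*1 = 19
```

With the dominant clause first, the ten events cost 11 checks instead of 19, which is the effect the re-ordering in figure 7 is after.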
05-21-2021 12:25 PM

POST
Yes, that will still work. You will want to field map into a new GeoEvent Definition that has all the new fields you want to extract from the string (id - int, timestamp - date, device_id - String, data - String). Then use a Field Calculator for each field you want to extract from the string. The expression is replaceAll(jsonStringField, '.*"rssi":(\d+),.*', '$1') (where rssi is replaced by the name of the value you are trying to get out), and put the result into the respective existing field:

id = replaceAll(jsonStringField, '.*"id":(\d+),.*', '$1')
timestamp = replaceAll(jsonStringField, '.*"timestamp":(\d+),.*', '$1')
device_id = replaceAll(jsonStringField, '.*"device_id":"(\w+)",.*', '$1')
data = replaceAll(jsonStringField, '.*"data":"(\w+)",.*', '$1')
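To sanity-check the expressions before configuring the Field Calculators, the same substitution can be reproduced in Python. This is a sketch: the sample string and the `extract` helper are hypothetical, but the four patterns are the ones from the post, used verbatim.

```python
import re

# Hypothetical sample payload; field names follow the post (id, timestamp,
# device_id, data), with a trailing rssi value so the patterns that expect
# a comma after the captured value all match.
json_string = '{"id":42,"timestamp":1621543200,"device_id":"AB12CD","data":"0F3A","rssi":71}'

def extract(source, pattern):
    # Mimic GeoEvent's replaceAll(field, pattern, '$1'): the pattern matches
    # the whole string, so substituting capture group 1 leaves just the value.
    return re.sub(pattern, r'\1', source)

record_id = extract(json_string, r'.*"id":(\d+),.*')           # '42'
timestamp = extract(json_string, r'.*"timestamp":(\d+),.*')    # '1621543200'
device_id = extract(json_string, r'.*"device_id":"(\w+)",.*')  # 'AB12CD'
data      = extract(json_string, r'.*"data":"(\w+)",.*')       # '0F3A'
```

Note the extracted values come back as strings; in GeoEvent the target field's type (int, date) handles the conversion.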
05-20-2021 02:35 PM

POST
Hey @Michalak It is tough to say exactly what triggered it, but a quick guess is that the input you are using has a 'get new features only' option that you have turned on. When the GeoEvent Server service restarts (or the machine is rebooted), the input may start up in a state where it doesn't know about any previous data and grabs all data from the source. Beyond that, we would need to see your configuration and understand your data source before a better diagnosis can be provided.
05-20-2021 12:05 PM

POST
An alternative to the suggestion below of using a hierarchical JSON structure is to use regular expressions in a standard Field Calculator to pull each field out (you would need 5 Field Calculators). Each one would have an expression like the following:

replaceAll(jsonStringField, '.*"rssi":(\d+),.*', '$1')

Just replace the field name (rssi) with whatever field you are working on. At GeoEvent Server version 10.9 you can add this function in a Field Mapper processor. Previous versions will require a Field Calculator for each field you want to parse the value for.
05-19-2021 04:44 PM