POST
Hello William -- The inbound connectors you configure assume that they will receive the latest available data from a sensor network in real-time or near real-time. The fundamental assumptions are that data will arrive in temporal order, at some discrete frequency and periodicity. Data should not be sent in batches whose records are potentially out of temporal order. GeoEvent Server can receive batches of data, but a batch is assumed to be a collection of individual observations from discrete sensors, not a collection of observations from a single sensor.

You might want to look into using the SDK samples available in the GeoEvent Server Gallery to supplement your solution. The Delay Processor for GeoEvent Server or the Timetree Processor for GeoEvent Server may allow you to receive a collection of data observations, hold the data for a specified amount of time (e.g. "delay processing"), and sort the data by TRACK_ID into proper temporal order to guarantee that processed event records reflect a first-in / first-out view of data collected from sensors in time order. If you need help working with these SDK samples, please open an incident with Esri Technical Support. Limited consulting is available through technical support; more in-depth help implementing a solution can be arranged through Esri Professional Services if needed. Hope this information helps -- RJ
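To illustrate the first-in / first-out ordering those samples aim for, here is a minimal Python sketch. This is only a conceptual illustration, not the SDK's actual API; the record layout and the "TRACK_ID" / "time" field names are my own assumptions:

```python
from collections import defaultdict

def order_batch(records):
    """Group a batch of observations by TRACK_ID, then sort each track's
    observations into ascending time order (first-in / first-out)."""
    tracks = defaultdict(list)
    for rec in records:
        tracks[rec["TRACK_ID"]].append(rec)
    for observations in tracks.values():
        observations.sort(key=lambda r: r["time"])
    return dict(tracks)

# A batch with sensor-A's observations arriving out of temporal order:
batch = [
    {"TRACK_ID": "sensor-A", "time": 30, "value": 7.1},
    {"TRACK_ID": "sensor-A", "time": 10, "value": 6.8},
    {"TRACK_ID": "sensor-B", "time": 20, "value": 3.3},
]
ordered = order_batch(batch)  # sensor-A now in time order: 10, then 30
```

A delay-style processor would additionally hold each track's list for a configured interval before releasing it downstream.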
Posted 10-13-2021 05:08 PM

POST
Hello Suzy -- The hierarchy in the JSON data structure you illustrated looks almost recursive. If this is a typical example of the JSON an input you have configured would expect to receive as its first data record, I think the inbound adapter is making its best guess at what the GeoEvent Definition ought to be. An inbound adapter's guess will be more accurate for simpler data structures -- the adapter will not recursively iterate through a data structure to further refine what it sees up-front.

For example, in my illustration above I've added just a little white-space and formatting to your JSON example. The line I've designated "Ex. 01" in green is intended to be an array capable of holding zero or more data values. But the adapter, given the empty array in the illustration, does not know whether the array's data will (eventually) be a set of integer values, a set of string values, or a set of JSON elements with a more detailed sub-structure. Given the empty array, the inbound adapter makes a guess and assumes a data type of String and a cardinality of Many (indicated with the infinity symbol circled in green in the illustrated GeoEvent Definition). As long as the data values eventually received in that array can be implicitly cast to String, the adapter will be able to parse and adapt the data it (eventually) receives.

Looking at another part of the data structure, designated "Ex. 02" and highlighted in orange, we see an "elements" array which contains JSON elements (as opposed to primitive String or Integer values). But the two JSON elements shown in the example have potentially different sub-structures. The first "attributes" array (boxed off in orange) is empty. As before, the inbound adapter makes a guess and assumes an eventual data type of String, setting the cardinality to Many (result circled in orange in the illustrated GeoEvent Definition).

Again, the inbound adapter is not going to recurse more deeply into the data to see the other "attributes" (boxed off in blue) -- so it will not see that its first assumption is wrong and that "attributes" is probably not going to be an array of String values. The inbound adapter will not know that a key "attributes" found within "elements" is actually intended to hold zero or more JSON elements with their own sub-structure. A quick test, removing the line I've designated "Ex. 02" above, confirms that a JSON inbound adapter receiving the JSON illustrated below will assume a data type Group for an "attributes" key found nested beneath a key "elements". I suspect the GeoEvent Definition in this second illustration is closer to what you were expecting. Hope this information helps -- RJ
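The adapter's guessing behavior can be sketched in a few lines of Python. This is only an illustration of the first-record inference described above, not GeoEvent Server's actual implementation:

```python
def guess_field(value):
    """Naively guess a (type, cardinality) field definition from the
    first value seen, the way an inbound adapter must when it receives
    its first data record (illustrative only)."""
    if isinstance(value, list):
        if not value:
            # An empty array offers nothing to inspect, so fall back
            # to String with cardinality Many -- the adapter's guess.
            return ("String", "Many")
        return (guess_field(value[0])[0], "Many")
    if isinstance(value, dict):
        return ("Group", "One")
    if isinstance(value, bool):   # check bool before int: True is an int
        return ("Boolean", "One")
    if isinstance(value, int):
        return ("Integer", "One")
    if isinstance(value, float):
        return ("Double", "One")
    return ("String", "One")

guess_field([])          # empty array: guessed String / Many
guess_field([{"a": 1}])  # populated array of objects: Group / Many
```

The two calls at the bottom show exactly the difference between the first and second illustrations: with content present in the array, the guess improves to Group.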
Posted 10-13-2021 04:44 PM

DOC
Hey Russell -- The pattern match you propose, [0-9,a-z,A-Z], should work for a mix of letters and numbers. The character class matches any single digit, lower-case letter, or upper-case letter -- though note that the commas inside the brackets are not separators; they are literal commas, so the class will also match a comma character. If you don't want that, write the class as [0-9a-zA-Z]. The repetition qualifier {1,} specifies one or more repetitions of the pattern. You could also try the \w metacharacter, which matches any "word" character (letters, digits, and the underscore) -- essentially the same thing. I use the online utility https://regex101.com to develop and test my regular expression patterns. Another good site offering a RegEx tutorial is https://regexone.com There are different flavors of RegEx, so to be safe I would select 'Java 8' in the regex101.com web tool's left-hand options frame. That site is nice in that it explains why the pattern match is matching the way it does. All you need to recognize then is that the function expression you configure your GeoEvent Server's Field Calculator with has three parameters: the data field, the pattern to match, and the replacement for every occurrence of that pattern (in this case a single literal character 'x').
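As a quick way to experiment outside GeoEvent Server, the same match-and-replace can be reproduced with Python's re module (Python's regex flavor differs slightly from Java 8's, but not for simple character classes like these):

```python
import re

def mask(value, pattern=r"[0-9a-zA-Z]"):
    # Replace every character matching the class with a literal 'x',
    # mirroring replaceAll(field, pattern, 'x') in a Field Calculator.
    return re.sub(pattern, "x", value)

mask("Call 555-0100")                     # -> "xxxx xxx-xxxx"
re.sub(r"\w", "x", "AB_12")               # \w also matches the underscore
re.sub(r"[0-9,a-z,A-Z]", "x", "a,b")      # commas in the class match too
```

The last line demonstrates why the comma-free class [0-9a-zA-Z] is the safer spelling when your data may contain literal commas you want to keep.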
Posted 07-28-2021 10:13 AM

POST
Hello @kavi88 ... I would say that GeoEvent Server is able to handle null value input. Attribute values can be null, and there should not be a runtime exception generated that creates a fault in event record processing. That doesn't mean that you'll be able to calculate a derivative value if the input values are null, or if attribute values cannot be used in the expression you configure a Field Calculator to use. Suppose you receive a simple event record like: { "myDouble": 3.14159, "scaleFactor": 3.1, "calcResult": null } A Field Calculator configured with an expression myDouble * scaleFactor will be able to write the value 9.738929 into an existing field calcResult. But if one or more of the attribute fields contain null values: { "myDouble": 3.14159, "scaleFactor": null, "calcResult": null } you should expect to see some sort of error. You cannot multiply a Double and a null value, or implicitly cast a null or a literal string to a numeric value to allow a Field Calculator to compute a value. We try not to make up data in cases where invalid values are received. We wouldn't want, for example, to assume a location of 0.0 latitude / 0.0 longitude because lat and lon values pulled out of a data structure were null.

Suppose, rather than computing a Double value, we were simply trying to place two Double values into a descriptive string. An expression like the following: 'My Double is: ' + myDouble + ' and my Scale Factor is: ' + scaleFactor + '.' written into a String attribute would calculate a value something like: "My Double is: 3.14159 and my Scale Factor is: 3.1." If a null value were received for the scaleFactor, an error message like the following is logged: Expression ['My Double is: ' + myDouble + ' and my Scale Factor is: ' + scaleFactor + '.'] evaluation failed: EVALUABLE_EVALUATION_FAILED_CAUSE The error message above is what is produced at the 10.9.x release. It may be that Field Calculator logs less readable error messages at an earlier release, which would explain why you are seeing messages talking about arg0:[NonGroup], arg1:[NonGroup]. I know we improved the error messages that Field Calculator was logging at some point, but I don't remember which software release has those changes. Regardless, if an expression uses attribute field(s) whose value(s) are null ... you should probably expect to see some sort of error logged, and the computed result receives a null value.

The problem you are trying to solve has several different places where something can go wrong. I have frequently encountered, for example, data in a rich, complex hierarchical structure not being 100% homogeneous across all of the levels in the hierarchy. It could easily be the case, for example, that the "impacted_objects" for a "disruption" do not have a "stop point" defined. It may be that there is no value at a hierarchical path disruptions[idx].impacted_objects[idx].impacted_stops[idx].stop_point.coord.lat or, if an attribute exists at that level in the data structure, its value is null. I would assume that after you use the serialized multicardinal field splitter processors to flatten out all of the levels in the data structure, you'll have to use a couple of filters to test whether valid lat and lon values can be retrieved, and log a "disruption" identifier to a file when a "stop_point" location cannot be calculated rather than trying to calculate a string representation of a geometry using null values. - RJ
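A Python sketch of the behavior described above (field names taken from the example; the error token here just mirrors the logged message text, it is not an actual API):

```python
def calc_product(record, a, b, result):
    """Mimic a Field Calculator expression `a * b`: compute only when
    both operands are present; otherwise leave the result null and
    report a failure rather than inventing a value."""
    x, y = record.get(a), record.get(b)
    if x is None or y is None:
        record[result] = None
        return record, "EVALUABLE_EVALUATION_FAILED_CAUSE"
    record[result] = x * y
    return record, None

# Both operands present: the product is written into calcResult.
ok, err = calc_product(
    {"myDouble": 3.14159, "scaleFactor": 3.1, "calcResult": None},
    "myDouble", "scaleFactor", "calcResult")

# A null operand: calcResult stays null and an error is reported.
bad, err2 = calc_product(
    {"myDouble": 3.14159, "scaleFactor": None, "calcResult": None},
    "myDouble", "scaleFactor", "calcResult")
```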
Posted 07-26-2021 12:09 PM

POST
Hey @kavi88 -- When using a Field Calculator to construct a "Geometry" you are actually calculating a String representation of a Geometry. When I need to confirm the string calculation I will often configure a Field Calculator to write its string representation to a String attribute field and then map the String to a Geometry attribute field. You can configure a Field Calculator to write its string representation directly into a Geometry attribute field, but the single step means that you are asking for an implicit type cast from String -- the value calculated as a single-quoted literal -- to a Geometry. If the string value does not exactly match the required formatting for a Point geometry object, the Field Calculator's attempt to write its string into a Geometry field will fail. So, to Eric's point, you might want to route event records emitted from the GEOM_CONSTRUCTION Field Calculator you configured to a JSON File output so that you can get a good look at the String the processor constructed for you, to make sure it matches the formatting of a Point geometry object.

You can probably drop the two Field Calculator processors LatConverter and LonConverter from the event processing workflow. You can configure the MAPPING FIELDS Field Mapper to map your latitude and longitude attribute values from String to Double by simply mapping the attribute values into Double fields. This is just another implicit cast, like using a Field Calculator to compute a string representation of a geometry and writing the computed string into a Geometry field. If I had to guess, the problem you're having is probably in the serialized event schema flattening. Placing five Multicardinal Field Splitter processors in series is more than I've ever had to do to simplify a hierarchical data structure. It's either that, or the string representation of the Point geometry object being calculated doesn't match the ArcGIS REST API specification of a Point geometry.

As a debugging step, you might try using dot notation to pull a single pair of latitude and longitude values out of the hierarchical data structure, using a Field Mapper to map the entirety of the data structure down to an event record whose GeoEvent Definition has exactly two Double attributes (one named lat and one named lon). Then work with that very simple event record to debug the field calculation you need to perform to construct a JSON representation of a Point geometry object. disruptions[0].impacted_objects[0].impacted_stops[0].stop_point.coord.lat => lat disruptions[0].impacted_objects[0].impacted_stops[0].stop_point.coord.lon => lon I wrote the above without actual data to look at and test, so I am not 100% sure I have the notation correct. If you need help with this I would ask that you open a technical support incident with Esri Support. What I'm trying to do above is take the zero-th value from each group element whose cardinality is 'Many' (indicating the JSON element is a zero-based indexed list of values) to pull a single "stop point" coordinate's latitude and longitude out so that the values can be used in a Field Calculator. You'll still need to use the Multicardinal Field Splitters eventually so that you can run calculations on all of the stop points, but the above can help you debug to make sure the string calculation of the Point geometry object is being done correctly. Hope this helps -- RJ cross-reference: JSON Data Structures - Working with Hierarchy and Multicardinality
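For reference, a Point geometry in the ArcGIS REST API is a small JSON object with x, y, and a spatial reference. A Python helper to build the exact string a String-to-Geometry cast needs to parse might look like the sketch below (the wkid default of 4326 is my assumption for illustration; use whatever spatial reference your service actually expects):

```python
import json

def point_geometry_string(lon, lat, wkid=4326):
    """Build the JSON string representation of an ArcGIS REST API
    Point geometry -- the format an implicit String-to-Geometry
    cast needs to be able to parse."""
    return json.dumps({"x": lon, "y": lat,
                       "spatialReference": {"wkid": wkid}})

point_geometry_string(2.3522, 48.8566)
# '{"x": 2.3522, "y": 48.8566, "spatialReference": {"wkid": 4326}}'
```

Comparing the string your GEOM_CONSTRUCTION Field Calculator emits against this shape is a quick way to spot a formatting mismatch.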
Posted 07-23-2021 10:51 AM

POST
Philip -- Your solution using an outbound connector which is essentially a no-operation component is a bit orthogonal to GeoEvent Server's design. What you're doing is one reason that we don't offer out-of-the-box processors with capabilities to invoke a GP Service, for example. We certainly could, as a GP service is as RESTful as the other web services GeoEvent Server interfaces with. But the question becomes: do we want to block a processor node's flow as it waits on a response from a GP Service? This wouldn't be feasible when trying to process hundreds of event messages per second. Or do we allow the processor to invoke an asynchronous GP service task/job and have the processor send the logical equivalent of "nothing" or "process pending" along to an outbound connector? That's not consistent with GeoEvent Server's design. GeoEvent Server is fundamentally accepting data, adapting the data to produce individual event records, then processing each of those event records atomically (without retaining or caching data from an event record unless absolutely necessary), so that data from a processed event record can be routed along to an outbound connector for dissemination.

Your solution appears to be developer-centric and highly customized. If I understand what you're saying, you have a custom inbound adapter, a custom processor, and now a custom outbound connector. If GeoEvent Server's Java SDK allows you to develop a solution using GeoEvent Server as a platform for event record processing -- that's great -- but I'm not sure that the product team can be of much help moving forward.

I will offer that the multicardinality and hierarchy supported by a GeoEvent Definition is not specific to JSON. How data is ingested and adapted is not tied to a specific data format (e.g. JSON object format). Every event record has a GeoEvent Definition which describes the event record's data structure, and this event definition applies only to the interior of an event record object. There is no mechanism which allows you to define a group, list, or hierarchy of multiple event records. A GeoEvent Definition can specify a data structure which includes a list of Java primitive values (e.g. Date, Double, Long, String, ...) and/or incorporate a non-primitive type Group which includes multiple primitive values as a sub-structure within the overall data structure. But this all still describes the data structure of a single event record object. The hierarchy and multicardinality concepts discussed in the article you found do not apply to collections of multiple event records.
Posted 04-21-2021 10:38 AM

POST
Hey Philip -- GeoEvent Server's processing of event data was designed to be atomic. Every event record is processed individually. Generally speaking, a processor does not know anything about event records recently processed or event records in the pipeline about to be processed; it only knows what data is in the event record it has received that needs to be processed. There are exceptions, of course. A filter or processor needing to evaluate an Enter condition, for example, needs to know whether the previous event for a given tracked asset (identified using the TRACK_ID tag) was "outside" or "disjoint" so that it can determine that the event record it just received, which is "inside" or "intersects", has entered the area of interest. You might look at the Timetree processor, a custom processor whose source code is available in a GitHub repository here, as an example of a custom processor designed to collect and cache a number of event records in order to perform some processing on a collection of received data records. But as you say, you'll have to design some sort of parameterization so the processor knows when to stop collecting data and start processing the collection. I don't think it's possible to configure a GeoEvent Definition such that the data structure represents an amalgamation of multiple event records. Since every event record must have an associated GeoEvent Definition specifying the event record's data structure, I don't think you'll be able to do what you're asking. But I'll check with a colleague and reply back if it turns out this is possible and something that can reasonably be accomplished. -- RJ
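Conceptually, the collect-then-process pattern the Timetree sample implements looks something like the toy Python sketch below. A fixed batch size stands in for whatever parameterization you design to decide when collecting stops:

```python
class CollectingProcessor:
    """Toy analogue of a collect-and-cache processor: hold event
    records until a configured batch size is reached, then release
    the whole collection for downstream processing."""

    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.cache = []

    def receive(self, record):
        self.cache.append(record)
        if len(self.cache) >= self.batch_size:
            batch, self.cache = self.cache, []
            return batch   # hand the full collection downstream
        return None        # still collecting

p = CollectingProcessor(batch_size=2)
p.receive({"id": 1})   # returns None -- still collecting
p.receive({"id": 2})   # returns the two-record batch
```

A real processor would more likely release on a time window or an end-of-batch marker rather than a fixed count, but the caching structure is the same.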
Posted 04-20-2021 06:35 PM

POST
Hey Adam -- The Field Mapper processor was designed to flatten a schema to make the event record's data structure compatible with the ArcGIS REST Services API used when sending processed event data as JSON to a feature service's addFeatures or updateFeatures endpoint. You cannot use Field Mapper ... or any of the out-of-the-box processors ... to write data into a hierarchical structure. It appears, from your illustration, that the event data being ingested is already adapted using a flattened data structure (e.g. the cardinality of every event record attribute is '1' and the data type is Date, Double, Long, String, etc. ... not Group). I think you'll want to consider developing a custom outbound adapter which is able to take a flat data structure and adapt it into the hierarchical data structure expected by the Web Hook you want to receive data you've processed through a GeoEvent Service.
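To show what such a custom outbound adapter would have to do, here is a minimal Python sketch that rebuilds hierarchy from a flat record. The convention of encoding the nesting in attribute names with dots is my own assumption for illustration:

```python
def unflatten(flat, sep="."):
    """Rebuild a hierarchical structure from a flattened record whose
    keys encode the nesting with a separator, e.g. 'stop.coord.lat'."""
    nested = {}
    for key, value in flat.items():
        node = nested
        parts = key.split(sep)
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating groups
        node[parts[-1]] = value
    return nested

unflatten({"stop.coord.lat": 48.85, "stop.coord.lon": 2.35, "name": "A"})
# {'stop': {'coord': {'lat': 48.85, 'lon': 2.35}}, 'name': 'A'}
```

An outbound adapter built with the Java SDK would do this same reshaping before serializing the payload the Web Hook expects.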
Posted 04-20-2021 06:22 PM

POST
Hey Adam -- Serializing a JSON Object (e.g. the collection and structure of key:value pairs in between the outermost curly braces) as a String can be done. It's not easy. You might take a look at the community thread How to switch positions on coordinates which illustrates a series of Field Calculator processors, each using a replaceAll() function with regular expression pattern matching, to perform some manipulation on a received String. The goal in that thread is to take the received data string and turn it into a JSON string representation of a polygon geometry. So it's not serializing JSON with all of its embedded double-quotes, square brackets, curly braces, and commas that's the problem.

I would recommend taking a step back to think about what you're trying to do. It could be very difficult to extract values from event record attributes for title, text, type, title (potential duplicate attribute name!), and value and insert them into a properly formatted JSON string so that you can send the String as a JSON Object to an external receiver using a Push JSON to an External Website outbound connector. That higher-level challenge aside, the problem you're running into, I think, is that there's an embedded single quote in your data. If you remove that embedded single quote you can, as you suggested, wrap the whole serialized JSON string in a pair of single quotes and copy/paste it into a Field Calculator processor's expression. The bigger challenge is going to be designing a GeoEvent Service that accepts data, extracts values from that data, and computes derivative values to place into a hierarchical JSON structure of this complexity. -- RJ
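A small Python sketch of the single-quote problem (the attribute names are taken from the example above; stripping the single quote is just one option, escaping it is another):

```python
import json

payload = {"title": "Tower 'A'", "text": "status", "type": "alert", "value": 42}
serialized = json.dumps(payload)  # embedded double quotes are fine in JSON

# A single-quoted Field Calculator literal breaks on an embedded single
# quote, so remove (or otherwise escape) single quotes before wrapping
# the serialized string in its outer pair of single quotes:
literal = "'" + serialized.replace("'", "") + "'"
```

With the embedded quote gone, the literal can be pasted into a Field Calculator expression, and the body it wraps still parses as valid JSON.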
Posted 04-14-2021 06:27 PM

POST
Hello Dinesh -- The first part of your question is relatively easy. If you were to poll a feature service to obtain a set of feature records whose associated geometry is a polygon modeling a "project boundary", you could route each event record through a GeoTagger processor to enrich the event record with the (unique identifier) names of point geofences imported from a feature service providing the point locations of towers. What you have now is a comma delimited list of towers that fall within a project boundary.

The difficulty is three-fold. First, GeoEvent Server does not provide any sort of iterator to inspect individual items in a list. You don't know how many towers are expected to be in any given project boundary, so you cannot further enrich the "project boundary" event record with the "alert status" for each tower ... because you cannot iterate across the list of towers to query their alert status. You could use a Field Splitter processor from the GeoEvent Gallery to split a comma delimited list of towers in an enriched (geotagged) project boundary event record. This would produce separate event records, one for each tower in the project boundary. You could then enrich a second time to get the tower's alert status and add it to the project boundary event record. But each event record emitted from a Field Splitter is processed atomically (individually). This is the second challenge / limitation ... you cannot compare attributes from one event record with attributes in another event record. The third challenge, as I see it, is that there is no easy way to compare one tower's alert status to another and pick the greater of the two, especially when you don't know how many towers there are. I've never tried, for example, to design bitwise arithmetic into a GeoEvent Service to logically OR two bit sequences 0x0100 and 0x0010 to produce 0x0110 and then determine the highest-order bit set in the sequence. The logical operations GeoEvent Server supports are much more general (e.g. determining if an event record's string is empty or null to set a Boolean result to 'true' or 'false' and then comparing that 'true' / 'false' value against another Boolean to determine what to do with the singular event record being processed).

You might approach the problem using a GeoTagger as described above to get the names of point geofences in an area of interest, splitting the event record using a Field Splitter to produce several independent event records, and then using a Field Enricher to look up the alert status for each event record's associated tower. You could then use an Update a Feature output to have GeoEvent Server make a REST request on a feature service to update the alert for an entire area (or project boundary) ... but you'll need some sort of database trigger to catch that request and only allow it to proceed if the alert value is equal to or greater than the feature record's current alert value. Otherwise, as I'm sure you realize, GeoEvent Server's serialized event processing stream will simply overwrite the project boundary feature record's alert status with the most recently processed tower's status. Hope this information helps you think through the analysis you want to perform. -- RJ
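For what it's worth, the bitwise approach itself is simple in ordinary code, which highlights that the limitation lies in composing it from GeoEvent Server's general-purpose operators, not in the arithmetic. A Python sketch, with the bit-flag encoding of alert levels assumed:

```python
def combined_alert(statuses):
    """OR together per-tower alert bit flags and report the
    highest-order bit set -- the most severe alert in the collection."""
    merged = 0
    for s in statuses:
        merged |= s
    highest = 1 << (merged.bit_length() - 1) if merged else 0
    return merged, highest

combined_alert([0b0100, 0b0010])  # merged 0b0110; 0b0100 is most severe
```

This is exactly the OR-then-highest-bit computation described above, but it requires iterating over an arbitrary-length collection, which is the piece the event-at-a-time processing model doesn't provide.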
Posted 04-12-2021 04:49 PM

POST
The ArcGIS Data Store should be installed and configured on a machine other than the one used to run GeoEvent Server and the ArcGIS Server beneath which GeoEvent Server runs, especially when configuring the spatiotemporal big data store. Please refer to the following resources:

https://www.esri.com/content/dam/esrisites/en-us/media/technical-papers/architecting-the-arcgis-system.pdf (concept of workload separation)

https://enterprise.arcgis.com/en/get-started/latest/windows/additional-server-deployment.htm#ESRI_SECTION1_F7B03953E7864058970E591E9D2CE859 (system architecture illustrations which show the base enterprise deployment, GeoEvent Server, and the spatiotemporal big data store all on separate machines)
Posted 04-01-2021 08:19 PM

POST
@Ctal_GISsquad - Please see my reply to your question in the thread Converting between Date Formats. Clicking here should take you directly to my reply in that thread.
Posted 02-26-2021 05:06 PM

POST
If data you are receiving contains only a date value (e.g. 12/31/2021) without a time, this is not a pattern GeoEvent Server recognizes without your specifying an Expected Date Format the inbound adapter can use to figure out how to parse a String as a Date. You would have to specify a value like MM/dd/yyyy when configuring your inbound connector. The connector will apply this pattern to all event record attributes whose data type is Date in the GeoEvent Definition used by the inbound connector. When I send the String value "12/31/2021" to my GeoEvent Server with the Expected Date Format configuration described above, the Date value my inbound adapter constructs for me from the received string is 1640937600000. This is an epoch value used by Java to represent date/time values. GeoEvent Server uses millisecond epoch values, which is why the value has 13 digits rather than only 10. If I ask GeoEvent Server to cast its Date to a String I get a representation of the date which looks like "Fri Dec 31 00:00:00 PST 2021". Notice that the string has both a "date" and a "time" and includes the time zone for the expressed date/time value. In this case, the Date is expressed in the Pacific time zone. This is because an Expected Date Format pattern was specified -- which is required to handle an inbound string which does not match one of the few built-in expected patterns for a date/time value. The time zone handling is important to note because, in this case, the date/time is not in UTC units. GeoEvent Server assumes that the non-standard date/time must be a date/time local to my solution, so it uses the locale of my server (whose clock is configured to use the Pacific Time Zone).

Focusing on your question: if you are receiving a string value which is somehow being adapted to produce the epoch date value 1640908800000 (which could also be represented as "Thursday, December 30, 2021 4:00:00 PM GMT-08:00" or "Thu Dec 30 16:00:00 PST 2021") and you need to truncate the value to be simply "Thursday, December 30" ... you have a couple of options. I strongly recommend you make sure you understand how the received data is actually being adapted, and check to verify how client applications are representing the value in web map pop-ups or web forms. A client application will likely represent a Java epoch date/time value it receives, when querying a feature service for feature records for example, in whatever time zone the client web application is running. The value 1640908800000 already represents the date/time "Friday, December 31, 2021 12:00:00 AM" when a UTC value is assumed, and web clients are likely going to try to represent an assumed UTC date/time in whatever time zone the web application is running. If you were to add or subtract some number of milliseconds from the epoch to drop the "time" portion and keep only the whole "date" value, your effort is likely going to have unintended consequences client-side. You could use a RegEx pattern match on a value toString(myDate) to isolate the whole-hours portion of the "time", multiply it by 3,600,000 (which is 60 min x 60 sec x 1000 ms), and then subtract that from your Date using a Field Calculator. The eventual expression would be something like: myDate - (16 * 3600 * 1000) This assumes you are able to extract the value "16" from a string "Thu Dec 30 16:00:00 PST 2021" to know that you want to subtract 16 hours worth of milliseconds from the myDate attribute value.

You also might want to look at some of the supported expressions for the Field Calculator processor. The function currentOffsetUTC() specifically computes the millisecond difference between your GeoEvent Server's locale and UTC. Since my server is configured to use the Pacific Time Zone, which is currently -08:00 hours behind UTC, the currentOffsetUTC() function returns a value of -28800000, which is -(8 hours x 60 minutes x 60 seconds x 1000 milliseconds). You might scale the computed value by some constant when performing date/time adjustment arithmetic or, more likely, shift an epoch Date from an assumed local time zone so that the value represents a UTC value. The advantage of using currentOffsetUTC() is that the function automatically recognizes changes in daylight saving time, so you don't have to rely on memory to update GeoEvent Services twice a year when a fixed constant value you might have hard-coded in an expression no longer reflects the observance of daylight saving time. See Also: What time is it? Well That Depends...
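The offset arithmetic can be checked outside GeoEvent Server with standard library calls. This Python sketch approximates what currentOffsetUTC() returns; it is DST-aware because the offset comes from the running process's time zone settings rather than a hard-coded constant:

```python
import time

def current_offset_utc_ms():
    """Millisecond offset of the local time zone from UTC.
    On a server in the Pacific Time Zone (outside daylight saving)
    this returns -28800000, i.e. -(8 h x 60 min x 60 s x 1000 ms)."""
    if time.daylight and time.localtime().tm_isdst:
        return -time.altzone * 1000   # DST offset currently in effect
    return -time.timezone * 1000      # standard-time offset

current_offset_utc_ms()
```

Subtracting this value from a local-time epoch shifts it to UTC, mirroring the adjustment arithmetic described above.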
Posted 02-26-2021 04:34 PM

POST
Hello Shital, There is no problem including multiple Field Calculator processors in a single GeoEvent Service. You sometimes have to "chain" a series of Field Calculators together to compute intermediate values and then perform calculations on those intermediate values. An example of this can be seen in the GeoNet thread How to switch positions on coordinates. If you know that the data you are receiving is in epoch seconds, you can scale the received Long integer value by multiplying by 1000 and write the computed result to a field whose type is Date.

For example, illustrated below is a GeoEvent Service whose input receives a single long integer value. The GeoEvent Definition used by the input has two additional fields, another Long and a Date, whose values are adapted as null when no values are provided in the received data structure. The first Field Calculator multiplies dt_seconds (a Long) by 1000 and writes the result into a field dt_long (also a Long). The second Field Calculator uses the exact same expression but writes the result into a field dt_date, which forces the Field Calculator to perform an implicit conversion from long integer to Date. I've chosen to show the input as JSON received over REST and the output as delimited text as that makes it clear what the data values are. Input: [{"dt_seconds": 1613600457}] Output: JsonReceiver,1613600457,1613600457000,2021-02-17T14:20:57.000-08:00 Note that the name of the GeoEvent Definition used by all nodes in the GeoEvent Service is JsonReceiver (the TEXT outbound adapter prepends that to the comma delimited values it produces). Also, the TEXT output can be configured to format Date values as ISO 8601 (as shown). You can use https://www.epochconverter.com to convert either dt_seconds or the computed dt_long to show that either can be used by a system to represent the date/time shown formatted as an ISO 8601 string. I hope this helps -- RJ
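The same seconds-to-milliseconds scaling can be verified with a few lines of Python using the value from the example (rendered here in UTC; the example output above shows the same instant in PST, UTC-08:00):

```python
from datetime import datetime, timezone

dt_seconds = 1613600457           # epoch value in whole seconds
dt_long = dt_seconds * 1000       # GeoEvent-style millisecond epoch

# Both values represent the same instant; formatted as ISO 8601 in UTC:
iso = datetime.fromtimestamp(dt_long / 1000, tz=timezone.utc).isoformat()
# '2021-02-17T22:20:57+00:00'  (= 2021-02-17T14:20:57-08:00 in PST)
```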
Posted 02-17-2021 03:04 PM