POST
Hello Nathan, I have confirmed with our developer who maintains the various processor implementations that what you are trying to do with the Union processor will not work. Currently the ${field_name} construct is only supported as part of a filter expression, or when using the GeoTagger processor. What we need, but do not have, is a Field Enricher capable of matching multiple feature records given a single primary key (e.g. a one-to-many enrichment). This is an enhancement request that has been in the product backlog for quite a while. There is a separate enhancement request to refactor all of the Geometry Overlay processors (e.g. Buffer, Convex Hull, Symmetric Difference, Union, etc.) to allow dynamic substitution of event attribute field values.

Thinking through the problem, I could imagine configuring a GeoTagger processor to logically "OR" two mutually exclusive conditions such as:

INTERSECTS ${order_id}/.*
DISJOINT ${order_id}/.*

That's a start; now I have a comma-separated list of the names of all geofences matching the received event record's order identifier. I could then use a Field Splitter processor (a custom processor available from our Gallery) to split the comma-separated list of geofence names to obtain a series of individual event records. Each event record would have the received ${order_id} and the name of exactly one geofence associated with the ${order_id} of interest. Assuming that the ${meter_id} used as each geofence's name was unique, I could then use a Field Enricher to enrich each event record in the series with the geometry of the feature record used to import the named geofence in the first place.

That approach requires quite a lot of processing, though, and probably won't scale if you have a high volume of geofences or if the velocity at which you are receiving event records is more than just a few event records per second. And it still doesn't give you what you were looking for originally -- a single geometry representing a UNION of each point geometry associated with the ${order_id} attribute value originally received.

Might it be an option for you to represent the locations of each ${meter_id} as a multi-point geometry rather than several individual point geometries? If each ${meter_id} were a vertex of a multi-point geometry, then the geometry you get back when enriching an event record using ${order_id} to look up a single multi-point feature would have the coordinates for each ${meter_id}.

Without changing your data model, I don't think you'll be able to do what you want using the current release of GeoEvent Server, unless you develop a custom processor using the GeoEvent Server Java SDK. I would probably recommend looking at developing a custom Python script or Desktop GP Tool rather than trying to force GeoEvent Server to do what you want to do. Unfortunately the only two simple approaches I can think of, using GeoEvent Server, both have open / pending enhancement requests.
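If a standalone script is an option, here is a minimal sketch of the multi-point idea, done entirely outside GeoEvent Server. The file name and column names (order_id, meter_id, x, y) are illustrative assumptions, not taken from your data, and the shapely library is a third-party dependency:

# Build one multi-point geometry per order_id from individual meter points.
# A sketch only -- file name and columns are assumed for illustration.
import csv
from collections import defaultdict

from shapely.geometry import MultiPoint  # pip install shapely

points_by_order = defaultdict(list)

with open("meters.csv", newline="") as f:
    for row in csv.DictReader(f):
        points_by_order[row["order_id"]].append((float(row["x"]), float(row["y"])))

# Each MultiPoint is effectively the UNION of that order's meter locations.
for order_id, pts in points_by_order.items():
    print(order_id, MultiPoint(pts).wkt)

- RJ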
08-27-2018 06:24 PM

POST
Hey Dale - Following up on your questions and additional information you sent to Esri Technical Support, hoping the information below will be helpful to someone else in the future. Please also review Using a partial GeoEvent Definition to update feature records, which includes comments with illustrations showing how you might go about configuring your input to only poll feature records you have not marked as having already been processed.

>> Some of our data tables are large, > 2.5 million records...so we'd like to avoid the workarounds mentioned in the blog post that involve retrieving the whole data set, filtering it etc.

There's one clarification I'd like to make, and an option you might consider. The workaround I suggest isn't receiving the whole feature record set every poll and filtering it down to those records which match some specified criteria. The GeoEvent Server input is using the criteria you specify as the input's Query Definition parameter to construct a WHERE clause the underlying RDBMS will use to determine which records to include in the record set returned to the client's query (GeoEvent Server is acting as a client in this context). So the input you configure is still using a WHERE clause like it's doing now, polling for feature records where date/time is greater than some key value. The difference is that the input will query for feature records whose hasBeenProcessed attribute is clear (e.g. '0'), and the GeoEvent Service into which you incorporate this input will be responsible for setting the hasBeenProcessed attribute to '1' for any record it routes through an event processing and notification workflow.

Here's how you configure the solution, from within GeoEvent Server, using only out-of-the-box capability. First, you have to add a new attribute to your data's schema, something like hasBeenProcessed, and re-publish the feature service. When the feature service is published, make sure that new field is configured to apply a default value of zero when feature records are created (so existing input web-forms or scripts don't need to be modified). Of course you'll have to explicitly set the new attribute value to zero if you're importing existing feature records into the new RDBMS table. Then configure a GeoEvent Server input specifying its Query Definition parameter as hasBeenProcessed=0. All existing records will be polled on the first interval, assuming you're setting all the unprocessed feature records' attribute to zero, but as records have their hasBeenProcessed flag raised, they won't be included in responses to future queries.

Now, whatever GeoEvent Service you're using to process the polled feature records, have it implement the business logic needed to verify that a record is complete and ready for processing. That's an advantage to this approach -- if your feature record polling relied only on date/time change and you found that one or more fields were null/empty and the record wasn't ready for processing, you would have to somehow force an update to the date/time field (probably by editing the feature record) in order for that feature record to be included in a later query / poll request sent by the GeoEvent Server input.

You'll add a short event processing branch to your GeoEvent Service which includes a Field Mapper processor and a Field Calculator processor. The Field Mapper reduces the event schema to exactly two fields -- the TRACK_ID field and the hasBeenProcessed field. The Field Calculator takes the field-mapped record and sets the hasBeenProcessed attribute value to '1'. You then send this event record to an Update a Feature output. You're using GeoEvent Server to update exactly one attribute of the same feature record set it is polling as input. The flagged feature record will now be excluded from the result feature set when the input next requests a set of feature records to process. (A concrete sketch of this poll-and-flag pattern appears further below.)

The approach of setting a hasBeenProcessed attribute is really quite powerful. It enables you to use GeoEvent Server filtering and processing to QA/QC event records ingested from a set of feature records before proceeding with event record notification.

>> ...we'd really like that to happen in the database instead.

Here's something you might consider: you could register a spatial view as a feature service and allow GeoEvent Server to always poll for all features. As I indicated above, I do believe that requests made by a GeoEvent Server input are appropriately leveraging the power of the RDBMS to select records for processing. The input is not querying the full record set and filtering to discard records -- it is making a request for any feature records which satisfy a WHERE clause, receiving and then processing only those records. But the input *is* limited to what can be expressed in a simple WHERE clause.

The trick when using a spatial view is really subtle. The database view is responsible for selecting database records which match some configured criteria. The view might execute a tabular JOIN or use SELECT statements to retrieve database records with relative date/time values -- such as any record updated within the last five minutes. A GeoEvent Server input cannot do this using a simple WHERE clause. Using a spatial view, the RDBMS has the burden of preparing a (possibly highly dynamic) view of its data; GeoEvent Server is simply saying "give me whatever records you've got" when it polls the feature service endpoint it's been configured to query.

>> It seems like the only thing that's missing is we'd need to save the "most recent successful poll" date after each poll, and restore it on service restart. Is that correct, or are we missing something?

Nope, there's no confusion on your part here. It would be ideal if the GeoEvent Server input wrote the key value it was using either to a system file or into the ZooKeeper distributed configuration store. We considered this capability but rejected it because we were uncomfortable with how often the GeoEvent Server transport would have to update this key value. At a minimum the updates would introduce latency into the event record ingest workflow. There's also a consideration as to where exactly the key value should be persisted, and how a GeoEvent Server administrator would locate and clear that key value if, indeed, they wanted to clear it so the input would once again poll for all feature records. Administrators would have to understand that every Poll an ArcGIS Server for Features input they configured was potentially persisting a key value that was affecting which feature records were included in a response to a query ... and that even something as heavy as restarting GeoEvent Server would not clear some stubborn setting, because persistence of the key value was, by design, rendering the input impervious to system restart.
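As promised above, here is a minimal sketch of the poll-and-flag pattern, expressed as raw requests against the standard ArcGIS REST API query and applyEdits operations. The service URL and field names are illustrative assumptions, and error handling is omitted:

# Query unprocessed records, then flag each one so it drops out of the
# next poll. Assumes an integer hasBeenProcessed field on the layer.
import json
import requests

LAYER = "https://example.com/arcgis/rest/services/Orders/FeatureServer/0"

features = requests.get(f"{LAYER}/query", params={
    "where": "hasBeenProcessed = 0",  # what the input's Query Definition becomes
    "outFields": "*",
    "f": "json",
}).json().get("features", [])

for feature in features:
    attrs = feature["attributes"]
    # ... process / route the record through your notification workflow ...

    # Raise the flag so the record is excluded from future query results.
    update = [{"attributes": {"OBJECTID": attrs["OBJECTID"], "hasBeenProcessed": 1}}]
    requests.post(f"{LAYER}/applyEdits",
                  data={"updates": json.dumps(update), "f": "json"})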
>> Can we extend from the existing ESRI transports and adaptors, or do we need to work directly off of the bare ESRI base InboundTransport and InboundAdaptor base classes, implementing the rest of the functionality ourselves as well?

Unfortunately, no, the base out-of-the-box transports and adapters cannot be extended. Their source code is not provided, so you would have to re-implement your own FeatureService inbound transport and (probably) your own "Esri Feature" JSON inbound adapter in order to configure your own Poll an ArcGIS Server for Features connector. That is more work than I am generally comfortable suggesting a customer push onto their Java developers.

- - -

I can appreciate that some of your feature record sets are quite large -- in excess of 2.5M records. Hopefully their schema is not locked and you have the freedom to add an attribute field which GeoEvent Server can use as a flag to say: "I've processed that one, don't give it to me again." - RJ
07-27-2018 06:46 PM

POST
Hello Jack - Have you tried leaving the WKID property unspecified (deleting the default 4326 value from the field) and then importing feature records from a feature service to establish geofences?

I don't have a feature set or feature service I can easily use to test this. Everything I have is associated with a well-known coordinate system, so the feature service's spatial reference has an associated WKID. However, I can tell you that if you delete all existing geofences, then stop and restart GeoEvent Server to force the in-memory quad-tree structure to be destroyed, the first geofences you import should establish the spatial reference for all geofences loaded into the registry. Known limitations are that you cannot have geofences associated with more than one spatial reference, and you have to both delete existing geofences and restart GeoEvent Server if you want to change the spatial reference being used for your geofences.

I do not think it is possible to specify WKT when importing feature records as geofences to have GeoEvent Server maintain geometries with something other than a well-known coordinate system. The WKID parameter is intended to allow you to specify that GeoEvent Server should project the feature records it polls from the feature service for you, on the fly, to a well-known coordinate system as the feature records are imported. I would be interested to observe the behavior if the spatial reference of the feature service used to import the feature records had no associated WKID. If the default value 4326 is removed, I'll bet that GeoEvent Server will fail over and project the geometries to 4326 anyway (from the custom spatial reference).

You should be able to see exactly what the geometry is for a geofence, once it has been imported, by clicking the pencil icon on the Site > GeoFences page to edit the geofence. The geofence's geometry will display as a JSON string which should include the spatial reference.
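For reference, the geometry displays using the standard Esri JSON format, so for a polygon geofence you should expect to see something along these lines (the coordinates here are made up for illustration):

{
  "rings": [[[-117.19, 34.05], [-117.18, 34.05], [-117.18, 34.06], [-117.19, 34.05]]],
  "spatialReference": {"wkid": 4326}
}

Checking the spatialReference value in that string is the quickest way to confirm which coordinate system your geofences actually ended up in. - RJ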
07-27-2018 06:32 PM

BLOG
Hey Scott - I'm not sure I follow what you're saying. Yes, there are changes in how the logging configuration is written out to the org.ops4j.pax.logging.cfg file in the C:\Program Files\ArcGIS\Server\GeoEvent\etc folder ... but the com.esri.ges.transport.featureService.FeatureServiceOutboundTransport logger hasn't changed ... that's still the logger you should specify when requesting DEBUG level logging for the outbound feature transport. Maybe you are not seeing that logger because the IntelliSense has not observed any logged messages in response to event records sent to an outbound feature service transport?

Here's a trick I've used to help list the loggers I want to "watch". Configure an input, output, and GeoEvent Service to do what I think I want done, then turn on DEBUG logging on the 'Root' node long enough to send a few records through the GeoEvent Service. Set logging back to INFO for the 'Root' node, then click Settings again on the View Log Messages window and begin typing the component specification for a particular logger (e.g. com.esri.ges.something). The IntelliSense now helps you by displaying the loggers associated with messages that have actually been written to the karaf.log while logging was set to DEBUG for 'Root'. It's like I have to allow a few messages of type INFO / WARN / DEBUG to be written before the IntelliSense will help me by listing loggers it knows about.
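If you would rather set the logger directly in the configuration file, entries in org.ops4j.pax.logging.cfg follow the usual log4j properties convention, so the line would look something like the sketch below. Treat this as an assumption -- the exact property syntax can vary between releases, so prefer setting the logger through the GeoEvent Manager UI when you can:

log4j.logger.com.esri.ges.transport.featureService.FeatureServiceOutboundTransport = DEBUG

- RJ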
07-27-2018 05:15 PM

POST
Julie - Apologies for the confusion. When I suggested using a flattened structure, that was after getting a working hierarchical GeoEvent Definition to ingest the data. The bug we're trying to work around is that a hierarchical event definition, created by an inbound connector's adapter, cannot be edited after it has been created. You still have to use the hierarchical event definition with the configured input to ingest the data. After that you can use a Field Mapper to map the received event record to the simpler (flattened) structure, which you should also be able to edit within the GeoEvent Manager web application. You have to honor the structure used by the data provider to ingest the data; you cannot configure an input to ingest the data in a structure you prefer to use. - RJ
07-27-2018 04:46 PM

POST
Apologies that no one replied to your question. Glad you got it sorted, and thank you for posting a summary. I'm not sure why the input or the GeoEvent Service you published had to be deleted and re-created. I have seen this behavior back at the 10.5 (10.5.1) release when I imported a configuration from an XML file and then edited an input or output to change its name / label ... but it doesn't sound like you were doing anything like that.

A debugging technique you might want to note for the future is that you can set the GeoEvent Server logging to DEBUG on 'Root' for a very short period of time to try and capture what exactly is going on. The debug logs are going to be very verbose, so you'll have to open the log file in a text editor rather than relying on the logging interface in the GeoEvent Manager web application. You can find the karaf.log beneath the following folder: C:\Program Files\ArcGIS\Server\GeoEvent\data\log

In this case you would be looking for messages from the following logger: com.esri.ges.transport.featureService.FeatureServiceInboundTransport

The debugging technique would be to turn off (e.g. stop) all inputs, GeoEvent Services, and outputs you have running, delete the logs (to clear out any old messages not related to this debug session), then set logging to DEBUG on 'Root'. Start the input and check the logs for anything that looks relevant. Maybe then start the GeoEvent Service and output and check the logs again. Be sure to set logging back to INFO for 'Root' as soon as you can, so you do not continue to collect messages in the log file (particularly for every request over the HTTP wire -- there will be a lot of those).

In this case the DEBUG logs for the FeatureServiceInboundTransport were a little help; setting the logging to DEBUG for only the one logger made it reasonable to use the GeoEvent Manager's log viewer. What I did for the test was leave the input's default query expression set to 1=1 to confirm that feature records were being polled. Then I changed the query expression to look for only records whose gf_imported attribute was equal to zero, so that I could see the warning that the query expression excluded all available features.

For future issues, please consider submitting an incident with Esri Technical Support. Esri staff work to try and post responses to questions on the community forum, but forum submissions do not always get answered. An incident submitted to Esri Technical Support will always be assigned, and someone will work to address the issue. (You might also consider opening an incident referring to a forum item you have posted to request a forum response -- that would give your issue maximum exposure.) - RJ
07-26-2018 05:41 PM

POST
Hey Juliano - I think the problem is that the JSON being returned by OpenSky organizes the "states" not as an array of named objects, but as an array of arrays. The nested arrays are not named elements, so GeoEvent Server is unable to create a GeoEvent Definition for the data's structure.

When you suggest that you want to access the data as states[5], I don't think you are indexing to the value you intend to access. The reference states[5] does not access the value -111.4485 ... it references the entire array of values at the sixth position within "states" (zero-based indexing). When I queried the https://opensky-network.org/api/states/all endpoint just now, I got 5534 arrays organized within the states[ ] array.

JSON allows a variable number of unnamed arrays to be nested within an array ... but unless every internal array has a name, GeoEvent Server won't be able to create a GeoEvent Definition. Every event attribute in an event definition must have a name. Hence the following -- while completely valid JSON -- cannot be ingested by GeoEvent Server:

{"myLists":[["A","B","C"],["D","E","F"],["G","H","I"]]}

If the data were reorganized to give each interior array a name:

{"myLists":[
{"set1":["A","B","C"]},
{"set2":["D","E","F"]},
{"set3":["G","H","I"]}
]
}

... then GeoEvent Server would be able to create a GeoEvent Definition. Each interior array has to be part of a named element for GeoEvent Server to be able to adapt the data. A GeoEvent Definition created from the JSON above would indicate that myLists is an array of several elements, with each named element containing an array of several string values. The article w3schools.com - JSON Arrays does a good job of explaining this. You also might take a look at a blog I just posted: https://community.esri.com/community/gis/enterprise-gis/geoevent/blog/2018/07/25/json-data-structures-working-with-hierarchy-and-multicardinality

Hope this information helps (sorry that no one replied to you to date)
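If you cannot get the provider to change the feed, another option is to reshape the response yourself before handing it to GeoEvent Server. Here is a minimal sketch in Python that names only the first several positions of each OpenSky state vector (see the OpenSky API documentation for the full field order; everything here is an assumption for illustration):

# Reshape OpenSky's unnamed state-vector arrays into named JSON objects.
# Only the first seven positions are mapped; extend the list as needed.
import json
import requests

FIELDS = ["icao24", "callsign", "origin_country", "time_position",
          "last_contact", "longitude", "latitude"]

data = requests.get("https://opensky-network.org/api/states/all").json()

# zip() stops at the shorter sequence, so unmapped positions are dropped.
records = [dict(zip(FIELDS, state)) for state in data["states"]]

# Each record is now a flat, named object GeoEvent Server can adapt, e.g.
# {"icao24": "...", ..., "longitude": -111.4485, "latitude": ...}
print(json.dumps({"states": records[:3]}, indent=2))

- RJ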
07-26-2018 04:57 PM

BLOG
In the blog JSON Data Structures - Working with Hierarchy and Multicardinality I wrote about how data can be organized in a JSON structure, how to recognize data hierarchy and cardinality from a GeoEvent Definition, and how to access data values given a hierarchical, multi-cardinal data structure. In this blog, we'll explore XML, another self-describing data format which -- like JSON -- has a specific syntax that organizes data using key/value pairs.

XML is similar to JSON, but the two data formats are not interchangeable. What does XML support that JSON does not? One difference is that XML supports both attribute and element values, whereas JSON really only supports key/value pairs. With JSON you generally expect data values will be associated with named fields. Consider the two examples below (credit: w3schools.com).

<person sex="female">
<firstname>Anna</firstname>
<lastname>Smith</lastname>
</person>

The XML in this first example provides information on a person, "Anna". Her first and last name are provided as elements, whereas her gender is provided as an attribute value.

<person>
<sex>female</sex>
<firstname>Anna</firstname>
<lastname>Smith</lastname>
</person>

The XML in this second example provides the same information, except now all of the data is provided using element values. Both XML structures are valid, but if you have any influence with your data provider, it is probably better to avoid attribute values and instead use elements exclusively when ingesting XML data into GeoEvent Server. The inbound XML adapter can successfully translate XML which contains both attribute values and node element values, with some limitations we'll look at shortly.

Here's a little secret: GeoEvent Server does not actually handle XML data at all. GeoEvent Server uses third-party libraries to translate the XML it receives to JSON. The JSON adapter is then used to interpret the data and create event records from the translated data. Because JSON does not support attribute values, all data values in an XML structure must be translated as elements.

The first line of an XML document typically declares the version and encoding being used; the libraries GeoEvent Server uses to translate XML to JSON really like seeing this declaration as part of the XML data. The translated JSON organizes each event record's data as a separate element in a JSON array. In the example used for the original illustration, data for employee "James Albert (Emp #1234)" was represented within its own set of curly braces as a single JSON element, one of three employee elements in the array.

Sometimes XML will include non-visible characters such as a BOM (byte-order mark). If the XML you are trying to ingest is not being recognized by an input you've configured, try copying the XML into a text editor which doesn't mask the sort of characters you might find at the beginning or end of a document. Saving the raw text after stripping out any hidden characters should help create a cleaner XML document.

Other limitations to consider when ingesting XML

There are several other limitations to consider when ingesting XML data into GeoEvent Server. Sometimes a block of JSON might pass an online JSON validator such as the one provided by JSON Lint, but GeoEvent Server's inbound JSON adapter is not able to adapt the JSON to create an event record for processing. Esri Feature JSON and GeoJSON are two examples which require special handling of arrays that don't have keys associated with them.

Mixing and Matching Attributes and Element Values

The following XML cannot be parsed:

<place>
  <latitude units="Decimal Degrees">45.125</latitude>
  <longitude units="Decimal Degrees">-115.375</longitude>
  <height units="Long Integer">10</height>
</place>

The units attribute in each node needs to be pushed down to become a child element beneath its parent node, and when that happens the parent node's text value (45.125, -115.375, and 10 above) is lost. There are two ways to work around this known limitation. You could leave each parent node's value null and instead incorporate all of the data as node-level attributes:

<place>
  <latitude units="Decimal Degrees" value="45.125"></latitude>
  <longitude units="Decimal Degrees" value="-115.375"></longitude>
  <height units="Long Integer" value="10"></height>
</place>

XML requires node-level attributes to be enclosed in double quotes. When tailoring your GeoEvent Definition you can specify that the latitude and longitude values be adapted as Double and the height be adapted as a Long, to avoid bringing the data in as literal strings.
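Following the translation behavior described above -- attributes pushed down to become named child values -- the attribute-only variant would come across to the JSON adapter roughly like this (a sketch of the expected shape, not captured output):

{
  "place": {
    "latitude":  {"units": "Decimal Degrees", "value": "45.125"},
    "longitude": {"units": "Decimal Degrees", "value": "-115.375"},
    "height":    {"units": "Long Integer", "value": "10"}
  }
}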
Alternatively, you could wrap each node's value in a nested tag, explicitly making it a child. When the parent node's attributes are pushed down to become children, they will be siblings of the formal child elements named value:

<place>
  <latitude units="Decimal Degrees"><value>45.125</value></latitude>
  <longitude units="Decimal Degrees"><value>-115.375</value></longitude>
  <height units="Long Integer"><value>10</value></height>
</place>

Mixing and Matching Data Element Types

Consider the following block of XML data which includes data on both "vehicles" and "personnel".

<?xml version="1.0" encoding="utf-8"?>
<data>
<vehicles>
<vehicle make="Ford" model="Explorer">
<license_plate>4GHG892</license_plate>
</vehicle>
<vehicle make="Toyota" model="Prius">
<license_plate>6KLM153</license_plate>
</vehicle>
</vehicles>
<personnel>
<person fname="James" lname="Albert">
<employee_number>1234</employee_number>
</person>
<person fname="Mary" lname="Smith">
<employee_number>7890</employee_number>
</person>
</personnel>
</data>

The self-describing nature of the XML makes it apparent to a reader which data elements are which, but an input in GeoEvent Server will have trouble identifying the multiple occurrences of the different data items if the inbound adapter's XML Object Name property is not specified. In testing, the very first time the XML with the combination of "vehicles" and "personnel" was received and written out as JSON to a system text file, I observed only one person and one vehicle written to the output file. Worse yet, without changing the generated GeoEvent Definition or any of the input connector's properties, sending the exact same XML a second time produced an output file with "vehicles" and "personnel" elements that were empty. The blog JSON Data Structures - Working with Hierarchy and Multicardinality suggests that, at the very least, the cardinality specified by the generated GeoEvent Definition is not correct. The generated GeoEvent Definition also implies a nesting of groups within groups, which won't work once an XML Object Name is specified.

Let's explore how you might work around this issue using the configurable properties available in GeoEvent Server. First, ensure the XML input connector specifies which node in the XML should be treated as the root node by setting the XML Object Name property accordingly (e.g. "vehicles"). Second, verify the GeoEvent Definition has the correct cardinality for the data sub-structure beneath the specified root node. By configuring these properties accordingly, GeoEvent Server will only consider data within a sub-structure found beneath a "vehicles" root node and should make allowances that the sub-structure may contain more than one "vehicle".

With this approach, there are two ramifications you might want to consider. First, the inbound adapter is literally throwing half of the received data away by excluding data from any sub-structure found beneath the "personnel" nodes. This can be addressed by making a copy of the existing Receive XML on a REST Endpoint input and configuring this copy to use "personnel" as its XML Object Name. The copied input should also use a different GeoEvent Definition -- one which specifies "person" as an event attribute with cardinality Many and the attributes of a "person" (rather than a "vehicle"). Second, the event record being ingested has multiple vehicles (or people) as items in an array. You'll likely want to process each vehicle (or person) as an individual event record. To address this, it's recommended you use a processor available on the ArcGIS GeoEvent Server Gallery, specifically the Multicardinal Field Splitter Processor. There are two different field splitter processors provided in the download, so make sure to use the processor that handles multicardinal data structures. A Multicardinal Field Splitter Processor, added to a GeoEvent Service, will clone event records it receives and split them so that each record output has only one vehicle (or person). Each event record output from the Multicardinal Field Splitter Processor includes the index at which the element was found in the original array.

Conclusion

The examples I've referenced in this blog are obviously academic.
There's no good reason why a data provider would mash up people and vehicles this way in the same XML data structure. However, you might come across data structures which are not homogeneous and need to use one or more of the approaches highlighted in this blog to extract a portion of the data out of a data structure. Or you might need to debug your input connector's configuration to figure out why attribute or element values you know to exist in the XML being received are not coming through in the event records being output. Or maybe you expect multiple event records to be ingested from the data you're receiving and end up observing only a few -- or maybe only one -- event records being ingested. Hopefully the information provided will help you address these challenges when you encounter them.

To summarize, below are the tips I highlighted in this article:

- Use the GeoEvent Definition as a clue to the hierarchy and cardinality GeoEvent Server is using to define each event record's structure.
- Specify the root node or element when ingesting XML or JSON; don't let the inbound adapter assume which node should be considered the root. If necessary, specify an interior node as the root node so only a subset of the data is actually considered.
- Avoid XML data which uses attributes. If you must use XML data with attributes, know that an attempt will be made to promote these to elements when the XML is translated to JSON.
- Encourage your data providers to design data structures whose records are homogeneous. This can run counter to database normalization instincts, where data common to all records is included in a sub-section above each of the actual records. Sometimes simple is better, even when "simple" makes individual data records verbose.
- Make sure the XML you ingest includes a header specifying its version and encoding -- the libraries GeoEvent Server is using really like seeing this metadata. Also, watch out for hidden characters which are sometimes present in the data.
07-25-2018 06:47 PM

BLOG
Check for understanding

If the above blog content made sense, let's complicate the data we expect to receive by having the maintenance date/time values for each item reported as a list of values (rather than a single value). How would this change the GeoEvent Definition? Does the "calibrated" attribute's type change from Date to Group? Is every record in the "items" list required to have the same number of reported maintenance visits? Is it possible to extract the most recent calibration date from the list of date/time values for event notification?

Here's a look at the proposed changes to the block of JSON you might expect to receive:

{
"items": [{
"id": 3201,
"status": "online",
"calibrated": [1521135120000, 1521136416000, 1521137712000],
"location": {
"latitude": -117.125,
"longitude": 31.125
}
},
{
"id": 5416,
"status": "offline",
"calibrated": [1521638100000],
"location": {
"latitude": -113.325,
"longitude": 33.325
}
},
{
"id": 9823,
"status": "error",
"calibrated": [1522291320000, 152229261600],
"location": {
"latitude": -111.625,
"longitude": 35.625
}
}
]
}

Only a couple of changes need to be made to the GeoEvent Definition to accommodate the JSON data illustrated above. First, the cardinality of the "calibrated" event attribute needs to be changed from One to Many (since its value is now an array or list rather than a single date/time value). The data type, however, does not change. The values in the array can still be treated as Date; the element is not a Group element. Second, you would need to remove the TIME_START tag from the element. GeoEvent Server can handle lists of values as an array without requiring that every value in the array be a name/value pair, but tags can only be applied to named values, and the individual values in the new "calibrated" arrays do not have names.

One advantage to handling each "calibrated" element as a variable-length array is that individual records are not required to have the same number of maintenance date/time values recorded. GeoEvent Server does require that every value in the array be the same type (e.g. Long, String, Date, ..., or Group), but that's not an issue in this case as all the values are epoch long integer representations of date values in milliseconds. In the JSON above, sensor 5416 has only been calibrated once, but the other two sensors have been calibrated multiple times.

The disadvantage to handling each "calibrated" element as a variable-length array is that there is no easy way to get the 'most recent' calibration date. It appears, looking at the data, that the last value in each list is the most recent calibration date. But you cannot use a Field Mapper Processor in GeoEvent Server, for example, to extract the last value from each event record's "calibrated" list -- you don't know how many values will be in each array, so you cannot use an index to access a particular value. None of the processors available out-of-the-box with GeoEvent Server provide the ability to iterate over a list of values, or provide a count of the number of values in a list, so you cannot identify the last value in any particular list. Assuming extraction of the most recent calibration date was required, your best option in this case would be to use the ArcGIS GeoEvent Server SDK to develop a custom processor which implements a list iterator, or work with the data provider to see if the data schema could be modified to provide the most recent calibration date as a data value whose cardinality is One rather than Many.
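If modifying the feed upstream is an option, the reduction is trivial to do before the JSON ever reaches GeoEvent Server. A minimal sketch, assuming the block above is saved locally as items.json:

# Reduce each item's "calibrated" list to its most recent value so the
# attribute's cardinality becomes One again. Epoch milliseconds compare
# numerically, so max() yields the latest calibration date.
import json

with open("items.json") as f:  # assumed local copy of the JSON above
    data = json.load(f)

for item in data["items"]:
    item["calibrated"] = max(item["calibrated"])

print(json.dumps(data, indent=2))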
07-24-2018 06:26 PM

BLOG
When speaking with customers who want to get started with ArcGIS GeoEvent Server, I'm often asked if GeoEvent Server has an input connector for a specific data vendor or type of device. My answer is almost always that we prefer to integrate via REST, and the question you should be asking is: "Does the vendor or device offer a RESTful API whose endpoints a GeoEvent Server input can be configured to query?"

Ideally, you want to be able to answer two integration questions: How is the data being sent to a GeoEvent Server input? How is the data formatted; what does the data's structure look like? For example, an input can be configured to accept data sent to a GeoEvent Server hosted REST endpoint. That answers the first question -- integration will occur via REST, with the vendor sending data as an HTTP/POST request to a GeoEvent Server endpoint. The second question, how the data is formatted, is the focus of this blog.

What does a typical JSON data record look like?

Typically, when a data vendor sends event data formatted as JSON, there will be multiple event records organized within a list such as this:

{
"items": [{
"id": 3201,
"status": "",
"calibrated": 1521135120000,
"location": {
"latitude": -117.125,
"longitude": 31.125
}
},
{
"id": 5416,
"status": "offline",
"calibrated": 1521638100000,
"location": {
"latitude": -113.325,
"longitude": 33.325
}
},
{
"id": 9823,
"status": "error",
"calibrated": 1522291320000,
"location": {
"latitude": -111.625,
"longitude": 35.625
}
}
]
}

There are three elements, or objects, in the block of JSON data illustrated above. It would be natural to think of each element as an event record with its own "id", "status", and "location". Each event record also has a date/time the item was last "calibrated" (expressed as an epoch long integer in milliseconds).

What do we mean when we refer to a "multi-cardinal" JSON structure?

The JSON data illustrated above is multi-cardinal because the data has been organized within an array. We say the data structure is multi-cardinal because its cardinality, in the mathematical sense of the number of elements in a group, is more than one. The array is enclosed within a pair of square brackets: "items": [ ... ]

If the array were a list of simple integers the data would look something like: "values": [ 1, 3, 5, 7, 9 ]

The data elements in the illustration above are not simple integers. Each item is bracketed within curly braces, which is how JSON identifies an object. For GeoEvent Server, it is important both that the array have a name and that each object within the array have a homogeneous structure, meaning that every event record should, generally speaking, use a common schema or collection of name/value pairs to communicate the item's data.

What do we mean when we refer to a "hierarchical" JSON structure?

The data elements in the array are themselves hierarchical. Values associated with "id", "status", and "calibrated" are simple numeric, string, or Boolean values. The "location" value, on the other hand, is an object which encapsulates two child values -- "latitude" and "longitude". Because "location" organizes its data within a sub-structure, the overall structure of each data element in the array is considered hierarchical. It should be noted that the coordinate values within the "location" sub-structure can be used to create a point geometry, but "location" itself is not a geometry. This is evident by examining how a GeoEvent Definition is used to represent the data contained in the illustrated block of JSON.

Different ways of viewing this data using a GeoEvent Definition

In GeoEvent Server, if you were to configure a new Receive JSON on a REST Endpoint input, leaving the JSON Object Name property unspecified, selecting to have a GeoEvent Definition created for you, and specifying that the inbound adapter not attempt to construct a geometry from received attribute values, the GeoEvent Definition created would specify the cardinality of "items" as Many (the infinity sign in GeoEvent Manager simply means "more than one"). When the block of JSON data illustrated above is sent to the input via HTTP/POST, the input's event count only increments by one, indicating that only one event record was received. Also notice that, in this configuration, "items" is a Group element type. This implies that in addition to the structure being multi-cardinal, it's also organized as a group of elements, which in JSON is typically an array. Finally, notice that "location" is also a Group element type. The cardinality of "location", however, is One, not Many. This tells you that the value is a single element, not an array of elements or values.

Accessing data values

Working with the structure specified in the GeoEvent Definition described above, if you wanted to access the coordinate values for "latitude" or "longitude" you would have to specify which latitude and longitude you wanted.
Remember, the data was received as a single event record, and "items" is a list or array of elements. Each element in the array has its own set of coordinate values. Consider the following expressions:

items[2].location.longitude
items[2].location.latitude

The expressions above specify that the third element in the "items" list is the one in which you are interested. You cannot refer to items.location.latitude because you have not specified an index to select one of the three elements in the "items" array. The array's index is zero-based, which means the first item is at index 0, the second is at index 1, and so on.

Ingesting this data as a single event record is probably not what you would want to do. It is unlikely that an arbitrary choice to use the third element's coordinates, rather than the first or second element in the list, would appropriately represent the items in the list. These three items have significantly different geographic locations, so we should find a way to ingest them as three separate event records.

Re-configuring the data ingest

When I first mentioned configuring a Receive JSON on a REST Endpoint input to allow the illustrated block of JSON to be ingested into GeoEvent Server for processing, I indicated that the JSON Object Name property should be left unspecified. This was done to support a discussion of the data's structure. If the illustrated JSON data were representative of data you wanted to ingest, you should specify an explicit value for the JSON Object Name parameter when configuring the GeoEvent Server input. In this case, you would specify "items" as the root node of the data structure. Specifying "items" as the JSON Object Name tells the input to handle the data as an array of values and to ingest each item from the array as its own event record.

If you make this change to the input, and delete the GeoEvent Definition it created the last time the JSON data was received, you will get a slightly different GeoEvent Definition. The first thing you should notice, when the illustrated block of JSON data is sent to the input, is that the input's event count increments by three -- indicating that three event records were received by GeoEvent Server. Looking at the new GeoEvent Definition, notice there is no attribute named "items" -- the elements in the array have been split out so that the event records could be ingested separately. Also notice the cardinality of each of the event record attributes is now One. There are no lists or arrays of multiple elements in the structure specified by this GeoEvent Definition. The "location" is still a Group, which is fine; each event record should have (one) location, and the coordinate values can legitimately be organized as children within a sub-structure.

The updates to the structure specified in the GeoEvent Definition change how the coordinate values are accessed. Now that the event records have been separated, you can access each record's attributes without specifying one of several element indices to select an element from a list. You should now be ready to re-configure the input to construct a geometry, as well as make some minor updates to the data types of each attribute in the GeoEvent Definition in order to handle "id" as a Long and "calibrated" as a Date. You also need to add a new field of type Geometry to the GeoEvent Definition to hold the geometry being constructed.
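The access-path expressions above behave like ordinary zero-based indexing into nested structures. A quick way to convince yourself, assuming the JSON block above is saved locally as items.json:

# items[2].location.longitude, expressed as plain dictionary/list indexing.
import json

with open("items.json") as f:
    record = json.load(f)

# The THIRD element's coordinates (zero-based index 2).
print(record["items"][2]["location"]["longitude"])  # -111.625
print(record["items"][2]["location"]["latitude"])   # 35.625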
Hopefully this blog provided some additional insight on working with hierarchical and multi-cardinal JSON data structures in GeoEvent Server. If you have ideas for future blog posts, let me know -- the team is always looking for ways to make you more successful with the Real-Time & Big Data GIS capabilities of ArcGIS.
07-24-2018 05:26 PM

IDEA
Good news to report on this one, William Craft ... I believe the ability to add new attribute fields to an existing spatiotemporal big data Data Source is an enhancement coming with the 10.6.1 release. I'm going to tag Qingying Wu in this reply. She should be able to either comment here, or write up a GeoEvent Server blog which will show you how to do this. Stay tuned! - RJ
06-28-2018 06:36 PM

POST
Hello Nathan - I am not familiar enough with ArcGIS Collector to comment on the behavior you are observing, that OBJECTID values do not increment along with the track points in the temporal order they were received. You're in the right GeoNet subspace for questions on the application ... so I'll let someone from that team comment on whether or not what you're observing is considered a bug.

From a GeoEvent perspective, I can tell you that polling feature services for incremental updates (https://community.esri.com/community/gis/enterprise-gis/geoevent/blog/2015/08/20/polling-feature-services-for-incremental-updates) can be problematic when using OBJECTID as the key value for determining when new feature records have been added to the feature service you are polling for input. Most often we see the problem when multiple contributors are adding feature records to the geodatabase feature class. ArcGIS Server has a mitigation strategy which allocates OBJECTID values in blocks of (I believe) 400 to each contributor, to avoid race conditions in which two contributors ask for the next available OBJECTID, both are told that ID 'X' is next in line, and both attempt to create a new feature with the same OBJECTID. What you'll see is feature records being created with OBJECTID values 1, 2, 3, 4 ... then the ID values will appear to jump to 401, 402, 403, ... and maybe later to 801, 802, 803, ...

The problem with this, from a GeoEvent Server perspective, is that the GeoEvent input queries for feature records using the greatest value of all the OBJECTID values observed in the last feature record set queried from the service. This means that once feature records with the 80x IDs start being created, the input will remember seeing values in the eight-hundred range and will construct a query with a WHERE clause that excludes any feature records whose OBJECTID value is less than 803, 804, 805, ... Then, when one of the other contributors, operating within the block of OBJECTID values allocated to his/her operations, creates new feature records 6, 7, 8, 9, ... none of these are polled by the GeoEvent input, because it is using a key value in the 800 range when querying for "new" feature records based on OBJECTID.

You'll be much better off using a date/time field when querying using the GeoEvent input's "incremental update" capability. But even then, if ArcGIS Collector is adding feature records out of order, or is adding them as a batch without setting the date/time each track point was actually collected, you'll have trouble using GeoEvent Server's polling input to identify which feature records are "new" and get them in the order they were originally collected. Also remember that the geodatabase only honors date/time resolution to the second, so even if ArcGIS Collector specifies sub-second / millisecond precision, that precision will be lost. The date/time values of the persisted feature records will either be truncated or rounded to the nearest whole second (I don't remember which).

I recommend taking a look at the blog I referenced above. Depending on how you've chosen to disseminate the event records you've processed, polling for incremental updates can cause you other problems when the GeoEvent Server service is restarted -- which will happen if an administrator restarts the ArcGIS Server service or when the server is rebooted.

Hope this information is helpful
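To see why the high-water-mark approach loses records, here is a toy simulation of the input's behavior, assuming it simply remembers the greatest OBJECTID observed (the names and numbers here are illustrative only):

# Toy model: the input polls with WHERE OBJECTID > seen_max.
def poll(object_ids, seen_max):
    new = [oid for oid in object_ids if oid > seen_max]
    return new, max([seen_max] + new)

table = [1, 2, 3, 401, 402, 803, 804]
new, seen_max = poll(table, 0)
print(new)   # [1, 2, 3, 401, 402, 803, 804] -- all polled; high-water mark is now 804

# A contributor working in the low ID block adds records 6 and 7.
table += [6, 7]
new, seen_max = poll(table, seen_max)
print(new)   # [] -- records 6 and 7 are never polled

- RJ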
06-22-2018 12:41 PM

DOC
I encountered the problem Jimmy Dobbins describes in his post Failed to configure the server machine...not a local server machine and wanted to add some information without burying it beneath his answer to the issue.

Every year, for the International User Conference, Esri's IST staff prepare an image for hundreds of computers in the Showcase. Why I've never encountered this issue before, I have no idea, but this year I was seeing the following error in the ArcGIS Server Manager web application when attempting to 'Create Site':

Failed to create the site. Failed to configure the server machine 'WHATEVER-NAME.ESRI.COM'. Server machine 'WHATEVER-NAME.ESRI.COM' is not a local server machine.

It turns out that there is an XML file, created by the ArcGIS Server Manager, which contains information on the machine. The machine image, of course, picks up this file, and when the machine to which the image is applied is assigned a different name, ArcGIS Server site creation fails. The solution is to either edit the XML file to update the machine name to match the localhost's current name ... or stop the ArcGIS Server service, delete (or simply rename) the XML file, then restart ArcGIS Server and launch the Server Manager web application (which should recreate the XML file with the correct machine name). Here's the file path and name:

C:\Program Files\ArcGIS\Server\framework\etc\machine-config.xml

Illustration attached - RJ
06-21-2018 12:11 PM

POST
Vladimir Strinski - The illustration you included in your post from 11-Oct-2017 is part of an unrelated issue introduced in the 10.6 release. A GeoEvent Definition which was not suitable for use when publishing a 'Store Latest' feature service (or any feature service GeoEvent Server would use to add/update features) was triggering a sticky validation message -- by which I mean even if you selected a different GeoEvent Definition, one that was suitable and had a field tagged TRACK_ID, the GeoEvent Manager web application's panel still displayed the error message that GeoEvent Definition <name> is missing a TRACK_ID tag. Esri Support was tracking this as issue BUG-000113943 ... which we've also addressed as part of the 10.6.1 product release. - RJ
06-06-2018 03:57 PM