BLOG
Josh Joyner's original post has been updated to show the simplified user configuration interface for Release 7 of the Waze connector (August 25, 2023) mentioned by Gregory Christakos above.

---

When sending traffic alert and traffic jam event records to an output, either to broadcast feature records (using a stream service) or to add/update feature records in a geodatabase (using a feature service), you will likely notice that feature records accumulate over time and end up cluttering your display.

When using a stream layer in a web map to display data, you cannot configure the stream layer to periodically refresh the display. Stream layers do not query data from a geodatabase and do not offer an automatic refresh to remove feature records you might consider "old" or "stale" from the web map display and query a fresh set of feature records. Your only option, as an analyst, is to clear the stream layer's display interactively whenever you decide the web map's display needs to be refreshed. Select "Clear previous observations" from the stream layer's context menu to remove all displayed feature records for the layer, then wait for a fresh set of feature records to be processed through GeoEvent Server.

Alternatively, you could use a feature layer rather than a stream layer to display your data. The only real advantage of a stream layer in this case is that feature records are added to the display immediately after being processed by GeoEvent Server. Given the inherent delay in receiving data from the Waze API -- typically no faster than every 120 seconds -- configuring your feature layer to automatically refresh every 30 seconds will still display feature records in near real-time as they are processed.

An advantage of using a geodatabase to store the accumulating feature record set is that you can configure a hosted feature layer view to show only the most recent traffic alerts and traffic jams. This means that as a web map remains open, its display will not become cluttered with stale alerts and jams. From your Enterprise portal, open the details page for your hosted feature layer and select Create View Layer, then choose the option to "Create a view of this layer". With the hosted feature layer selected in the Create View Layer configuration panel, click Next to define the filters and fields you want included in the hosted feature layer view. Click the expansion arrow to add a filter to the view and click 'Add new'.

You will need a date/time field which is updated every time the feature record used to model a traffic alert or traffic jam is updated. This is not part of the data schema included by the Waze API; refer to the illustrations below for how to use GeoEvent Server to enrich event data records with the needed information. For now, you need to know that a filter expression can be configured for a hosted feature layer view which selects only feature records whose date/time is within the last several minutes.

To complete your hosted feature layer view configuration, click the expansion arrows to collapse the 'Filter' and 'Layer definitions' panels, then click 'Next'. Enter a title for your hosted feature layer view, something like "Traffic_Jams_Latest", enter tags and a summary if you wish, and then click Create. You can now add your hosted feature layer view to a web map. When you configure the feature layer (in the web map) to automatically refresh every 30 seconds (e.g. 0.5 minutes), the view will retrieve and display only the most recent feature records from the hosted feature layer. Note that you will still need to periodically administer your geodatabase to remove old feature records as data accumulates in the hosted feature layer.

---

Using GeoEvent Server to add the received_dt attribute

This is done using the same Field Mapper processor you need to configure to flatten a traffic alert or traffic jam event record schema before sending the simplified / flattened data record to an Update a Feature output. First, recognize that integrating the Esri Sample Waze Connector into your GeoEvent Server added four GeoEvent Definitions to your deployment. There is a GeoEvent Definition named WazeAlert and one named WazeAlert-Flat. The latter omits the X/Y coordinate values nested beneath location and also drops the imageId and imageURL attributes; new attributes longitude and latitude are added for the coordinate values. A similar pair of GeoEvent Definitions is loaded into your configuration for WazeJam and WazeJam-Flat.

You need to create a new pair of GeoEvent Definitions and add a new field to which you can write a Date value each time a traffic alert or traffic jam event record is processed. You can easily do this by copying the existing WazeAlert-Flat and WazeJam-Flat GeoEvent Definitions and then adding the needed received_dt attribute to your copy of the event definition. Make sure you specify the data type for the new attribute as Date.

When configuring your Field Mapper to flatten the traffic jams schema, use the built-in function currentTime() to retrieve the local server's current time and write that, as an epoch date value, into the new attribute field received_dt. Your field mapping should be from WazeJam (the event definition used to adapt the raw data from the Waze API) to the flattened schema WazeJam-ReceivedDT (the copy of the flattened schema to which you added the received_dt attribute).

Hopefully this information helps manage the traffic alerts and traffic jams you process through GeoEvent Server when adding and updating feature records in a geodatabase for display using hosted feature layers and hosted feature layer views.
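One concrete note on the view filter described above: the hosted feature layer view's filter can be expressed in standardized SQL as something along the lines of received_dt >= CURRENT_TIMESTAMP - INTERVAL '10' MINUTE. The field name comes from the workflow above; the ten-minute window is only an illustration, so adjust it to match how frequently the Waze connector updates your feature records.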
Wednesday | 0 | 0 | 29

DOC
This article provides additional information and examples for the Spatial Filters help topic. Please refer to the illustrations in the online help topic for details on additional spatial operators you can use when configuring a conditional expression.

GeoEvent Server allows you to configure conditional expressions for both attribute filters and spatial filters. The Incident Detector is an example of a processor that uses the same conditional expressions as a Filter. The processor is configured with an opening condition specifying when a new incident should be created for monitoring and/or when an existing incident's duration and geometry should be updated. An opening condition is required; the closing condition is optional and, if left unspecified, will be the logical negation of the opening condition.

When using the spatial conditions 'Enter' and 'Exit' you should be aware of a subtle difference between how the Incident Detector processor behaves and how a spatial Filter behaves. A Filter does not change anything about an event record it receives. It allows an event record to pass through as long as the data satisfies the filter's spatial condition. An Incident Detector processor, on the other hand, fundamentally changes both the attribute structure and the geometry of the event records it receives. Most solutions configure the processor to emit a point geometry, but this is not the same geometry the processor received. The geometry in event records emitted by the processor has been updated based on the ongoing nature of the incident being monitored. That is why the processor can accept point geometries but output point, multipoint, or polyline geometries.

The criteria used to evaluate an 'Enter' condition or an 'Exit' condition are the same as described in the spatial filter help topic. However, an event record passing through a Filter configured with an 'Exit' condition does not change; its geometry will be located outside the polygon the tracked asset has exited. An Incident Detector will instead update the last vertex of the geometry to the last-known position inside the polygon the tracked asset is exiting. Thus an 'Ended' incident will locate the tracked asset at the last position observed inside the polygon.

Consider the illustrations below. A Filter configured with an 'Enter' condition will allow only the data point labeled 'Enter' to pass through the filter. A separate Filter configured with an 'Exit' condition will allow only the data point labeled 'Exit' to pass through the filter. An Incident Detector must evaluate both an opening and a closing condition. Looking at the same two tracked assets, an Incident Detector processor will open a separate incident for the first track point inside the area of interest from each tracked asset. The processor will update the duration and geometry of its incident and emit an incident with the status 'Ended' when it sees the tracked asset has exited the area of interest. The last vertex of the geometry updated by the processor will be the last observed position inside the area of interest. This matches the reported duration the tracked asset was inside the area with the observations the processor actually has of the asset inside the area. The behavior illustrated above is how the Incident Detector has reported incidents based on 'Enter' and 'Exit' spatial criteria for several releases prior to the 11.x release series.

Solutions which need to monitor the duration of a condition should use an Incident Detector and watch for incident event records to be emitted with 'Started', 'Ongoing', and 'Ended' status. These solutions should also be aware of the behavior described above: an 'Ended' incident will report the last-observed location inside an area of interest. A Filter, which does not monitor or report the duration of a condition, will not change any aspect of an event record it receives and will locate the data record at its actual last-observed location outside an area of interest.
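For readers who want the 'Enter' / 'Exit' evaluation spelled out: an observation satisfies 'Enter' when the previous observation was outside the area of interest and the current observation is inside it, and satisfies 'Exit' in the reverse case. A minimal conceptual sketch (plain Python, not GeoEvent Server code):

def classify(prev_inside, curr_inside):
    # 'inside' is whatever spatial test the filter is configured to evaluate against the area of interest.
    if not prev_inside and curr_inside:
        return "ENTER"      # a Filter with an 'Enter' condition passes this observation
    if prev_inside and not curr_inside:
        return "EXIT"       # a Filter with an 'Exit' condition passes this observation
    return "NO_CHANGE"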
03-20-2025 04:04 PM | 0 | 0 | 136

POST
The issue you are seeing with feature services needing to be republished is not something I have seen recently. I have not been able to reproduce the issue personally and I cannot say whether or not work has been put into scope for ArcGIS Server. If you want to open an incident with Esri Technical Support they can work with you to capture a reproducible case and then document the bug. If you do open a technical support incident, please make sure the incident refers to an upgrade of ArcGIS Enterprise and ArcGIS Server with emphasis on web service availability at REST after the product upgrade.
03-07-2025 12:51 PM | 0 | 1 | 117

POST
Thank you @JeffSilberberg for reviewing the problem and considering a solution. @MeleKoneya and I are working offline on this. Our current suspicion is that data in the first enrichment table (retrieving username and callSign) is getting call signs from prior days' operations, such that the same callSign value might be assigned to more than one EMT in the target hosted feature layer (which uses the username as the unique identifier). Then, when a secondary enrichment process retrieves status data from the CAD system, the first feature record found with a particular callSign is updated and the dashboard shows an EMT's status incorrectly (correct real-time status, wrong EMT). We are going to try clearing out the assigned callSign each morning before daily operations, so that each day's call signs are assigned to a single EMT, and then evaluate whether status values from the CAD are able to use the unique call sign to update each EMT's status.
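If it helps, a sketch of how that morning reset might be scripted is below. It assumes the ArcGIS API for Python and a hosted feature layer with a callSign field; the portal URL, credentials, and item ID are placeholders.

from arcgis.gis import GIS

# Placeholders -- substitute your portal, credentials, and the hosted feature layer's item ID.
gis = GIS("https://myportal.example.com/portal", "some_admin", "some_password")
layer = gis.content.get("<hosted-feature-layer-item-id>").layers[0]

# Null out every assigned callSign so each day's call signs map to a single EMT.
layer.calculate(where="callSign IS NOT NULL",
                calc_expression={"field": "callSign", "value": None})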
02-20-2025 03:50 PM | 1 | 0 | 210

POST
Happy to try and help Seth. If we specify the name of an interior node for the input to use as an XML Object Name, no, there is no way to capture data above / outside of the substructure of the specified node. The XML Object Name is intended to specify the node to use as the "root" when parsing the XML, so any XML above or outside of that "root" node is necessarily ignored. The data outside the "root" node won't be included in the adaptation run by the input connector, so the data won't be available for processing as part of a GeoEvent Service. That means we have to work with <Incident> as the actual root node.

The multicardinal hierarchy of the XML makes it pretty tricky to create a GeoEvent Definition. You can allow the input to create a best first guess at what the GeoEvent Definition ought to be (mostly to save you a bunch of mouse clicks creating a GeoEvent Definition from scratch). But if you work from a managed GeoEvent Definition created for you, you will have to copy it and make some specific edits to your copy of the GeoEvent Definition before you can use it. There are a couple of behaviors you will need to work around when you then reconfigure your input connector to use your copy of the GeoEvent Definition. GeoEvent Server handles JSON better than XML, I'm afraid.

---

Try this first: Configure your input to create a GeoEvent Definition for you, making sure to leave the XML Object Name unspecified. When looking for a root node, rather than using one you specify, the input seems to insist on recognizing <Incident> as the root of the XML while creating a GeoEvent Definition which ignores <Incident> as a node in the data being received. Copy whatever GeoEvent Definition the input generates for you, then edit your copy to match the illustration below. When you edit your input to not generate a GeoEvent Definition for you, you can specify your copy of the GeoEvent Definition (the one you edited), but then you must also specify that the input connector use Incident (without the angle brackets) as the XML Object Name. You should make sure to delete the managed GeoEvent Definition created by the input, mostly so that you don't accidentally reference it later when configuring your solution.

You should have a GeoEvent Definition you created which matches the illustration below, with an input configured to use that GeoEvent Definition (specifying Incident as the XML Object Name), and be able to receive and adapt the XML shown below before continuing. ( ! ) Both <Exposure> and <ExposureUser> need to be edited to specify they are multicardinal groups. Here is the input configuration:

You can use any REST client you are familiar with (a lot of folks like to use Postman) to HTTP/POST the XML below to your input's endpoint, shown as the URL property in the illustration above.

<Incident>
<Other>
<UniqueIncidentIdentifier>ABC12345</UniqueIncidentIdentifier>
</Other>
<Exposures>
<Exposure>
<ExposureUnit>7894389</ExposureUnit>
<ExposureUser>
<ExposureUserID>10223</ExposureUserID>
<ExposureUserName>Alpha</ExposureUserName>
</ExposureUser>
<ExposureText>First exposure unit description</ExposureText>
<Exposure_Dttm>1739482095261</Exposure_Dttm>
</Exposure>
<Exposure>
<ExposureUnit>7894395</ExposureUnit>
<ExposureUser>
<ExposureUserID>10628</ExposureUserID>
<ExposureUserName>Bravo</ExposureUserName>
</ExposureUser>
<ExposureUser>
<ExposureUserID>10545</ExposureUserID>
<ExposureUserName>Charlie</ExposureUserName>
</ExposureUser>
<ExposureText>Second exposure unit description</ExposureText>
<Exposure_Dttm>1739482438936</Exposure_Dttm>
</Exposure>
</Exposures>
</Incident>

---

Now we need to work on your original question, which is how to detect when the XML received does not have any content within the <Exposures></Exposures> structure. This could get a little messy. I think we are going to have to look at attributes we expect will have null values and simply assume that, when we see the null values, the reported <Incident> has no <Exposure> data. You cannot use any processor to write to an event record structure with hierarchy, but you can configure a filter to look into an event record's hierarchy using "dot notation" to peek at specific attribute values.

The filter expression for each Filter is very similar. The first one toggles the pulldown to the left of the expression to specify 'Not', where the second "No Exposure" Filter uses the same expression without the logical 'Not'. You might assume that when Exposures.Exposure isn't defined, or is null, a reference into data beneath that -- to look at Exposures.Exposure.ExposureUnit for example -- would result in some sort of null reference and generate an error or exception. Luckily for us, error handling in the implementation of a Filter element allows us to drill down into null data without generating any such exception.

---

Here, then, is a JSON representation of data allowed to pass through the upper Filter, because the incident has one or more Exposure nodes with an ExposureUnit whose data is not null:

[{
"Other" : {
"UniqueIncidentIdentifier" : "ABC12345"
},
"Exposures" : {
"Exposure" : [ {
"ExposureUnit" : "7894389",
"ExposureUser" : [ {
"ExposureUserID" : "10223",
"ExposureUserName" : "Alpha"
} ],
"ExposureText" : "First exposure unit description",
"Exposure_Dttm" : "1739482095261"
}, {
"ExposureUnit" : "7894395",
"ExposureUser" : [ {
"ExposureUserID" : "10628",
"ExposureUserName" : "Bravo"
}, {
"ExposureUserID" : "10545",
"ExposureUserName" : "Charlie"
} ],
"ExposureText" : "Second exposure unit description",
"Exposure_Dttm" : "1739482438936"
} ]
}
}]

And here is a JSON representation of data allowed to pass through the lower Filter when the incident has no data within its <Exposures></Exposures> structure. The array Exposure is empty. There are no data elements in the array, so there is no ExposureUnit string with a value which is not null.

[{
"Other" : {
"UniqueIncidentIdentifier" : "ABC67890"
},
"Exposures" : {
"Exposure" : [ ]
}
}]

I think as long as <Incident> has a cardinality of 1, and we can assume that a data substructure <Exposures></Exposures> will always exist -- even if there is nothing in the substructure -- the above will work for detecting incidents with no exposure.
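To make the routing logic above concrete, here is a small sketch (plain Python, not GeoEvent Server code) of the same test the two Filters perform -- drill into Exposures.Exposure and check whether any element carries a non-null ExposureUnit:

def has_exposure(record):
    # Mirrors the Filter expression: a null-safe drill-down into Exposures.Exposure.ExposureUnit
    exposures = (record.get("Exposures") or {}).get("Exposure") or []
    return any(e.get("ExposureUnit") is not None for e in exposures)

# has_exposure(first_sample)  -> True  : passes the upper Filter (has exposure data)
# has_exposure(second_sample) -> False : passes the lower "No Exposure" Filter

-- RJ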
02-13-2025 03:21 PM | 0 | 0 | 256

POST
Hello @CoffeeforClosers -- There really isn't any logic you can configure an inbound connector to use to check raw data the connector has received and determine how many records exist -- or whether no data exists -- for a particular data structure. I think we can still work with the scenario you describe, though. (Thank you for the detailed data structure.)

I tried configuring an out-of-the-box Receive XML on a REST Endpoint input to allow me to send different blocks of XML data to test how the data is adapted. This approach should work just as well using a Poll an External Website for XML connector to periodically query a web service. The key was to specify the name of an XML node the input can look for when adapting the data. When you specify a value for the XML Object Name parameter, the input will parse through the XML document looking for instances of that node and adapt each instance it finds as a separate data record. In this case I think we want to direct the input to look for <Exposure> nodes. You would enter this into the input connector's configuration without the angle brackets, as illustrated here:

You'll notice that I configured the input to use a GeoEvent Definition the user owns rather than allow the input to try and create and maintain a GeoEvent Definition. Avoiding the use of a "managed GeoEvent Definition" in this case is a best practice. Here is the GeoEvent Definition "CoffeeforClosers_XML":

Each <Exposure> record found (there can be zero or more in the XML document) should have at least the four attributes described in the event definition above. ExposureUnit is a simple string (cardinality: 1). ExposureText and Exposure_Dttm are also adapted as a simple string and a date respectively (both cardinality: 1). ExposureUser has to be configured as a 'Group' because it has attributes ExposureUserID and ExposureUserName nested within it. It also has to be configured as cardinality 'Many' because each <Exposure> record is expected to have one or more <ExposureUser> nodes.

I tested three different blocks of XML to verify that specifying that the input look for <Exposure> would not cause a problem if the XML document were <Exposures></Exposures> with no interior content. In each case we expect the input connector to scan through the received XML looking for blocks with a "root" node <Exposure>. It will then try to apply the GeoEvent Definition it was configured to use to adapt each <Exposure> block's interior content as a separate event record. Here is the JSON that would be written out for the second and third test (the first test has no data for the input to adapt, so no data record processing would be performed -- but the inbound adapter is able to handle this without producing an error).

I hope this helps with what you were trying to do. The only piece missing would be if you need GeoEvent Server to produce some notification or alert, for example, when an XML document is received with only the nodes <Exposures></Exposures> and no interior content. I don't think we are going to be able to handle that case, since there is no way to configure the input connector to process the absence of data, and there is no way to check the count of records in a received data structure as data is being adapted.
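If it helps to picture what the input is doing with the XML Object Name, the behavior is conceptually equivalent to the following sketch (standard-library Python, shown only to illustrate the one-record-per-<Exposure> behavior -- it is not how the connector is implemented):

import xml.etree.ElementTree as ET

def exposure_records(xml_text):
    # Every <Exposure> node found in the document becomes one data record.
    root = ET.fromstring(xml_text)
    for node in root.iter("Exposure"):
        yield {
            "ExposureUnit": node.findtext("ExposureUnit"),
            "ExposureText": node.findtext("ExposureText"),
            "Exposure_Dttm": node.findtext("Exposure_Dttm"),
            "ExposureUser": [
                {"ExposureUserID": u.findtext("ExposureUserID"),
                 "ExposureUserName": u.findtext("ExposureUserName")}
                for u in node.findall("ExposureUser")
            ],
        }

# A document containing only <Exposures></Exposures> simply yields no records -- no error.

-- RJ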
02-13-2025 10:55 AM | 0 | 2 | 279

POST
Hello @PierreloupDucroix -- The Poll an ArcGIS Server for Features input uses token authentication, but cannot be configured to use your GP Service. This type of input uses credentials you have entered into a registered server connection to request an authentication token from the ArcGIS Server (or the Enterprise portal) and then uses that token to make authenticated requests. The token returned from the ArcGIS token service gets incorporated into requests the input sends when querying ArcGIS feature services. This is not something you specifically configure; it is inherent in the inbound connector's implementation. This is also our only input that supports paging, as it is able to assume the data paging strategy used by ArcGIS Server feature services.

You are correct -- the Poll an External Website for JSON input does not have a mechanism implemented to obtain an authentication token, either from an external web service or (in your case) a GP Service. Even if the "external" server were actually an internal server running ArcGIS Server, this type of inbound connector can only be configured with a URL (and optionally query parameters to pass with the URL) to send single requests to a web service for data. Basically, if you can copy/paste a URL into a web browser, this type of GeoEvent Server input can be configured to make the same query at some specified interval. It cannot query a web service to first acquire an authentication token and then incorporate that token into a second query for data. Nor can it make multiple requests to "page" through a large collection of data records. These sorts of multi-step requests for authentication and/or request pagination are not configurable using any out-of-the-box input.

Generally speaking, an outbound connector (or "output") is able to make requests on a web service, but is not going to block or wait for any sort of response. That is why we don't have an out-of-the-box output to send requests, for example, to a GP Service. The GP Service should be allowed to run asynchronously and return a job number to the requesting server. As the termination of an event processing workflow, an output is not going to do anything with a job number returned from a GP Service. A custom processor might be developed to perform a blocking / synchronous operation and make calls to a GP Service, but this is a very bad idea. You should never have a GeoEvent Server element you've configured or developed invoke a GP Service synchronously and allow the HTTP request to block until the GP Service has completed its operation(s). GeoEvent Server was architected to process hundreds to potentially thousands of event records each second; a GeoEvent Service that performs a blocking operation with a GP Service runs counter to that design. The advice is the same if you were considering developing a custom inbound transport to invoke a GP Service as part of an authentication workflow. You do not want an input to make requests to a GP Service as a blocking operation as part of data ingestion and adaptation prior to event record processing.

Rather than refer you to blogs on using GeoEvent Server's Java SDK (which is what you would use to develop a custom inbound / outbound transport or adapter -- or a custom processor), I'm going to recommend a different pattern you can follow to obtain authentication via a GP Service. It is not always possible to send subscription requests to a web service provider so that the web service will periodically push data to your GeoEvent Server.

My recommendation for authentication workflows is to handle the authentication externally to GeoEvent Server. Consider developing a "bridge" between the data provider and your GeoEvent Server. The advantage of this approach is that you can develop the "bridge" in any language you are comfortable using -- a Python script, PowerShell, ASP.NET, an Amazon Web Services Lambda function, etc. Your bridge will handle token acquisition and request authentication. The bridge can then make the query you were initially thinking to configure a Poll an External Website for JSON input to perform, obtaining the authentication needed to authorize its queries and passing a token (for example) along with its requests for data. You then develop a way for your bridge to relay the data to a GeoEvent Server hosted REST endpoint as an HTTP/POST request. This allows you to configure a Receive JSON on a REST Endpoint input to receive the data.

Developing this sort of bridge between a data provider and GeoEvent Server can be very useful. It enables you to send multiple requests to page through large data collections, because you can develop the bridge to conform to whatever paging strategy an external web service might use. The bridge can handle necessary authentication via whatever authorization mechanism is being used. The bridge has an opportunity to "clean" the data before relaying it to GeoEvent Server if, for example, there are characters in attribute field names like '$' that GeoEvent Server does not support. You can also use the bridge to implement a throttling mechanism and relay large amounts of data to GeoEvent Server in reasonable batches of so-many-records-per-second.
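A minimal sketch of such a bridge, in Python with the requests library, is below. Every URL, credential, parameter name, and the paging scheme are placeholders made up for illustration -- substitute whatever your token provider, your data provider, and your Receive JSON on a REST Endpoint input actually use.

import requests

# All endpoints and parameter names below are hypothetical placeholders.
TOKEN_URL = "https://myserver.example.com/arcgis/tokens/generateToken"
DATA_URL = "https://provider.example.com/api/records"
GEOEVENT_INPUT = "https://geoevent.example.com:6143/geoevent/rest/receiver/my-json-in"

def get_token():
    # Acquire an authentication token; adapt this call to your GP Service or token provider.
    r = requests.post(TOKEN_URL, data={"username": "user", "password": "secret",
                                       "client": "requestip", "f": "json"})
    r.raise_for_status()
    return r.json()["token"]

def relay():
    token = get_token()
    page = 0
    while True:
        # Page through the provider's records using its (hypothetical) paging parameter.
        r = requests.get(DATA_URL, params={"token": token, "page": page})
        r.raise_for_status()
        records = r.json().get("records", [])
        if not records:
            break
        # Relay each page to the Receive JSON on a REST Endpoint input as an HTTP/POST.
        requests.post(GEOEVENT_INPUT, json=records, timeout=30)
        page += 1

if __name__ == "__main__":
    relay()

-- RJ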
05-03-2024 12:59 PM | 0 | 0 | 424

POST
Hello @jauhari_mani -- Please see my reply to @RipaliBatra in the thread: GeoEvent-Poll an External Website for JSON. Given the rich hierarchical structure of the JSON from the https://waqi.info web service you really should not rely on the GeoEvent Sampler for a good representation of the data. The sampler struggles when given complex hierarchical JSON. A better approach is to create one or more Write to a JSON File outputs and use them to log the event records emitted from different processors along your configured event processing workflow. -- RJ
05-03-2024 10:04 AM | 0 | 0 | 558

POST
Have you read through the examples in the thread Timestamps received from a sensor feed display differently in GeoEvent Sampler, ArcGIS REST Services queries, and ArcGIS Pro?

I think your expression replaceAll(myField, '(\d+)[/](\d+)[/](\d+)', '$3-$1-$2T00:00:00') may be taking the value "3/7/2024 12:39:42.784 PM" and producing the value "2024-3-7T00:00:00 12:39:42.784 PM". Two problems with that: the month and day have lost their leading zeros, and the ISO 8601 format you're trying to create has unwanted characters from the original time appended to the tail of the string.

If you are OK with GeoEvent Server assuming the value "3/7/2024 12:39:42.784 PM" is local server time, you could edit the GeoEvent Definition your input is using to adapt the date/time string so that the value is adapted as a Date, and specify that the input apply MM/dd/yyyy hh:mm:ss.SSS aa as the Expected Date Format parameter. Using this formatting string to "teach" the input how to parse your date/time string, I was able to adapt the string "3/7/2024 12:39:42.784 PM" as an epoch value 1709843982784. I could then use a Field Mapper to cast this Date value to a String "Thu Mar 07 12:39:42 PST 2024" to double-check the data was adapted properly.
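If you want to double-check that epoch value outside GeoEvent Server, the equivalent parse in Python looks like the sketch below. I am assuming the server's local time zone is America/Los_Angeles, which is what produces the PST value shown above.

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

raw = "3/7/2024 12:39:42.784 PM"
local = datetime.strptime(raw, "%m/%d/%Y %I:%M:%S.%f %p").replace(
    tzinfo=ZoneInfo("America/Los_Angeles"))  # treat the value as local server time, per the reply above
print(int(local.timestamp() * 1000))  # 1709843982784

-- RJ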
05-02-2024 06:16 PM | 0 | 0 | 585

POST
@Moi_Nccncc -- You are asking if an event record you've received, containing a polygonal area, can be enriched with the geometry of a point geofence to produce an event record with two Geometry attributes? An event record in GeoEvent Server is allowed to have more than one field whose type is Geometry, but feature records require that their geometry be referenced by name as 'geometry' -- so you will have to first transfer the event record's geometry to some other named attribute field to avoid overwriting the event record's geometry with a feature record's geometry.

If you were to use a GeoTagger processor to enrich the event record (polygon geometry) with the name of a single geofence (point geometry) that shares a spatial relationship with the event record (e.g. a point geofence which intersects the event record's polygon), you could use a Field Mapper to make sure the attribute fields you want to pull into the event record exist, then use a Field Enricher configured to write to 'Existing Fields' to fetch those named attribute values from whatever feature record you used to register the geofence originally. The workflow I think you're looking for would be something like:

1. GeoTagger -- to get the name of one specific geofence you can use as a key for an attribute join.
2. Field Mapper -- to map the event record's geometry to a field named something other than 'geometry', and also to add fields to the event record which exist in the feature record schema whose data you want to import. It is a best practice to configure the Field Mapper to write to 'Existing Fields' rather than allow the processor to create a managed GeoEvent Definition for you.
3. Field Enricher -- to pull attribute data from an identified feature record into the existing attribute fields.

If multiple point features intersect an event record's polygon, the GeoTagger will find multiple registered geofences which satisfy its spatial relationship (e.g. Intersects Any). The resulting comma separated list of geofence names you get from the GeoTagger means that you do not have a single unique value to use as the key for the attribute join the Field Enricher needs to perform its operation. You could use a Field Splitter to split the comma delimited string and produce several individual event records, each with exactly one named geofence, then enrich each of those event records with attributes you pull from the feature record(s) you originally used to register your geofences.

I would strongly recommend that you ask your Customer Service Representative about options to connect with a technical advisor or Professional Services Delivery consultant who can help you with the sort of operations you are attempting to configure GeoEvent Server to perform. -- RJ
05-02-2024 05:58 PM | 0 | 0 | 449

POST
@Moi_Nccncc -- You should submit this as a new request to Esri Technical Support so an analyst can look at the problem with you.

GeoEvent Server uses Apache Kafka as an event record broker. Kafka uses on-disk topic queues to hold event records a producer needs to convey to a consumer. In this case the inbound connector (input) is the producer and the GeoEvent Service you have configured to use that input is the consumer. It is strange that you are not seeing anything in the GeoEvent Server's karaf.log system log. You would normally see ERROR messages that an event record could not be delivered within a default timeout of 60,000 milliseconds (60 seconds) and that the event record was lost. There might be WARN messages logged around the error condition. I would also expect to see error messages with keywords or phrases like "timeout exception" and/or "expiring records".

There are two JVMs which support a running GeoEvent Server. One is for the GeoEvent-Gateway (which manages the Kafka message broker and the Zookeeper configuration store); the other is for GeoEvent Server proper. GeoEvent Server depends on the GeoEvent-Gateway. If something is wrong with the Apache Kafka on-disk topic queues, or with the GeoEvent-Gateway service or the JVM that service runs in, you can see what you describe: an input is able to receive (ingest) and adapt data to create event records, but the Kafka message broker is unable to route the instantiated event records to a GeoEvent Service for processing. Problems with Kafka message brokering usually require that an administrator administratively reset GeoEvent Server. Uninstalling and re-installing won't work if the Kafka and Zookeeper files left behind in the Windows %ProgramData% folder are corrupted or unusable. -- RJ
05-02-2024 04:53 PM | 0 | 0 | 729

POST
@Moi_Nccncc -- You cannot use GeoTagger (or any of the configurable processors, really) to perform a spatial operation between two geofences. The geometry processors all take an event record and use the event record's geometry as one argument to a spatial condition; the processor uses specified geofences as the second argument when evaluating the spatial condition. You can do what I think you want to do, however.

You already stipulate that point features from previous AVL locations are registered as geofences. What you want to do is configure an input to either query to retrieve (e.g. Poll an ArcGIS Server for Features) event records whose polygon you want to use to count intersecting points -- or arrange to receive via HTTP/POST (e.g. Receive JSON on a REST Endpoint) the polygon you want to use. GeoTagger is able to take an event record's polygon geometry and evaluate an Intersects Any condition to pull the names of intersecting point geofences into the event record being processed. This will give you a comma delimited list of (point) geofence names, whose locations intersect the event record's polygon, in an attribute field you specify (e.g. geo_tags):

geo_tags: "Newkirk,Tecolotito,Logan,Pastura,Conchas Dam,Fort Sumner,Melrose"

You then configure a Field Calculator with an expression to replace any substring between two commas with an empty string. You are essentially removing all of the geofence names from the comma delimited list and leaving only the commas. You can do this with a replaceAll( ) function call, which supports Regular Expression pattern matching:

replaceAll(geo_tags, '[^,]*', '')

The regular expression pattern in the replaceAll( ) expression above matches "any single character which is not a comma" (e.g. [^,] ) zero or more times (e.g. [^,]* ) and replaces each matching substring with an empty string. Now you simply count the commas, remembering to add one for the final geofence name in the list. The result of this expression gets written into an attribute field whose type is a Long integer:

length(replaceAll(geo_tags, '[^,]*', '')) + 1

When mocking up a solution to check syntax, I used a Receive JSON on a REST Endpoint input to model receiving a new AVL point location, ran that event record through a Create Buffer processor to create a 100 km geodesic polygon, then used a GeoTagger to get the names of the point geofences which intersect the received AVL point location's buffered AOI. Strip the geofence names from the comma delimited list and count the commas.
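The same trick is easy to sanity-check outside GeoEvent Server. The sketch below reproduces the replaceAll( ) expression with Python's re module, using the geo_tags example from above:

import re

geo_tags = "Newkirk,Tecolotito,Logan,Pastura,Conchas Dam,Fort Sumner,Melrose"

# Same pattern as replaceAll(geo_tags, '[^,]*', '') -- strip the names, keep only the commas.
only_commas = re.sub(r"[^,]*", "", geo_tags)

# Count the commas and add one for the final geofence name in the list.
print(len(only_commas) + 1)  # 7

-- RJ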
05-02-2024 04:15 PM | 0 | 0 | 399

POST
@SpatialSean -- Apologies for joining this discussion late. I don't think what you want to do can be done out-of-the-box using the available configurable processors. None of the processors support hierarchical output. There was a strong bias toward flattening an event record's data structure in order to make adaptation easier when using an output such as Add a Feature to add an event record's data as a new feature in a geodatabase. We never implemented a workflow which will take a flat data schema and allow you to recombine elements to place them into lists or groups. Practically speaking, given anything other than the simplest of event record data structures, developing your own custom processor is the best approach. I mean, you could try to configure a series of out-of-the-box Field Calculator processors to take input like:

{
"key_01": "November",
"key_02": "Alpha",
"key_03": 1714613432
}

And serialize it into a JSON string like:

{ "keys": [{ "key_01": "November" }, { "key_02": "Alpha" }, { "key_03": 1714613432 } ]}

But that's really impractical. The expression needed to do this would look something like:

'{' + ' "keys": ' +
' [{' +
' "key_01": ' + '"' + key_01 + '"' + ' }, ' +
'{' +
' "key_02": ' + '"' + key_02 + '"' + ' }, ' +
'{' +
' "key_03": ' + key_03 + ' } ' +
']}'

Using simple string concatenation to construct a raw JSON string with more than just a few attribute values is going to become unmanageable very quickly, especially when you get to the JSON string representation of a polygon with its embedded brackets, commas, and quoted keys. Even if you condense some of what I have above and combine values I've separated as literal strings into fewer single-quoted literal strings (I was trying to make the illustration somewhat readable), an out-of-the-box approach just isn't feasible. I wanted to comment to confirm the limitations.
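For comparison, serializing the record with an actual JSON library is trivial -- which is why a custom processor (or handling the re-nesting outside GeoEvent Server) is the more practical route. A sketch using Python's json module with the example values above:

import json

record = {"key_01": "November", "key_02": "Alpha", "key_03": 1714613432}
payload = {"keys": [{k: v} for k, v in record.items()]}
print(json.dumps(payload))
# {"keys": [{"key_01": "November"}, {"key_02": "Alpha"}, {"key_03": 1714613432}]}

-- RJ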
05-01-2024 07:08 PM | 0 | 1 | 1175

POST
Hello @RipaliBatra -- Given the rich hierarchical structure of the JSON you are receiving from the https://waqi.info web service, you really cannot rely on the GeoEvent Sampler for a good representation of the data. The sampler struggles when given complex hierarchical JSON. A better approach is to create one or more Write to a JSON File outputs and use them to log the event records emitted from different processors along your configured event processing workflow.

I have a GeoEvent Server 11.1 deployment that I used to test your feed. I was able to allow a Receive JSON on a REST Endpoint input to generate a GeoEvent Definition for me. A sample of the data I sent to my GeoEvent Server input and the generated GeoEvent Definition are shown below [Fig 1] and [Fig 2]. Note that I specified that the input use data as the root of the JSON structure when adapting the received JSON.

Note: Be careful when relying on auto-generated GeoEvent Definitions. An input will make its "best guess" as to what the GeoEvent Definition ought to be based on the first data record it receives. But the generated event definition will often use Double for data values received as epoch long integer values (for example). You have to review the generated GeoEvent Definition and verify that each array and element will be adapted properly for the data you expect to receive.

I was able to configure a Field Mapper to pull specific values out of the hierarchical JSON. Note the expressions being used to access the data:

- attributions is an array, so we have to provide the index of the element we want to access from that array. attributions[1].name accesses the second element in the array, the one whose name is the string "World Air Quality Index Project".
- city is the name of an element, so we do not access it with ordinal values like we do when accessing an array; city.name is sufficient to retrieve the string "B R Ambedkar University, Lucknow, India".
- The latitude and longitude coordinates in the city element, however, are in an array nested within an element. When pulling these as separate values we have to specify their ordinal positions in the array: the latitude is city.geo[0] and the longitude is city.geo[1].
- The iaqi values are grouped within an element. Accessing them is fairly straightforward: iaqi.co.v, iaqi.no2.v, iaqi.pm10.v

In my Field Mapper illustration I only pulled the string for the "day", but if we wanted to be a little more creative we could build a string from the available values for a given day. The string value "Day: 2024-05-01 (Avg/Min/Max: 29.0 / 14.0 / 50.0)" could be built using the following expression to pull individual values and append them together -- taking care to use the toString( ) function to explicitly cast Double values to String values when appending them to literal strings:

'Day: ' + toString(forecast.daily.o3[2].day) + ' (Avg/Min/Max: ' + toString(forecast.daily.o3[2].avg) + ' / ' + toString(forecast.daily.o3[2].min) + ' / ' + toString(forecast.daily.o3[2].max) +')'

The daily forecast values for "o3", "pm10", and "pm25" all have the same elemental structure, but you will not be able to use a Multicardinal Field Splitter processor to collapse or flatten these three arrays; it looks like these arrays are allowed to contain a variable number of items. The array o3 has 8 elements whereas the arrays pm10 and pm25 both have 9 elements. There is no iterator or looping mechanism available in any of GeoEvent Server's processors, so you will very likely have to stick with extracting essential information from the JSON using field calculation expressions like those I show above.

Hope this information helps, and a special Thank You to @Gene_Sipes for jumping in to help with this complicated hierarchical JSON.
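If it helps to see the same access paths outside of a Field Mapper, here is a plain-Python sketch. The data dictionary is a minimal stand-in for the adapted waqi.info payload; the values come from the reply above except the coordinates and iaqi numbers, which are placeholders.

data = {
    "attributions": [{"name": "..."}, {"name": "World Air Quality Index Project"}],
    "city": {"name": "B R Ambedkar University, Lucknow, India", "geo": [26.8, 80.9]},  # placeholder coordinates
    "iaqi": {"co": {"v": 1.0}, "no2": {"v": 2.0}, "pm10": {"v": 3.0}},  # placeholder values
    "forecast": {"daily": {"o3": [{}, {}, {"day": "2024-05-01", "avg": 29.0, "min": 14.0, "max": 50.0}]}},
}

# The dotted access expressions from the Field Mapper, written as plain dictionary access:
print(data["attributions"][1]["name"])                 # attributions[1].name
print(data["city"]["name"])                            # city.name
print(data["city"]["geo"][0], data["city"]["geo"][1])  # city.geo[0], city.geo[1]
print(data["iaqi"]["co"]["v"])                         # iaqi.co.v
o3 = data["forecast"]["daily"]["o3"][2]
print("Day: {} (Avg/Min/Max: {} / {} / {})".format(o3["day"], o3["avg"], o3["min"], o3["max"]))
# Day: 2024-05-01 (Avg/Min/Max: 29.0 / 14.0 / 50.0)

-- RJ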
05-01-2024 06:16 PM | 1 | 0 | 1324

DOC
I'd like to thank Cameron Everhart, one of our technical consultants from Professional Services Delivery, for working on this particular challenge with me. A customer was receiving data with a polygon specified as a collection of Longitude / Latitude coordinate values. The collection was received as a string, and they wanted to coerce that String into a Geometry. Using GeoEvent Server Field Calculator processors to evaluate a series of nested replaceAll( ) functions, we were able to do just that. The string manipulation made possible with the regular expression pattern matching supported by the replaceAll( ) function is incredibly powerful.

We start with the following input. Note that each coordinate value is separated by a single space and each coordinate pair is separated by a <comma><space> character sequence:

POLYGON ((-114.125 33.375, -116.125 32.375, -115.125 31.375, -113.125 31.375, -112.125 32.375))

Our solution uses regular expressions to match patterns in the input string and three Field Calculator processors, configured with replaceAll( ) expressions, to manipulate the input string. Note that we are using the "regular" Field Calculator processor, not the Field Calculator (Regular Expression) version of that processor. Our goal is to transform the string illustrated above into an Esri Feature JSON string representation of a polygon geometry. You will want to review the Common Data Types > Geometry Objects topic in the ArcGIS developer help to understand how polygons can be represented as a JSON string. You will also want to review the "String functions for Field Calculator Processor" portion of the help topic for GeoEvent Server's Field Calculator processor.

When represented as Esri Feature JSON strings, polygons specify coordinate values within a structure of nested arrays. One thing we need to do is identify and replace all of the <comma><space> character sequences in our input string with a literal ],[ character sequence. We can do this with the following regular expression pattern match and literal string replacement:

RegEx Pattern: ', '
Replacement: '],['

Incorporating this into a replaceAll( ) expression, we can configure a Field Calculator to evaluate the expression. The reference "polygon" in this expression identifies the attribute field holding the input string in the event record being processed:

replaceAll(replaceAll(polygon, ', ', '],['), 'POLYGON ', '')

Notice that the expression invokes the replaceAll( ) function twice. The result from the "inner" function call (replacing all literal <comma><space> with a literal ],[) is used as input to an "outer" function call which replaces the unwanted literal string POLYGON (with its trailing space) at the front of the string with an empty string. This first expression, with its nested calls to replaceAll( ), performs the following string manipulation:

-- Input --
POLYGON ((-114.125 33.375, -116.125 32.375, -115.125 31.375, -113.125 31.375, -112.125 32.375))
-- Output --
((-114.125 33.375],[-116.125 32.375],[-115.125 31.375],[-113.125 31.375],[-112.125 32.375))

Next, we take the output from this first expression and configure a second Field Calculator to replace the literal <space> between each pair of coordinates with a <comma>. The ArcGIS developer Feature JSON string specification for a polygon requires that the coordinates of each vertex be expressed as a comma delimited pair of values (X,Y). The second Field Calculator expression replaceAll(polygon, ' ', ',') takes the input string illustrated below and produces the indicated output string:

-- Input --
((-114.125 33.375],[-116.125 32.375],[-115.125 31.375],[-113.125 31.375],[-112.125 32.375))
-- Output --
((-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375))

A final pair of regular expression pattern matches can now be used to replace the (( at the front of the string and the )) at the end of the string with the required array bracketing and spatial reference specification. The first pattern match uses a ^ metacharacter to anchor the pattern match to the start of the input string; the target of this match is the double parentheses at the beginning of the string. The second pattern match targets the double parentheses appearing at the end of the input string; the $ metacharacter is used here to anchor the pattern match to the end of the string. Our final expression will appear more complicated at first, mostly because the literal replacement strings are a little longer. I'll try to pull the expression apart to make it easier to understand.

replaceAll(replaceAll(polygon, '^\(\(', '{"rings": [[['), '\)\)$', ']]], "spatialReference": {"wkid": 4326}}')

The inner replaceAll( ) has our first regular expression pattern:

replaceAll(polygon, '^\(\(', '{"rings": [[[')

Back-slash characters are used to 'escape' the pair of parentheses in the pattern. This is required to specify that the parentheses are literally rounded parentheses and not the start of what regular expressions refer to as a capturing group. This pattern and replacement adds the required array bracketing and "rings" specification to the string representation of the polygon.

The outer replaceAll( ) has our second regular expression pattern:

replaceAll( . . . '\)\)$', ']]], "spatialReference": {"wkid": 4326}}')

Back-slash characters are again used to 'escape' the pair of parentheses in the pattern. The pattern and replacement in this case appends the required closing brackets for the polygon coordinate array and includes an appropriate spatial reference for the polygon's coordinates. The expression nests an "inner" call to replaceAll( ) within a second "outer" invocation. The result from the "inner" function call, manipulating the beginning of the string, is used as input to the "outer" function call, which manipulates the end of the string. The expression could perhaps be simplified by using separate Field Calculator processors to handle each pattern and replacement, but the leading and trailing parentheses anchored to the beginning and end of the string seemed to beg for the replacements to be done serially. Here is the string manipulation being performed in this final step:

-- Input --
((-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375))
-- Output --
{"rings": [[[-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375]]], "spatialReference": {"wkid": 4326}}

We can now use a Field Mapper processor to cast this final String value into a Geometry. Here is an illustration of a GeoEvent Service with the three Field Calculator processors and the Field Mapper processor routing output to a stream service. The stream service was used to verify the produced string could be successfully cast to a Geometry and displayed as a polygon feature on a web map.

If you found this article helpful, you might want to check out these other threads which highlight what you can do with the Field Calculator processor, the Field Mapper processor, and regular expression pattern matching:

How to switch positions on coordinates
GeoEvent 10.9: Using expressions in Field Mapper Processors
04-24-2024 06:12 PM | 3 | 1 | 694