POST
Hello @PierreloupDucroix -- The Poll an ArcGIS Server for Features input uses token authentication, but cannot be configured to use your GP Service. This type of input uses credentials you have entered into a registered server connection to request an authentication token from the ArcGIS Server (or the Enterprise portal) and then uses that token to make authenticated requests. The token returned from the ArcGIS token service gets incorporated into the requests the input sends when querying ArcGIS feature services. This is not something you specifically configure; it is inherent in the inbound connector's implementation. This is also our only input that supports paging, as it is able to assume the data paging strategy used by ArcGIS Server feature services.

You are correct -- the Poll an External Website for JSON input does not have a mechanism for obtaining an authentication token, either from an external web service or (in your case) a GP Service. Even if the "external" server were actually an internal server running ArcGIS Server, this type of inbound connector can only be configured with a URL (and, optionally, query parameters to pass with the URL) to send single requests to a web service for data. Basically, if you can copy/paste a URL into a web browser, this type of GeoEvent Server input can be configured to make the same query at some specified interval. It cannot query a web service to first acquire an authentication token and then incorporate that token into a second query for data. Nor can it make multiple requests to "page" through a large collection of data records. These sorts of multi-step requests for authentication and/or request pagination are not configurable using any out-of-the-box input.

Generally speaking, an outbound connector (or "output") is able to make requests on a web service, but is not going to block or wait for any sort of response. That is why we don't have an out-of-the-box output to send requests, for example, to a GP Service. The GP Service should be allowed to run asynchronously and return a job number to the requesting server. As the termination of an event processing workflow, an output is not going to do anything with a job number returned from a GP Service.

A custom processor might be developed to perform a blocking / synchronous operation and make calls to a GP Service, but this is a very bad idea. You should never have a GeoEvent Server element you've configured or developed invoke a GP Service synchronously and allow the HTTP request to block until the GP Service has completed its operation(s). GeoEvent Server was architected to process hundreds to potentially thousands of event records each second. A GeoEvent Service that performs a blocking operation with a GP Service runs counter to GeoEvent Server's design. The advice is the same if you were considering developing a custom inbound transport that invokes a GP Service as part of an authentication workflow. You do not want an input to make requests to a GP Service as a blocking operation as part of data ingestion and adaptation prior to event record processing.

Rather than refer you to blogs on using GeoEvent Server's Java SDK (which is what you would use to develop a custom inbound / outbound transport or adapter -- or a custom processor), I'm going to recommend a different pattern you can follow to obtain authentication via a GP Service. It is not always possible to send subscription requests to a web service provider so that the web service will periodically push data to your GeoEvent Server.
My recommendation for authentication workflows is to handle the authentication externally to GeoEvent Server. Consider developing a "bridge" between the data provider and your GeoEvent Server. The advantage of this approach is that you can develop the "bridge" in any language you are comfortable using -- a Python script, PowerShell, ASP.NET, an Amazon Web Services Lambda function, etc. Your bridge will handle token acquisition and request authentication. The bridge can then query the web service you were originally planning to poll with a Poll an External Website for JSON input. The bridge obtains the authentication needed to authorize its queries and passes a token (for example) along with its requests for data. You then develop a way for your bridge to relay the data to a GeoEvent Server hosted REST endpoint as an HTTP/POST request. This allows you to configure a Receive JSON on a REST Endpoint input to receive the data.

Developing this sort of bridge between a data provider and GeoEvent Server can be very useful. It enables you to send multiple requests to page through large data collections, because you can develop the bridge to conform to whatever paging strategy an external web service might use. The bridge can handle necessary authentication via whatever authorization mechanism is being used. The bridge also has an opportunity to "clean" the data before relaying it to GeoEvent Server if, for example, there are characters in attribute field names like '$' that GeoEvent Server does not support. You can also use the bridge to implement a throttling mechanism and relay large amounts of data to GeoEvent Server in reasonable batches of so-many-records-per-second. -- RJ
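To make the pattern concrete, here is a minimal Python sketch of such a bridge. It is only an illustration, not an Esri-provided utility: the URLs, credentials, and polling interval are placeholders you would replace with values from your own deployment, and the token request assumes a standard ArcGIS Server generateToken endpoint.

import time
import requests

# Placeholder URLs -- substitute your own token endpoint, secured data service,
# and the URL of your Receive JSON on a REST Endpoint input.
TOKEN_URL = "https://host.example.com/arcgis/tokens/generateToken"
DATA_URL = "https://host.example.com/arcgis/rest/services/SampleService/MapServer/0/query"
GEOEVENT_URL = "https://geoevent.example.com:6143/geoevent/rest/receiver/rest-json-in"

def get_token():
    # Request a short-lived token from the ArcGIS Server token service.
    response = requests.post(TOKEN_URL, data={
        "username": "svc_account",
        "password": "********",
        "client": "requestip",
        "f": "json"})
    response.raise_for_status()
    return response.json()["token"]

def poll_and_relay():
    token = get_token()
    # Query the secured service, passing the token along with the request.
    data = requests.get(DATA_URL, params={"where": "1=1", "outFields": "*", "f": "json", "token": token}).json()
    # Relay the records (optionally cleaned or re-shaped first) to the GeoEvent input.
    requests.post(GEOEVENT_URL, json=data, timeout=30)

if __name__ == "__main__":
    while True:
        poll_and_relay()
        time.sleep(60)  # poll once a minute; adjust to your needs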
Posted 05-03-2024 12:59 PM

POST
Hello @jauhari_mani -- Please see my reply to @RipaliBatra in the thread: GeoEvent-Poll an External Website for JSON. Given the rich hierarchical structure of the JSON from the https://waqi.info web service, you really should not rely on the GeoEvent Sampler for a good representation of the data. The sampler struggles when given complex hierarchical JSON. A better approach is to create one or more Write to a JSON File outputs and use them to log the event records emitted from different processors along your configured event processing workflow. -- RJ
Posted 05-03-2024 10:04 AM

POST
Have you read through the examples in the thread: Timestamps received from a sensor feed display differently in GeoEvent Sampler, ArcGIS REST Services queries, and ArcGIS Pro?

I think your expression replaceAll(myField, '(\d+)[/](\d+)[/](\d+)', '$3-$1-$2T00:00:00') may be taking the value "3/7/2024 12:39:42.784 PM" and producing the value "2024-3-7T00:00:00 12:39:42.784 PM". Two problems with that: the month and day have lost their leading zeros, and the ISO 8601 format you're trying to create has unwanted characters from the original time appended to the tail of the string.

If you are OK with GeoEvent Server assuming the value "3/7/2024 12:39:42.784 PM" is local server time, you could edit the GeoEvent Definition your input is using to specify that the value be adapted as a Date, and configure the input to apply MM/dd/yyyy hh:mm:ss.SSS aa as its Expected Date Format parameter. Using this formatting string to "teach" the input how to parse your date/time string, I was able to adapt the string "3/7/2024 12:39:42.784 PM" as an epoch value 1709843982784. I could then use a Field Mapper to cast this Date value to a String "Thu Mar 07 12:39:42 PST 2024" to double-check the data was adapted properly. -- RJ
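If you want to sanity-check the parsing outside GeoEvent Server, here is a small Python sketch that applies an equivalent format pattern; the UTC-8 offset is an assumption based on the PST value shown above.

from datetime import datetime, timezone, timedelta

s = "3/7/2024 12:39:42.784 PM"
# Rough strptime equivalent of the "MM/dd/yyyy hh:mm:ss.SSS aa" Expected Date Format,
# assuming the server's local time zone is PST (UTC-8).
dt = datetime.strptime(s, "%m/%d/%Y %I:%M:%S.%f %p")
dt = dt.replace(tzinfo=timezone(timedelta(hours=-8)))
epoch_ms = int(dt.timestamp()) * 1000 + dt.microsecond // 1000
print(epoch_ms)  # 1709843982784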
Posted 05-02-2024 06:16 PM

POST
@Moi_Nccncc -- You are asking if an event record you've received, containing a polygonal area, can be enriched with the geometry of a point geofence to produce an event record with two Geometry attributes? An event record in GeoEvent Server is allowed to have more than one field whose type is Geometry, but feature records require that their geometry be referenced by the name 'geometry' -- so you will have to first transfer the event record's geometry to some other named attribute field to avoid overwriting the event record's geometry with a feature record's geometry.

If you were to use a GeoTagger processor to enrich the event record (polygon geometry) with the name of a single geofence (point geometry) that shares a spatial relationship with the event record (e.g. a point geofence which intersects the event record's polygon), you could use a Field Mapper to make sure the attribute fields you want to pull into the event record exist, then use a Field Enricher configured to write to 'Existing Fields' to fetch those named attribute values from whatever feature record you used to register the geofence originally. The workflow I think you're looking for would be something like:

1. GeoTagger -- to get the name of one specific geofence you can use as a key for an attribute join.
2. Field Mapper -- to map the event record's geometry to a field named something other than 'geometry', and to add fields to the event record which exist in the feature record schema whose data you want to import. It is a best practice to configure Field Mapper to write to 'Existing Fields' rather than allow the processor to create a managed GeoEvent Definition for you.
3. Field Enricher -- to pull attribute data from an identified feature record into the existing attribute fields.

If multiple point features intersect an event record's polygon, the GeoTagger will find multiple registered geofences which satisfy its spatial relationship (e.g. Intersects Any). The resulting comma separated list of geofence names you get from the GeoTagger means that you do not have a single unique value to use as the key for the attribute join the Field Enricher needs to perform its operation. You could use a Field Splitter to split the comma delimited string and produce several individual event records, each with exactly one named geofence, then enrich each of those event records with attributes you pull from the feature record(s) you originally used to register your geofences.

I would strongly recommend that you ask your Customer Service Representative about options to connect with a technical advisor or Professional Services Delivery consultant who can help you with the sort of operations you are attempting to configure GeoEvent Server to perform. -- RJ
Posted 05-02-2024 05:58 PM

POST
@Moi_Nccncc -- You should submit this as a new request to Esri Technical Support so an analyst can look at the problem with you. GeoEvent Server uses Apache Kafka as an event record broker. Kafka uses on-disk topic queues to hold event records a producer needs to convey to a consumer. In this case the inbound connector (input) is the producer and the GeoEvent Service you have configured to use that input is the consumer.

It is strange that you are not seeing anything in the GeoEvent Server's karaf.log system log. You would normally see ERROR messages that an event record could not be delivered within a default timeout of 60,000 milliseconds (60 seconds) and that the event record was lost. There might be WARN messages logged around the error condition. I would also expect to see error messages with keywords or phrases like "timeout exception" and/or "expiring records".

There are two JVMs which support a running GeoEvent Server. One is for the GeoEvent-Gateway (which manages the Kafka message broker and Zookeeper configuration store). The other is for GeoEvent Server proper. GeoEvent Server depends on the GeoEvent-Gateway. If something is wrong with the Apache Kafka on-disk topic queues, or with the GeoEvent-Gateway service or the JVM that service runs in, you can see what you describe, where an input is able to receive (ingest) and adapt data to create event records, but the Kafka message broker is unable to route the instantiated event records to a GeoEvent Service for processing. Problems with Kafka message brokering usually require that an administrator perform an administrative reset of GeoEvent Server. Uninstalling and re-installing won't work if the Kafka and Zookeeper files left behind in the Windows %ProgramData% folder are corrupted or unusable. -- RJ
Posted 05-02-2024 04:53 PM

POST
@Moi_Nccncc -- You cannot use GeoTagger (or any of the configurable processors, really) to perform a spatial operation between two geofences. The geometry processors all take an event record and use the event record's geometry as an argument to a spatial condition. The processor uses specified geofences as the second argument when evaluating the spatial condition. You can do what I think you want to do, however.

You already stipulate that point features from previous AVL locations are registered as geofences. What you want to do is configure an input to either query to retrieve (e.g. Poll an ArcGIS Server for Features) the event records whose polygon you want to use to count intersecting points -- or arrange to receive that polygon via HTTP/POST (e.g. Receive JSON on a REST Endpoint). GeoTagger is able to take an event record's polygon geometry and evaluate an Intersects Any condition to pull the names of intersecting point geofences into the event record being processed. This will give you a comma delimited list of (point) geofence names, in an attribute field you specify (e.g. geo_tags), whose locations intersect the event record's polygon:

geo_tags: "Newkirk,Tecolotito,Logan,Pastura,Conchas Dam,Fort Sumner,Melrose"

You then configure a Field Calculator with an expression to replace any substring between two commas with an empty string. You are essentially removing all of the geofence names from the comma delimited list and leaving only the commas. You can do this with a replaceAll( ) function call, which supports regular expression pattern matching:

replaceAll(geo_tags, '[^,]*', '')

The regular expression pattern in the replaceAll( ) expression above matches "any single character which is not a comma" (e.g. [^,] ) zero or more times (e.g. [^,]* ) and replaces each matching substring with an empty string. Now you simply count the commas, remembering to add one for the final geofence name in the list. The result of this expression gets written into an attribute field whose type is a Long integer:

length(replaceAll(geo_tags, '[^,]*', '')) + 1

When mocking up a solution to check syntax, I used a Receive JSON on a REST Endpoint input to model receiving a new AVL point location, ran that event record through a Create Buffer processor to create a 100km geodesic polygon, then used a GeoTagger to get the names of the point geofences which intersect the received AVL point location's buffered AOI. Strip the geofence names from the comma delimited list and count the commas. -- RJ
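If it helps to see the counting trick outside GeoEvent Server, here is a small Python sketch of the same idea; the geofence names are the sample values from above:

import re

geo_tags = "Newkirk,Tecolotito,Logan,Pastura,Conchas Dam,Fort Sumner,Melrose"
# Remove every run of non-comma characters, leaving only the commas,
# then count the commas and add one for the final name in the list.
count = len(re.sub(r"[^,]*", "", geo_tags)) + 1
print(count)  # 7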
Posted 05-02-2024 04:15 PM

POST
@SpatialSean -- Apologies for joining this discussion late. I don't think what you want to do can be done out-of-the-box using the available configurable processors. None of the processors support hierarchical output. There was a strong bias toward flattening an event record's data structure in order to make adaptation easier when using an output such as Add a Feature to add an event record's data as a new feature in a geodatabase. We never implemented a workflow which will take a flat data schema and allow you to recombine elements to place them into lists or groups.

Practically speaking, given anything other than the simplest of event record data structures, developing your own custom processor is the best approach. I mean, you could try to configure a series of out-of-the-box Field Calculator processors to take input like:

{
  "key_01": "November",
  "key_02": "Alpha",
  "key_03": 1714613432
}

And serialize it into a JSON string like:

{ "keys": [{ "key_01": "November" }, { "key_02": "Alpha" }, { "key_03": 1714613432 } ]}

But that's really impractical. The expression needed to do this would look something like:

'{' + ' "keys": ' +
' [{' +
' "key_01": ' + '"' + key_01 + '"' + ' }, ' +
'{' +
' "key_02": ' + '"' + key_02 + '"' + ' }, ' +
'{' +
' "key_03": ' + key_03 + ' } ' +
']}'

Using simple string concatenation to construct a raw JSON string with more than just a few attribute values is going to become unmanageable very quickly -- especially when you get to the JSON string representation of a polygon with its embedded brackets, commas, and quoted keys. Even if you condense some of what I have above and combine values I've separated as literal strings into fewer single-quoted literal strings (I was trying to make the illustration somewhat readable), an out-of-the-box approach just isn't feasible. Wanted to comment to confirm the limitations. -- RJ
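For contrast, here is a short Python sketch showing how easily the same re-shaping can be done in code -- for example in a custom processor (which you would actually write in Java against the GeoEvent Server SDK) or in an external bridge script. The field names are the ones from the example above:

import json

flat = {"key_01": "November", "key_02": "Alpha", "key_03": 1714613432}
# Re-shape the flat record into the nested "keys" structure and serialize it.
nested = {"keys": [{key: value} for key, value in flat.items()]}
print(json.dumps(nested))
# {"keys": [{"key_01": "November"}, {"key_02": "Alpha"}, {"key_03": 1714613432}]}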
Posted 05-01-2024 07:08 PM

POST
Hello @RipaliBatra -- Given the rich hierarchical structure of the JSON you are receiving from the https://waqi.info web service, you really cannot rely on the GeoEvent Sampler for a good representation of the data. The sampler struggles when given complex hierarchical JSON. A better approach is to create one or more Write to a JSON File outputs and use them to log the event records emitted from different processors along your configured event processing workflow.

I have a GeoEvent Server 11.1 deployment that I used to test your feed. I was able to allow a Receive JSON on a REST Endpoint input to generate a GeoEvent Definition for me. A sample of the data I sent to my GeoEvent Server input and the generated GeoEvent Definition are shown below [Fig 1] and [Fig 2]. Note that I specified the input use data as the root of the JSON structure when adapting the received JSON.

Note: Be careful when relying on auto-generated GeoEvent Definitions. An input will make its "best guess" as to what the GeoEvent Definition ought to be based on the first data record it receives. But the generated event definition will often use Double for data values received as epoch long integer values (for example). You have to review the generated GeoEvent Definition and verify that each array and element will be adapted properly for the data you expect to receive.

I was able to configure a Field Mapper to pull specific values out of the hierarchical JSON. Note the expressions being used to access the data:

attributions is an array, so we have to provide the index of the element we want to access from that array. attributions[1].name accesses the second element in the array, the one with the named string "World Air Quality Index Project".

city is the name of an element, so we do not access it with ordinal values like we do when accessing an array. city.name is sufficient to retrieve the string "B R Ambedkar University, Lucknow, India".

The latitude and longitude coordinates in the city element, however, are in an array nested within an element. When pulling these as separate values we have to specify their ordinal positions in the array: the latitude is city.geo[0] and the longitude is city.geo[1].

The iaqi values are grouped within an element. Accessing them is fairly straightforward: iaqi.co.v, iaqi.no2.v, iaqi.pm10.v

In my Field Mapper illustration I only pulled the string for the "day", but if we wanted to be a little more creative we could build a string from the available values for a given day. The string value "Day: 2024-05-01 (Avg/Min/Max: 29.0 / 14.0 / 50.0)" could be built using the following expression to pull individual values and append them together -- taking care to use the toString( ) function to explicitly cast Double values to String values when appending them to literal strings:

'Day: ' + toString(forecast.daily.o3[2].day) + ' (Avg/Min/Max: ' + toString(forecast.daily.o3[2].avg) + ' / ' + toString(forecast.daily.o3[2].min) + ' / ' + toString(forecast.daily.o3[2].max) +')'

Because the daily forecast values for "o3", "pm10", and "pm25" all have the same elemental structure, you will not be able to use a Multicardinal Field Splitter processor to collapse or flatten these three arrays. It looks like these arrays are allowed to contain a variable number of items; the array o3 has 8 elements whereas the arrays pm10 and pm25 both have 9 elements.

There is no iterator or looping mechanism available in any of GeoEvent Server's processors, so you will very likely have to stick with extracting essential information from the JSON using field calculation expressions like I show above. Hope this information helps, and a special thank you to @Gene_Sipes for jumping in to help with this complicated hierarchical JSON. -- RJ
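To make those access paths easier to follow, here is a Python sketch that walks the same structure. The dict below only mimics the parts of the waqi.info response discussed above, and the values are illustrative placeholders:

data = {
    "attributions": [{"name": "Example Provider"},
                     {"name": "World Air Quality Index Project"}],
    "city": {"name": "B R Ambedkar University, Lucknow, India",
             "geo": [26.77, 80.93]},
    "iaqi": {"co": {"v": 3.2}, "no2": {"v": 11.7}, "pm10": {"v": 92.0}},
    "forecast": {"daily": {"o3": [{}, {}, {"day": "2024-05-01", "avg": 29.0, "min": 14.0, "max": 50.0}]}},
}

source = data["attributions"][1]["name"]   # attributions[1].name
city_name = data["city"]["name"]           # city.name
lat, lon = data["city"]["geo"]             # city.geo[0], city.geo[1]
co = data["iaqi"]["co"]["v"]               # iaqi.co.v
o3_day = data["forecast"]["daily"]["o3"][2]
summary = ("Day: " + o3_day["day"] + " (Avg/Min/Max: " + str(o3_day["avg"]) +
           " / " + str(o3_day["min"]) + " / " + str(o3_day["max"]) + ")")
print(summary)  # Day: 2024-05-01 (Avg/Min/Max: 29.0 / 14.0 / 50.0)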
Posted 05-01-2024 06:16 PM

DOC
I'd like to thank Cameron Everhart, one of our technical consultants from Professional Services Delivery, for working on this particular challenge with me. A customer was receiving data with a polygon specified as a collection of Longitude / Latitude coordinate values. The collection was received as a string and they wanted to coerce that String into a Geometry. Using GeoEvent Server Field Calculator processors to evaluate a series of nested replaceAll( ) functions, we were able to do just that. The string manipulation made possible with regular expression pattern matching supported by the replaceAll( ) function is incredibly powerful.

We start with the following input. Note that each coordinate value is separated by a single space and each coordinate pair is separated by a <comma><space> character sequence:

POLYGON ((-114.125 33.375, -116.125 32.375, -115.125 31.375, -113.125 31.375, -112.125 32.375))

Our solution uses regular expressions to match patterns in the input string and three Field Calculator processors, configured with replaceAll( ) expressions, to manipulate the input string. Note that we are using the "regular" Field Calculator processor, not the Field Calculator (Regular Expression) version of that processor. Our goal is to transform the string illustrated above into an Esri Feature JSON string representation of a polygon geometry. You will want to review the Common Data Types > Geometry Objects topic in the ArcGIS developer help to understand how polygons can be represented as a JSON string. You will also want to review the "String functions for Field Calculator Processor" portion of the help topic for GeoEvent Server's Field Calculator processor.

When represented as Esri Feature JSON strings, polygons specify coordinate values within a structure of nested arrays. One thing we need to do is identify and replace all of the <comma><space> character sequences in our input string with a literal ],[ character sequence. We can do this with the following regular expression pattern match and literal string replacement:

RegEx Pattern: ', '
Replacement: '],['

Incorporating this into a replaceAll( ) expression, we can configure a Field Calculator to evaluate the expression. The reference "polygon" in this expression identifies the attribute field holding the input string in the event record being processed:

replaceAll(replaceAll(polygon, ', ', '],['), 'POLYGON ', '')

Notice that the expression invokes the replaceAll( ) function twice. The result from the "inner" function call (replacing every literal <comma><space> with a literal ],[) is used as input to an "outer" function call which replaces the unwanted literal string POLYGON (with its trailing space) at the front of the string with an empty string. This first expression, with its nested calls to replaceAll( ), performs the following string manipulation:

-- Input --
POLYGON ((-114.125 33.375, -116.125 32.375, -115.125 31.375, -113.125 31.375, -112.125 32.375))
-- Output --
((-114.125 33.375],[-116.125 32.375],[-115.125 31.375],[-113.125 31.375],[-112.125 32.375))

Next, we take the output from this first expression and configure a second Field Calculator to replace the literal <space> between each pair of coordinates with a <comma>. The ArcGIS developer Feature JSON string specification for a polygon requires that the coordinates of each vertex be expressed as a comma delimited pair of values (X,Y). The second Field Calculator expression replaceAll(polygon, ' ', ',') takes the input string illustrated below and produces the indicated output string:

-- Input --
((-114.125 33.375],[-116.125 32.375],[-115.125 31.375],[-113.125 31.375],[-112.125 32.375))
-- Output --
((-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375))

A final pair of regular expression pattern matches can now be used to replace the (( at the front of the string and the )) at the end of the string with the required array bracketing and spatial reference specification. The first pattern match uses a ^ metacharacter to anchor the pattern match to the start of the input string. The target of this match is the double parentheses at the beginning of the string. The second pattern match targets the double parentheses appearing at the end of the input string. The $ metacharacter is used here to anchor the pattern match to the end of the string. Our final expression will appear more complicated at first, mostly because the literal replacement strings are a little longer. I'll try to pull the expression apart to make it easier to understand.

replaceAll(replaceAll(polygon, '^\(\(', '{"rings": [[['), '\)\)$', ']]], "spatialReference": {"wkid": 4326}}')

The inner replaceAll( ) has our first regular expression pattern:

replaceAll(polygon, '^\(\(', '{"rings": [[[')

Back-slash characters are used to 'escape' the pair of parentheses in the pattern. This is required to specify that the parentheses are literal rounded parentheses and not the start of what regular expressions refer to as a capturing group. This pattern and replacement adds the required array bracketing and "rings" specification to the string representation of the polygon.

The outer replaceAll( ) has our second regular expression pattern:

replaceAll( . . . '\)\)$', ']]], "spatialReference": {"wkid": 4326}}')

Back-slash characters are again used to 'escape' the pair of parentheses in the pattern. The pattern and replacement in this case appends the required closing brackets for the polygon coordinate array and includes an appropriate spatial reference for the polygon's coordinates. The expression nests an "inner" call to replaceAll( ) within a second "outer" invocation. The result from the "inner" function call, manipulating the beginning of the string, is used as input to the "outer" function call which manipulates the end of the string. The expression could perhaps be simplified by using separate Field Calculator processors to handle each pattern and replacement, but the leading and trailing parentheses anchored to the beginning and end of the string seemed to beg for the replacements to be done serially. Here is the string manipulation being performed in this final step:

-- Input --
((-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375))
-- Output --
{"rings": [[[-114.125,33.375],[-116.125,32.375],[-115.125,31.375],[-113.125,31.375],[-112.125,32.375]]], "spatialReference": {"wkid": 4326}}

We can now use a Field Mapper processor to cast this final String value into a Geometry. Here is an illustration of a GeoEvent Service with the three Field Calculator processors and a Field Mapper processor routing output to a stream service. The stream service was used to verify the produced string could be successfully cast to a Geometry and displayed as a polygon feature on a web map.

If you found this article helpful, you might want to check out these other threads which highlight what you can do with the Field Calculator processor, the Field Mapper processor, and regular expression pattern matching:
How to switch positions on coordinates
GeoEvent 10.9: Using expressions in Field Mapper Processors
Posted 04-24-2024 06:12 PM

POST
@Jay_Gregory -- While this use case might not be a great fit for GeoEvent Server, the ArcGIS Online product for real-time data processing, ArcGIS Velocity, supports both real-time analytics and scheduled batch analytics. There is a Calculate Distance tool in the ArcGIS Velocity proximity toolset which can be configured to join a calculated distance onto processed data records in order to "enrich" them with the linear distance to the closest comparison feature. Maybe something to consider for the future ...
Posted 01-09-2024 09:20 AM

POST
@Justin_Greco -- I was able to check with one of the developers and confirm that the original GeoTab implementation did not anticipate that an organization might have different administrative units or groups, each with their own GeoTab account. The latest release of the connector (Release 11 - July 5, 2023) does not support multiple 'databases' within the cached data. You could post a comment to the connector's page in the GeoEvent Server Gallery requesting an enhancement, but I cannot say when or if the enhancement would be picked up for development.
Posted 01-09-2024 09:04 AM

POST
@Jay_Gregory -- I don't think this is going to be possible using the geometry processors available out-of-the-box for GeoEvent Server. The Intersector processor, for example, would be a better fit if you had a number of inbound event records with polyline geometry and wanted to know what portion(s) of each polyline intersect the wildfire burn area (given a polygon for the wildfire's area). The Difference Creator processor, on the other hand, would be used to clip or remove some portion of a wildfire's burn area that intersects an event record's geometry -- not really useful if the inbound event record has a point or polyline geometry. I would expect you would want to run this sort of analysis as polygon vs. polygon. I cannot convince myself that the Symmetric Difference Creator processor will be of any help here.

Given that the wildfire locations are provided as point locations (not polyline perimeters or polygonal burn areas) and the lack of a 'Find Nearest' tool, there is really no opportunity to perform a useful intersection analysis if the facility data is also point locations. A point can be evaluated to see if it intersects an area, but two points are very rarely (like "never") going to intersect one another at the exact same coordinates. You could create elliptic buffers for the wildfires, or the facilities, or both, I suppose. But especially for the wildfires, a geodesic buffer would be a very poor model of a fire's behavior and direction of expansion. There simply is no way to take terrain or weather into account when creating the buffer. And then you would have to run the analysis iteratively using rings of buffers to determine which facilities are within (a) 2km of a fire, (b) 5km of a fire, or (c) 10km of a fire. Seems like a pretty poor substitute for trying to find the nearest point to another point.

This also doesn't feel like a great fit for real-time data processing in general. If you were able to query every hour to get an updated polygon model for a wildfire's burn area, that would be one thing. But I wouldn't think a given wildfire's point location would be likely to change in real-time, and how many new wildfires are going to be posted in an hour, or even within an 8-hour shift? It just seems you would be better off bringing the two point layers into a web map and using the available 'Analysis' tools to run a 'Find Nearest' from the Proximity set of operations at several different times during your day.
Posted 01-08-2024 06:56 PM

POST
@Moi_Nccncc -- To expand on what @Amir-Sarrafzadeh-Arasi suggests, yes, you want to use geofences to capture the buffered areas around the point locations of the trucks from your AVL. You need to do this in two steps:

1. Use a Buffer Creator processor to construct an elliptical buffer around the last-known / latest-reported position of a truck. Allow the processor to replace the truck's point location (geometry) with the computed polygon. Then push the event record with its polygon geometry out as a feature record so that a Geofence Synchronization Rule can use the feature record to add / update a geofence in the GeoEvent Server's registry of known areas.

2. In a second GeoEvent Service, use a GeoTagger processor to identify which geofences (elliptical areas) a given truck's point location intersects. The name of each polygon geofence should be the TRACK_ID (e.g. vehicle name or identifier) of the truck whose location was used to create the buffered area.

Now, you have a few things to consider. First is the rate at which AVL data is coming into your GeoEvent Server. Using a feature service to store feature records which are queried to update geofences will introduce significant latency in getting the geofences updated. The alternative is to use a stream service to broadcast the feature records with their elliptical geometries. A Geofence Synchronization Rule can be configured to subscribe to a stream service so the data is "pushed" into the geofence registry rather than requiring the synchronization rule to "poll" or query the feature records from a geodatabase.

You also have to recognize the fundamental race condition (when receiving an updated AVL point location) between using that point location to update a polygon geofence and processing that same point location to determine if it intersects any other geofences you've created / updated from other truck point locations. You have to accept that the last-known / latest-reported position of a truck can only be compared to established geofences, and allow some time for the geofence synchronization to make those updates to the geofence registry before trying to determine if the "latest" point location intersects a geofence.

How you choose to output the polygon buffer feature records matters. It will take some time to create an elliptical buffer, write the constructed geometry out as a feature record to a feature service, and for a synchronization rule to then retrieve that feature record and update the GeoEvent Server's geofence registry. If you use a feature service for your synchronization you must not set the synchronization rule's Refresh Interval too aggressively. You cannot, for example, expect GeoEvent Server's Geofence Synchronization Rule to query a feature service every second and update the geofence registry -- not when you are also expecting it to create and update those polygon feature records and to ingest, adapt, and process the AVL location data records to determine intersections with the geofences. The default for geofence synchronization using a feature service is to query the feature service once every 15 minutes. You might set the Refresh Interval to run as quickly as once a minute, but I would not set it to run any more frequently than that. I would probably use a stream service to drive the geofence synchronization to minimize the latency in updating the geofence registry.

A final consideration is that a vehicle's point location will most likely intersect that same vehicle's buffered location (e.g. geofence). The first GeoEvent Service is creating the polygon buffers and driving updates into the geofence registry. The second GeoEvent Service is receiving the same point locations and using the established geofences to determine intersections. You will probably want to use something like a Field Splitter processor to split the comma delimited list of geofence name(s) produced by your GeoTagger and then a Filter to discard any event record where the geofence name matches the truck's TRACK_ID. You only want to keep event records where a truck's point location intersects some other truck's buffered location.
Posted 01-08-2024 04:01 PM

POST
@DanaDebele -- The fields Shape__Length (applicable for feature records with a polyline geometry) and Shape__Area (applicable for feature records with a polygon geometry) contain geodatabase-managed attribute values. Their values are computed by the geodatabase when the feature geometry is edited. You should not include them in a GeoEvent Definition, as there are no values you can query or calculate which will transfer to the output feature record when attempting to add / update feature records with an Add a Feature or an Update a Feature output.

Feature attributes such as Shape__Area and Shape__Length are similar to attributes like objectid, or oid and globalid -- you will see them listed in a feature service's schema when reviewing the feature service specification in the ArcGIS REST Services Directory, but they are not attribute fields you specifically add to a feature service or whose values you modify when editing feature records via a web map. You cannot write to (or overwrite) an object identifier or global identifier value using GeoEvent Server. You also cannot write or overwrite a shape's geometrically computed area or length.

You should remove these attributes from your GeoEvent Definition so they are not included in the data sent to an outbound connector. Once removed, you should see their original values preserved as you use GeoEvent Server to update the specific attribute field(s) you want to use to indicate things like an e-mail notification having been sent for a particular feature record. You can review the blog article Using a partial GeoEvent Definition to update feature records for additional discussion on this. One of our Esri Support analysts, Nicole, recently added a comment to the article to detail a solution she configured which does pretty much what you want to do -- set an attribute field value to indicate a feature record has been processed (or, in your case, that a notification e-mail has been sent).
Posted 01-08-2024 03:11 PM

POST
@JeffYang2023 -- I don't know if this will help. In an unrelated discussion looking at one of GeoEvent Server's processors which can potentially create a large number of threads, I was told that a user can only create so many threads. The commands below were run on a Mac, so I assume there are similar commands you can run within your Linux (?) environment.

ulimit -u
5568

sysctl kern.num_threads
kern.num_threads: 40960

sysctl kern.num_taskthreads
kern.num_taskthreads: 8192

Does this mean that a process owner on a given machine can only instantiate around 5,500 threads, while the kernel can handle perhaps 41,000 threads? I'm not sure, but the limit for the number of "task" threads is much lower, and all three are orders of magnitude less than the 2,000,000 event records you observe being ingested before seeing the DefaultReactiveExecutor engine log the exception.

I was pointed to this article on Stack Overflow which suggests that you may have reached a limit on the number of open files, if the process thread(s) are consuming a large number of file descriptors / handles. The article suggests raising the ulimit from 5k to 65k ... but I really don't know what the ramifications of that might be for overall system stability. You also might take a look at this article from mastertheboss.com which walks through some suggestions for addressing the Java OutOfMemoryError.

What input have you configured to receive the sensor data? I'm assuming it is either the Receive JSON on a WebSocket or the Subscribe to an External WebSocket for JSON input? Is the data being ingested by the input in one extremely large batch? Or is the data coming in as several batches with some period of time between batches (and it is not until you eventually reach 2 million records that you see the error)? I'm wondering whether the issue is hundreds-of-thousands of data records received all at once as a single massive batch, versus a potential resource leak in the inbound connector you're using, where you can receive 250 data records each second and it takes over 2 hours to receive a sufficient number of data records to trigger the exception.

===

I would advise that if you need to use the -Xms and -Xmx switches to increase the JVM RAM allocation, you set their values the same. This disables dynamic memory allocation. The way Java works, if you set a minimum of 1g and a maximum of 16g, when Java determines the JVM needs to be resized it will instantiate a new (larger) JVM and copy the running state into the new instance. This isn't as much of a problem if the system creates a 4g instance to copy over a currently running 2g instance (temporarily consuming 6 gigs). But if it were trying to dynamically scale 12g to 16g? That's a lot of temporary memory being consumed to copy data from one JVM to another. It is reportedly more stable, if you really need that much memory, to set the minimum (-Xms) to 16g and set the maximum (-Xmx) to 16g as well -- to prevent the dynamic resizing.
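If it helps, here is a small Python sketch you could run (as the same OS user that runs the GeoEvent Server service) to check the open-file and process/thread limits that user inherits on Linux; this is just a convenience for inspection, not an Esri-recommended diagnostic:

import resource

# Soft and hard limits for open file descriptors (what `ulimit -n` reports).
print("open files:", resource.getrlimit(resource.RLIMIT_NOFILE))
# Soft and hard limits for processes/threads per user (what `ulimit -u` reports).
print("processes :", resource.getrlimit(resource.RLIMIT_NPROC))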
Posted 01-08-2024 10:52 AM